* fix(server): tighten CORS allowlist for the agent server
Replace the permissive `origin || '*'` reflection in
`defaultCorsConfig` with an explicit allowlist composed of:
- a static list (empty by default)
- comma-separated origins from `BROWSEROS_TRUSTED_ORIGINS`
Add a small `requireTrustedOrigin` middleware that actively
rejects (403) any request whose `Origin` header is present and
not in the allowlist. The middleware is permissive when the
`Origin` header is absent — CLI tools, internal Node clients,
and some service-worker fetches legitimately omit it; the
threat model only covers cross-origin browser fetches, which
always carry `Origin` (it's on the Forbidden Header List, so
JS cannot suppress it).
Mount the middleware globally in `createHttpServer` after the
existing `cors()` layer. Document the new env var in
`.env.example`.
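A minimal sketch of the allowlist parsing and origin check described above. `requireTrustedOrigin` and `BROWSEROS_TRUSTED_ORIGINS` come from the commit; the helper names and static-list constant here are illustrative, not the actual implementation.

```typescript
// Static allowlist (empty by default, per the commit).
const STATIC_TRUSTED_ORIGINS: string[] = [];

// Parse comma-separated origins from BROWSEROS_TRUSTED_ORIGINS,
// trimming whitespace and dropping empty entries.
function parseTrustedOrigins(envValue: string | undefined): Set<string> {
  const fromEnv = (envValue ?? "")
    .split(",")
    .map((o) => o.trim())
    .filter((o) => o.length > 0);
  return new Set([...STATIC_TRUSTED_ORIGINS, ...fromEnv]);
}

// Decision at the core of requireTrustedOrigin: pass when the Origin
// header is absent (CLI tools, internal Node clients, some service-worker
// fetches); otherwise require an exact allowlist match, which is
// case-sensitive and includes scheme and port.
function isRequestAllowed(
  origin: string | undefined,
  allowlist: Set<string>,
): boolean {
  if (origin === undefined) return true;
  return allowlist.has(origin); // "null" Origin fails this too
}
```

A 403 is returned whenever `isRequestAllowed` is false; exact matching is what makes the port-mismatch and `"null"`-Origin rejections in the tests fall out for free.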
Tests cover allowlist parsing (empty, single, multi, trims,
case sensitivity, port match) and middleware behaviour
(missing Origin allowed, allowlisted Origin allowed, unknown
Origin rejected, "null" rejected, port mismatch rejected,
disallowed Origin doesn't reach the handler).
* fix(server): include published extension origin in default allowlist
Pin the published BrowserOS extension origin in the static
allowlist so the default install accepts the legitimate
extension without requiring `BROWSEROS_TRUSTED_ORIGINS` to be
populated. Additional origins (dev / alpha) keep working
through the env override.
* chore(server): trim .env.example comments
* chore(server): drop redundant comments from cors helpers
* feat: add deterministic eval graders (AGI SDK + WebArena-Infinity)
Two new benchmark integrations with programmatic grading — no LLM judge.
AGI SDK / REAL Bench (52 tasks):
- 11 React/Next.js clones of consumer apps (DoorDash, Amazon, Gmail, etc.)
- Grader navigates browser to /finish, extracts state diff from <pre> tag
- Python verifier checks exact values via jmespath queries
WebArena-Infinity (50 hard tasks):
- 13 LLM-generated SaaS clones (Gmail, GitLab, Linear, Figma, etc.)
- InfinityAppManager starts fresh app server per task per worker
- Python verifier calls /api/state and asserts on JSON state
Infrastructure:
- GraderInput extended with mcpUrl + infinityAppUrl for parallel workers
- Each worker gets isolated ports (no cross-worker state contamination)
- CI workflow: pip install agisdk, clone webarena-infinity repo
* chore: switch eval configs back to kimi-k2p5
* fix: register deterministic graders in pass rate calculation
Add agisdk_state_diff and infinity_state to PASS_FAIL_GRADER_ORDER
in both runner types and the weekly report script so that scores
display correctly in the dashboard.
* chore: temp switch to opus 4.6 for eval run
* chore: restore kimi-k2p5 as default eval config
* ci: add timeout and continue-on-error for trend report step
* feat(llm): Minimax Chinese and International Users providers
* fix(llm): Patch for p2 bugs
* fix(agent): correct MiniMax base URL handling and enforce API key validation
* fix(agent): add minimax entry to PROVIDER_DISPLAY_NAMES
The Record<ProviderType, string> map in ChatError.tsx was missing
the new minimax key added in this PR, causing a typecheck failure.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: krish-mm <112251957+krish-mm@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(credits): move credits fetch to extension side using install_id
Extension now reads `browseros.metrics_install_id` pref directly and fetches
credits from `llm.browseros.com` without going through the bundled server.
Unblocks the referral submit flow in prod without requiring a BrowserOS
binary release.
- Revert `/credits` route change that added `browserosId` to the response.
- Add `getOrCreateBrowserosId()` helper reading from BrowserOS prefs.
- Add `CREDITS_GATEWAY` to shared EXTERNAL_URLS.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor(credits): drop fallback UUID, read install_id directly
Extension only runs inside BrowserOS, so the prefs API is always available.
The chrome.storage fallback was dead code that would generate a ghost ID
diverging from the server's install_id anyway. Rename the helper to match
its simpler contract.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(credits): guard against empty install_id pref
Address Greptile P1 — throw instead of silently fetching `/credits/null`
when `browseros.metrics_install_id` is unset. Fails loudly so the broken
state is observable rather than masquerading as a credits outage.
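A sketch of the guard, assuming the helper receives the raw pref value; the function name here is illustrative (the real helper is the renamed `getOrCreateBrowserosId` successor).

```typescript
// Throw loudly on an unset/empty install_id pref instead of letting the
// request become GET /credits/null downstream.
function requireInstallId(pref: string | null | undefined): string {
  if (!pref || pref.trim() === "") {
    throw new Error("browseros.metrics_install_id pref is unset");
  }
  return pref;
}
```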
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(agent): declare @browseros/shared as workspace dependency
The agent app imports @browseros/shared/constants/urls in
lib/referral/submit-referral.ts but never declared the package in its
dependencies, so vite failed to resolve the import during dev.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(referral): cap daily referral earnings at 500 credits
Block tweet submissions client-side once the user's balance reaches
500 to prevent unlimited credit farming via repeated shares.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(referral): randomize tweet variations for Twitter share
Replace the single hardcoded share text with 10 feature-specific
variations (agent mode, chat, scheduled tasks, connect apps, cowork,
workflows, memory, skills, local models, ad blocking) and pick one at
random each time the share button is clicked.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(referral): regenerate share URL on click
Previously getShareOnTwitterUrl() was evaluated once at render time as
a static href, so every click produced the same tweet variation. Move
the call into onClick so a new random variation is picked each time.
Addresses Greptile P1 review on PR #737.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(agent): clarify upstream provider rate-limit errors
When a non-BrowserOS provider (OpenAI, Anthropic, OpenRouter, etc.)
returned a 429, ChatError rendered the retry-wrapped message
"Failed after 3 attempts. Last error: The usage limit has been reached"
with a generic "Something went wrong" title, leading users to blame
BrowserOS for throttling imposed by their configured upstream.
Detect upstream 429s in parseErrorMessage, show the provider name in
the title ("OpenAI rate limit reached"), strip the retry prefix,
render the raw upstream message, and add clarifying subtext that
names the provider and explicitly excludes BrowserOS. Skip the
BrowserOS-specific ShareForCredits / survey / upgrade affordances on
this path — they do not apply.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review comments
- Tighten 429 pattern to \b429\b so it only matches the standalone
status code, not incidental substrings (model IDs, paths, etc.).
- Unwrap JSON-encoded provider error bodies on the upstream-rate-limit
path so users see the human-readable message instead of raw JSON.
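The tightened pattern can be sketched as below; the constant and function names are illustrative.

```typescript
// \b429\b only matches 429 as a standalone token, not as a substring of
// model IDs, version strings, or other digit runs.
const RATE_LIMIT_429 = /\b429\b/;

function isUpstream429(message: string): boolean {
  return RATE_LIMIT_429.test(message);
}
```

Since `\b` requires a word/non-word transition, `"gpt-4290"` does not match (the `0` after `429` is still a word character), while `"HTTP 429"` does.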
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(referral): show share rules and lower default daily limit fallback
Surface the three referral validation rules (must mention @browserOS_ai,
posted within last 30 minutes, single-use) directly in the ShareForCredits
UI so users understand submission requirements before pasting a tweet link.
Also align the UsagePage daily-limit fallback (used while credits load) with
the gateway default of 50.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(usage): handle credit balance exceeding daily limit
The "Credits used today" stat was computed as `dailyLimit - credits`,
which goes negative once a referral bonus pushes the balance above the
daily cap (e.g. balance 294 with cap 100 showed "-194 of 100"). Clamp
the math to zero and surface a separate "Bonus credits" stat when the
balance exceeds the daily allowance.
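The clamped math is small enough to sketch directly; the function and field names are illustrative, the numbers come from the commit's example.

```typescript
// Clamp "used today" at zero and surface the overflow as bonus credits
// when a referral bonus pushes the balance above the daily cap.
function usageStats(credits: number, dailyLimit: number) {
  const usedToday = Math.max(0, dailyLimit - credits);
  const bonusCredits = Math.max(0, credits - dailyLimit);
  return { usedToday, bonusCredits };
}
```

With balance 294 and cap 100 this yields 0 used and 194 bonus, instead of the previous "-194 of 100".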
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: add Twitter share referral UI and expose browserosId
When credits are exhausted, users now see a "Share on Twitter" CTA with
a pre-filled tweet URL and an input to paste their tweet link. Reusable
ShareForCredits component used in both ChatError and UsagePage. Server's
GET /credits now includes browserosId for the extension to pass to the
referral service.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: rebuild chat session on provider change
* fix: address Greptile review comments
- Move referral service URL to EXTERNAL_URLS
- Guard submitReferral on !response.ok
- Remove stale TODO comment
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: enable agent interaction with elements inside iframes
Fetch accessibility trees from all frames via Page.getFrameTree() +
per-frame Accessibility.getFullAXTree(frameId), so iframe elements
appear in snapshots with valid backendNodeIds. Pages without iframes
take the original single-call path with zero overhead.
Update snapshot tree builders to walk multiple RootWebArea roots from
merged multi-frame trees. Extract same-origin iframe content in the
markdown walker; show [iframe: url] placeholder for cross-origin.
* fix: namespace AX nodeIds by frameId to prevent cross-frame collisions
CDP AXNodeId values are frame-scoped — each frame's accessibility tree
starts its own counter from 1. Prefix nodeId and childIds with frameId
before merging so the nodeMap in snapshot builders never overwrites
nodes from a different frame.
* docs: add uBlock Origin install info to getting started and ad-blocking pages
Chrome dropped support for the full uBlock Origin extension — highlight
that BrowserOS brings it back and make it easy to install from both the
getting started guide and the dedicated ad-blocking page.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: revert Kimi partnership UI, restore daily limit survey
Remove Kimi/Moonshot AI partnership branding from the rate limit
banner, provider card, provider templates, and LLM hub. Restore
the original survey CTA on daily limit errors. Moonshot AI remains
as a regular provider template without the "Recommended" badge.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address Greptile review comments
- Guard survey CTA with !isCreditsExhausted to avoid showing it for
credits-exhausted users who already see "View Usage & Billing"
- Remove dead kimi-launch feature flag files (kimi-launch.ts,
useKimiLaunch.ts)
- Remove unused KIMI_RATE_LIMIT analytics events
- Remove VITE_PUBLIC_KIMI_LAUNCH from env schema and .env.example
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The merged PR (#661) injected custom entries into filteredModels, but
cmdk auto-scrolls to its first selected CommandItem, pushing the custom
entry out of view. Fix by using forceMount on a separate CommandGroup
and resetting scroll to top on every keystroke via requestAnimationFrame.
* feat: show custom model ID as first option in model selector
When typing in the model dropdown, the user's exact input now appears as the
first selectable row, followed by fuzzy search suggestions. This makes entering
custom model IDs intuitive — previously the option was hidden behind a
zero-results-only Enter shortcut that fuzzy search almost always prevented.
* fix: correct is_custom_model flag and prevent duplicate analytics events
- Use modelInfoList check instead of hardcoding is_custom_model: true in
the Enter key handler
- Add stopPropagation to prevent cmdk's root keydown handler from also
firing onSelect, which caused duplicate MODEL_SELECTED_EVENT emissions
* fix: install linux sysroot in configure, not via gclient hook
`gn gen` was failing on the arm64 leg with `Missing sysroot
(//build/linux/debian_bullseye_arm64-sysroot)`. The previous design
relied on `git_setup` writing `target_cpus` to `.gclient` so that
`gclient sync`'s DEPS hook would download the cross-arch sysroot. That
chain breaks for any chromium_src that was synced before cross-arch
support landed (the hook is gated on .gclient state at sync time) and
for partial pipeline runs that skip git_setup entirely. Nothing in
configure declared or verified its sysroot precondition.
Make configure self-healing: on Linux, invoke
`build/linux/sysroot_scripts/install-sysroot.py --arch=<target>`
directly before `gn gen`. install-sysroot.py is idempotent (stamp file
+ SHA check), fast when already installed, and decoupled from .gclient
— it's exactly what the failing assertion's error message recommends.
The script accepts our arch names directly: `x64` translates to `amd64`
internally via ARCH_TRANSLATIONS, and `arm64` is a valid pass-through.
Also temporarily pin release.linux.yaml to x64 only while we validate
the sysroot bootstrap end-to-end. Flip back to `[x64, arm64]` once
arm64 is green.
* chore: pin release.linux.yaml to arm64-only for sysroot bootstrap test
x64 already builds cleanly — the failing leg is arm64 cross-compile from
an x64 host. Pin the config to arm64 to exercise the new
install-sysroot.py path in configure without burning time on x64.
Flip back to [x64, arm64] once arm64 is green.
* feat(server): cache klavis createStrata to unblock /chat hot path
Conversation creation in /chat was blocking on a Worker-proxied
klavisClient.createStrata round-trip every time the user had any
managed Klavis app connected. The 5s KLAVIS_TIMEOUT_MS in the
ai-worker proxy existed specifically to bound this latency, but
the same cap also caused user-visible 504s on /klavis/servers/remove
since Strata DELETE operations routinely take >5s. Without caching
we couldn't raise the timeout without regressing chat creation.
This adds an in-process cache for Strata createStrata responses,
keyed by (browserosId, hashed sorted-server-set) and gated by a 1h
TTL. The cache stores only immutable JSON metadata (strataServerUrl,
strataId, addedServers); per-session MCP clients continue to be
opened and disposed by AiSdkAgent exactly as before, which keeps
the cache concurrency-safe by construction.
Cache invalidation has two layers: (a) the cache key embeds the
server set, so adding/removing apps naturally produces a different
key; (b) POST /klavis/servers/add and DELETE /klavis/servers/remove
explicitly call invalidate(browserosId) after their underlying
Klavis API call succeeds, as defense-in-depth.
Other changes:
- Consolidates klavis-related services into a new
apps/server/src/api/services/klavis/ directory; moves
register-klavis-mcp.ts -> strata-proxy.ts and adds strata-cache.ts
there. lib/clients/klavis/ stays unchanged.
- Refactors KlavisClient.removeServer into a low-level
deleteServersFromStrata(strataId, servers) primitive. The
cache-lookup + delete + invalidate orchestration moves up into
routes/klavis.ts where it belongs, eliminating the lib->api
layering inversion the original removeServer would have introduced.
- Uses Bun.hash (xxhash64) for fixed-width 16-hex-char keys, with
serverKey verified on read to make collision risk strictly zero.
- Dedupes concurrent fetches via in-flight Promise sharing, with
identity-checks before delete to avoid races between invalidate()
and a racing replacement insert.
Follow-up (separate PR): bump KLAVIS_TIMEOUT_MS to 30000 in
ai-worker/wrangler.toml so /klavis/servers/remove stops 504-ing.
* fix: address greptile review comments for klavis strata cache
- Drop dead `invalidated` field on InflightEntry. It was added to
support a "discard post-resolution if invalidated" check that I
later replaced with identity-checked deletes during self-review,
but I forgot to remove the field and the misleading comment
referencing it. Simplify Map<string, InflightEntry> to plain
Map<string, Promise<CacheEntry>>.
- Lower cache miss log from info to debug. Misses fire on every new
conversation; matching the existing debug-level for hits.
- Stop routing the /klavis/servers/remove handler through
klavisStrataCache.getOrFetch. The chat hot path keys its cache by
the user's full enabled-server set (e.g. hash('Gmail,Linear')),
so a single-server lookup here (hash('Gmail')) is guaranteed to
miss, write a spurious entry, and then have it immediately
cleared by invalidate() on the next line. Call createStrata
directly to recover the strataId, mirroring the original
removeServer flow.
`release.linux.yaml` now declares `architecture: [x64, arm64]` and the
runner loops the entire pipeline once per architecture. depot_tools
fetches both Linux sysroots automatically — `git_setup` idempotently
ensures `target_cpus = ['x64', 'arm64']` is in `.gclient` before
`gclient sync`, so cross-compiling arm64 from an x64 host just works.
The resolver returns `List[Context]` (single-element for the common
single-arch case), and `build/cli/build.py` loops `execute_pipeline` over
the per-arch contexts. Modules stay 100% arch-agnostic — no new
orchestration module, no new YAML schema beyond the list form.
Also fix a cross-compile bug in `build/modules/package/linux.py`: the
appimagetool binary must match the BUILD machine's arch (it executes
locally), not the target arch. Split into a host-keyed
`LINUX_HOST_APPIMAGETOOL` lookup vs the existing target-keyed
`LINUX_ARCHITECTURE_CONFIG`. Target arch is still passed to appimagetool
via the `ARCH` env var.
- build/common/resolver.py: scalar OR list `architecture` -> List[Context]
- build/cli/build.py: loop pipeline per arch, log multi-arch headers
- build/config/release.linux.yaml: `architecture: [x64, arm64]`
- build/modules/setup/git.py: idempotent `target_cpus` edit on Linux
- build/modules/package/linux.py: host vs target appimagetool split
- build/modules/package/linux_test.py: cover the host/target split
The --compile-only and --ci flags served overlapping purposes for CI
builds. Remove --compile-only entirely since --ci already handles the
CI use case (skip R2, skip prod env validation, local zip packaging)
and --no-upload covers the upload-skipping use case for full builds.
The server release CI workflow fails on ubuntu-latest because
patch-windows-exe.ts requires Wine to run rcedit. Thread the existing
--ci flag through compileServerBinaries so Windows PE metadata patching
is skipped in CI mode with a warning log.
* feat: add server release workflow
* fix: address PR review comments for 0331-add_server_release_workflow
* refactor: rework 0331-add_server_release_workflow based on feedback
* refactor: rework 0331-add_server_release_workflow based on feedback
* feat(cli): skip self-update prompts for package manager installs
Checks BROWSEROS_INSTALL_METHOD env var (npm, brew) and skips automatic
update checks. Users should use their package manager's update mechanism.
FormatNotice now shows the appropriate upgrade command based on install method.
* feat(cli): add npm bin wrapper for browseros-cli
* feat(cli): add npm postinstall script to download platform binary
Downloads the correct platform binary from GitHub releases during npm
install, verifies SHA256 checksums, and extracts to .binary directory.
* feat(cli): add npm package metadata and README
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: move npm package files to correct monorepo path
The bin wrapper and postinstall were created at apps/cli/npm/ instead of
packages/browseros-agent/apps/cli/npm/. Moves them to the correct location.
* style: use node: protocol for builtin module imports
* feat(cli): add Makefile npm targets and release workflow npm publish step
Adds npm-version and npm-publish Makefile targets for version sync.
Adds Node.js setup and npm publish step to the release workflow.
Adds npm/npx install instructions to release notes template.
* fix(cli): fail on missing checksum entry and limit redirect depth
- Abort if checksums.txt downloaded but archive entry is missing
- Warn if checksums.txt itself failed to download
- Cap redirect depth at 5 to prevent stack overflow on circular redirects
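The redirect cap can be sketched as a pure loop (the real postinstall does this inside its HTTP download helper); `resolve` here is an illustrative stand-in for one request that returns either a body or a redirect Location.

```typescript
type FetchResult = { location: string } | { body: string };

// Follow redirects iteratively with a hard depth cap, so circular
// redirects terminate with an error instead of recursing forever.
function followRedirects(
  url: string,
  resolve: (url: string) => FetchResult,
  maxDepth = 5,
): string {
  let current = url;
  for (let depth = 0; depth <= maxDepth; depth++) {
    const res = resolve(current);
    if ("body" in res) return res.body;
    current = res.location;
  }
  throw new Error(`Too many redirects (> ${maxDepth})`);
}
```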
* fix(cli): match install.sh checksum behavior — warn instead of abort
The existing shell installer (install.sh) warns and continues when the
checksum entry is missing from checksums.txt. Match that behavior in the
npm postinstall to avoid unnecessary install failures. Both files come
from the same GitHub release, so the checksum is a corruption check,
not a strong security boundary.
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The model picker in NewProviderDialog rendered inline, causing dialog
resizing and lacked keyboard navigation. Replace it with a Popover +
Command (shadcn Combobox) pattern and add fuse.js for fuzzy search.
- Replace custom ModelPickerList with Popover + Command dropdown
- Add fuse.js for fuzzy model search (replaces string.includes)
- Add MODEL_SELECTED_EVENT and AI_PROVIDER_UPDATED_EVENT analytics
- Enrich PROVIDER_SELECTED_EVENT with model_id in chat sessions
* feat: add browseros-cli self-updater
* fix: address review comments for 0327-cli_self_updater
* fix: address PR review comments for 0327-cli_self_updater
* fix: replace goreleaser with Makefile-based release build
Remove .goreleaser.yml (required Pro license for monorepo field) and
consolidate cross-compilation into `make release`. CI now uses the same
Makefile target, fixing a bug where POSTHOG_API_KEY was missing from
release ldflags.
* fix: address critical self-updater bugs from code review
- Fix SHA256 checksum mismatch: verify archive checksum before extraction
instead of verifying extracted binary against archive hash (was always
failing). Add VerifyChecksum() and integration test.
- Fix JSON field name mismatch: TypeScript was emitting camelCase
(publishedAt, archiveFormat) but Go expected snake_case
(published_at, archive_format). Manifest parsing was silently broken.
- Add decompression size limit (256 MB) to prevent zip/gzip bombs.
- Don't update LastCheckedAt on transient errors so retry happens on
next CLI invocation instead of waiting 24h.
* feat: add PostHog usage analytics to CLI
Add anonymous command-level analytics to browseros-cli using the PostHog
Go SDK. Tracks which commands are executed, their success/failure status,
and duration — no PII or person profiles.
- New analytics package with Init/Track/Close singleton
- Distinct ID resolves from server's browseros_id (server.json), falls
back to CLI-generated UUID (~/.config/browseros-cli/install_id)
- API key injected at build time via ldflags (dev builds = silent no-op)
- Server now writes browseros_id into server.json for cross-surface
identity correlation
* fix: address PR review feedback for #603
- Return "unknown" for unrecognized args in commandName to avoid
sending arbitrary user input to PostHog
- Revert goreleaser to {{ .Env.POSTHOG_API_KEY }} (intentional hard
fail — release builds must have the key set)
- go mod tidy to fix posthog-go direct/indirect marker
- Add POSTHOG_API_KEY to .env.production.example
* feat: upload CLI binaries to CDN during release and gate workflow to core team
- Extend scripts/build/cli/upload.ts with uploadCliRelease() that pushes
archives + checksums to R2 under versioned (cli/v{VERSION}/) and latest
(cli/latest/) paths, plus a version.txt for lightweight latest resolution
- Update scripts/build/cli.ts entry point with --release/--version/--binaries-dir
flags (existing no-args behavior preserved for upload:cli-installers)
- Rewrite install.sh and install.ps1 to fetch from cdn.browseros.com instead of
GitHub releases API — eliminates rate limits and API dependency
- Add environment: release-core to release-cli.yml for core-team gating via
GitHub environment protection rules
- Add Bun setup + CDN upload step to the workflow between build and GitHub release
* fix: address review feedback for PR #602
- Make loadProdEnv return empty map when .env.production is absent so
pickEnv falls through to process.env in CI (Greptile P1)
- Add semver format validation for version string in install.sh and
install.ps1 to guard against malformed CDN responses
- Pass inputs.version via env var instead of inline ${{ }} interpolation
to prevent command injection in workflow shell
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): fix hdiutil mount detection, update README with install/launch/init flow
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): remove -quiet from hdiutil so mount point is detected
* fix: add refresh indicator to chat history when fetching latest conversations
Show a non-blocking "Fetching latest conversations" indicator at the top
of the history list while the cached data is being refreshed. Users can
still interact with the cached conversation list during the refresh.
* perf: reduce chat history query payload — fetch last 2 messages instead of 5
The conversation list only displays the last user message as a preview.
Fetching 5 messages per conversation was wasteful — each message contains
the full UIMessage object (tool calls, reasoning, etc.) multiplied by
50 conversations per page. Reduced to last 2 which is sufficient to
find the last user message in a user→assistant exchange.
* perf: use first+DESC instead of last+ASC to push LIMIT down to SQL
PostGraphile's `last: N` doesn't map to SQL LIMIT — it uses a padded
LIMIT 10 and slices in application code. Changing to `first: 2` with
ORDER_INDEX_DESC generates a true SQL LIMIT 2, reducing rows scanned
from 500 to 100 per page (50 conversations × 2 vs 10 messages each).
No UX impact — extractLastUserMessage() filters by role regardless
of message order.
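Why the order flip is safe can be sketched as below; the message shape is a simplified stand-in, and the function mirrors the role-filtering behaviour attributed to `extractLastUserMessage()` rather than its actual code.

```typescript
interface Msg {
  role: "user" | "assistant";
  text: string;
}

// With first: 2 + ORDER_INDEX_DESC the window arrives newest-first, so the
// first user-role entry is the most recent user message. Filtering by role
// means the flip from last+ASC changes nothing for the preview.
function lastUserMessage(messagesDesc: Msg[]): string | undefined {
  return messagesDesc.find((m) => m.role === "user")?.text;
}
```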
* chore: update react query packages
* feat: replace localforage with idb-keyval
* fix: remove filesystem tools when no workspace is selected
- Make workingDir optional on ResolvedAgentConfig
- Remove resolveSessionDir() fallback that always created a session dir,
masking the no-workspace state and keeping filesystem tools available
- Gate buildFilesystemToolSet() on workingDir being defined
- Add workspace change detection mid-conversation — rebuilds the agent
session when workspace is added, removed, or switched (same pattern
as existing MCP server change detection)
- download_file falls back to tmpdir() when no workspace is set
- Memory/soul tools are unaffected — they use ~/BrowserOS/ paths
* fix: sanitize message history when session rebuilds with different tools
When a session is rebuilt due to workspace or MCP changes, the carried-over
message history may contain tool parts for tools that no longer exist in
the new session. The AI SDK validates messages against the current toolset
and rejects parts with no matching schema.
- Add toolNames getter to AiSdkAgent exposing registered tool names
- Add sanitizeMessagesForToolset() to strip tool parts referencing
removed tools from carried-over messages
- Apply sanitization in both MCP and workspace session rebuilds
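A sketch of the sanitization pass, under two assumptions: the part/message shapes are simplified stand-ins for the AI SDK UIMessage types, and tool parts are identified by a `tool-<name>` type prefix (an assumption about the part naming, not confirmed by the commit).

```typescript
interface Part {
  type: string; // e.g. "text", "reasoning", "step-start", or "tool-<name>"
}
interface Message {
  role: string;
  parts: Part[];
}

// Strip tool parts that reference tools absent from the new session's
// toolset; drop messages left empty; return the input untouched (same
// references) when nothing needed filtering.
function sanitizeMessagesForToolset(
  messages: Message[],
  toolNames: Set<string>,
): Message[] {
  let changed = false;
  const out: Message[] = [];
  for (const msg of messages) {
    const parts = msg.parts.filter((p) => {
      if (!p.type.startsWith("tool-")) return true; // non-tool parts survive
      return toolNames.has(p.type.slice("tool-".length));
    });
    if (parts.length === msg.parts.length) {
      out.push(msg); // untouched message: keep the same reference
    } else if (parts.length > 0) {
      out.push({ ...msg, parts });
      changed = true;
    } else {
      changed = true; // message became empty after stripping: drop it
    }
  }
  return changed ? out : messages;
}
```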
* fix: prepend tool-change context to user message on session rebuild
When workspace or MCP integrations change mid-conversation, prepend a
[Context: ...] block to the user's message explaining what changed.
This prevents the LLM from hallucinating tool usage based on patterns
in the carried-over conversation history.
Context messages vary by change type:
- Workspace removed: lists unavailable filesystem tools, suggests
selecting a working directory
- Workspace added: confirms filesystem tools are available with path
- Workspace switched: notes the new working directory
- MCP changed: notes that some integration tools may have changed
Only fires on the first message after a rebuild. Invisible in the UI.
* fix: make MCP change context specific about which apps were added/removed
Diff the old and new MCP server keys to produce specific context like:
- "The following app integrations were disconnected: Gmail, Slack."
- "The following app integrations were connected: Linear."
instead of a generic "some tools may no longer be available" message.
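The diff step can be sketched as below; the wording comes from the commit, the function names are illustrative.

```typescript
// Diff old vs new MCP server keys into connected/disconnected sets.
function diffServers(oldKeys: string[], newKeys: string[]) {
  const oldSet = new Set(oldKeys);
  const newSet = new Set(newKeys);
  return {
    disconnected: oldKeys.filter((k) => !newSet.has(k)),
    connected: newKeys.filter((k) => !oldSet.has(k)),
  };
}

// Render the specific context lines shown to the LLM.
function mcpChangeContext(oldKeys: string[], newKeys: string[]): string {
  const { disconnected, connected } = diffServers(oldKeys, newKeys);
  const parts: string[] = [];
  if (disconnected.length > 0) {
    parts.push(
      `The following app integrations were disconnected: ${disconnected.join(", ")}.`,
    );
  }
  if (connected.length > 0) {
    parts.push(
      `The following app integrations were connected: ${connected.join(", ")}.`,
    );
  }
  return parts.join(" ");
}
```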
* refactor: extract shared rebuildSession helper in ChatService
Eliminates the duplicated 20-line dispose→create→sanitize→store flow
that existed separately in both the MCP and workspace change-detection
blocks.
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* test: add sanitizeMessagesForToolset test suite
Tests for the message sanitization that runs when a session rebuilds
with a different toolset (workspace or MCP change mid-conversation):
- Preserves messages with no tool parts
- Preserves tool parts when tool is in the toolset
- Strips tool parts when tool is NOT in the toolset
- Strips multiple removed tool parts from same message
- Keeps browser tools while removing filesystem tools
- Removes messages that become empty after stripping
- Preserves non-tool parts (reasoning, step-start, file)
- Returns same references when no filtering needed
- Handles empty message array and empty toolset
* style: fix biome formatting in chat-service.ts
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
* feat: isolate new-tab agent navigation from origin tab
Add origin-aware navigation isolation so the agent never navigates
away from the new-tab chat UI. This is a two-layer defense:
1. Prompt adaptation: When origin is 'newtab', the system prompt's
execution and tool-selection sections are rewritten to prohibit
navigating the active tab and default all lookups to new_page.
2. Tool-level guards: navigate_page and close_page reject attempts
to act on the origin tab when in newtab mode, returning an error
that teaches the agent to self-correct.
The client now sends an `origin` field ('sidepanel' | 'newtab')
instead of injecting a soft NEWTAB_SYSTEM_PROMPT that LLMs could
ignore. Backwards compatible — defaults to 'sidepanel'.
Closes TKT-592, addresses TKT-564
* test: add newtab origin navigation guard tests
- 14 new prompt tests verifying the system prompt adapts correctly
for newtab vs sidepanel origin (execution rules, tool selection table,
absence of conflicting single-tab guidance)
- 6 new integration tests for navigate_page and close_page guards:
rejects origin tab in newtab mode, allows non-origin tabs, allows
all tabs in sidepanel mode, backwards compatible with no session
- Simplify CLI section: remove confusing MCP jargon, clarify it works
from terminal and AI coding agents
- Replace "point the CLI at your MCP server" with plain language
- Add Vertical Tabs to the features list
* feat(cli): add install scripts for macOS, Linux, and Windows
Bash script (install.sh) for macOS/Linux and PowerShell script
(install.ps1) for Windows. Both download the correct platform binary
from GitHub Releases with checksum verification, version resolution,
and PATH setup.
* fix(cli): address PR review comments for install scripts
- Add checksum verification to install.ps1 using Get-FileHash
- Add warnings on all checksum skip paths in install.sh
- Use grep -F (fixed-string) instead of regex for filename matching
- Add ?per_page=100 to GitHub API call in install.ps1
- Use random temp directory name in install.ps1 to avoid collisions
* fix(cli): address installer review feedback
* fix(cli): use full path for dist artifacts in release step
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): update goreleaser tag_prefix to match browseros-cli-v* format
* fix(cli): replace goreleaser with plain go build for releases
GoReleaser free version cannot parse prefixed tags (browseros-cli-v*).
monorepo.tag_prefix is a Pro-only feature.
Replaced with direct go build + gh release create:
- Builds all 6 targets with go build (verified locally)
- Creates tar.gz/zip archives with checksums
- Uses gh release create to publish
- No external tool dependency
GoReleaser free cannot parse slash-prefixed tags (cli/v0.0.1) as semver.
Switch to browseros-cli-v0.0.1 format which is valid semver after
stripping the prefix. Remove the monorepo config (GoReleaser Pro only).
* ci(cli): change release workflow to manual dispatch from main
- Trigger via Actions UI with a version input (e.g. "0.1.0")
- Only runs on main branch
- Creates git tag cli/v<version> automatically
- Then GoReleaser builds all 6 binaries and creates the GitHub Release
* feat: add scoped release notes, changelog PR, and idempotent tags to CLI workflow
- Add concurrency group to prevent parallel releases
- Add scoped release notes from commits touching the CLI directory
- Pass release notes to goreleaser via --release-notes flag
- Make tag creation idempotent for safe re-runs
- Tag the saved release SHA, not HEAD after branching
- Add CHANGELOG.md and auto-update via PR with auto-merge
- Add pull-requests: write permission
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
* fix: run codegen before wxt zip to generate graphql types
Add a compile-only mode to the server build pipeline for CI/CD
environments that don't have R2 credentials. The --compile-only flag
skips resource staging and upload, producing only compiled binaries.
* feat: create GitHub release with changelog on agent-sdk publish
After publishing to npm, the workflow now:
- Tags the commit as agent-sdk-v<version>
- Generates release notes from commits that modified the agent-sdk
directory since the last agent-sdk release tag
- Creates a GitHub release with those notes
First release will show "Initial release" since no previous tag exists.
* feat: update CHANGELOG.md on agent-sdk release
Add a CHANGELOG.md for @browseros-ai/agent-sdk and update the release
workflow to prepend a versioned entry with the release notes before
creating the GitHub release. The changelog is committed to main
automatically.
* fix: address review issues in agent-sdk release workflow
- Add explicit permissions: contents: write
- Replace sed with head/tail for safe CHANGELOG insertion (fixes
double-quote and backslash corruption in commit messages)
- Handle empty release notes with "No notable changes." fallback
- Make git tag idempotent for workflow reruns (2>/dev/null || true)
* fix: use PR with auto-merge for changelog updates
Direct push to main fails due to branch protection requiring PRs.
Instead, create a branch, open a PR, and auto-merge via squash.
* feat: add contributors and PR links to agent-sdk release notes
Release notes now include PR numbers (linked automatically by GitHub),
GitHub usernames for each commit author, and a contributors section
at the bottom. All scoped to commits that modified the agent-sdk path.
* fix: reorder release steps and fix tag/idempotency issues
- Capture release SHA before any branching so the tag always points
to the main commit that was built and published to npm
- Reorder: generate notes → publish → tag/release → changelog PR
(changelog is lowest-stakes, runs last)
- Make tag push and release create idempotent for safe re-runs
(fall back to gh release edit if release already exists)
- Add || true to gh pr merge --auto in case auto-merge is not enabled
- Explicit git checkout main before creating changelog branch
* fix: explicit error handling for tag/release and contributor dedup
- Replace silent || true guards with explicit checks that log what's
happening (tag exists, remote tag exists, release exists) so errors
are visible instead of swallowed
- Fix contributor dedup: use grep -qw (word match) instead of grep -qF
(substring match) so "dan" isn't excluded when "dansmith" exists
* fix: exclude current version tag when finding previous release
On re-runs, the current version's tag already exists on the remote, so
PREV_TAG resolves to it and git log produces empty output. Filter it
out so release notes are generated against the actual previous version.
* ci: prevent concurrent agent-sdk release runs
Add concurrency group so multiple dispatches queue instead of racing
on the same tag/release/PR.
* feat(cli): production-ready CLI with auto-launch, install, and cross-platform builds
- init: accept URL argument and --auto flag for non-interactive setup
- install: new command to download BrowserOS app for current platform
- launch: auto-detect and launch BrowserOS when server is not running
- discovery: prefer server.json (live) over config.yaml (may be stale)
- errors: actionable messages guiding users to init/install
- goreleaser: cross-platform builds for 6 targets (darwin/linux/windows × amd64/arm64)
- ci: GitHub Actions workflow to release CLI binaries on cli/v* tag push
* fix(cli): check health status code and add progress dots during launch
- Health check in newClient() now verifies HTTP 200, not just no error
- waitForServer prints dots during the 30s poll so users know it's working
* refactor(cli): make launch an explicit command, remove auto-launch from newClient
- launch: new explicit command to find and open BrowserOS app
- launch: probes server.json, config, and common ports before launching
- launch: if already running, reports URL instead of launching again
- init --auto: uses port probing to find running servers
- install --deb: errors on non-Linux instead of silently downloading DMG
- error messages: guide users to launch/install/init explicitly
- removed: auto-launch from newClient() — CLI never does something surprising
* fix(cli): platform-native detection, launch, and install for all OSes
Detection (isBrowserOSInstalled):
- macOS: uses `open -Ra` to query Launch Services (no hardcoded paths)
- Linux: checks /usr/bin/browseros (.deb), browseros.desktop, AppImage search
- Windows: checks %LOCALAPPDATA%\BrowserOS\Application\BrowserOS.exe
and HKCU/HKLM uninstall registry keys
Launch (startBrowserOS):
- macOS: `open -b com.browseros.BrowserOS` (bundle ID, not path)
- Linux: `browseros` binary, AppImage, or `gtk-launch browseros`
(fixed: was using xdg-open which opens by MIME type, not desktop files)
- Windows: runs BrowserOS.exe from known Chromium per-user install path
(fixed: was using `cmd /c start BrowserOS` which doesn't resolve)
Install (runPostInstall):
- macOS: hdiutil attach → cp -R to /Applications → hdiutil detach
- Linux: chmod +x for AppImage, dpkg -i instruction for .deb
- Windows: launches installer exe
- --deb flag now errors on non-Linux platforms
Removed auto-launch from newClient() — CLI never does surprising things.
Sources verified from:
- packages/browseros/build/common/context.py (binary names per platform)
- packages/browseros/build/modules/package/linux.py (.deb structure, .desktop file)
- packages/browseros/chromium_patches/chrome/install_static/chromium_install_modes.h
(Windows base_app_name="BrowserOS", registry GUID, install paths)
- /Applications/BrowserOS.app/Contents/Info.plist (bundle ID)
* fix: broaden connection error detection for main page and sidepanel
The connection error check required both "Failed to fetch" AND
"127.0.0.1" in the error message. On the main page, the browser
only produces "Failed to fetch" without the IP, so users saw a
generic "Something went wrong" instead of the troubleshooting link.
Broaden detection to also match "localhost" and bare "Failed to fetch"
errors that don't contain an external URL. Also pass providerType in
NewTabChat so provider-specific errors render correctly.
Closes #526
* fix: simplify connection error detection
All chat requests go through the local BrowserOS agent server, so any
"Failed to fetch" error is always a local connection issue. Remove the
unnecessary 127.0.0.1/localhost/URL checks.
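The simplified check reduces to one predicate; this is a sketch, with the error-to-UI wiring omitted, since every chat request targets the local agent server:

```typescript
// Sketch: because all chat requests go through the local BrowserOS agent
// server, any fetch-level failure can be treated as a local connection
// issue without inspecting the URL in the message.
function isConnectionError(err: unknown): boolean {
  return err instanceof Error && err.message.includes('Failed to fetch');
}
```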
* fix: pass providerType to agentUrlError ChatError instances
Port conflicts are expected — Chromium retries with a different port.
These errors were flooding Sentry (14k+ events) without user impact.
- handleStartupError: move Sentry.captureException below the
port-in-use check so it only fires for unexpected startup errors
- handleControllerStartupError: skip Sentry capture for port errors
- index.ts: exit early for port errors before Sentry capture
- Change dialog width from sm:max-w-2xl (672px) to sm:w-[70vw] sm:max-w-4xl
so it takes 70% of viewport width, capped at 896px
- Add overflow-x-auto on table wrappers so wide tables scroll horizontally
instead of being clipped
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: integrate models.dev for dynamic LLM provider/model data (#TKT-657)
Replace hardcoded model lists with data sourced from models.dev so new
providers and models appear automatically when the community adds them.
- Add build script (scripts/generate-models.ts) that fetches models.dev/api.json
and outputs a compact JSON with 10 providers and 520 models
- Replace hardcoded MODELS_DATA (50 models) with dynamic models.dev lookups
- Add searchable model combobox (Popover + Command) replacing plain Select dropdown
- Enrich provider templates with models.dev metadata (context window, image support)
- Keep chatgpt-pro, qwen-code, browseros, openai-compatible as hardcoded providers
* fix: address review — remove ollama-cloud mapping, fix default models, remove dead code
- Remove ollama from PROVIDER_MAP (ollama-cloud has cloud models, not local)
- Add ollama to CUSTOM_PROVIDER_MODELS with empty list (users type custom IDs)
- Update defaultModelIds to ones that exist in models.dev data:
openrouter → anthropic/claude-sonnet-4.5
lmstudio → openai/gpt-oss-20b
bedrock → anthropic.claude-sonnet-4-6
- Remove dead isCustomModel export
- Regenerate models-dev-data.json (9 providers, 486 models)
* fix: model suggestion list focus/dismiss behavior
- List only opens when input is focused or user types
- Clicking a model selects it and closes the list
- Clicking outside (blur) dismisses the list
- onMouseDown preventDefault on list items prevents blur race condition
* refactor: extract ModelPickerList component with proper open/close UX
- Collapsed state: Select-like trigger showing selected model + chevron
- Expanded state: search input + scrollable filtered list, inline
- Click outside or Escape to close, Enter to submit custom model
- Extracted as separate component (reduces dialog nesting, testable)
- No more setTimeout hacks for blur handling
* chore: remove plan doc from repo
* docs: add setup guides for ChatGPT Pro, GitHub Copilot, and Qwen Code
Add individual OAuth setup guide pages with step-by-step screenshots
for each provider. Add "Use Your Existing Subscription" section to the
Bring Your Own LLM page with card links to each guide. Register pages
in docs navigation.
* docs: add ChatGPT Pro setup screenshots
* docs: use custom provider icons for OAuth setup cards
* docs: inline SVG icons in provider cards for dark mode support
* docs: place provider icons above card titles
* feat: improve rate limit UX, usage page, and provider selector
- Show "Add your own provider for unlimited usage" CTA when BrowserOS
credits are exhausted or daily limit is reached
- Fix credit exhaustion detection to match actual error message
- Improve Usage page: remove disabled Add Credits button, add "Coming
soon" badge, add "Want unlimited usage?" section linking to providers
- Add "+ Add Provider" button at bottom of chat provider selector dropdown
* fix: use asChild pattern for Button+anchor in usage page
Replace nested <a><Button> (invalid HTML) with Button asChild
pattern per shadcn/ui convention.
* feat: UI improvements for OAuth dialog, provider badges, and events docs
- Replace OAuth device code toast with a proper Dialog showing the code
prominently with a copy button (GitHub Copilot, Qwen Code, ChatGPT Pro)
- Add "New" badge on provider template cards for ChatGPT Plus/Pro,
GitHub Copilot, and Qwen Code with orange border highlight
- Add events.md documenting all analytics events across the platform
* fix: add verificationUri to DeviceCodeDialog for popup-blocked fallback
Add verificationUri to PendingDeviceCode interface and pass it from
both handleClientAuth and handleServerAuth. Render a fallback "Open
verification page" link in DeviceCodeDialog so users can navigate
to the auth page if the popup was blocked.
- Add MCP promo banner on AI providers page with "New" badge and
"66+ tools" highlight, linking to /settings/mcp
- Add Quick Setup section on MCP settings page with copy-paste
commands for Claude Code, Gemini CLI, Codex, Claude Desktop, OpenClaw
- Consolidate MCP settings: move restart button inline with server URL,
remove separate MCP Server Settings card
- Add analytics event for promo banner clicks
* feat(eval): show mean score instead of pass/fail in report and viewer
* feat(eval): integrate NopeCHA CAPTCHA solver into eval pipeline
Add CAPTCHA detection and waiting so screenshots capture post-solve state.
Run headed with xvfb on CI since headless breaks extension content scripts.
- Add CaptchaWaiter module (detect reCAPTCHA/hCaptcha/Turnstile, poll until solved)
- Add optional `captcha` config block to EvalConfigSchema
- Wait for CAPTCHA solve before screenshot in single-agent and orchestrator-executor
- Patch NopeCHA manifest with API key before launching workers
- Fix CAPTCHA_EXT_DIR path (was pointing one level too high)
- Remove --incognito (extensions don't run in incognito; fresh user-data-dir isolates)
- CI: install xvfb, run headed via xvfb-run, pass NOPECHA_API_KEY secret
* fix: remove daily rate-limit middleware
The daily conversation rate limit is no longer needed. Remove the
middleware, RateLimiter class, fetch-config, error type, shared
constants, DB schema table, and integration tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove unused getDb() method
No longer needed after rate-limiter removal.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The eval's single-agent was passing raw task.query as the prompt,
without browser context (active tab URL, title). The agent didn't
know which page it was on, causing it to ask "which website?" instead
of browsing.
Use formatUserMessage() (same as chat-service.ts) to include browser
context in the prompt. Re-export formatUserMessage from agent/tool-loop.
* fix: prevent deleted scheduled tasks from reappearing after sync
When a scheduled task was deleted, the sync function would see the
remote job missing locally and re-add it, undoing the delete. Fix by
tracking pending deletions in storage so the sync function deletes
them from the backend instead of re-adding them locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use read-modify-write for pending deletions to prevent concurrent clobber
Re-read pendingDeletionStorage before write-back and only remove
resolved IDs, preserving any new entries added by concurrent
removeJob calls during the sync's network I/O.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
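The read-modify-write fix above can be sketched as follows; the store interface is hypothetical and shown synchronously for clarity (the real code is async extension storage, re-read after the sync's network I/O resolves):

```typescript
// Hypothetical synchronous stand-in for pendingDeletionStorage.
interface PendingDeletionStore {
  read(): string[];
  write(ids: string[]): void;
}

// Called once the backend has deleted some jobs. Re-reading just before
// write-back preserves IDs added concurrently by removeJob during the
// sync's network I/O; only the resolved IDs are removed.
function commitResolvedDeletions(
  store: PendingDeletionStore,
  resolved: Set<string>,
): void {
  const current = store.read(); // may contain entries added mid-sync
  store.write(current.filter((id) => !resolved.has(id)));
}
```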
The test workflow captured exit codes but never failed the job, so PR
checks always showed green even when tests failed. Exit with the
captured code in the summarize step so each suite properly reports
pass/fail. Not a required check, so failures remain non-blocking.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): switch to ubuntu-latest runner, add OE-Clado config
- Switch workflow from self-hosted Mac Studio to ubuntu-latest
- Install BrowserOS Linux .deb in CI (no self-hosted runner needed)
- Add browseros-oe-clado-weekly.json config for orchestrator-executor
- Fix report chart to show date+time (not just date)
- Make BROWSEROS_BINARY configurable via env var
* feat(eval): add NopeCHA captcha solver extension to eval runs
- Auto-load NopeCHA extension in eval Chrome instances
- Works in incognito + headless mode
- CI workflow downloads NopeCHA before eval
- extensions/ directory gitignored (downloaded at runtime)
* feat(eval): per-config concurrency — different configs run in parallel
* feat(eval): remove concurrency limit — all runs execute in parallel
* ci: run browseros tests on pull requests
* refactor: rework 0320-github_action_for_tests based on feedback
* chore: add CI artifacts to .gitignore
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove mikepenz/action-junit-report to fix check suite misattribution
The JUnit report action creates check runs that GitHub associates with the
CLA check suite instead of the Tests check suite, causing test reports to
appear under "CLA Assistant" in the PR checks UI.
Remove the action and rely on job status + step summary + artifact upload
for test result visibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): weekly eval pipeline with R2 uploads and trend dashboard
Add infrastructure for running weekly evaluations and tracking score
trends over time:
- Auto-generated output dirs: results/{config-name}/{timestamp}/
Each eval run gets its own timestamped folder, nothing is overwritten.
- upload-run.ts: uploads eval results to Cloudflare R2. Supports
uploading a specific run or all un-uploaded runs for a config.
- weekly-report.ts: generates an interactive HTML dashboard from R2
data. Config dropdown, trend chart with hover tooltips, searchable
runs table. Groups runs by config name.
- viewer.html: client-facing 3-column run viewer (task list,
screenshots with autoplay, agent stream with messages.jsonl).
Shows performance grader axis breakdown with per-axis scores.
- browseros-agent-weekly.json: weekly benchmark config (kimi-k2p5,
webbench-2of4-50, 10 workers, performance grader, headless).
- eval-weekly.yml: GitHub Actions workflow with cron (Saturday 6am)
and manual trigger. Runs on self-hosted Mac Studio runner.
Concurrency group ensures only one eval runs at a time.
- Dashboard updates: load previous runs, messages.jsonl viewer,
grade badges show percentages, async stream loading.
- Grader updates: timeout 30min, max turns 100, DOM content
verification guidance for performance grader.
* fix(eval): address Greptile review — injection, nested dirs, escaping
- Fix script injection in eval-weekly.yml: pass github.event.inputs
through env var instead of interpolating into shell
- Fix /api/runs to enumerate nested results/{config}/{timestamp}/ dirs
- Fix /api/load-run to allow single-slash run names (config/timestamp)
- Add HTML escaping for R2-sourced values in weekly-report.ts
- Escape axis names in viewer.html renderAxesBreakdown
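A typical escaping helper of the kind added for R2-sourced values looks like this (a sketch, not the exact function from weekly-report.ts; ampersands must be replaced first to avoid double-escaping):

```typescript
// Escape the five HTML-significant characters. Replacing & first ensures
// already-produced entities are not escaped a second time.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```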
* fix(eval): fix biome lint — non-null assertion, template literals
* fix(eval): fix biome errors — replace var with let, fix inner function declaration
* fix(eval): address Greptile P2 issues
- isRunDir: check all subdirs for metadata.json, not just first 3
- eval-runner: guard configPath for dashboard-driven runs (fallback to 'eval')
- load-run: default unknown termination_reason to 'failed' not 'completed'
* feat(eval): make BROWSEROS_BINARY configurable via env var
The OAuth callback server on port 1455 was bound eagerly at startup,
crashing the server if another BrowserOS instance was already running.
Rewrite as a lazy class (OAuthCallbackServer) that:
- Only binds port 1455 when the user initiates a ChatGPT Pro login
- Sends GET /cancel to any existing server on the port first, then
retries up to 5 times (follows Codex CLI's cancel+retry pattern)
- Exposes /cancel endpoint so other instances/tools can cancel us
- Releases the port after the OAuth callback arrives
- Device-code providers (GitHub Copilot, Qwen) never touch port 1455
This allows running eval, dev instances, and multiple BrowserOS
instances without port conflicts. OAuth login works on whichever
instance initiates it — the others continue without OAuth.
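The cancel-then-retry acquisition loop can be sketched as below; `tryBind` and `sendCancel` are hypothetical stand-ins for binding port 1455 and issuing GET /cancel against the current holder:

```typescript
// Sketch of the Codex-style cancel+retry pattern for a contended port.
async function acquireCallbackPort(
  tryBind: () => Promise<boolean>,   // attempt to listen on port 1455
  sendCancel: () => Promise<void>,   // GET /cancel on the current holder
  maxRetries = 5,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (await tryBind()) return true;            // port acquired
    if (attempt < maxRetries) await sendCancel(); // ask the holder to release
  }
  return false; // give up; OAuth proceeds on whichever instance holds the port
}
```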
* feat: auto-discover server port via ~/.browseros/server.json
Server writes its port to ~/.browseros/server.json on startup so the CLI
can auto-discover the server URL without requiring `browseros-cli init`.
Discovery chain: BROWSEROS_URL env > config.yaml > server.json > error
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
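The discovery chain reduces to a simple precedence check; field names here are hypothetical, with the order following the commit (BROWSEROS_URL env > config.yaml > server.json > error):

```typescript
// Hypothetical resolved inputs from the three discovery sources.
interface DiscoverySources {
  envUrl?: string;          // BROWSEROS_URL
  configUrl?: string;       // from config.yaml
  serverJsonPort?: number;  // from ~/.browseros/server.json
}

function discoverServerUrl(s: DiscoverySources): string {
  if (s.envUrl) return s.envUrl;
  if (s.configUrl) return s.configUrl;
  if (s.serverJsonPort) return `http://127.0.0.1:${s.serverJsonPort}`;
  throw new Error('no server found: set BROWSEROS_URL or run browseros-cli init');
}
```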
* fix: address review feedback for PR #504
- Use synchronous unlinkSync in stop() since process.exit() fires
immediately after, abandoning any pending async operations
- Wrap writeServerConfig in try/catch so a write failure doesn't crash
a healthy server for a convenience feature
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: type server discovery config and add version metadata
Add ServerDiscoveryConfig interface to @browseros/shared and enrich
server.json with server_version, browseros_version, and chromium_version.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: normalize URL from server.json for consistency
All other URL sources (env var, config.yaml) pass through
normalizeServerURL; apply the same to the server.json path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add voice recording UI with waveform overlay to new tab search bar
Add a microphone button to the NewTab search bar that opens a fullscreen
recording overlay powered by react-voice-visualizer. The overlay shows a
real-time waveform visualization during recording, recording time, and a
stop button. On completion, the audio is transcribed via the existing
gateway endpoint and the transcript auto-navigates to inline chat.
Changes:
- Extract transcribeAudio() to shared lib/voice/transcribe-audio.ts
- Add VoiceRecordingOverlay component with react-voice-visualizer
- Add Mic button to NewTab search bar
- Track analytics via existing NEWTAB_VOICE_* events
- Handle cancel (backdrop click) vs submit (stop button) correctly
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review comments for voice recording overlay
- Reset processingRef on transcription error to prevent stuck state
- Use stable callback refs to prevent useEffect re-runs from inline
arrow function props (fixes timer reset and unnecessary re-processing)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: replace voice overlay with inline sidepanel-style voice UI
Remove react-voice-visualizer dependency and VoiceRecordingOverlay.
Instead use the same inline voice pattern as the sidepanel ChatInput:
- Waveform bars replace the search input during recording
- Mic/stop/loading button states in the search bar
- Transcript populates the search input on completion
- Voice error shown inline below the search bar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add build smoke test to catch compile failures
Compiles the server binary (darwin-arm64) and verifies --version outputs
the correct version from package.json. Uses an empty resource manifest
and stub env vars so the test runs without R2 access or real secrets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback for PR #511
- Derive build target from process.platform/arch for CI portability
- Include binary stderr in --version assertion for better diagnostics
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharp is a native C module (libvips) whose .node binaries can't be
embedded in Bun compiled executables. It was imported at the top level
in copilot-fetch.ts, crashing the entire server at startup.
Replace with jimp (pure JavaScript, zero native deps) which bundles
cleanly into compiled binaries. Same resize algorithm preserved.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add Qwen Code as OAuth LLM provider with refactored OAuth hooks
Add Alibaba Qwen Code as a third OAuth provider using Device Code flow
with PKCE. Free tier: 2,000 requests/day, up to 1M token context.
Refactoring:
- Extract useOAuthProviderFlow hook (eliminates ~180 lines of duplicated
OAuth logic from AISettingsPage for ChatGPT Pro + Copilot + Qwen)
- Extract resolveOAuthConfig in config.ts (shared resolver for all OAuth
providers, parameterized by provider name, default model, refresh flag)
- Generalize token-manager device code flow to support PKCE
(code_challenge/code_verifier) and form-urlencoded content type
New code:
- Qwen Code provider config with PKCE + form encoding flags
- Provider factories (both provider.ts and provider-factory.ts)
- Extension UI (template card, models, analytics, dialog)
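The PKCE generalization above rests on the standard S256 pair (RFC 7636); a minimal sketch with Node's crypto module, function name illustrative:

```typescript
import { createHash, randomBytes } from 'node:crypto';

// PKCE pair (RFC 7636, S256 method): a random code_verifier and the
// base64url-encoded SHA-256 of it as the code_challenge.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString('base64url'); // 43-char verifier
  const challenge = createHash('sha256').update(verifier).digest('base64url');
  return { verifier, challenge };
}
```

The challenge goes out with the device code request; the verifier is held back and sent with the token exchange.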
* fix: use portal.qwen.ai as API base URL for OAuth tokens
DashScope (dashscope.aliyuncs.com) expects Alibaba Cloud API keys,
not OAuth tokens from chat.qwen.ai. The correct endpoint for OAuth
Bearer tokens is portal.qwen.ai/v1.
* fix: correct Qwen Code model IDs and context windows
- coder-model (1M context): virtual alias that routes to best model
- qwen3-coder-plus (1M): was incorrectly 131K
- qwen3-coder-flash (1M): new, speed-optimized variant
- qwen3.5-plus (1M): was incorrectly 1048576 (power-of-two vs decimal)
- Removed qwen3-coder-next (local/self-hosted, not available via OAuth)
- Default model changed to coder-model (auto-routes server-side)
* fix: move Qwen device code request to extension (bypasses WAF)
Alibaba WAF blocks server-side requests to chat.qwen.ai. Move the
initial device code request to the extension (browser context with
cookies), then hand off the deviceCode + codeVerifier to the server
for background polling via new POST /oauth/:provider/poll endpoint.
* fix: persist OAuth flow-started flag in sessionStorage
The flowStartedRef was lost when the component remounted (e.g. user
navigated to onboarding then back to settings). Use sessionStorage
to persist the flag so auto-create works after navigation.
* revert: remove sessionStorage for OAuth flow flag
Revert to simple useRef pattern matching the original ChatGPT Pro
implementation. The auto-create works when the user stays on the
AI settings page during auth.
* revert: move Qwen back to server-side device code flow
WAF block was temporary (rate-limiting), not permanent. Server-side
fetch to chat.qwen.ai now works. Reverted client-side device code
approach — Qwen now uses the same clean server-side flow as Copilot.
Removed: clientSideDeviceCode config, startClientSideDeviceCode(),
POST /oauth/:provider/poll endpoint, startDeviceCodePolling().
* feat: add WAF detection, rate-limit protection, and token storage endpoint
- Detect WAF captcha responses (HTML instead of JSON) in device code
request and token polling, with user-friendly error messages
- Add 30s cooldown on "USE" button to prevent rapid clicks triggering WAF
- WAF-blocked poll requests silently retry instead of aborting
- Add POST /oauth/:provider/token endpoint for storing externally-provided
tokens (useful for future fallback flows)
- Add storeTokens() method to OAuthTokenManager
- Pass server error messages through to extension toast notifications
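The WAF detection above boils down to noticing HTML where the OAuth endpoints normally return JSON; a heuristic sketch, helper name hypothetical:

```typescript
// Returns true when a response looks like a WAF captcha page rather than
// the expected JSON payload from the device code / token endpoints.
function looksLikeWafBlock(contentType: string | null, body: string): boolean {
  if (contentType?.includes('application/json')) return false;
  const head = body.trimStart().slice(0, 9).toLowerCase();
  return head.startsWith('<!doctype') || head.startsWith('<html');
}
```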
* refactor: remove 30s cooldown, simplify OAuth hook
The hook is now identical for all providers — server handles retries
via activeDeviceFlows.delete(). Removed flowStartedAtRef cooldown
that was blocking legitimate retries.
* feat: client-side OAuth for Copilot and Qwen Code
Move device code OAuth flow to the extension for GitHub Copilot and
Qwen Code. The extension makes requests using Chrome's network stack,
which bypasses Alibaba WAF TLS fingerprint detection that blocks
server-side Bun/Node.js fetch.
New files:
- client-oauth.ts: Client-side device code + PKCE + token polling
Changes:
- useOAuthProviderFlow: handleClientAuth() for providers with clientAuth
config, handleServerAuth() for others (ChatGPT Pro)
- AISettingsPage: clientAuth config for Copilot and Qwen Code
- WAF detection: opens provider site for captcha solving on block
Server-side device code flow preserved as fallback (token-manager.ts,
providers.ts). Token storage via POST /oauth/:provider/token endpoint.
* fix: export OAuthProviderFlowConfig type, fix typecheck errors
- Export OAuthProviderFlowConfig interface so AISettingsPage can use it
instead of duplicating the type inline
- Fix string | null → string | undefined for agentServerUrl parameter
Add CHATGPT_PRO_SUPPORT and GITHUB_COPILOT_SUPPORT feature flags gated
on minServerVersion 0.0.77. Hide template cards and provider type
dropdown options when the server doesn't support the OAuth endpoints.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add model selector to newtab search bar
Add AI provider/model selector button to the newtab homepage footer bar,
matching the existing button aesthetics (Workspace, Tabs, Apps). Reuses
ChatProviderSelector popover from sidepanel. Users can now see and change
their AI provider before starting a conversation from the newtab page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: clean up newtab footer with icon-only buttons
Reduce visual clutter in the search bar footer by converting Provider,
Workspace, and Tabs buttons to compact icon-only buttons (8x8). Text
labels and chevron indicators are removed — native title tooltips
provide discoverability on hover. Apps button on the right keeps its
text label per user preference.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add hover-expand labels to newtab footer icon buttons
Replace static title tooltips with smooth hover-expand animation —
buttons show icon-only by default, text label slides out on hover
via max-w transition. Gives a clean compact look while keeping
labels discoverable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: revert workspace/tabs to full text, keep provider hover-expand only
Restore full text labels for Workspace and Tabs buttons. Only the
provider selector uses the compact icon + hover-expand pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: simplify provider selector to plain icon button
Remove hover-expand animation, use a simple icon-only button with
native title tooltip for the provider selector.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GitHub Copilot as OAuth-based LLM provider
Add GitHub Copilot as a second OAuth provider using the Device Code flow
(RFC 8628). Users authenticate via github.com/login/device, and the server
polls for token completion. Supports 25+ models through a single Copilot
subscription.
Key changes:
- Device Code OAuth flow in token manager (poll with safety margin)
- Custom fetch wrapper injecting Copilot headers + vision detection
- Provider factory using createOpenAICompatible for Chat Completions API
- Extension UI with template card, auto-create on auth, and disconnect
* fix: address PR review comments for GitHub Copilot OAuth
- Validate device code response for error fields (GitHub can return 200
with error payload)
- Store empty refreshToken instead of access token for GitHub tokens
- Add closeButton to Toaster for dismissing device code toast
* fix: add github-copilot to agent provider factory
The chat route uses a separate provider-factory.ts (agent layer) from the
test-provider route (llm/provider.ts). Added createGitHubCopilotFactory
to the agent factory so chat works with GitHub Copilot.
* fix: add github-copilot to provider icons, models, and dialog
- Add Github icon from lucide-react to providerIcons map
- Add 8 Copilot models (GPT-4o, Claude, Gemini, Grok) to models.ts
- Add github-copilot to NewProviderDialog zod enum, validation skip,
canTest check, and OAuth credential message
* fix: reorder copilot models with free-tier models first
Put models available on Copilot Free at the top (gpt-4o, gpt-4.1,
gpt-5-mini, claude-haiku-4.5, grok-code-fast-1), followed by
premium models that require paid Copilot subscription.
* fix: set correct 64K context window for Copilot models
Copilot API enforces a 64K input token limit regardless of the
underlying model's native context window. Updated all model entries
and the default template to 64000 so compaction triggers correctly.
* fix: use actual per-model prompt limits from Copilot /models API
Queried api.githubcopilot.com/models for real max_prompt_tokens values.
GPT-4o/4.1 have 64K, Claude/gpt-5-mini have 128K, GPT-5.x have 272K.
Also updated model list to match what's actually available on the API
(e.g. claude-sonnet-4.6 instead of 4.5, added gpt-5.4/5.2-codex).
* feat: resize images for Copilot using VS Code's algorithm
Large screenshots cause 413 errors on Copilot's API. Resize images
following VS Code's approach: max 2048px longest side, 768px shortest
side, re-encode as JPEG at 75% quality. Uses sharp for server-side
image processing.
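The VS Code-style constraint reduces to a pure dimension calculation, sketched below (the sharp re-encode to JPEG q75 happens downstream; function name is illustrative):

```typescript
const MAX_LONG_SIDE = 2048;
const MAX_SHORT_SIDE = 768;

// Cap the longest side at 2048px and the shortest at 768px, preserving
// aspect ratio. Returns the input unchanged when both limits already
// hold, so callers can skip a needless re-encode.
export function targetDimensions(
  width: number,
  height: number,
): { width: number; height: number } {
  const long = Math.max(width, height);
  const short = Math.min(width, height);
  if (long <= MAX_LONG_SIDE && short <= MAX_SHORT_SIDE) return { width, height };
  // Scale by whichever limit is more restrictive.
  const scale = Math.min(MAX_LONG_SIDE / long, MAX_SHORT_SIDE / short);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```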
* fix: address all Greptile P1 review comments
- Add .catch() on fire-and-forget pollDeviceCode to prevent unhandled
rejection crashes (Node 15+)
- Add deduplication guard (activeDeviceFlows Set) to prevent concurrent
device code flows for the same provider
- Add runtime validation of server response in frontend before calling
window.open() and showing toast
- Remove dead GITHUB_DEVICE_VERIFICATION constant from urls.ts
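The dedup guard plus the `.catch()` on the fire-and-forget poll can be sketched together (module-level Set and function shape are assumptions, not the server's actual code):

```typescript
// At most one in-flight device-code flow per provider.
const activeDeviceFlows = new Set<string>();

// Returns false if a flow for this provider is already polling.
export function startDeviceFlow(
  provider: string,
  poll: () => Promise<void>,
): boolean {
  if (activeDeviceFlows.has(provider)) return false;
  activeDeviceFlows.add(provider);
  // Fire-and-forget with .catch() so a rejection can't crash the
  // process (unhandled rejections are fatal on Node 15+).
  poll()
    .catch((err) => console.error(`device flow for ${provider} failed`, err))
    .finally(() => activeDeviceFlows.delete(provider));
  return true;
}
```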
* fix: upgrade biome to 2.4.8, fix all lint errors, and address review bugs
- Upgrade biome from 2.4.5 to 2.4.8 (matches CI) and migrate configs
- Fix image resize: only re-encode when dimensions actually change
- Fix device code polling: retry on transient network errors instead of aborting
- Allow restarting device code flow (clear old flow instead of throwing 500)
- Fix pre-existing noNonNullAssertion and noExplicitAny lint errors globally
* fix: address Greptile P2 review — image resize and config guard
- Fix early-return guard: check max/min sides against their respective
limits (MAX_LONG_SIDE/MAX_SHORT_SIDE) instead of both against SHORT
- Preserve PNG alpha: detect hasAlpha and keep PNG format instead of
unconditionally converting to lossy JPEG
- Keep browserosId guard in resolveGitHubCopilotConfig consistent with
ChatGPT Pro pattern (safety check that caller context is valid)
* feat: update Copilot models to full list from pricing page, default to gpt-5-mini
Added all 23 models from GitHub Copilot pricing page. Ordered with
free-tier models first (gpt-5-mini, claude-haiku-4.5), then premium.
Changed default from gpt-4o to gpt-5-mini since it's unlimited on the
Pro plan and has 128K context (vs gpt-4o's 64K limit).
* fix(skills): read-only view mode for built-in skills
- SkillCard shows Eye icon + "View" for built-in, Pencil + "Edit" for user
- SkillDialog in read-only mode: disabled fields, no toolbar on markdown
editor, "View Skill" title, "Close" button, no "Update Skill"
- Hide tip section in read-only mode
* fix(skills): use react-markdown for read-only skill view
Replace MDXEditor with react-markdown for viewing built-in skills.
MDXEditor chokes on code fences, angle brackets, and image syntax,
causing content truncation. react-markdown handles standard markdown
correctly with no rendering issues.
* Revert "feat: convert settings to popup dialog (#477)"
This reverts commit 42aa0ff1ef.
* fix: address review feedback for PR #498
- Remove erroneous SETTINGS_PAGE_VIEWED_EVENT tracking from SidebarLayout
(was firing on every non-settings page navigation)
- Fix mobile settings sidebar not closing on route change by merging
setMobileOpen(false) into the pathname-dependent analytics useEffect
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: select text and pass to sidepanel
* fix: lint issues
* fix: persist selection across tabs
* fix: review comments
* fix: change when the selection is cleared
* feat: sanitize url
* fix(skills): UI section separation and fix find-alternatives rendering
- Split skills page into "My Skills" (user) and "BrowserOS Skills" (built-in) sections
- Fix find-alternatives SKILL.md — replace angle bracket placeholders with curly
braces to prevent MDXEditor from parsing them as JSX and rendering empty content
* fix(skills): bump find-alternatives to v1.1 for CDN sync
* feat: updated chat ui from homepage
* fix: vertical scroll
* fix: horizontal scroll issue
* fix: lint issues
* fix: header width
* fix: message input from home to chat
* feat: created sidebar header support in new tab chat
* fix: remove history from new tab chat
* fix: remove the shared element transition
* fix: lint issues
* fix: review comments
* fix: defer the sendMessage callback
* fix: all code concerns
* fix: preserve state of chat on homepage
* fix: review comments
* fix(skills): separate built-in and user skills into distinct directories
- Move built-in skills to ~/.browseros/skills/builtin/, user skills stay in root
- Unify seed + sync into single syncBuiltinSkills() function, delete seed.ts
- Preserve user's enabled/disabled state during remote sync version updates
- Add catalog reconciliation — remove built-in skills dropped from remote catalog
- Fallback to bundled defaults per-skill when remote sync fails
- One-time migration moves existing default skills from root to builtin/
- Add builtIn field to SkillMeta, determined by directory (not metadata)
- UI shows "Built-in" badge, hides delete button for built-in skills
- Reject deletion of built-in skills in service layer
- Check both dirs for ID collision on skill creation
* fix(skills): address review — dedup by id, guard applyEnabled regex
- loader.ts: deduplication now keys on skill.id (directory slug) not
skill.name (display name), preventing silent drops on name collision
- remote-sync.ts: applyEnabled checks if regex matched before writing,
logs warning if remote content lacks an enabled field
* fix(skills): reconciliation preserves bundled defaults, delete returns 403
- reconcileRemovedSkills now keeps DEFAULT_SKILLS IDs in the safe set,
preventing a delete-then-reinstall cycle that lost enabled:false state
- DELETE /skills/:id returns 403 for built-in skills instead of 500
* refactor(skills): simplify syncBuiltinSkills to single clean pass
Build content map (bundled + remote), iterate once, preserve enabled,
reconcile deletions. Removes 7 helper functions, 70 lines of code.
* refactor(skills): extract syncOneSkill, patch content before writing
- syncBuiltinSkills is now 15 lines: build map, iterate, clean up
- syncOneSkill: flat, patches enabled state before writing (single write)
- setEnabled: pure function for content patching
- removeObsoleteSkills: extracted from inline block
* feat: convert settings page to popup dialog, move workflows to main nav
Replace the dedicated settings page layout (SettingsSidebarLayout) with a
modal dialog (SettingsDialog) that opens on top of the current page. Settings
are now accessible via a dialog triggered from the main sidebar, eliminating
the confusing dual-sidebar navigation pattern.
- Create SettingsDialog with tabbed left panel and content area
- Move Workflows into main sidebar navigation (feature-gated)
- Remove /settings/* routes (except /settings/survey)
- Delete SettingsSidebarLayout and SettingsSidebar components
- Update backward compatibility redirects
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: setup new urls for the dialog box
* fix: dialog close button
* fix: settings analytics
* fix: address review comments
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add ChatGPT Pro OAuth as LLM provider
Adds OAuth 2.0 (Authorization Code + PKCE) flow so users can authenticate
with their ChatGPT Pro subscription to power BrowserOS's agent, matching
the pattern used by Codex CLI, OpenCode, and Pi.
Server:
- OAuth token lifecycle (PKCE, exchange, refresh, SQLite storage)
- Dedicated callback server on port 1455 (Codex client ID registration)
- Codex fetch wrapper routing API calls to chatgpt.com/backend-api
- Config resolution + provider factories for all code paths (chat, test, refine)
Extension:
- ChatGPT Pro template card with OAuth flow trigger
- Status polling hook + auto-create provider on auth success
- Model list with Codex-supported models (gpt-5.x-codex family)
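The PKCE half of the flow (RFC 7636, S256 method) is small enough to sketch; helper names are illustrative, not the server's actual exports:

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url per RFC 7636: standard base64 with URL-safe alphabet and
// padding stripped.
export function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Generate a code_verifier and its S256 code_challenge. The challenge
// goes in the authorization request; the verifier is sent later in the
// token exchange so the server can confirm the same client is calling.
export function createPkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43-char URL-safe string
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```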
* fix: address Greptile PR review comments
- Wire OAuth callback server stop handle into onShutdown (P1: port 1455 leak)
- Guard against missing refresh token + clear stale tokens on failed refresh (P1)
- Add logger.warn to silent catch in codex-fetch body mutation
- Document JWT trust assumption in parseAccessTokenClaims
- Source model ID from provider template instead of hard-coding
* simplify: remove unnecessary OAuth shutdown wiring and useCallback
- Revert OAuthHandle interface — callback server port releases on process exit
- Remove stopCallbackServer from shutdown flow (dead code)
- Remove all useCallback from useOAuthStatus per CLAUDE.md guidance
* style: add readonly modifiers and braces per TS style guide
* docs: add E2E test screenshots for ChatGPT Pro OAuth
* fix: strip item IDs from Codex requests to fix multi-turn conversations
* fix: preserve function_call_output IDs in Codex requests
* fix: resolve Codex store=false + tool-use incompatibility
- Pass providerOptions { openai: { store: false } } to ToolLoopAgent
so the AI SDK inlines content instead of using item_reference
- Strip item IDs and previous_response_id in codex-fetch (safety net)
- Use .responses() model (Codex only speaks Responses API format)
* fix: remove non-Codex model gpt-5.2 from chatgpt-pro model list
* fix: strip unsupported Codex params and update model list
- Strip temperature, max_tokens, top_p from Codex requests (unsupported)
- Add all available Codex models including gpt-5.4, gpt-5.2, gpt-5.1
* chore: remove screenshots containing email
* feat: enable reasoning events for ChatGPT Pro Codex models
* chore: set reasoning effort to high for ChatGPT Pro
* feat: add configurable reasoning effort and summary for ChatGPT Pro
- Add reasoningEffort (none/low/medium/high) and reasoningSummary
(auto/concise/detailed) dropdowns in the Edit Provider dialog
- Pass through extension → chat request → agent config → providerOptions
- Defaults: effort=high, summary=auto
* fix: strip max_output_tokens from Codex requests (fixes compaction)
* fix: address Greptile P1 issues
- Fix default model fallback: gpt-4o → gpt-5.3-codex (Codex endpoint)
- Clear stale tokens on refresh failure (prevents infinite retry loop)
- Only auto-create provider after explicit OAuth flow, not on page load
- Add catch block to auto-create effect with error toast
* feat: add remote skill download and auto-sync
Download default skills from remote catalog on first setup with
bundled fallback when offline. Background sync every 45 minutes
checks for new/updated skills without overwriting user-customized
ones. Tracks installed defaults via content hashes in a local
manifest file.
* feat: make skills catalog URL configurable and add generation script
Add SKILLS_CATALOG_URL env var (following CODEGEN_SERVICE_URL pattern)
with fallback to the default constant. Add script to generate
catalog.json from bundled defaults for static hosting.
* feat: add R2 upload script and use cdn.browseros.com for catalog URL
Add upload-skills-catalog.ts that generates and uploads catalog.json
to Cloudflare R2 (same infra as existing build artifacts). Update
default catalog URL to cdn.browseros.com/skills/v1/catalog.json.
* test: add E2E tests for remote skill sync against live CDN
* fix: address code review findings — security, validation, DRY
- Add path traversal protection via safeSkillDir in writeSkillFile
and readSkillContent (reuses existing validation from service.ts)
- Add runtime type guards for catalog JSON and manifest JSON parsing
- Fix seedFromRemote to return false on partial failure so bundled
fallback kicks in
- Add per-skill error handling in syncRemoteSkills so one bad skill
doesn't crash the entire sync
- Wire stopSkillSync into Application.stop() shutdown path
- Extract version from frontmatter in seedFromBundled instead of
hardcoding '1.0'
- Consolidate duplicated logic: reuse installSkill/writeSkillFile/
contentHash/saveManifest from remote-sync.ts in seed.ts
- Extract shared catalog generation into scripts/catalog-utils.ts
* test: add flow tests for all four sync scenarios against live CDN
* refactor: remove redundant scripts and inline catalog generation
Drop generate-skills-catalog.ts, catalog-utils.ts, and
e2e-remote-sync.test.ts (covered by flows.test.ts). Inline
catalog generation into upload-skills-catalog.ts.
* test: add full E2E server flow test against live CDN
Tests all 7 steps of the real server lifecycle: fresh seed from CDN,
no-op sync, user edit preservation, skill reinstall, custom skill
protection, background timer firing, and second startup skip.
* chore: remove e2e-server-flow test
* fix: address Greptile review — entry validation, size limit, DRY, no-op saves
- Validate individual skill entries in catalog (id, version, content
must all be strings) not just the top-level shape
- Add 1MB response size limit on catalog fetch to prevent resource
exhaustion from compromised/misconfigured CDN
- Skip manifest save when sync cycle had no changes (avoids
unnecessary disk I/O every 45 minutes)
- Share extractVersion via remote-sync.ts export, remove duplicate
from seed.ts
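The per-entry validation can be sketched as a runtime type guard; the entry shape (string id, version, content) is taken from the commit message, and skipping malformed entries matches the "one bad skill doesn't crash the entire sync" behaviour:

```typescript
export interface CatalogEntry {
  id: string;
  version: string;
  content: string;
}

// Validate one entry: every field must be a string, not just present.
export function isCatalogEntry(value: unknown): value is CatalogEntry {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.version === "string" &&
    typeof v.content === "string"
  );
}

// Reject a non-array top level outright; drop malformed entries so the
// rest of the catalog still syncs.
export function parseCatalog(json: unknown): CatalogEntry[] {
  if (!Array.isArray(json)) throw new Error("catalog must be an array");
  return json.filter(isCatalogEntry);
}
```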
* fix: prevent bundled fallback from overwriting partial remote seeds
When seedFromRemote partially fails, the bundled fallback now skips
skills already in the manifest (installed by the partial remote
seed). Also adds Content-Length early check before downloading the
full catalog response body.
* fix: run sync immediately on startup, not just on interval
Previously the first sync fired 45 minutes after boot. Now
startSkillSync runs one sync immediately so returning users
get skill updates right away.
* refactor: simplify sync — remote always wins, remove manifest
The remote catalog is the source of truth. If a skill exists in the
catalog, its version is compared against the local frontmatter version,
and the local copy is overwritten when the remote is newer. No manifest
file, no content hashes.
User-created skills (IDs not in catalog) are never touched.
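A sketch of the "remote always wins" rule (dotted-number version comparison and function names are assumptions; the real sync reads versions from skill frontmatter):

```typescript
// Compare dotted version strings numerically: "1.10" > "1.9".
export function isNewer(remote: string, local: string): boolean {
  const a = remote.split(".").map(Number);
  const b = local.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (a[i] ?? 0) - (b[i] ?? 0);
    if (diff !== 0) return diff > 0;
  }
  return false;
}

// Overwrite only catalog-owned skills whose remote version is newer;
// IDs absent from the catalog are user-created and never touched.
export function shouldOverwrite(
  catalog: Map<string, string>, // skill id → remote version
  id: string,
  localVersion: string,
): boolean {
  const remote = catalog.get(id);
  if (remote === undefined) return false;
  return isNewer(remote, localVersion);
}
```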
* fix: skip bundled skills already installed by partial remote seed
* chore: remove unreliable Content-Length check
* chore: remove size limit checks, fetch timeout is sufficient
* feat: add "Rewrite with AI" prompt refinement for scheduled tasks
Add a lightweight /refine-prompt endpoint that uses generateText to
rewrite rough scheduled task prompts into clear, actionable instructions.
The UI adds a sparkle-icon button next to the Prompt label in the
NewScheduledTaskDialog with loading state, undo support, and disabled
state when the textarea is empty.
* fix: clear stale undo ref on dialog re-open and pass providerId to refinePrompt
- Reset originalPromptRef when dialog opens and on form submit to
prevent stale "Undo rewrite" button on re-open
- Accept optional providerId in refinePrompt() so the form's selected
provider is used for refinement instead of always the system default
* fix: hide undo rewrite link while refinement is in flight
* fix: reset isRefining state on dialog re-open
* fix: ignore stale refine-prompt responses after dialog re-open
Use a request generation counter so that if the dialog is closed and
re-opened while a rewrite is in flight, the stale response is silently
discarded instead of overwriting the fresh form state.
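The generation-counter pattern reduces to a few lines; this is a framework-free sketch (the real code keeps the counter in a React ref):

```typescript
// Each invalidate() (dialog open/close) bumps the generation. A request
// captures the generation when it starts; its response is applied only
// if the generation still matches when it lands.
export function createGenerationGuard() {
  let generation = 0;
  return {
    invalidate(): void {
      generation++;
    },
    begin(): number {
      return generation;
    },
    isStale(requestGeneration: number): boolean {
      return requestGeneration !== generation;
    },
  };
}
```

Usage: call `begin()` before firing the fetch, then check `isStale(id)` in the `.then` handler and drop the result silently if it returns true.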
* fix: invalidate stale refine requests on dialog reopen and rename to kebab-case
- Increment refineRequestIdRef on dialog open so in-flight requests
from a previous session are discarded when they complete
- Rename refinePrompt.ts to refine-prompt.ts per CLAUDE.md file naming
* feat: add voice input to agent chat sidebar
Allow users to record voice and transcribe to text in the chat input.
Mic button shows when input is empty, waveform visualizer during recording,
transcription via OpenAI (llm.browseros.com/api/transcribe).
- Extract shared useVoiceInput hook to lib/voice/
- Time-domain waveform bars that bounce per-frequency-band
- Bar height capped to fit input container
- Analytics events for recording lifecycle
* fix: address review — add fetch timeout, await stopRecording, deduplicate VoiceInputState
- Add AbortSignal.timeout(30s) to transcription fetch
- Await stopRecording() and track analytics after completion
- Export VoiceInputState from useVoiceInput, import in consumers
* fix: await startRecording before tracking, narrow SurveyChat effect deps
- Await startRecording() so analytics only fires after mic permission granted
- Narrow SurveyChat useEffect dependency from [voice] to [voice.transcript, voice.isTranscribing]
* fix: analytics only tracks on success, clean up stream on failure, type API response
- startRecording returns boolean; track(RECORDING_STARTED) only fires on success
- Catch block cleans up MediaStream tracks and AudioContext on partial failure
- Type transcription API response with TranscribeResponse interface
* fix: keep mic button always visible alongside send button
Mic and send are now separate buttons, both always visible.
Mic is disabled while AI is streaming. Send is disabled during
recording/transcribing. Buttons are no longer absolutely positioned
inside the textarea — they sit beside it in the flex row.
* fix: keep mic button always visible inside input alongside send
Both mic and send buttons are always visible inside the input field,
positioned on the right side (ChatGPT-style). Mic is disabled while
AI is streaming. Send is disabled during recording/transcribing.
* fix: remove unreachable CSS branch in recording waveform div
* feat: add CDP UI inspector script for dev self-testing
* fix: address code review feedback for inspect-ui script
- Use Delete key (not Backspace) to match server's keyboard.ts clearField
- Add windowId resolution to open-sidepanel (chrome.sidePanel.open requires it)
- Make target matching case-insensitive
- Replace process.exit(1) in eval with thrown error for proper cleanup
- Add comment referencing DEV_PORTS source of truth
* docs: add self-testing workflow for UI changes via CDP inspector
* fix: runtime fixes for inspect-ui discovered during live testing
- Remove Input.enable (domain has no enable method)
- Add DOM.getDocument before DOM operations (required by protocol)
- Use BrowserOS-specific sidePanel.browserosToggle API instead of
standard chrome.sidePanel.open (side panel starts disabled)
- Enable side panel with setOptions before toggling
* feat: add test-ui skill for visual testing of agent extension UI
Adds a Claude Code skill that lets the agent visually test both
surfaces of the BrowserOS extension:
- New tab page (app.html) — left sidebar with Home, Scheduled Tasks,
Settings, Skills, Memory, Soul, Connect Apps
- Right side panel (sidepanel.html) — chat interface
Includes all gotchas discovered through real testing: randomized ports,
fresh profile onboarding redirect, stale element IDs after navigation,
BrowserOS-specific sidePanel APIs, DOM.getDocument requirement.
* feat: add press_key, scroll, hover, select_option, wait_for to inspect-ui
Brings inspect-ui.ts to parity with server's MCP input tools:
- press_key: key combos like Enter, Control+A, Meta+Shift+P
(ported from keyboard.ts pressCombo)
- scroll: up/down/left/right with configurable amount
- hover: hover over element by ID for tooltip/hover state testing
- select_option: select dropdown option by value or visible text
(ported from browser.ts selectOption)
- wait_for: poll for text or CSS selector with 10s timeout
Updated skill documentation with new commands and examples.
* docs: prefer snapshot over screenshot, add holistic debugging guidance
- Add snapshot vs screenshot guidance table — prefer snapshot for
structural checks, screenshot only for visual/layout verification
- Add server log checking instructions ([agent], [server], [build] tags)
- Add JS error checking via eval
- Add API connectivity verification
- Add common issues troubleshooting table
- Update all examples to use snapshot as default verification
* fix: address Greptile review feedback
- Replace process.exit(1) with process.exitCode + return in cmdWaitFor
to allow async CDP cleanup in finally blocks
- Fix cmdScroll enabling Runtime instead of Page domain
- Add BROWSEROS_EXTENSION_ID env var override for extension ID
- Align CLAUDE.md dev server command with SKILL.md canonical command
take_snapshot only used the AX tree, which misses custom components
(cursor:pointer divs, onclick handlers, etc.) that lack ARIA roles.
These elements appeared as role="generic" and were invisible to the agent.
Changes:
- Merge findCursorInteractiveElements into snapshot() so take_snapshot
catches cursor:pointer, onclick, and tabindex elements
- Add DisclosureTriangle to INTERACTIVE_ROLES for <summary> elements
- Use aria-label as text fallback in cursor detection for icon-only buttons
- Fix dedup bug in enhancedSnapshot that was silently dropping all
cursor-detected elements by checking against all AX node IDs instead
of only already-included output IDs
- Add hover_at, type_at, drag_at coordinate tools to server
- Add hoverAt, typeAt, dragAt methods to Browser class
- Export server internals (browser, tool-loop, registry) for eval imports
- Copy eval app from enterprise repo with agents, graders, runner, dashboard
- Nest eval-targets inside apps/eval
- Adapt sessionExecutionDir → workingDir for current server API
- Add biome ignore for dashboard HTML to prevent lint breaking onclick handlers
* feat: add get_console_logs tool to surface browser console output
Captures Runtime.consoleAPICalled, Runtime.exceptionThrown, and
Log.entryAdded CDP events per page with a FIFO ring buffer (500 entries).
- ConsoleCollector: per-page buffers with O(1) session routing via Map lookup
- Session-aware CDP event dispatching (onSessionEvent) in CdpBackend
- Log.enable() added alongside Runtime.enable() in attachToPage
- Single tool with level hierarchy, text search, limit, and clear params
- Buffer clears on main-frame navigation, cleaned up on page close
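A sketch of the per-page FIFO buffer and its query surface (class and field names are illustrative, not the server's exports; the 500-entry cap and level/search/limit params come from the commit message):

```typescript
export interface ConsoleEntry {
  level: "debug" | "info" | "warn" | "error";
  text: string;
  timestamp: number;
}

export class ConsoleRingBuffer {
  private entries: ConsoleEntry[] = [];
  constructor(private readonly capacity = 500) {}

  push(entry: ConsoleEntry): void {
    this.entries.push(entry);
    if (this.entries.length > this.capacity) this.entries.shift(); // drop oldest
  }

  // Level hierarchy + substring search; slice(-limit) keeps the newest.
  query(
    minLevel: ConsoleEntry["level"] = "debug",
    search = "",
    limit = 100,
  ): ConsoleEntry[] {
    const order = ["debug", "info", "warn", "error"];
    const min = order.indexOf(minLevel);
    return this.entries
      .filter((e) => order.indexOf(e.level) >= min && e.text.includes(search))
      .slice(-limit);
  }

  clear(): void {
    this.entries = []; // e.g. on main-frame navigation
  }
}
```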
* fix: address review — handle session re-attach, remove dead code
- ConsoleCollector.attach() now updates session mapping on re-attach
instead of early-returning, preventing silent event drops after
target detach/re-attach (e.g. tab crash, cross-process navigation)
- Remove unused clearConsoleLogs() and ConsoleCollector.clear()
* feat: add per-task LLM provider selection for scheduled tasks
Allow users to choose which AI provider a scheduled task runs with,
using the same ChatProviderSelector component from the new-tab page.
Falls back to the global default provider when none is selected or
if the selected provider has been deleted.
* fix: lint issues
* chore: updated to latest schema.graphql file
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
The AI SDK can produce assistant messages with empty parts (parts:[]) when
a stream is aborted, and providers reject assistant messages with empty text
content. This adds a validation utility that filters both cases before
sending messages to createAgentUIStreamResponse and when persisting them.
Mintlify deploys docs by cloning the repo but does not run `git lfs
pull`. The `.gitattributes` rule `docs/images/** filter=lfs` caused
all doc images to be stored as ~130-byte LFS pointer files, which
Mintlify served as-is — breaking every image on the site.
Removing the LFS rule and re-adding the files as regular git blobs
fixes all images without changing any paths or MDX files.
Also fixes broken Slack link placeholder in troubleshooting page.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Images in docs/images/ are served as broken 130-byte placeholders by
Mintlify CDN. Co-locating images with the MDX file (matching the
working pattern in features/workflow/ and features/cowork/) bypasses
this issue. Also fixes the Slack link placeholder.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: fallback to default BrowserOS provider when provider is null
When the extension first loads, provider config is loaded async from
storage. If a chat request fires before loading completes (race
condition), provider is null and the server receives provider: undefined,
causing a Zod validation error. This adds a fallback to
createDefaultBrowserOSProvider() in both chat paths (sidepanel and
scheduled tasks) so provider.type is always defined.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: fallback to first provider when default provider ID is stale
When defaultProviderId in storage doesn't match any loaded provider
(e.g. after Kimi/Moonshot rollout), selectedProvider was null causing
provider: undefined in chat requests. Now falls back to providers[0].
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: repair stale defaultProviderId in storage on load
When the stored default provider ID doesn't match any loaded provider,
write back the corrected ID (providers[0].id) to storage so it doesn't
silently persist across sessions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Comment out non-working Canva and Exa integrations from the OAuth MCP
servers list and remove their imports/icon mappings from the UI.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: replace rate limit CTAs with Kimi/Moonshot partnership links
Comment out old "Learn more" and "take a quick survey" links on the
daily limit error banner. Replace with Kimi API key docs link and
direct Moonshot AI platform link for conversion tracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove partnership tagline from rate limit banner
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Docs link in the settings sidebar was using the Info icon (circle
with "i"). Changed it to BookOpen which is the standard icon for
documentation links.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Track docs/images/** and docs/videos/** with Git LFS
- Add packages/browseros/build/tools/ to .gitignore
- Remove appimagetool-x86_64.AppImage from version control (downloaded on demand by build script)
* fix: scheduled task agent not using hidden window for new pages
The agent prompt only told the agent to pass windowId with `new_page`
but not `new_hidden_page`, which the agent prefers for background work.
The agent also had no instruction against closing or replacing its
dedicated hidden window, causing pages to scatter across uncontrolled
windows.
Expanded the scheduled task prompt rules to:
- Cover both `new_page` and `new_hidden_page` windowId requirement
- Forbid closing the dedicated hidden window
- Forbid creating new windows
- Added `new_hidden_page` to tool reference for MCP consumers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove duplicate hidden window creation from scheduled task frontend
The server's ChatService already creates a hidden window for scheduled
tasks (chat-service.ts:99-126), but the frontend (scheduledJobRuns.ts)
was also creating a minimized Chrome window that the server immediately
overwrote. This caused two windows to be created per scheduled task run,
with only one being used.
Removed from scheduledJobRuns.ts:
- chrome.windows.create() call
- 1-second race condition delay hack (FIXME)
- chrome.windows.remove() cleanup
- windowId/activeTab params to getChatServerResponse()
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump server version
* fix: remove dead getCdpToolReference and unused prompt exports
The getCdpToolReference function was always excluded by the AI SDK agent
(tool schemas are injected by the SDK itself) and never used by the MCP
server (which has its own MCP_INSTRUCTIONS). Also removes unused exports
getSystemPrompt and PROMPT_SECTION_KEYS.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump server version
* fix: move session dirs to ~/.browseros/sessions and update skill paths
Session directories now live under ~/.browseros/sessions/{conversationId}/
instead of executionDir/sessions/. Adds 30-day cleanup for stale sessions
at server startup. Updates 6 default skills to reference the working
directory instead of hardcoding ~/Downloads/.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: rename sessionExecutionDir to workingDir across server
Consistent naming for the per-conversation working directory:
- ResolvedAgentConfig.sessionExecutionDir → workingDir
- ToolDirectories.executionDir → workingDir
- resolveExecutionPath() → resolveWorkingPath()
- buildBrowserToolSet param: executionDir → workingDir
Server-level executionDir (DB, logs) unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review — restore emoji folder name, refresh session mtime
- Revert "Read Later" back to "📚 Read Later" to avoid creating
duplicate bookmark folders for existing users
- Touch session dir mtime on each message via utimes() so cleanup
correctly reflects last activity, not just directory creation time
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review round 2 — remove dead executionDir, fix emoji
- Remove executionDir from ChatServiceDeps and ChatRouteDeps since
resolveSessionDir now uses getSessionsDir() directly
- Fix missed emoji in notification format template
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
safeSkillDir() used a hardcoded `/` in the startsWith path traversal
check. On Windows, path.resolve() returns backslash paths, so the check
always failed — blocking getSkill, createSkill, updateSkill, deleteSkill.
Replace `${skillsDir}/` with `${skillsDir}${sep}` using path.sep from
node:path, which returns `\` on Windows and `/` on POSIX.
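The corrected check presumably looks something like this sketch (function and parameter names are assumptions, not the repository's exact code):

```typescript
import { resolve, sep } from "node:path";

// Sketch of the fixed traversal guard: path.sep is "\" on Windows and "/"
// on POSIX, so the prefix comparison now holds on both platforms.
function isInsideSkillsDir(skillsDir: string, name: string): boolean {
  const resolved = resolve(skillsDir, name);
  return resolved.startsWith(`${skillsDir}${sep}`);
}
```

With the hardcoded `/`, `resolved.startsWith(`${skillsDir}/`)` could never match a backslash-separated Windows path, so every skill operation was rejected.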
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: persist default kimi hub provider to BrowserOS prefs on first load
When VITE_PUBLIC_KIMI_LAUNCH is enabled, loadProviders() returned default
Kimi provider in-memory but never saved it to the BrowserOS pref. The
browser's C++ code reads the pref directly and found it empty, so Kimi
didn't appear in the toolbar until the user manually edited and saved.
Now loadProviders() persists defaults and ensureKimiFirst() additions to
the pref, keeping the browser in sync with what the extension UI shows.
Fixes #428

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use reference equality for ensureKimiFirst change detection
Address PR review: reference check (normalized !== providers) is more
semantically precise than length comparison since ensureKimiFirst returns
the same reference when unchanged.
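The contract being relied on, sketched under the assumption (stated in the message) that `ensureKimiFirst` returns the input array unchanged when no reorder is needed:

```typescript
// Hypothetical sketch: returns the SAME array reference when the moonshot
// provider is already first or absent, a new array otherwise.
type Provider = { type: string };

function ensureKimiFirst(providers: Provider[]): Provider[] {
  const i = providers.findIndex((p) => p.type === "moonshot");
  if (i <= 0) return providers; // already first, or not present
  return [providers[i], ...providers.slice(0, i), ...providers.slice(i + 1)];
}
```

Change detection then reduces to `normalized !== providers`, which is exact, whereas a length comparison would miss a pure reorder.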
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Return a friendly JSON response when users curl GET /mcp instead of
an opaque 503. Narrows the catch-all .all() to .post() since the MCP
Streamable HTTP transport only needs POST for stateless servers.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add missing patches to features.yaml
Add 37 patch files from chromium_patches/ that were not tracked in
features.yaml. Creates 3 new features (cdp-api, vertical-tabs,
crash-reporter) and adds missing files to 3 existing features
(chromium-ui-fixes, side-panel-fixes, first-run).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test: split sparkle third-party from mac-sparkle-updater
Move third_party/sparkle/ into its own feature since the Sparkle
framework is downloaded on-the-fly during build, not a permanent
patch in the tree.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: minor
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Skills page navigation is now hidden when the server version is below
0.0.73, matching the gating pattern used for Memory, Soul, and Workflows.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: move skills into main page navigation
Mirror the soul move pattern (166f6e1b) — promote Skills from
settings sidebar to primary navigation at /home/skills. Adds
backward-compat redirect from /settings/skills.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove missing dismiss-popups skill reference
The SKILL.md file doesn't exist on disk, causing a module
resolution error at server startup.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: bootstrap 12 default agent skills for new users
Seed common browser automation skills (summarize, research, extract data,
fill forms, dismiss popups, screenshots, organize tabs, compare prices,
save page, monitor changes, read later, manage bookmarks) into
~/.browseros/skills/ on first startup when no user skills exist.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: populate skill edit dialog with existing content
The edit dialog form fields were empty because Radix Dialog's
onOpenChange doesn't fire when the open prop changes programmatically.
Replace the handleOpenChange wrapper with a useEffect that syncs form
state whenever editingSkill changes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: correct tool names in default skill instructions
- memory_save → memory_write (actual tool name in memory toolset)
- delete_bookmark → remove_bookmark (actual tool name in registry)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: move skill content from TS template literals to separate SKILL.md files
Replace the monolithic defaults.ts (738-line file with escaped template
literals) with individual SKILL.md files per skill. Uses Bun's text
import (`with { type: 'text' }`) to inline content at bundle time.
Adds md.d.ts for TypeScript module resolution.
Much easier to read and edit skill content as plain markdown.
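The `md.d.ts` shim is presumably along these lines (a sketch of the ambient declaration, not the repository's exact file):

```typescript
// Hypothetical md.d.ts: lets TypeScript type-check
//   import skill from "./SKILL.md" with { type: "text" };
// as a string. Bun inlines the file content at bundle time.
declare module "*.md" {
  const content: string;
  export default content;
}
```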
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add build:server:test and start:server:test scripts for local binary testing
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: refresh agent skills settings UI
* fix: address PR review comments for 0311-skills_ui_refresh
* feat: enhance default skills with file persistence, HTML reports, and add find-alternatives
Rewrite deep-research, extract-data, compare-prices, manage-bookmarks, and
read-later skills to follow a structured phase-based workflow. Key changes:
- All research skills now save data incrementally to disk instead of
accumulating in memory
- Add HTML report generation (light theme) with source links for
deep-research, extract-data, and compare-prices
- Use hidden windows and parallel tabs (max 10) for multi-source extraction
- Simplify read-later to just bookmark + PDF save
- Simplify manage-bookmarks to max 3-5 top-level folders with confirmation
- Add new find-alternatives skill for product alternative research with
1-5 star ranking
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: simplify skills page rendering
* fix: clean-up skill
* fix: address review feedback for PR #478
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add core memory viewer and editor to newtab
Adds a new Memory page (/home/memory) that lets users view and
inline-edit their agent's core memories (CORE.md). Includes server
API endpoints (GET/PUT /memory) with Zod validation, React Query
hook with optimistic updates, and example prompts to teach the
agent through conversation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: improve memory examples with browser-aware prompts
Replace tech-specific examples with universal ones that leverage
the agent's browser tools — learning from bookmarks, summarizing
browsing history, reading open tabs, and setting communication
preferences.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: hide focus grid on memory page, same as soul page
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: reword history example to understand user, not just summarize
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: reset stale save error on edit/cancel, surface server errors
Address Greptile review:
- Reset mutation state in handleEdit/handleCancel/handleCreate to
prevent stale error from reappearing on re-entry to edit mode
- Parse server response body on save failure to show actual error
message (e.g. Zod validation) instead of generic "Failed to save"
* fix: cap memory viewer height with internal scroll
Long CORE.md content now scrolls within the card (max 480px) instead
of expanding the entire page. Applies to both read and edit modes.
* fix: polish memory viewer scroll UX
- Use viewport-relative max height (60vh) instead of fixed 480px
- Add styled-scrollbar for thin, themed scrollbar in both modes
- Add bottom fade gradient to hint at more content below
- Fixes width misalignment caused by system scrollbar stealing space
* feat: customize agent personality
* fix: reset soul with right types
* chore: use rpc client for setting personality
* fix: validation for new endpoint
* fix: compaction config for small context windows (≤32K)
Raise COMPACTION_SMALL_CONTEXT_WINDOW from 16K to 32K so models like
Haiku 4.5 (30K context) use proportional 50% reserve instead of the
fixed 20K reserve. Also scale fixedOverhead for small contexts (capped
at 40% of context window) to prevent the doom loop where overhead alone
triggers compaction on every step.
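The sizing rule described above can be sketched as follows (constant and function names are illustrative assumptions, not the repo's identifiers):

```typescript
// Small-context compaction sizing, per the commit description.
const SMALL_CONTEXT_WINDOW = 32_000; // raised from 16K
const FIXED_RESERVE = 20_000;

function computeReserve(contextWindow: number): number {
  // Small contexts get a proportional 50% reserve; larger ones keep the fixed 20K.
  return contextWindow <= SMALL_CONTEXT_WINDOW
    ? Math.floor(contextWindow * 0.5)
    : FIXED_RESERVE;
}

function computeFixedOverhead(contextWindow: number, base = 12_000): number {
  // Cap overhead at 40% of the window so overhead alone can't trigger
  // compaction on every step (the "doom loop").
  return Math.min(base, Math.floor(contextWindow * 0.4));
}
```

For a 30K-context model like Haiku 4.5, the reserve becomes 15K instead of a fixed 20K that would have consumed two thirds of the window.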
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add compaction tuning guidance to limits constants
Explain the relationship between SMALL_CONTEXT_WINDOW and
FIXED_OVERHEAD so devs know the 24K minimum constraint when
tweaking these values.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add window focus listener in ChatFooter that focuses the textarea when
the side panel receives focus. Handles both initial open (via
document.hasFocus check on mount) and re-focus scenarios (via window
focus event). Guards against stealing focus from other interactive
elements.
Companion Chromium fix: side_panel_coordinator.cc now always calls
RequestFocus() in PopulateSidePanel(), not just when there's no
previous entry — ensuring the side panel WebContents receives focus
on every open/toggle.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add 2-stage pruning to compaction pipeline before LLM summarization
Add two new lightweight stages to the compaction prepareStep pipeline that
recover context tokens cheaply before falling back to expensive LLM
summarization:
- Stage 2: Use AI SDK's pruneMessages to remove old tool call/result
pairs beyond the last 6 messages entirely
- Stage 3: Replace remaining tool output values with short placeholders
("[Cleared — N chars]") while preserving tool call structure and IDs
Both stages re-estimate tokens from message content (not stale step
usage) after modifying messages. The existing LLM summarization and
sliding window fallback remain as Stage 4.
Also adds estimateTokensForThreshold() helper, clearToolOutputs()
function, and COMPACTION_PRUNE_KEEP_RECENT_MESSAGES /
COMPACTION_CLEAR_OUTPUT_MIN_CHARS constants.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: reorder compaction pipeline — truncate before clear, protect recent tools
- Stage 0: Check threshold, return untouched when under (no data loss)
- Stage 1: Prune old tool call/result pairs beyond last 6 messages
- Stage 2: Truncate large tool outputs to 15K chars (keeps partial content)
- Stage 3: Clear old tool outputs with placeholders, protect last 2
- Stage 4: LLM-based compaction with sliding window fallback
clearToolOutputs now accepts keepRecentCount parameter (default 2) to
skip the N most recent tool messages from clearing.
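A minimal sketch of the Stage 3 helper as described (message shape simplified; the real types live in the AI SDK):

```typescript
// Replace old tool outputs with short placeholders while preserving call
// structure and IDs; the keepRecentCount most recent tool messages are
// left intact so the agent keeps its immediate context.
type Message =
  | { role: "tool"; toolCallId: string; output: string }
  | { role: "user" | "assistant"; content: string };

function clearToolOutputs(messages: Message[], keepRecentCount = 2): Message[] {
  const toolIdxs = messages.flatMap((m, i) => (m.role === "tool" ? [i] : []));
  const keep = new Set(keepRecentCount > 0 ? toolIdxs.slice(-keepRecentCount) : []);
  return messages.map((m, i) =>
    m.role === "tool" && !keep.has(i)
      ? { ...m, output: `[Cleared — ${m.output.length} chars]` }
      : m,
  );
}
```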
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: limits fixes
* fix: address review — preserve toKeep context, derive test values from constants
- When Stage 3 (clearToolOutputs) doesn't resolve overflow, pass
truncated (not cleared) messages to Stage 4 so toKeep retains
meaningful tool outputs for the agent's immediate context
- Add comment explaining intentional conservatism in post-prune
token estimation (step usage is stale, must re-estimate safely)
- Refactor computeConfig tests to derive expected values from
AGENT_LIMITS constants instead of hardcoding magic numbers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The system prompt referenced `browser_open_tab` which was renamed to
`new_page`. This caused models to infer a `browser_*` naming convention
and call non-existent tools like `browser_navigate`, resulting in
MCP error -32602.
Fixes TKT-540
Add changelog entry for BrowserOS v0.42.0 featuring SOUL.md, vertical tabs,
long-term memory, and Chromium 146 update. Include screenshots from the
GitHub release.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: new tools for breadcrumbs
* feat: setup scheduled task card
* feat: added dismiss cooldown
* chore: update prompt
* fix: support api key tool
* fix: prompt text to limit nudges
* fix: scheduled tasks card
* fix: update nudges prompt
* feat: skip nudges when user dismisses nudge
* fix: ensure nudges only show if they are not dismissed
* Revert "fix: ensure nudges only show if they are not dismissed"
This reverts commit d825254698829b8e9941aae7873bd440027d0c74.
* Revert "feat: skip nudges when user dismisses nudge"
This reverts commit 12b552b454d10ec4209b88668fc48681423ff6fc.
* Revert "fix: update nudges prompt"
This reverts commit 80b7520b953b4d3cbed2ed477b9e508e39938dca.
* feat: update agent with mcp when new mcp connection is added
* feat: created connect apps option as a blocking card system
* feat: schedule tasks passive without dismiss
* fix: nudges and prompt texts
* fix: biome lint errors
* fix: review comments
* fix: resolve comments
* fix: review comments
* fix: review comments
* fix: auto resolve state
* fix: eliminate the race where the async delete could resolve after the
new session was created
* feat: track ignored apps list
* fix: empty response text object on message reply
* feat: sync previously connected mcps
* feat: sync integrations with klavis
* feat: account for unauthenticated connections
* fix: analytics events
* fix: typescript issues
* fix: klavis client issue
* fix: invalid mcps causing entire responses to fail
* fix: prompt with card for integrations when the integration fails
* fix: prompt structure to support declined apps
* fix: refresh session on mcp changes
* feat: add agent skills system with catalog, loader, and UI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: return 500 for server errors in PUT/DELETE skill routes
Previously both handlers returned 404 for all errors, masking filesystem
failures (disk full, permission denied) as "not found". Now only
"not found" errors return 404; everything else returns 500.
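The error mapping described amounts to something like this sketch (the assumption, taken from the message, is that "not found" is distinguishable from filesystem failures by the error text):

```typescript
// Map skill-route errors to HTTP status codes: only genuine "not found"
// errors become 404; disk-full, EACCES, etc. surface as 500.
function statusForSkillError(err: Error): number {
  return /not found/i.test(err.message) ? 404 : 500;
}
```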
* fix: align SKILL.md format with agentskills.io spec
- Move `enabled` and `version` into `metadata` field (spec only allows
name, description, license, compatibility, metadata, allowed-tools)
- Frontmatter `name` now matches directory name (lowercase kebab-case)
- Human-readable name stored in `metadata.display-name`
- Add index signature to SkillMetadata for arbitrary string keys
- Validate frontmatter with type guard in getSkill (remove unsafe cast)
- updateSkill now preserves existing frontmatter fields (license, etc.)
- Tighten buildSkillMd param from Record<string, unknown> to SkillFrontmatter
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- truncateToolOutputs: handle all output.type variants (text, json,
content) by checking output.value directly instead of branching on
type. The old code missed type 'content' (array of content parts),
causing 1M+ char tool results to pass through untouched.
- estimateTokens: change chars/4 to chars/3 — HTML/Markdown content
tokenizes at ~3.14 chars/token empirically, not 4.
- COMPACTION_FIXED_OVERHEAD: 5K → 12K to account for system prompt
(~2.5K tokens) + tool definitions as JSON Schema (~8-9K tokens).
- Apply truncateToolOutputs in prepareStep (Stage 0) before token
estimation, not just during summarization.
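The revised estimator is trivially small; a sketch with the ratio from the commit's empirical note:

```typescript
// chars/3 rather than chars/4: HTML/Markdown content was measured at
// roughly 3.14 chars per token, so /4 systematically undercounted.
const CHARS_PER_TOKEN = 3;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}
```

The undercount mattered because compaction thresholds compare this estimate against the context window; a 25% low estimate defers compaction until the model rejects the prompt.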
* fix: support artifact-extracted directory structure in OTA binary discovery
The download_resources system now extracts server binaries into
platform-specific subdirectories (e.g., darwin-arm64/resources/bin/),
but the OTA module only looked for flat binary names. This adds
find_server_binary() which checks both layouts, keeping backward
compatibility with --binaries while supporting the new structure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: download server binaries from R2 instead of requiring --binaries
Remove the --binaries flag from `ota server release`. The module now
downloads artifact zips from artifacts/server/latest/ in R2, extracts
them, then signs and packages as before. This eliminates the need to
have mono build output locally.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: robust compaction with Pi-style token counting + overflow middleware
Root cause: getCurrentTokenCount() returned stale inputTokens from the
previous step, ignoring new tool results added to messages since that
step. A large tool output (DOM snapshot, page content) caused a token
jump that bypassed the compaction threshold check, leading to
context_length_exceeded errors (322K tokens sent, model max 262K).
Layer 1 — Accurate token counting (proactive):
- Adopt Pi coding agent's additive approach: base(inputTokens) +
outputTokens + estimate(trailing tool results)
- Trailing tool results are estimated by walking backwards from end of
messages array until a non-tool message is found
- Falls back to full estimation with safety multiplier when no real
usage data is available (first step of a turn)
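Layer 1 can be sketched as follows (types and the 1.2 safety multiplier are illustrative assumptions; only the additive structure comes from the message):

```typescript
type Usage = { inputTokens: number; outputTokens: number };
type Msg = { role: string; content: string };

// Pi-style additive count: real usage from the last step, plus an estimate
// for tool results appended to the messages array since that step.
function countTokens(lastUsage: Usage | undefined, messages: Msg[]): number {
  const estimate = (s: string) => Math.ceil(s.length / 3);
  if (!lastUsage) {
    // First step of a turn: no real usage yet; estimate everything with a margin.
    return Math.ceil(messages.reduce((n, m) => n + estimate(m.content), 0) * 1.2);
  }
  // Walk backwards over the trailing run of tool results.
  let trailing = 0;
  for (let i = messages.length - 1; i >= 0 && messages[i].role === "tool"; i--) {
    trailing += estimate(messages[i].content);
  }
  return lastUsage.inputTokens + lastUsage.outputTokens + trailing;
}
```

This is what closes the gap: the stale `inputTokens` alone misses exactly the large DOM snapshot or page dump appended after the previous step.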
Layer 2 — Context overflow middleware (reactive):
- LanguageModelV3Middleware that wraps doGenerate/doStream
- Catches context_length_exceeded errors at the model call level
- Truncates prompt (keeps system messages + most recent non-system
messages targeting 60% of context window)
- Retries the model call once
Verified end-to-end with real model (Gemini Flash Lite via OpenRouter)
on 16K context window: 4 compactions triggered correctly across 8
steps, no context_length_exceeded errors.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: adopt Pi-style overflow detection patterns + fix truncation edge case
- Replace 6 generic substring matches with 17 provider-specific regex
patterns from Pi coding agent (Anthropic, OpenAI, Google, xAI, Groq,
OpenRouter, Bedrock, Copilot, llama.cpp, LM Studio, MiniMax, Kimi,
Mistral, z.ai)
- Fix truncatePrompt edge case: when the last message alone exceeds the
target, keepFrom was never updated → empty non-system messages. Now
always keeps at least the most recent non-system message.
- Add runtime guard for LanguageModelV3 cast in ai-sdk-agent.ts
- Add tests for false-positive rejection and truncation edge case
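The truncation edge-case fix, as a sketch (a character-count stand-in for the token target; the real code presumably counts tokens):

```typescript
type PMsg = { role: "system" | "user" | "assistant" | "tool"; content: string };

// Keep system messages plus the most recent non-system messages under the
// target. keepFrom starts at the last index, so even when that message alone
// exceeds the target, at least one non-system message survives.
function truncatePrompt(messages: PMsg[], targetChars: number): PMsg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  let kept = 0;
  let keepFrom = rest.length - 1;
  for (let i = rest.length - 1; i >= 0; i--) {
    kept += rest[i].content.length;
    if (kept > targetChars) break;
    keepFrom = i;
  }
  return [...system, ...rest.slice(keepFrom)];
}
```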
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Kimi K2.5 model supports a 256,000 token context window, not
128,000. Updated the provider template and model config to reflect
the correct value.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: return element coordinates in tool responses and DPR in screenshots
- click, hover, fill, drag now return resolved coordinates in response text
- take_screenshot returns devicePixelRatio for mapping coordinates to pixels
- Coordinates are in CSS pixels; multiply by DPR to get screenshot pixels
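The CSS-pixel/DPR mapping is one multiplication; a sketch for consumers of these responses:

```typescript
// Map CSS-pixel coordinates from a tool response onto screenshot pixels
// using the devicePixelRatio returned by take_screenshot.
function cssToScreenshotPx(x: number, y: number, dpr: number) {
  return { x: Math.round(x * dpr), y: Math.round(y * dpr) };
}
```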
* fix: use Promise.allSettled in screenshot to prevent DPR eval from aborting capture
Runtime.evaluate for devicePixelRatio can fail on PDF pages or
chrome-extension pages. Using Promise.allSettled ensures the screenshot
still succeeds, falling back to DPR=1.
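The `allSettled` pattern in question, sketched with the two operations abstracted as callbacks (names assumed):

```typescript
// Run the capture and the DPR evaluation concurrently. A rejected DPR eval
// (PDF pages, chrome-extension pages) must not abort the screenshot; it
// just falls back to devicePixelRatio = 1.
async function captureWithDpr(
  screenshot: () => Promise<string>,
  evalDpr: () => Promise<number>,
) {
  const [shot, dpr] = await Promise.allSettled([screenshot(), evalDpr()]);
  if (shot.status === "rejected") throw shot.reason; // capture failures still propagate
  return {
    image: shot.value,
    devicePixelRatio: dpr.status === "fulfilled" ? dpr.value : 1,
  };
}
```

With `Promise.all`, a single rejection from the DPR evaluation would have rejected the combined promise and discarded a perfectly good screenshot.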
* feat: gate Moonshot AI provider behind VITE_PUBLIC_KIMI_LAUNCH flag
Hide all Moonshot/Kimi provider UI when the launch flag is off:
- Filter moonshot from provider templates and type dropdown
- Gate Kimi flare badges in HubProviderRow
- Gate Kimi auto-insertion in LLM hub storage
- Add analytics events for Kimi API key configuration and guide clicks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: allow editing existing moonshot providers when launch flag is off
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add search provider settings page with 5 engine options
Allow users to select their preferred search engine (Google, DuckDuckGo,
Bing, Brave Search, Yahoo) from a new settings page. The selected provider
drives search suggestions, search URL navigation, placeholder text, and
analytics tracking. Replaces all hardcoded Google references with the
stored preference. Adds Brave Search support, replacing Yandex.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add error handling for search provider storage writes
Write to storage before updating React state so UI never diverges from
persisted value on failure. Add try/catch in the settings page to show
an error toast if the write fails.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: migrate stale 400k context window for browseros provider
Existing installations cached the old 400k default in extension storage.
Always normalize the browseros provider's contextWindow to 200k on load,
matching the current default and preventing compaction from failing.
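The load-time normalization amounts to something like this sketch (provider shape and constant name are assumptions):

```typescript
const BROWSEROS_CONTEXT_WINDOW = 200_000;

type Provider = { id: string; contextWindow: number };

// Existing installs may have cached the stale 400k default in extension
// storage; always pin the browseros provider back to the current value.
function normalizeBrowserosProvider(p: Provider): Provider {
  return p.id === "browseros"
    ? { ...p, contextWindow: BROWSEROS_CONTEXT_WINDOW }
    : p;
}
```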
* fix: add browseros-auto model with 200k context length
* fix: setup migrations using the migrations api for context window size
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* fix: anchor agent to active tab page ID from browser context
Generalize the scheduled-task page anchoring instruction to all tasks.
The agent now always uses the page ID from Browser Context instead of
calling get_active_page or list_pages, preventing it from operating
on the wrong tab.
* fix: add chatMode guard and scope windowLine to scheduled tasks
- Skip page-context section in chat mode where list_pages is allowed
- Only show windowId instruction for scheduled tasks (hidden window)
The app icon was oversized in the macOS Dock because the source icon
filled the entire 1024x1024 canvas with no padding. Apple's macOS Big
Sur+ HIG requires ~100px padding on each side (artwork at 824x824
within 1024x1024 canvas). Resized the source icon and regenerated all
platform icons.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: integrate models.dev registry for auto-populated model defaults
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: fall back to upstream provider for model registry lookup
When the browseros meta-provider is used, the registry lookup now
also tries the upstream provider (e.g., openrouter, anthropic) so
that BrowserOS-hosted models get correct context window and image
support defaults.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add Object.hasOwn guards to prevent prototype chain lookup
Addresses Greptile review: bracket notation on the registry object
could return prototype-chain properties for keys like __proto__ or
constructor, bypassing the 404 guard in the route handler.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add browseros-cli Go CLI for browser automation
Implements a full-featured CLI that communicates with the BrowserOS MCP
server over JSON-RPC 2.0 / StreamableHTTP. Covers all 54 MCP tools across
10 categories with a hybrid command structure (flat verbs for hot-path
commands, grouped noun-verb for resource management).
- MCP client with initialize + tools/call pattern, thread-safe request IDs
- Dual output: human-readable default, --json for structured/piped usage
- Implicit active page resolution with --page override
- 21 command files: open, nav, snap, click, fill, scroll, eval, ss, pdf,
dom, wait, dialog, pages, window, bookmark, history, group, health, info
- Cobra CLI framework with fatih/color for terminal formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test: add end-to-end integration tests for browseros-cli
Go integration tests gated by `//go:build integration` that exercise the
CLI binary against a running BrowserOS server. Tests build the binary,
run commands via exec.Command, and verify JSON output.
Covers: health, version, page lifecycle (open → text → snap → eval →
screenshot → nav → reload → close), active page, info, error handling,
and invalid page ID rejection. Skips gracefully when no server is running.
Run with: go test -tags integration -v ./...
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add init command and fix MCP client bugs
- Add `browseros-cli init` command that prompts for the server URL,
verifies connectivity, and saves to ~/.config/browseros-cli/config.json
- Config priority: --server flag > BROWSEROS_URL env > config file > default
- Fix Accept header: include text/event-stream (required by StreamableHTTPTransport)
- Fix nil args: send empty object {} instead of null for tools with no params
- Update error messages to suggest `browseros-cli init` on connection failure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add README for browseros-cli with setup, usage, and testing guide
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: always send arguments object in MCP tools/call
Go's json omitempty omits empty maps, causing the arguments field to be
missing from tools/call requests. The MCP SDK requires arguments to be
an object (even empty {}), not undefined. Remove omitempty from the tag.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: update help menu to have groups
* refactor: replace hand-rolled MCP client with official Go SDK
Switch from custom JSON-RPC implementation to the official
github.com/modelcontextprotocol/go-sdk. This removes all hand-rolled
protocol types (jsonrpcRequest, jsonrpcResponse, RPCError, etc.) and
uses the SDK's StreamableClientTransport with DisableStandaloneSSE
for clean CLI process lifecycle.
Also adds URL normalization/validation, config command, and
updates init/README to reference YAML config.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add server-level instructions that get injected into the LLM system
prompt when external MCP clients (Claude Desktop, Cursor, Gemini CLI)
connect. Covers browser automation workflow, Klavis integration
discovery, and auth flow guidance.
* feat: add inline chat experience to new tab page
Bring the full sidepanel chat experience to the new tab page. When
users select an AI suggestion from the search bar, the page transitions
inline to a full chat view instead of opening the sidepanel.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove unnecessary comments from NewTab.tsx
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review comments
- Move NEWTAB_CHAT_STARTED_EVENT tracking to startInlineChat where it
actually fires (was dead code in NewTabChat handleSubmit)
- Add NEWTAB_CHAT_RESET_EVENT tracking to handleNewConversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: gate newtab chat behind NEWTAB_CHAT_SUPPORT feature flag
When the flag is off (BrowserOS < 0.40.0), falls back to opening the
sidepanel via openSidePanelWithSearch (previous behavior). In dev mode
all features are enabled, so inline chat works during development.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add newtab origin context to chat system prompt
When chatting from the new tab page, the AI is instructed to open
content in new tabs rather than navigating the current tab, keeping
the user's new tab page accessible.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The AI SDK agent (v2) was allowing all 54 browser tools in chat mode,
while the Gemini agent correctly restricted to 6 read-only tools.
Extract CHAT_MODE_ALLOWED_TOOLS to a shared constant and filter
browser tools in AiSdkAgent.create() when chatMode is true.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: expose Klavis MCP tools to external MCP clients
Connect to Klavis Strata at server startup and register discovered tools
on each per-request McpServer instance. This lets external MCP clients
(Claude Code, Gemini CLI) access Klavis-proxied integrations (Gmail,
Slack, GitHub, etc.) alongside browser tools.
- Add register-klavis-mcp.ts with connectKlavisProxy() and registerKlavisTools()
- Wire KlavisProxyHandle through server.ts -> mcp routes -> mcp-server
- Use structured logging and proper type imports
* fix: forward Klavis tool schemas and add shutdown cleanup
- Use zod-from-json-schema to convert Strata's JSON Schema to Zod,
so MCP clients see proper parameter names, types, and required fields
- Close Klavis proxy transport on server shutdown
- Move per-request Klavis tool registration logging to debug level
- Use proper type imports instead of inline import() types
- Fix connectKlavisProxy return type (never returns null)
* fix: add timeout to Klavis MCP connect/listTools and log shutdown errors
* fix: clear timeout timer and pre-compute Klavis tool schemas at startup
* fix: use client.close() instead of transport.close() for proper cleanup
* feat: update to 146, fix clean
* fix: update all 16 failed patches for Chromium 146.0.7680.31
- Update BASE_COMMIT to 4d3225104176d (Chromium 146)
- Shift BrowserOS command IDs to avoid upstream 40300-40302 conflict
- Fix settings BUILD.gn and menu patches for upstream removals
- Shift syncable prefs IDs to 100379-100380 after upstream additions
- Migrate theme patch from theme_service_factory.cc to theme_service.cc
(RegisterProfilePrefs moved upstream)
- Fix toolbar_actions_model.cc for upstream API changes
- Fix toolbar_pref_names.cc for upstream base::ListValue usage
- Fix ui_features.cc/.h for removed kPopupBrowserUseNewLayout
- Fix api_sources.gni for new upstream entries
- Shift infobar delegate ID to 132
- Shift extension histogram values by +4 (1961-1985)
- Shift api_permission_id kBrowserOS to 265
- Update histogram enums.xml to match shifted values
- Delete chromium_install_modes.cc patch (file removed in 146)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: enable vertical tabs
* feat: default browseros theme
* chore: bump PATCH and OFFSET
* fix: update extensions-manifestv2 series patch for Chromium 146
Regenerated the patch from a clean diff against 146.0.7680.31 to fix
line number offsets and context mismatches in extensions_ui.cc.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update vertical_tab_strip_state_controller patch for Chromium 146
Upstream refactored includes and renamed NotifyStateChanged to
NotifyModeChanged. Regenerated patch with correct context.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update default theme to neutral gray (136,136,136)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: rename base::Value::Dict/List to base::DictValue/ListValue for Chromium 146
Chromium 146 moved base::Value::Dict and base::Value::List to top-level
classes base::DictValue and base::ListValue. Updated all 23 patch files.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: regenerate browseros_prefs.cc patch (fix corrupt trailing newline)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update patches for Chromium 146 build API changes
- browseros_action_utils.h: remove nonexistent base/containers/contains.h include
- chrome_content_browser_client.cc: PrivateNetworkRequestPolicyOverride → LocalNetworkAccessRequestPolicyOverride
- extension_updater.cc: InstallStageTracker::Get → InstallStageTrackerFactory::GetForBrowserContext
- toolbar_actions_model.cc: base::Contains → std::ranges::contains
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add SOUL_SUPPORT feature flag to capabilities system requiring
minServerVersion 0.0.67. Hides "Agent Soul" nav item in settings
sidebar for older servers that lack the /soul endpoint.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Add `VITE_PUBLIC_KIMI_LAUNCH` feature flag controlling Kimi partnership branding
- BrowserOS provider card shows "Powered by Kimi K2.5 from Moonshot AI" badge and "Extended usage limits for the next 2 weeks!" when flag is on
- Moonshot/Kimi highlighted as "Recommended" in provider templates
- LLM Hub defaults to Kimi, ChatGPT, Claude, Gemini (with legacy defaults migration)
- Kimi hub row shows "Powered by Moonshot AI" flare
- Model selector locked to kimi-k2.5
- "How to get a Kimi API key" link in provider dialog
- Moonshot provider fully integrated across frontend and backend
* fix: refactor SDK BrowserService to use Browser class directly
The tools system was completely rewritten with new tool names and response
formats. BrowserService was calling non-existent MCP tools (browser_get_active_tab,
browser_navigate, etc.) that returned structuredContent which no longer exists.
Replaced MCP HTTP client calls with direct Browser class method calls:
- getActiveTab → browser.getActivePage() / browser.listPages()
- getPageContent → browser.contentAsMarkdown()
- getScreenshot → browser.screenshot()
- navigate → browser.goto() with tabId/windowId resolution
- getPageLoadStatus → browser.listPages() with isLoading check
- getInteractiveElements → browser.snapshot() / browser.enhancedSnapshot()
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review — consistent tabId guard and remove dead PageContent type
- Change `if (tabId)` to `if (tabId !== undefined)` in navigate() to match
the guard style used for windowId and elsewhere in the file
- Remove orphaned PageContent interface no longer imported after refactor
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
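Beyond consistency, a truthiness guard also misbehaves when a numeric ID is legitimately `0`. A minimal sketch of the distinction (the `resolveTarget` helper is hypothetical, not the actual `navigate()` signature):

```typescript
// Truthiness vs explicit-undefined check for optional numeric IDs.
// `if (tabId)` is false for tabId 0, silently dropping a valid ID;
// `if (tabId !== undefined)` keeps it.
function resolveTarget(tabId?: number): string {
  if (tabId !== undefined) {
    return `tab:${tabId}`; // explicit guard preserves tabId 0
  }
  return "active-tab";
}
```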
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
SIGQUIT (Ctrl+\) was not in the signal notify list, causing Go's default
handler to dump goroutines. On macOS ARM64 this triggers a known runtime
bug where semasleep panics on the signal stack.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add "don't show again" checkbox to JTBD survey popup
Mirrors the ImportDataHint pattern — adds a checkbox that permanently
suppresses the survey popup when checked and dismissed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: persist dontShowAgain when user clicks Take Survey
Addresses Greptile review — if the checkbox is checked and the user
clicks "Take Survey", persist the flag before opening the survey so
the popup won't reappear if the survey tab is closed without starting.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: show "don't show again" only after 2nd popup, increase interval to 10 msgs
- Track shownCount in storage, only show checkbox on 3rd+ appearance
- Increase MESSAGE_THRESHOLD from 5 to 10 messages between popups
- Add DONT_SHOW_AGAIN_AFTER constant (2) for configurability
- Pass showDontShowAgain through the component chain
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: thread dontShowAgain through onTakeSurvey to avoid duplicate analytics
Addresses Greptile review — previously clicking "Take Survey" with the
checkbox checked would fire both dismissed and clicked events. Now the
dontShowAgain flag is threaded through onTakeSurvey, which persists it
without firing a dismiss event.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The --new flag creates a fresh temp profile directory but WXT's
chromiumProfile was hardcoded to /tmp/browseros-dev, ignoring it.
Pass BROWSEROS_USER_DATA_DIR env var from the Go dev tool and read
it in web-ext.config.ts.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: filter out messages with empty parts to prevent follow-up crash
When an assistant response is interrupted or errors before producing content,
a UIMessage with empty parts remains in the chat state. On the next send, the
AI SDK validates all messages and rejects the empty-parts message with
"Message must contain at least one part". This filters them out when not
streaming and adds a safety guard in formatConversationHistory.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
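A sketch of the kind of filter described, assuming a simplified `UIMessage` shape (the real AI SDK type carries more fields):

```typescript
// Drop messages whose parts array is empty before persisting or re-sending;
// the AI SDK rejects them with "Message must contain at least one part".
interface UIMessage {
  role: string;
  parts: unknown[];
}

function dropEmptyMessages(messages: UIMessage[]): UIMessage[] {
  return messages.filter((m) => m.parts.length > 0);
}
```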
* fix: filter empty-parts messages before persisting to storage
Addresses race condition where the save effect could persist messages
with empty parts before the cleanup effect's state update applies.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: limit claude code review to PR creation and @claude comments
Reduces unnecessary action runs and token usage by only triggering the
review on initial PR open, and re-running when @claude is mentioned.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restrict @claude trigger to trusted contributors
Only repo owners, org members, and collaborators can invoke the review
via @claude comments, preventing external users from consuming token quota.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: consolidate claude workflows and auto-run on PR creation
Remove separate claude-code-review.yml and add pull_request trigger
to claude.yml so it runs automatically on PR open without needing
@claude in the body.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restore author_association guard on issue_comment trigger
The consolidation commit dropped the author_association check from the
issue_comment condition. Without it, any external commenter could invoke
Claude and consume token quota. Restores the guard to limit triggers to
OWNER, MEMBER, and COLLABORATOR.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: apply author_association guard to review comment triggers
Extends the OWNER/MEMBER/COLLABORATOR check to pull_request_review_comment
and pull_request_review events, preventing external users from triggering
Claude via review comments.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: gate previousConversation array format behind BrowserOS 0.41.0.0
Older servers reject the array format for previousConversation with a
ZodError ("Expected string, received array"). Gate the feature behind
BrowserOS >= 0.41.0.0 which bundles server >= 0.0.64 that accepts both
array and string formats.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use minServerVersion 0.0.64 for previousConversation gate
Server version is the direct indicator of schema support, more accurate
than using BrowserOS version as a proxy.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: fall back to string format for previousConversation on old servers
Instead of omitting previousConversation entirely on servers < 0.0.64,
serialize the conversation history as a "role: content" string which
old servers accept via their z.string() schema.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
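The fallback can be sketched as a small serializer — the `Turn` shape is hypothetical, not the actual schema:

```typescript
// Servers >= 0.0.64 accept an array of turns; older servers only accept a
// plain string, so serialize as "role: content" lines for them.
interface Turn {
  role: string;
  content: string;
}

function serializeHistory(
  turns: Turn[],
  serverSupportsArray: boolean,
): Turn[] | string {
  if (serverSupportsArray) return turns;
  return turns.map((t) => `${t.role}: ${t.content}`).join("\n");
}
```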
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump PATCH and OFFSET
* fix: add AppArmor profile and improve .deb packaging for Ubuntu 23.10+
Ship an AppArmor profile with the .deb package that grants the
`userns` permission, fixing the fatal sandbox crash on Ubuntu 23.10+
and other distros that restrict unprivileged user namespaces via
AppArmor (closes #165).
Also adds: Qt5/Qt6 shim libraries for native file dialogs on KDE,
update-alternatives registration for default browser selection,
prerm cleanup script, and Provides/Recommends metadata.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: correct CDN download path for .deb and add multi-size icons
Update .deb download path from lowercase "browseros.deb" to "BrowserOS.deb"
to match the URL advertised in README (cdn.browseros.com/download/BrowserOS.deb).
Also install icons at all available sizes instead of only 256x256.
Closes #368
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add multi-size icons and AppStream metainfo to .deb package
Install product icons at all standard hicolor sizes (16, 22, 24, 32,
48, 64, 128, 256) instead of only 256px, so desktop environments can
pick the appropriate resolution for panels, menus, and task switchers.
Ship AppStream metainfo at /usr/share/metainfo/browseros.metainfo.xml
so GNOME Software, KDE Discover, and other software centers can
discover and display BrowserOS in their catalogs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: guard postinst update-alternatives with $1=configure check
Matches prerm's pattern — only register alternatives during normal
configure, not during dpkg error-recovery paths (abort-upgrade, etc.)
where /usr/bin/browseros may not exist yet.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add bun binary signing for macOS and Windows
Register the bun runtime binary in the code signing pipelines so it gets
properly signed and notarized alongside browseros_server and codex.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add bun runtime download and copy resource configs
Add bun binary entries for all platform/arch combos (macOS arm64/x64,
Linux arm64/x64, Windows x64) to download from R2 and copy into the
Chromium build output alongside browseros_server.
Also adds the server bundle (index.js) download and copy entries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add get_dom and search_dom tools for HTML DOM inspection
Add two new observation tools:
- get_dom: Returns raw HTML of a page or scoped element via CSS selector
- search_dom: Fuzzy searches DOM elements by text, attributes, IDs, and
class names using Fuse.js with extended search syntax support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: use CDP DOM protocol instead of injected scripts for DOM tools
Replace Runtime.evaluate-based approach with native CDP DOM methods:
- get_dom uses DOM.getDocument + DOM.querySelector + DOM.getOuterHTML
- search_dom uses DOM.performSearch + DOM.getSearchResults + DOM.describeNode
- Remove fuse.js dependency (CDP performSearch handles text/CSS/XPath natively)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test: add comprehensive tests for get_dom and search_dom tools
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: resolve text nodes to parent elements in searchDom
CDP performSearch returns text nodes (nodeType 3) for plain text queries.
describeNode does not populate parentId, so use resolveNode + callFunctionOn
to get parentElement, then requestNode to obtain the parent's nodeId.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add limit bounds validation and searchId leak prevention
- Add .int().min(1).max(200) to search_dom limit parameter
- Wrap searchDom result processing in try/finally to ensure
discardSearchResults is always called
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
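The try/finally shape can be sketched against a minimal CDP client interface (`cdp.send` is a hypothetical wrapper; the real tool also resolves node details):

```typescript
// Ensure DOM.discardSearchResults always runs, even if result processing
// throws, so CDP search handles never leak.
async function searchDom(
  cdp: { send(method: string, params?: object): Promise<any> },
  query: string,
): Promise<number[]> {
  const { searchId, resultCount } = await cdp.send("DOM.performSearch", { query });
  try {
    if (resultCount === 0) return [];
    const { nodeIds } = await cdp.send("DOM.getSearchResults", {
      searchId,
      fromIndex: 0,
      toIndex: resultCount,
    });
    return nodeIds;
  } finally {
    // Runs on early return and on throw alike.
    await cdp.send("DOM.discardSearchResults", { searchId });
  }
}
```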
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Tests were passing raw Chrome tabIds to group_tabs and ungroup_tabs tools,
but the Zod schemas expect pageIds (MCP-layer page IDs). The tabIds field
was silently stripped during validation, causing both tests to fail.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
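Zod object schemas strip unknown keys by default, which is why the mismatch failed silently rather than loudly. A library-free illustration of that stripping behavior (the `stripToSchema` helper is hypothetical, standing in for schema validation):

```typescript
// Mimic default strip-unknown-keys validation: a payload keyed `tabIds`
// loses its data against a schema that only declares `pageIds`.
function stripToSchema<T extends object>(
  input: Record<string, unknown>,
  keys: (keyof T & string)[],
): Partial<T> {
  const out: Partial<T> = {};
  for (const k of keys) {
    if (k in input) (out as Record<string, unknown>)[k] = input[k];
  }
  return out;
}
```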
* feat: add new CDP tools for links, hidden pages/windows, show/move
- get_page_links: extract deduplicated links from a page via evaluate
- new_hidden_page: open a hidden tab for background automation
- create_hidden_window: create a hidden window for background automation
- show_page: restore a hidden page back into a visible window
- move_page: move a tab to a different window or position
- Default includeLinks to false in get_page_content
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: use AX tree for get_page_links, add tests, fix test scripts
- Refactor get_page_links to use accessibility tree instead of raw JS
evaluate — more reliable for role="link" elements and shadow DOM
- Add extractLinkNodes() to snapshot.ts and getPageLinks() to browser.ts
- Add tests for get_page_links (constructed HTML with dedup/filtering),
new_hidden_page, show_page, move_page, create_hidden_window
- Fix root package.json test scripts to match server's actual scripts
- Update CLAUDE.md test docs to reflect current structure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: move ChatV2Service to api/services layer and add resolvePageIds
Move ChatV2Service from agent/tool-loop/ to api/services/ where it
belongs as a service-layer concern. Add resolvePageIds() to convert
Chrome tab IDs to internal page IDs before they reach the agent,
fixing undefined pageId issues in browser automation tools.
Clean up server.ts by removing the USE_TOOL_AGENT flag, SessionManager,
and old chat route import — both /chat and /chat-v2 now directly use
createChatV2Routes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address review comments for chat-v2-service
- Fix TOCTOU race: derive isNewSession inside the creation block
instead of separate has()/get() calls
- Log warning when resolvePageIds can't map a tab ID
- Deduplicate tab IDs with Set before resolving
- Remove redundant null check on session in onFinish
- Add license header
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
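A sketch of the race-free shape, assuming a Map-backed session store: `isNewSession` falls out of the single get-or-create path instead of a separate `has()`/`get()` pair.

```typescript
// Derive isNewSession inside the creation path so a concurrent insert
// can't flip the answer between a has() check and a later get().
function getOrCreateSession(
  sessions: Map<string, { id: string }>,
  id: string,
): { session: { id: string }; isNewSession: boolean } {
  let session = sessions.get(id);
  if (session) return { session, isNewSession: false };
  session = { id };
  sessions.set(id, session);
  return { session, isNewSession: true };
}
```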
* fix: update bun.lock
* fix: skip resolvePageIds for scheduled tasks to prevent pageId corruption
Scheduled tasks build browserContext with internal page IDs from
browser.newPage(), not Chrome tab IDs. The unconditional second
resolvePageIds() call was passing these internal IDs to resolveTabIds()
which expects Chrome tab IDs, causing the lookup to fail and overwrite
correct pageIds with undefined.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add biome-ignore comments for noExcessiveCognitiveComplexity on compaction.ts
and grep.ts, and noExplicitAny on filesystem test helpers.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: generalized compaction prompts with split turn handling
Replace browser-specific XML prompts with domain-agnostic markdown format.
Add split turn detection and parallel summarization for large single-turn
conversations. Switch compaction from generateText to streamText for
Fireworks API compatibility. Add comprehensive unit and E2E tests (84 total).
* fix: address code review issues for compaction (PR #391)
Enforce COMPACTION_MAX_SUMMARIZATION_INPUT cap, extract shared
callSummarizer helper, add runtime type guard for experimental_context,
move magic constants to AGENT_LIMITS, and remove dead constants.
* fix: cap truncatedTurnPrefix input to maxSummarizationInput
Apply the same sliding window cap to turn prefix messages that was
already applied to toSummarize, preventing unbounded LLM input for
long single-turn conversations with many tool calls.
* fix: reduce browseros-auto default context window to 200K
The 400K setting caused compaction to trigger at ~383K, but the actual
model limit is 262K. Conversations hit the hard limit before compaction
could kick in.
* feat: replace flaky TypeScript dev:watch with Go CLI (devwatch)
The Bun-based scripts/dev/start.ts orchestrator had fundamental issues with
WXT when launched via `bun run --filter` with cwd manipulation. This replaces
it with a Go CLI at tools/devwatch/ that provides:
- Process supervision with auto-restart on crash
- Colored log streaming with [tag] prefixes
- Automatic port discovery (--new flag)
- Fresh user-data directory creation
- Process group management for clean shutdown (SIGTERM → SIGKILL escalation)
- CDP readiness polling before starting the server
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: run agent codegen before wxt and add force-kill on double Ctrl+C
- Run graphql-codegen if generated/graphql/ doesn't exist, matching the
agent's own `dev` script behavior
- Second Ctrl+C sends SIGKILL to all process groups and exits immediately,
so you're never stuck in a restart loop
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add run.sh wrapper that checks for Go and prompts to install
If Go isn't installed, shows a clear message with install instructions
(brew install go / go.dev/dl). Also skips rebuilding if the binary
already exists and main.go hasn't changed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: show double Ctrl+C hint at startup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: swap ANSI escape codes for fatih/color
Adds proper TTY detection, NO_COLOR env var support, and cleaner
color API. Also improves help output with bold/dim styling.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: restructure devwatch into browseros-dev CLI with cobra subcommands
Expands the single-file devwatch into a modular CLI with three subcommands:
- `watch` — dev environment with process supervision (port of devwatch)
- `test` — start test env, run bun test, clean up (replaces TS test helpers)
- `cleanup` — kill ports + remove orphaned temp dirs (replaces cleanup.sh)
Shared Go packages for browser lifecycle (CDP polling, arg building),
server health checks (health + extension status), and process management
(managed proc, port killing, streaming, monorepo root finding).
Fixes PR #389 feedback:
- Add timeout after SIGKILL in Stop() to prevent indefinite hang
- Fix run.sh freshness check to detect changes in all .go files
- Add double Ctrl+C force-kill to test command
- Guard test cleanup with sync.Once to prevent race condition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: replace run.sh freshness logic with Makefile
Make handles timestamp-based dependency tracking natively. The Makefile
rebuilds only when any .go file, go.mod, or go.sum is newer than the
binary. run.sh just checks for Go, calls make, and execs the binary.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use fresh browser context for selected tabs on each message
Previously, session.browserContext (set on the first message) always
took precedence via the nullish coalescing operator. On subsequent
messages with different tab selections, the new selectedTabs from the
request were silently ignored.
Now normal messages always use request.browserContext so freshly
selected tabs are included. Scheduled tasks still use the stored
session context to preserve the hidden window's pageId/windowId.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
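The precedence fix can be sketched as follows (types are hypothetical; the real context carries more than tab selections):

```typescript
// `session.ctx ?? request.ctx` pins the first message's tab selection forever.
// Prefer the request context for normal messages; keep the stored session
// context only for scheduled tasks, which rely on the hidden window's IDs.
interface Ctx {
  selectedTabs: number[];
}

function pickContext(
  requestCtx: Ctx,
  sessionCtx: Ctx | undefined,
  isScheduledTask: boolean,
): Ctx {
  if (isScheduledTask && sessionCtx) return sessionCtx;
  return requestCtx;
}
```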
* fix: use singleton transport for MCP route
MCP SDK 1.26.0 added a strict guard in Protocol.connect() that throws
"Already connected to a transport" if called when already connected.
The previous code created a new transport per request and called
connect() each time, causing every request after the first to fail
with -32603 Internal server error.
Move transport creation outside the request handler and add
isConnected() check per @hono/mcp docs pattern.
* fix: per-request MCP server+transport for SDK 1.26.0 compat
MCP SDK 1.26.0 patched a security vulnerability (GHSA-345p-7cg4-v4c7)
where sharing a singleton McpServer across requests could leak
cross-client response data via message ID collisions.
Create fresh McpServer + StreamableHTTPTransport per request:
no shared state, no race conditions, no ID collisions.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The agent had no knowledge of its working directory, so it couldn't
reference created files by absolute path or help users locate them.
Pass sessionExecutionDir into buildSystemPrompt for both AiSdkAgent
and GeminiAgent so the prompt includes a <workspace> section with
the resolved directory path.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Previously, session.browserContext (set on the first message) always
took precedence via the nullish coalescing operator. On subsequent
messages with different tab selections, the new selectedTabs from the
request were silently ignored.
Now normal messages always use request.browserContext so freshly
selected tabs are included. Scheduled tasks still use the stored
session context to preserve the hidden window's pageId/windowId.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: new onboarding flow
* feat: co-ordinate the sign in and import hints
* fix: ux on step one
* fix: make custom option friendlier
* feat: added required fields
* feat: setup step two redirection
* fix: remove copy url button
* feat: store profile info from onboarding
* feat: sync onboarding profile to api
* feat: show confetti when the onboarding completes
* fix: change the options in onboarding demo
* feat: setup missing analytics events
* fix: lint issues
* ci: fix typescript error
* fix: sign in hint
* fix: restore glow overlay for CDP-based tools
After migrating to CDP tools, glow broke because the hook looked for
input.tabId (controller tools) while CDP tools use input.page (pageId).
- Server: add getTabIdForPage() to Browser, include tabId in tool output
- Client: extract tabId from output, fall back to active Chrome tab
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: use ToolResultMetadata for tabId resolution
Move tabId resolution from tool-adapter into the framework layer:
- response.ts: add ToolResultMetadata interface with tabId field
- framework.ts: auto-resolve pageId→tabId after tool execution
- tool-adapter.ts: just forward metadata (no domain logic)
This makes metadata available to all ToolResult consumers, not just
the AI SDK adapter, and the metadata bag is extensible for future fields.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add todo
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: replace pi-mono filesystem tools with native Bun/Node.js implementation
Remove @mariozechner/pi-coding-agent and @mariozechner/pi-agent-core
dependencies that caused bun compile issues (tree traversal, package.json
resolution). Reimplement all 7 filesystem tools (read, write, edit, bash,
grep, find, ls) using only Bun and Node.js built-in libraries.
- No external binary dependencies (no ripgrep, fd, etc.)
- Cross-platform: Linux, macOS, Windows
- 107 tests covering all tools and utilities
- Pure JS grep/find using Bun.Glob and async directory walking
* fix: add explicit ENOENT handling in grep tool stat() call
Add a BibTeX @software citation block to README.md between
Credits and Stargazers sections, with authors Nithin Venkat Sonti,
Nikhil Venkat Sonti, and the BrowserOS team.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: ensure scheduled tasks open in hidden tab
* fix: update scheduled task result in the UI
* fix: remove unnecessary useEffect
* fix: race condition with deleteSession
Instead of a hardcoded experimentId=daily_limit, randomly assign users
to one of four survey direction buckets (competitor, switching, workflow,
activation) matching the round 2 survey pattern.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Copy dev workflow skills (dev, dev1-start through dev7-pr, dev-debug,
ts-style-review) to project .claude/skills/ so they're available to all
contributors. Excludes twitter agent and browseros browser skills.
Update .gitignore to track .claude/skills/ and .claude/commands/.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: update to support more klavis MCP servers
* fix: minor icon fix
* fix: normalize klavis mcp auth flow compatibility
* feat: add API key auth flow for Klavis MCP servers
Servers that use API key authentication (Stripe, Cloudflare, Brave
Search, Exa, Mem0, Resend, Mixpanel, PostHog, Postman, Zendesk,
Intercom) were failing with "Failed to add app" because the frontend
only handled OAuth flows. This adds the complete API key auth path:
- Backend: apiKeyUrls in StrataCreateResponse, submitApiKey() method,
/servers/submit-api-key route
- Frontend: ApiKeyDialog component, useSubmitApiKey hook, ConnectMCP
updated to show dialog for API-key servers instead of opening OAuth
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove broken success check in Klavis submitApiKey
The Klavis /mcp-server/instance/set-auth endpoint returns
{ message: "Authentication updated successfully." } without a
success field. Our code checked `data.success` which was always
undefined, causing API key auth to fail even when Klavis accepted
the key. The request() method already throws on non-2xx responses,
so the explicit check was redundant and incorrect.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add Round 2 direction parameter to JTBD survey frontend
Thread direction parameter from popup trigger through URL params to the
survey chat API. Randomly assign one of 4 investigation directions
(competitor, switching, workflow, activation) when the in-app popup
triggers, encoding it as experimentId=r2_{direction} for analytics.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: remove direction param, encode in experimentId instead
Direction is now encoded entirely in experimentId (e.g., "r2_competitor").
Remove the separate direction URL param and prop threading — the backend
derives direction from experimentId. Simplifies the frontend to only
set experimentId with a random direction on popup trigger.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: setup
* fix: compact workflow tidbits within streamed assistant parts
feat: collapse workflow tidbit status messages in graph chat
* Revert "fix: compact workflow tidbits within streamed assistant parts"
This reverts commit f5fa6d6b7a480dfc001ede6de7949f45c7777f37.
* fix: collapse workflow tidbit status messages in graph chat
Tidbit messages (jokes/status ending with ...) during workflow execution
now replace each other in place instead of stacking as separate chat
bubbles. Handles both consecutive tidbit messages and multiple tidbit
text parts within a single streamed message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: compact multi-line tidbits within a single text part
Tidbits arrive as text-deltas accumulated into a single text part
(e.g. "Generating workflow…\nReticulating splines…\n..."). The previous
fix only handled separate parts and separate messages but not multiple
tidbit lines within one part. Added compactTidbitLinesInPart to trim
multi-line tidbit text to just the last line.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
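A sketch of the compaction, under the assumption that tidbit lines are exactly those ending in an ellipsis (the real detection may be richer):

```typescript
// Tidbit status lines accumulate as text-deltas inside one text part;
// when every line is a tidbit, keep only the last so they replace
// each other instead of stacking.
function compactTidbitLinesInPart(text: string): string {
  const lines = text.split("\n").filter((l) => l.trim().length > 0);
  const isTidbit = (l: string) => l.endsWith("…") || l.endsWith("...");
  if (lines.length > 1 && lines.every(isTidbit)) {
    return lines[lines.length - 1];
  }
  return text;
}
```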
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Added max-h-[85vh] and overflow-y-auto to DialogContent component
to enable scrolling when dialog content exceeds viewport height.
This fixes the scheduled task dialog not showing scroll when
content is too long.
https://claude.ai/code/session_01CP8aUnunJpW9mYwTbt3gpt
Co-authored-by: Claude <noreply@anthropic.com>
* chore: baseline setup
* fix: resolve stale closure bug in LLM Hub provider management
saveProvider and deleteProvider were wrapped in useCallback with
[providers] dependency, building updated arrays from the closure-captured
providers state. When adding a provider then deleting another, the delete
callback could have a stale providers array that didn't include the newly
added one — causing the new provider to be lost when written to storage.
Fix: read current state from persistent storage via loadProviders()
before every mutation, matching the pattern used in useLlmProviders.ts.
Remove useCallback wrappers since they no longer depend on providers state.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
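The load-before-mutate pattern, sketched against a hypothetical async storage interface:

```typescript
// Read current state from storage before each mutation instead of trusting
// a closure-captured array, so an add that happened after the callback was
// created is never lost by a later delete.
interface Provider {
  id: string;
}

async function deleteProvider(
  storage: { load(): Promise<Provider[]>; save(p: Provider[]): Promise<void> },
  id: string,
): Promise<void> {
  const current = await storage.load(); // fresh read, not a stale closure
  await storage.save(current.filter((p) => p.id !== id));
}
```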
* feat: inject stop button to pages controlled by agent (#334)
* chore: baseline setup
* feat(agent): When the agent is running, right now we inject an orange glow. See the `apps/age
Task ID: TOiaMuDz
* fix: clean up agent storage
* fix: improve the stop button style
* fix: type issues with stopAgentStorage
---------
Co-authored-by: BrowserOS Coding Agent <coding-agent@browseros.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
---------
Co-authored-by: BrowserOS Coding Agent <coding-agent@browseros.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
saveProvider and deleteProvider used useCallback with [providers]
dependency, causing a stale closure bug. When adding a new provider
then deleting another, the delete callback still referenced the old
providers array (before the add), losing the newly added provider.
Now reads current state from storage before each mutation, matching
the pattern used in useLlmProviders. Also removes unnecessary
useCallback wrappers per project conventions.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Check session existence in ChatService before passing previousConversation
to the agent. Only pass it for new sessions — existing sessions already
have real conversation history in the GeminiClient.
Automatically detect whether custom MCP servers use Streamable HTTP or
SSE transport by probing with a POST request before creating the config.
- Add detectMcpTransport() utility that probes the server endpoint
- If POST returns 200 with JSON/event-stream, use Streamable HTTP
- If POST returns 404/405 or fails, fall back to SSE transport
- Cache detection results per URL with 1-hour TTL
- Skip caching for transient errors (5xx, network failures)
Known servers (browseros-mcp, klavis-strata) skip detection and use
Streamable HTTP directly.
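A sketch of the probe with the HTTP call injected for testability (names are hypothetical; the caching layer and TTL are omitted here):

```typescript
// Probe with a POST: 200 plus a JSON or event-stream content type means
// Streamable HTTP; 404/405 or a network failure falls back to SSE.
type Transport = "streamable-http" | "sse";

async function detectMcpTransport(
  url: string,
  probe: (url: string) => Promise<{ status: number; contentType: string }>,
): Promise<Transport> {
  try {
    const res = await probe(url);
    if (
      res.status === 200 &&
      (res.contentType.includes("application/json") ||
        res.contentType.includes("text/event-stream"))
    ) {
      return "streamable-http";
    }
    return "sse";
  } catch {
    return "sse"; // transient failure: fall back, don't cache
  }
}
```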
* fix: incorrect tool call for getting page snapshot
* feat: let llm know the page is loaded after enrichment is complete
* feat: improve prompt to prevent calling getActiveTab
* feat: added enrichment to the get_load_status tool
* fix: tips
* fix: show tips only 1/5 times
* fix: guard against empty tips array in getRandomTip
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
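The guard itself is small (hypothetical signature, returning null rather than indexing into an empty array):

```typescript
// With an empty array, Math.floor(Math.random() * 0) is 0 and tips[0] is
// undefined; return null explicitly instead.
function getRandomTip(tips: string[]): string | null {
  if (tips.length === 0) return null;
  return tips[Math.floor(Math.random() * tips.length)];
}
```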
* fix: biome exhaustive deps in SurveyChat voice effect
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: remove wrapped controller tools and enrich context with windowid
* refactor: remove windowid from all the tools
* feat: pass window id to mcp server via request headers
* feat: enrich possible tool calls to reduce roundtrips
* feat: show scheduled tasks tab if job runs are empty
* chore: switch tabs after creating new tasks
* feat: provide option to cancel and retry scheduled tasks
* feat: provide option to retry and cancel jobs on the popups
* chore: fix minor race condition between window cleanup and job status update
* fix: keep previous data in chat history
* feat: use react query for restoring conversation messages
* fix: loading issue with chat history
* fix: use state instead of ref for the restoredConversationId
* fix: handle not found scenario on both local and remote restoration
* Revert "fix: handle not found scenario on both local and remote restoration"
This reverts commit d4725134087af047fe18bc6519f5ad5244104544.
* fix: handle conversation not found scenario
* chore: added a loading indicator for the chat history page
* chore: reset restored conversation id state
* feat: do not create tab groups for scheduled tasks
* chore: simplify system prompt to make excluding steps easier
* chore: consistent prompt composer
* feat: created auth client
* feat: created login page for testing auth
* feat: setup logout page
* feat: setup graphql codegen
* feat: setup graphql + react query utils
* feat: setup queryprovider with localforage
* feat: created auth provider
* feat: update claude.md
* feat: documents for bulk conversation upload
* chore: install missing package
* fix: setup codegen to scan for .ts files
* chore: setup check conversation query
* feat: upload conversation by profileId
* chore: upload messages in batches
* feat: account for edge cases in conversation upload
* feat: delete uploaded conversations from localstorage
* feat: load conversation history from api
* feat: implement delete conversation using graphql
* feat: delete confirmation for conversation history
* fix: issue with clearing conversations after upload
* feat: implement pagination for graphql chat history
* chore: update CLAUDE.md
* chore: update claude.md
* feat: save conversations to server
* fix: handle streaming check on remote conversation save
* feat: restore conversation from graphql
* fix: timestamp issue on the chat history page
* feat: sync llm providers from background script
* feat: update llm providers on change via background script
* chore: added a try catch block
* feat: display incomplete providers in separate UI
* feat: delete provider on server when initiated by user
* feat: setup scheduled tasks storage to sync to graphql
* feat: auto run sync in background script
* fix: sync all keys of scheduled tasks based on updatedAt timestamp
* feat: added login dropdown on the sidebar
* feat: simplify sidenav header
* feat: update header design after login
* feat: setup profile page
* feat: added back button to profile page
* fix: scrollbar flash in profile page
* feat: finish login handshake
* feat: clear storage on logout
* fix: logout page style
* feat: added tooltip to encourage user to sign in
* feat: added back button to login page
* fix: upload logic for profile picture
* feat: account for profile name in sidebar branding
* chore: set file upload url from backend request
* chore: remove default placeholder from profile component
* chore: sync with main
* Revert "chore: sync with main"
This reverts commit 77e06b894ce30235d1bfa31c8e2699b34df423a5.
* Reapply "chore: sync with main"
This reverts commit dd921d97cc9794d1872e13689c881f68e4dfee47.
* chore: updated lock file
* fix: run codegen before build:ext
* fix: run codegen before build:agent
* fix: remove hardcoded localhost header in magic link
---------
Co-authored-by: Nikhil Sonti <nikhilsv92@gmail.com>
* fix: use source files for agent-sdk during development
Export src/index.ts directly in workspace mode so the server can import
without requiring a build step. publishConfig overrides exports to use
dist/ when publishing to npm.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: onboarding try it
* fix: summarize current page
* fix: ask browser os opens in agent mode
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: agent mode on or off
* fix: cleaner whitelist for chat mode
* feat: agent mode with tooltip
* feat: agent mode chat mode final UI
* feat: previous conversation history
* fix: re-enable the DELETE endpoint
* fix: make bun run start:server show logs
* fix: minor text change
* fix: keep 16k context window size
* fix: use message ref to get access to full restored messages (when create prev conversation history)
* fix: don't run watchdog in dev-mode
* Revert "fix: re-enable the DELETE endpoint"
This reverts commit 9cbbbab6768c7c412c8f65bd88643e2856fa5169.
---------
Co-authored-by: Nikhil Sonti <nikhilsv92@gmail.com>
* fix: add timeout and window based mutex to improve speed
* fix: move suspense boundary closer to corresponding pages
* fix: pre-resolve the client via singleton to speed up the clientPromise
* feat: apply theme background faster with plain script
* chore: update biome version
* feat: make rpc client persist promise with useMemo and remove loading text
* fix: replace dvh with vh
* fix: replace dvh with vh in create graph
* fix: import clean-up + unit test for transformCode
* feat: improve formatter
* feat: grep interactive tool
* fix: simple, detailed, full formatter options
* fix: viewport legend
* fix: add vscode launch.json for debugging
* fix: grep show before and after, also click before type/clear
* feat: move to bun plugin to intercept WASM
* feat: new build/server.ts with refactored
* fix: clean-up source map dirs before build
* fix: remove elide for build
* fix: clean-up source map ordering
Add a changelog page documenting BrowserOS releases from v0.30.0 to v0.36.2.
Each version includes date and summary of changes, with links to GitHub
releases for full history.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: v1 ui for the file selector
* feat: integrate with browseros.choosePath API
* feat: gate workspace folder for 0.36.0.4 as it requires the new browserOS.choosePath API
* fix: add default folder option
* fix: clean-up old code
* feat: create conversations storage hook
* feat: save conversation hook
* feat: created chat layout
* feat: created chat history button
* feat: setup chat history view links
* chore: updated placeholder
* fix: width of the chat history screen
* feat: provide navigation from history page back to conversation page
* fix: issue with restoring conversation id
* chore: do not update history when content doesn't change
* feat: mark active conversation id
* fix: syncing the conversation id ref
* feat: improve the logic for node width
* feat: use dagre to display loops
* chore: use animated dots for loops
* feat: create graph using cytoscape
* feat: use cytoscape html label
* feat: setup dynamic label height and width
* feat: set reasonable zoom levels
* feat: use theme colors for nodes
* feat: use mutation observer to change color schemes
* feat: implement dark mode with pure css
* chore: remove unused libraries
* fix: sanitize label with dompurify
* feat: add support for jtbd agent to accept max turns and experiment id as query params
* fix: add jtbd agent integration with workflow
* fix: change message threshold to 5
* fix: tempDir is executionDir and create per session execution dir
* fix: move create() in gemini-agent to top
* fix: log(debug) directories
* fix: chat routes bug
* feat: support userSessionDir in /chat request schema
* fix: clean-up un-used types
* fix: lint errors
- moved chatprovider selector to a shared component
- reimplement chat header as it was simple and we can have graph mode specific options there instead of reusing chat header from sidepanel
* feat: custom node component
* feat: create resizable panels for graph ui
* feat: setup hono rpc on agent
* feat: created getClient util
* feat: created rpc client provider
* chore: refactor agent sdk
* chore: created usechat hook
* chore: graph create update endpoint return ai sdk stream
* feat: graph chat component
* feat: integrate input field
* feat: make getActionForMessage optional
* feat: integrate chat messages ui
* feat: update graph canvas with latest message
* feat: support editing graph with new message
* feat: create chat test function
* fix: created chat test api integration
* chore: remove background window state
* chore: improve agent ui stream
* chore: print error
* feat: create workflow storage
* feat: created workflows screen on options page
* feat: added error handling to workflows chat
* chore: ignore graph code generation folder
* fix: provide a better header title name
* fix: buttons accessibility on graph canvas
* feat: improve test and save workflow button state
* chore: provide autofocus to the workflow header
* feat: setup save and edit options on the workflow
* feat: open the workflow in edit mode
* fix: use sentry to capture server exception
* feat: integrate run workflow using dialog box
* feat: display errors in the run dialog box
* fix: use rpc client to delete workflows
* feat: fix panel sizes on graph creation
* fix: provide suspense fallback boundary for the options page
* feat: auto fitview on graph updates
* fix: node colors in the graph
* chore: make minimap movable
* feat: provide styling to react flow controls
* fix: missing imports
* fix: pass personalization to workflow runs
* feat: provide back button in workflow page
* feat: added confirmation when leaving workflow page without saving
* feat: provide animation to nodes
* feat: autofit canvas to resizepanel size
* feat: added workflows to newtab page
* fix: typescript lint errors
* feat: enforce bun version
* fix: typecheck command
---------
Co-authored-by: shivammittal274 <mittal.shivam103@gmail.com>
* feat: v0.1 jtbd popup for users
* feat: v0.2 jtbd popup based on messages sent
* fix: clean up previous chat status and added comment
* chore: change threshold to 15
* fix: show popup only when every N messages
* fix: set survey taken only after clicking start on welcome page
* feat: v0.1 of voice transcription for JTBD survey
Add voice input capability to the JTBD Product Survey chat:
- useVoiceInput hook for audio recording and transcription
- VoiceInputButton component for mic/stop/loading states
- Waveform visualization during recording
- Integration with BrowserOS gateway transcription endpoint
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: make voice button orange like send button
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* chore: refactor jtbd agent
* chore: update text
* fix: clean up stop recording if stopped midway
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* fix: replace bun install with bun ci in workflow files
* chore: update react router
* chore: update posthog
* fix: vulnerable package versions
* Revert "fix: replace bun install with bun ci in workflow files"
This reverts commit 2924fe496fc340555506d305e57b81cb87d45dae.
* fix: add debug logging for start:dev
* feat: use eventsource-parser for schedule tasks
* fix: remove reasoning traces, minor UI updates for schedule task
* fix: bug with textdelta
* fix: controller-ext is built separately
* fix: remove un-used scripts in agent/
* fix: rename to assistant
* fix: add build scripts
* feat: new start:dev to start both
* fix: update gitignore
* feat: --new-ports support for dev:start
* feat: update start-all to support port and new data dir
* fix: add help instructions for start:dev
* chore: refactoring
* fix: return all response parts from tool execution
Previously, handleToolExecution only returned responseParts[0], causing
data loss when tools returned multiple parts. This fix:
- Changes ToolExecutionResult.part to ToolExecutionResult.parts (array)
- Returns all responseParts instead of just the first one
- Spreads all parts into toolResponseParts in processToolRequests
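The shape of that fix can be illustrated with a minimal sketch. The type and function names below are stand-ins for the real agent-server types; only the `part` → `parts` change and the spread behaviour come from the commit.

```typescript
// Hypothetical minimal shapes; the real types live in the agent server.
type Part = { text?: string; functionResponse?: Record<string, unknown> };
type ToolExecutionResult = { parts: Part[] }; // was: { part: Part }

// Spread every part from every tool result, instead of taking only parts[0],
// so multi-part tool responses are no longer silently truncated.
function collectToolResponseParts(results: ToolExecutionResult[]): Part[] {
  return results.flatMap((r) => r.parts);
}
```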
* feat: ota release
* chore: clean-up old binaries
* fix: ota cli sub-commands, path fixes
* chore: browseros server binary update
* fix: add sparkle sign_update path as ENV
* fix: CLOUDFLARE_API_TOKEN to env
* fix: use same upload r2 module
* feat: upload appcast is separate
* feat: write sparkle sign in python
* fix: handle appcast update
* fix: add missing sparkle.py file
* fix: remove redudant cli options in ota
* chore: 0.0.37 macos signed release
* chore: linux browseros server ota
* fix: copy binaries to temp file and then sign
This workflow runs a daily security audit on the codebase, checking for vulnerabilities and sending the results to Slack. It includes steps for checking out the code, setting up Bun, installing dependencies, running the audit, parsing results, and notifying via Slack.
* feat: support browserOS server version in capabilities
* feat: add personalisation support flag
* fix: gate personalisation based on server support
* fix: gitignore minor
* fix: clean-up passing logger, bad pattern it's singleton
* feat: refactor main.ts (#148)
* fix: logger in main
* feat: refactor chat route and split into service (#149)
* fix: logger in chatserver
* feat: scheduled tasks base ui
* chore: fix biome version
* fix: type issues
* chore: remove use callback
* chore: refactor scheduleStorage types
* feat: create storage hooks for job & job runs
* feat: integrate listing with store
* feat: schedule tasks dialog integration
* feat: integrate view and runs
* feat: sync alarm state
* fix: check for enabled jobs in alarm state
* feat: createAlarmFromJob utility
* feat: updated edit hooks to update alarms
* feat: getChatServerResponse util
* feat: run jobs in schedule
* feat: update job run stat with storage
* feat: discard old runs over 15
* feat: provide graph mode entry
* feat: footer link with scheduler option
* feat: use a nicer loader for task runs
* feat: schedule results component
* feat: scheduler results in new tab page
* feat: nicer date formatting with dayjs
* feat: use run-result-dialog for displaying run results in new tab
* chore: delete mocked storage methods
* chore: remove unused code
* chore: remove all job runs when a job is deleted
* feat: use shadcn elements for schedule results component
* feat: render results in markdown view
* chore: added important update on logic sharing
* chore: remove loading state in scheduledtaskslist
* feat: run the background job in a unfocused window
* feat: provide mcp options to the background scheduled tasks
* chore: clean up stale jobs on chrome restart or update
* fix: background window not cleaned up on error
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* chore: fix type issues
---------
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* feat: agent-sdk outline
* feat: unit tests for agent-sdk
* feat: implement /sdk routes
* feat: integration test for agent-sdk with server
* feat: ENV to disable headless mode for testing
* feat: act() integration test working
* chore: refactor package/shared to have constants/ and /types separately
* feat: verify() and extract() sdk APIs
* feat: extract() use remote endpoint for extraction
* feat: verify() implemented - lazy parsing to avoid strong schema checks
* fix: remove generateStructuredOutput as not all models support it
* fix: clean-up LLM types and use zod schema
* fix: typecheck vitest error
* fix: remove directly calling GeminiAgent in sdk act()
* fix: lefthook for refactor warning
* fix: refactor routes/sdk to move business logic out
* feat: new extension installer + bundle support
* feat: support bundle extension download in cli
* chore: update release yaml to include new bundle_extensions module
* chore: fix monorepo setup
1) use single .env.development file at the root
2) update package.json to contain commands to start server and agent
3) rename "Assistant" package name to "agent"
4) rename HTTP_MCP_PORT to SERVER_PORT
* chore: update README
* chore: update .env.example
* ci: update dependabot to focus on security
Added open-pull-requests-limit, enabled beta ecosystems (for bun support), and allow only security updates
* chore: fix whitespaces
* ci: update dependency groups to only apply to security-updates
* feat: use pino logger, use logger interface across ext and server
* fix: no need prefixes in logger as we parse stack trace
* chore: update claude.md
* fix: clean-up old docs
* feat: refactored test utils
* fix: clean-up dev scripts and move to scripts/dev
* fix: clean-up script
* fix: refactor tests into properly controller tests and cdp tests
* feat: import all the missing tests before refactor
* fix: biome errors for tests
* fix: a few type errors and add exceptions
* fix: few more type errors
* fix: remove agent port from tests
* fix: exclude tests from tsconfig, bun run tests natively
* fix: mcpServer test now waits for extension connected
- Delete apps/server/src/mcp/server.ts and index.ts (replaced by http/routes/mcp.ts)
- Delete apps/server/src/agent/http/HttpServer.ts, types.ts, index.ts (replaced by http/)
- Move ChatRequestSchema and related types to http/types.ts
- Update imports in GeminiAgent.ts, agent/types.ts, agent/index.ts
- Remove deprecated exports from agent/index.ts
- Remove commented out startMcpServer and startAgentServer functions from main.ts
- Add routes/chat.ts with POST /chat and DELETE /chat/:conversationId
- SSE streaming with abort detection via honoStream.onAbort()
- Rate limiting for BrowserOS provider
- Session management via SessionManager
- Reuses existing GeminiAgent execution logic
* feat: browseros-server OTA updater
* chore: bump PATCH and OFFSET
* fix: updates to browseros-server ota updater -- status check, rollback support
* feat: move all browseros cli to switches
* chore: clean-up old agent v1 from installation
- Add routes/mcp.ts using StreamableHTTPTransport from @hono/mcp
- Per-request transport to prevent JSON-RPC request ID collisions
- Reuse tool registration logic from existing MCP server
- Security check with isLocalhostRequest() using Bun server.requestIP()
- Supports enableJsonResponse for JSON responses (not SSE)
- Add routes/provider.ts with Zod validation for provider testing
- Add routes/klavis.ts with all Klavis OAuth endpoints
- Update server.ts to compose new routes
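The localhost security check can be sketched as below. With Bun, the peer address comes from `server.requestIP(request)?.address`, as the commit notes; the helper name matches the commit, but the exact loopback set and signature are assumptions.

```typescript
// Loopback peers only: IPv4, IPv6, and IPv4-mapped-IPv6 forms.
const LOOPBACK = new Set(["127.0.0.1", "::1", "::ffff:127.0.0.1"]);

// Hypothetical guard: pass in server.requestIP(req)?.address under Bun.
function isLocalhostRequest(remoteAddress: string | null | undefined): boolean {
  return remoteAddress != null && LOOPBACK.has(remoteAddress);
}
```

Checking the socket peer address rather than a header matters here: headers like `Host` or `X-Forwarded-For` are attacker-controlled, while the transport-level address is not.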
* feat: refactor packages into single project
* feat: created apps directory
* chore: removed duplicate packages
* fix: delete package-lock.json
since the project uses bun
* fix: enable sparkle build flag
* feat: cli new apply changed command for dev cli
* fix: sparkle patch fix
* fix: dev cli changed minor fix
* fix: dev cli - for download add --output support
* feat: mcp support
* feat: third party mcp support
* feat: mcp support extended to all oauth urls and user integrations
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* fix: windows guids
* fix: generate new windows icons
* fix: bump patch
* feat: new icon generation script
* fix: new generated icons
* fix: new generated icons
* fix: new generated icons
* feat: fetch daily rate limit from the gateway
* chore: survey link for usage limit
* fix: remove initial query from rate limiter table to keep it simple (as it is not required)
* fix: handle rename during extract properly with deleting old patch
* patch: refactor browseros patches to be in chrome/browser/browseros
* patch: rename browseros_actions_config
* fix: features.yaml update to include new browseros folder
* patch: revalidate ports on restart
* patch: disable cdp notifications
* chore: new browseros-server binaries
Fixes "unexpected tool_use_id found in tool_result blocks" API errors that
occur after conversation compression removes one half of a tool_use/tool_result pair.
Root cause: The existing filter logic checked if tool_use IDs had matching
tool_results (and vice versa), but when filtering orphans, the IDs were not
removed from the tracking sets. This caused corresponding counterparts in
later Contents to pass through the filter, creating mismatched pairs.
Changes:
- Add cascading deletion: when filtering an orphan tool_result, also delete
its ID from allToolResultIds so later tool_uses with that ID are filtered
- Add cascading deletion: when filtering an orphan tool_use, also delete
its ID from allToolCallIds so later tool_results with that ID are filtered
- Add mergeConsecutiveToolMessages() to combine split tool messages into a
single message, satisfying the API requirement that all tool_results must
immediately follow their tool_use in one message
- Add comprehensive test coverage for orphan filtering scenarios
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
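The cascading deletion described above can be sketched as follows. The names are illustrative, not the real GeminiClient internals: the point is that dropping one half of a broken tool_use/tool_result pair also removes its ID from the tracking set, so the counterpart later in the history is dropped too instead of leaking through as a mismatched pair.

```typescript
type Part =
  | { kind: "text"; text: string }
  | { kind: "tool_use"; id: string }
  | { kind: "tool_result"; id: string };

function filterOrphanToolParts(parts: Part[]): Part[] {
  // Collect every tool_use / tool_result ID still present after compression.
  const useIds = new Set<string>();
  const resultIds = new Set<string>();
  for (const p of parts) {
    if (p.kind === "tool_use") useIds.add(p.id);
    else if (p.kind === "tool_result") resultIds.add(p.id);
  }

  const seenUses = new Set<string>();
  const out: Part[] = [];
  for (const p of parts) {
    if (p.kind === "tool_use") {
      // Orphan use (no result anywhere): drop it AND cascade the deletion.
      if (!resultIds.has(p.id)) { useIds.delete(p.id); continue; }
      seenUses.add(p.id);
    } else if (p.kind === "tool_result") {
      // A result is valid only if its use survived AND appeared earlier.
      if (!useIds.has(p.id) || !seenUses.has(p.id)) { resultIds.delete(p.id); continue; }
    }
    out.push(p);
  }
  return out;
}
```

Without the `useIds.delete` / `resultIds.delete` lines, the surviving half of a broken pair would still find its ID in the tracking set and pass the filter, which is exactly the bug the commit fixes.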
* chore: update appcast.xml
* chore: appcast.xml updates
* feat: fixes to browseros-server to better handle restarts and health checks
* feat: add chrome.BrowserOS.getBrowserosVersionNumber() API
* chore: new browseros-server binaries
* chore: bump PATCH and OFFSET
* fix: minor
* feat: support reading config from TOML file
* fix: wip toml config
* refactor: one config, merged from args, config and config.toml example
* fix: update package.json to have bun start:with_toml
* docs: add quick toml explanation
* refactor: clean-up /init endpoint, we'll use TOML to pass config
* fix: make reconnect interval every 5s
* fix: make host 127.0.0.1 since localhost can resolve to IPv6 on some systems
* feat: make controller-ext check the port each time it reconnects
Switch from x64-modern (requires AVX2) to x64-baseline (SSE4.2 only)
for Linux and Windows builds. This fixes the "Illegal instruction"
crash on pre-Haswell Intel CPUs (Ivy Bridge, Sandy Bridge) and
pre-Excavator AMD CPUs that lack AVX2 support.
Fixes: MCP server crashes with SIGILL on Ivy Bridge CPUs
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Request only accepts contextWindowSize
- GeminiAgent computes compressionThreshold internally using fixed 0.75 ratio
- Follows YAGNI principle - no need to expose compressionRatio to UI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
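A minimal sketch of that internal computation, assuming a simple multiply-and-floor (the helper name and rounding are assumptions; the fixed 0.75 ratio comes from the commit):

```typescript
// The ratio is fixed internally and intentionally not exposed to the UI.
const COMPRESSION_RATIO = 0.75;

// Hypothetical helper: the request supplies only contextWindowSize;
// the agent derives the point at which history compression triggers.
function compressionThreshold(contextWindowSize: number): number {
  return Math.floor(contextWindowSize * COMPRESSION_RATIO);
}
```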
* vercel ai adapter for gemini cli
* tests fixed based upon v5
* remove logic for normalisation for openai (not needed)
* tests fixed based upon v5
* agent core logic
* fix: logger to truncate only in console, write full log to file
* fix: logs dir and proper env parsing
* feat: add focus event to switch the primary controller
* adding resources-dir arg and using that for finding codex binary
* write logs to resource-dir
* handle default executable path for codex
* fix: code-sdk-ts build to have bun
* update to use browseros config
* adding skipGitRepocheck and other configs
* new codex binary integration
* refactor agentConfig
* default eventGaptimeout is 120s
* minor updates
* update env
* fix: gateway gets the config and passes to AgentConfig
Changed mcp.servers to mcp_servers to match Codex CLI config format.
The Codex CLI expects MCP server configuration to use mcp_servers
(underscore) not mcp.servers (dot) in config.toml. This fixes
programmatic MCP configuration via -c CLI flags.
Changes:
- Use mcp_servers instead of mcp.servers
- Clear global config first with -c mcp_servers={}
- Set individual properties with dotted notation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
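The `-c` override sequence described above can be sketched as a small builder. The `mcp_servers` key shape (underscore form, cleared first, then dotted per-property overrides) comes from the commit; the helper name and server record shape are assumptions.

```typescript
// Build Codex CLI `-c` overrides for MCP servers (hypothetical helper).
function codexMcpArgs(servers: Record<string, { url: string }>): string[] {
  const args = ["-c", "mcp_servers={}"]; // clear any global MCP config first
  for (const [name, cfg] of Object.entries(servers)) {
    // Individual properties are then set with dotted notation.
    args.push("-c", `mcp_servers.${name}.url="${cfg.url}"`);
  }
  return args;
}
```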
* rename PORT to AGENT_PORT
* rename WebsocketManger to ControllerBridge
* update the log info
* fix: rename wsManager to controllerBridge
* update Logger to use common/Logger
* fix: logger, unify and standardize the naming
* remove standalone agent
* rename to controller-based, cdp-based, cleaner imports in main and claude-sdk
* refactor: main.ts
* refactor: .env
* update controller-ext manifest
* add extension-controller build commands in main package.json
* remove controller-ext environments and move to constants
* update package.json build commands
* fix: controller-ext webpack to combine files for production
* webpack: enable console logs for controller-ext for now in prod
* update README
* adding agent-port arg and updating test
* fix: commander --help issue
* fix: mcp server package mismatch
* add browseros starting for test
* integrate test added
* fix tests to use BrowserOS
* monorepo: core
* monorepo: tools and server
* mono: repo refactor
* moved tests, removed old files
* update server tests
* agent server location and TBD
* fix formatting
* add new workflows
* rename core to common, mcp-server, to mcp, agent-server to agent
* remove nodejs tests
* test: add simple GitHub Actions workflow for running tests on PR
* test workflow
* feat: add test coverage reporting to GitHub Actions workflow
- Run tests with --coverage flag to generate coverage reports
- Display coverage summary in PR comments
- Upload coverage artifacts for analysis
- Show coverage in GitHub Actions summary
* simple test workflow
description: Write BrowserOS feature documentation. Use when the user wants to create or update documentation for a BrowserOS feature. This skill explores the codebase to understand features and writes concise Mintlify MDX docs.
github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'))
steps:
- name: 'CLA Assistant'
if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
**CLA Assistant Lite bot** Thank you for your submission! We require contributors to sign our [Contributor License Agreement](https://github.com/browseros-ai/BrowserOS/blob/main/CLA.md) before we can accept your contribution.
Thank you for your contribution! Before we can merge this PR, we need you to sign our [Contributor License Agreement](https://github.com/${{ github.repository }}/blob/main/CLA.md).
By signing the CLA, you confirm that:
- You have read and agree to the AGPL-3.0 license terms
- Your contribution is your original work
- You grant us the rights to use your contribution under the AGPL-3.0 license
**To sign the CLA**, please comment on this PR with:
`I have read the CLA Document and I hereby sign the CLA`
```
I have read the CLA Document and I hereby sign the CLA
```
You only need to sign once. After signing, this check will pass automatically.
---
<details>
<summary>Troubleshooting</summary>
- **Already signed but still failing?** Comment `recheck` to trigger a re-verification.
- **Signed with a different email?** Make sure your commit email matches your GitHub account email, or add your commit email to your GitHub account.
</details>
custom-pr-sign-comment: 'I have read the CLA Document and I hereby sign the CLA'
custom-allsigned-prcomment: |
**CLA Assistant Lite bot** ✅ All contributors have signed the CLA. Thank you for helping make BrowserOS better!
# Lock PR after merge to prevent signature tampering
echo "This list is automatically updated every hour based on 👍 reactions."
echo ""
echo "## How to Vote"
echo ""
echo "**👍 Upvote features you want** - Items with more votes get prioritized."
echo "| Action | What it does |"
echo "|--------|--------------|"
echo "| 👍 on an issue | Adds your vote — we prioritize by vote count |"
echo "| 💬 Comment | Your feedback shapes what we build |"
echo ""
echo "**Don't see what you need?** Create a new [feature request](https://github.com/browseros-ai/BrowserOS/issues/new) or [bug report](https://github.com/browseros-ai/BrowserOS/issues/new)."
echo ""
echo "Thank you for helping us prioritize!"
echo ""
echo "**Last updated:** $current_time"
echo ""
echo "## Top Issues"
echo "---"
echo ""
echo "## 📣 RFCs — We Need Your Input"
echo ""
echo "> **These proposals are in review.** Your vote and comments directly influence what gets built."
echo ">"
echo "> 👍 = Yes, build this | 💬 = Share your use case or feedback"
echo ""
if [ -n "$sorted_rfcs" ]; then
echo "$sorted_rfcs"
else
echo "*No active RFCs right now. Check back soon!*"
<img src="https://img.shields.io/badge/Download-macOS-black?style=flat&logo=apple&logoColor=white" alt="Download for macOS (beta)"/>
<br/>
</div>
BrowserOS is an open-source Chromium fork that runs AI agents natively. **The privacy-first alternative to ChatGPT Atlas, Perplexity Comet, and Dia.**
Use your own API keys or run local models with Ollama. Your data never leaves your machine.
💡 Join our [Discord](https://discord.gg/YKwjt5vuKr) or [Slack](https://dub.sh/browserOS-slack) and help us build! Have feature requests? [Suggest here](https://github.com/browseros-ai/BrowserOS/issues/99).
2. **Import your Chrome data** (optional) — bookmarks, passwords, extensions all carry over
3. **Connect your AI provider** — Claude, OpenAI, Gemini, ChatGPT Pro via OAuth, or local models via Ollama/LM Studio
4. Start automating!
## Features
| Feature | Description | Docs |
|---------|-------------|------|
| **AI Agent** | 53+ browser automation tools — navigate, click, type, extract data, all with natural language | [Guide](https://docs.browseros.com/getting-started) |
| **MCP Server** | Control the browser from Claude Code, Gemini CLI, or any MCP client | [Setup](https://docs.browseros.com/features/use-with-claude-code) |
| **Workflows** | Build repeatable browser automations with a visual graph builder | [Docs](https://docs.browseros.com/features/workflows) |
| **Cowork** | Combine browser automation with local file operations — research the web, save reports to your folder | [Docs](https://docs.browseros.com/features/cowork) |
| **Scheduled Tasks** | Run agents on autopilot — daily, hourly, or every few minutes | [Docs](https://docs.browseros.com/features/scheduled-tasks) |
| **Memory**| Persistent memory across conversations — your assistant remembers context over time | [Docs](https://docs.browseros.com/features/memory) |
| **SOUL.md** | Define your AI's personality and instructions in a single markdown file | [Docs](https://docs.browseros.com/features/soul-md) |
| **LLM Hub** | Compare Claude, ChatGPT, and Gemini responses side-by-side on any page | [Docs](https://docs.browseros.com/features/llm-chat-hub) |
| **40+ App Integrations** | Gmail, Slack, GitHub, Linear, Notion, Figma, Salesforce, and more via MCP | [Docs](https://docs.browseros.com/features/connect-apps) |
| **Vertical Tabs** | Side-panel tab management — stay organized even with 100+ tabs open | [Docs](https://docs.browseros.com/features/vertical-tabs) |
| **Ad Blocking** | uBlock Origin + Manifest V2 support — [10x more protection](https://docs.browseros.com/features/ad-blocking) than Chrome | [Docs](https://docs.browseros.com/features/ad-blocking) |
| **Cloud Sync** | Sync browser config and agent history across devices | [Docs](https://docs.browseros.com/features/sync) |
| **Skills** | Custom instruction sets that shape how your AI assistant behaves | [Docs](https://docs.browseros.com/features/skills) |
| **Smart Nudges** | Contextual suggestions to connect apps and use features at the right moment | [Docs](https://docs.browseros.com/features/smart-nudges) |
## Demos
### 🤖 BrowserOS agent in action
[](https://www.youtube.com/watch?v=SoSFev5R5dI)
<br/><br/>
### 🎇 Install [BrowserOS as MCP](https://docs.browseros.com/features/use-with-claude-code) and control it from `claude-code`
For the first time since Netscape pioneered the web in 1994, AI gives us the chance to completely reimagine the browser. We've seen tools like Cursor deliver 10x productivity gains for developers—yet everyday browsing remains frustratingly archaic.
You're likely juggling 70+ tabs, battling your browser instead of having it assist you. Routine tasks, like ordering something from Amazon or filling out a form, should be handled seamlessly by AI agents.
At BrowserOS, we're convinced that AI should empower you by automating tasks locally and securely—keeping your data private. We are building the best browser for this future!
Use `browseros-cli` to launch and control BrowserOS from the terminal or from AI coding agents like Claude Code.
**Agent development** (TypeScript/Go) — see the [agent monorepo README](packages/browseros-agent/README.md) for setup instructions.
**Browser development** (C++/Python) — requires ~100GB disk space. See [`packages/browseros`](packages/browseros/) for build instructions.
## Credits
- [ungoogled-chromium](https://github.com/ungoogled-software/ungoogled-chromium) — BrowserOS uses some patches for enhanced privacy. Thanks to everyone behind this project!
- [The Chromium Project](https://www.chromium.org/) — at the core of BrowserOS, making it possible to exist in the first place.
## License
BrowserOS is open source under the [AGPL-3.0 license](LICENSE).
## Stargazers
Thank you to all our supporters!
[](https://www.star-history.com/#browseros-ai/BrowserOS&Date)
- Fixed macOS bug which caused the app to crash on startup for some users. This unfortunately also makes a breaking change, requiring re-installation of extensions and logins.
Click the settings icon in BrowserOS, then click **USE** on the Claude card. Paste your API key and set your model. For Claude Opus 4, use model ID `claude-opus-4-20250514`, set **Context Window Size** to `128000`, and check **Supports Images**. Click **Save**.
Copy the key that appears. Keep it safe - you won't be able to see it again.
## Configure BrowserOS
Click the settings icon in BrowserOS, then click **USE** on the OpenAI card. Paste your API key and configure the settings based on your chosen model. For GPT-4.1, set **Context Window Size** to `128000` and check **Supports Images**. Click **Save**.
description: "Configure BrowserOS to use Open Router for access to multiple AI models"
---
OpenRouter gives you access to 500+ models through one API. Try different models without managing multiple API keys.
## Get your API key
Visit [openrouter.ai](https://openrouter.ai), sign up, and create an API key. OpenRouter shows your key right on the homepage under "Get your API key".
In BrowserOS, paste the model ID into the **Model ID** field, using the format shown under "Custom" (e.g., `openai/gpt-4.1-mini`). Paste your OpenRouter API key, set **Context Window Size** based on the model, and check **Supports Images** if the model supports it. Click **Save**.
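If the model doesn't respond in BrowserOS, you can sanity-check the key outside the browser first. A minimal sketch against OpenRouter's OpenAI-compatible endpoint (assumes your key is exported as `OPENROUTER_API_KEY`; the model ID is just an example):

```bash
# Expect a JSON chat completion back; a 401 means the key is invalid
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4.1-mini", "messages": [{"role": "user", "content": "ping"}]}'
```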
description: "Connect BrowserOS as an MCP server to Claude Code or Claude Desktop"
---
## How to use `BrowserOS-mcp` on Claude Code
1. Download the binary from [BrowserOS.com](https://BrowserOS.com)
2. Open BrowserOS and from a new tab click the settings icon to open the settings page.
3. From the settings page, navigate to **MCP** in the sidebar and copy the MCP URL
4. In your terminal, run the command below (replace `<mcp_url>` with the MCP URL you copied above):
```bash
claude mcp add --transport http browseros <mcp_url>
# example: claude mcp add --transport http browseros http://127.0.0.1:9226/mcp
```
5. Now start Claude Code: `claude --dangerously-skip-permissions` (so Claude doesn't ask for confirmation each time)
6. Now, in Claude Code, type `Open amazon.com on browseros` to open the tab in BrowserOS.
Here's a [loom video](https://www.loom.com/share/9a41b74f265649a2993c329b05f93b54?sid=009690dd-e1a0-47b9-9b41-abe544e90c78) capturing the above steps! 🥳
### gemini-cli
The steps are roughly the same as above, but to add the MCP server, run the following command:
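Assuming a recent gemini-cli (the exact flag names may differ between versions), the equivalent of the Claude Code command above is:

```bash
gemini mcp add --transport http browseros <mcp_url>
# example: gemini mcp add --transport http browseros http://127.0.0.1:9226/mcp
```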
### Claude Desktop
1. Download the binary from [BrowserOS.com](https://BrowserOS.com)
2. Open BrowserOS and from a new tab click the settings icon to open the settings page. From the settings page, navigate to **MCP** in the sidebar and note the port number (usually `9225`).
3. Open your Claude Desktop config file: `/Users/<username>/Library/Application Support/Claude/claude_desktop_config.json`
4. Add BrowserOS to your config (replace the port with the value shown in MCP settings page):
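Claude Desktop launches MCP servers as local processes, so an HTTP server like BrowserOS is usually bridged through a helper such as `mcp-remote`. A sketch of the config entry (the helper package and port are assumptions; use the port shown on your MCP settings page):

```json
{
  "mcpServers": {
    "browseros": {
      "command": "npx",
      "args": ["mcp-remote", "http://127.0.0.1:9225/mcp"]
    }
  }
}
```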
All notable changes to BrowserOS are documented here. For the full release history with download links, see our [GitHub Releases](https://github.com/browseros-ai/BrowserOS/releases).
---
## v0.42.0
<sub>March 9, 2026</sub>
- **SOUL.md** — Your assistant now has a soul. Tell it how you like to communicate, set boundaries, shape its personality — and it adapts on its own over time. The more you use it, the more it feels like *your* assistant. [Read more →](/features/soul)
- **Vertical tabs** — One of the most requested features is here. BrowserOS now ships with vertical tabs by default. More screen space, better tab management, and a cleaner layout out of the box. Prefer horizontal? You can switch back anytime in settings. [Read more →](/features/vertical-tabs)
- **Long-term memory** — Your assistant finally remembers you. Your name, your projects, what you talked about last week — it carries context across every conversation so you never have to repeat yourself. All stored locally on your machine. [Read more →](/features/memory)
- **Chromium 146** — Updated to the latest Chromium release with all recent upstream fixes and security patches
<Frame>
<img src="/images/changelog/0.42.0/soul-memory.png" alt="BrowserOS v0.42.0 SOUL.md feature for agent personalization" />
</Frame>
<Frame>
<img src="/images/changelog/0.42.0/vertical-tabs.png" alt="BrowserOS v0.42.0 vertical tabs toggle in settings" />
</Frame>
- **Tools — major upgrade** — Agent tools and MCP server both got a big overhaul. ~20 new tools (54 total) including file upload, save as PDF, background windows, and more. Integration with third-party coding agents (Claude Code, Codex, etc.) is much smoother now
- **Chromium 145** — Upgraded to the latest Chromium base with all recent upstream fixes and security patches
- **Login session import improvements** — Importing login sessions is now more reliable
- **Stability & reliability** — General improvements across the board
---
## v0.39.0
<sub>February 3, 2026</sub>
- **Sync** — Save your browser configuration, agent history, and scheduled tasks across machines. Your setup now follows you wherever you go
- **App Connector redesign** — Connecting MCP apps is now easier. The new App Connector UI makes it simpler to discover, install, and manage your connected apps
- **MCP port stability** — Additional fixes for users still experiencing port issues. More reliable connections across restarts
- **Keyboard shortcuts** — Updated shortcuts to avoid conflicts with European keyboards
- **MCP port fix on Windows & Linux** — Fixed an issue where the MCP port changed between restarts; the port now stays consistent and connections are more reliable
- **Settings fix** — Fixed `chrome.browser.settings` not working correctly. Settings should now persist and apply as expected
- **Improved agent** — Made the agent more reliable and performant. General stability fixes across the agent loop
---
## v0.37.0
<sub>January 21, 2026</sub>
- **Workflows** — Build reliable, repeatable browser automations with a visual graph builder. Chat with the workflow agent to define step-by-step automation—ideal for complex tasks where ad-hoc prompts aren't enough. [Read more →](/features/workflows)
<img src="/features/workflow/sample-workflow.png" alt="Workflows visual graph builder showing a data entry automation" />
- **Cowork** — Describe an outcome, step away, and come back to finished work. Combine browser automation with local file operations—research on the web and save reports, scrape data and export to files, all in one task. [Read more →](/features/cowork)
<img src="/features/cowork/cowork-research-example.png" alt="Agent researching Hacker News and generating an HTML report" />
---
## v0.36.3
<sub>January 15, 2026</sub>
- **Agent history** — Agent conversations are now saved automatically. View and resume them anytime from the Assistant panel
<img src="/images/changelog/0.36.3/agent-history-highlight.png" alt="Agent history button in the Assistant panel" />
<img src="/images/changelog/0.36.3/agent-history-example.png" alt="Agent history showing past conversations" />
---
## v0.36.2
<sub>January 10, 2026</sub>
Bug fix release focused on MCP stability.
- **MCP server disconnect fix** — Fixed port handling issue causing MCP connections to drop unexpectedly
---
## v0.36.0
<sub>January 8, 2026</sub>
- **Agent personalization** — Add your own prompts to personalize the agent. Tweak its behavior, adjust how it responds, set your preferred formatting, and more
- **Toolbar customization** — Hide the Hub chat and labels from the settings page to declutter your toolbar
- **MCP server port stability** — The port now stays consistent through browser restarts, so you don't have to keep updating your MCP clients
- **Fixed agent install/update issues** — The agent now handles installs and updates more reliably
---
## v0.35.0
<sub>December 25, 2025</sub>
- **Agent stability fixes** — Fixed bugs to make the agent loop much more reliable
- **Gemini 3 support** — Gemini 3 now supported through OpenRouter and Google adapters
- **Better error surfacing** — Error messages are now clearer
---
## v0.34.0
<sub>December 20, 2025</sub>
- **Third-party MCP server support** — Connect external MCP servers like Google Calendar, Notion, Google Docs, Gmail, and more. You can also connect your own custom MCP servers
- **Gemini 3 support** — Gemini 3 Pro and Flash models now work with BrowserOS
- **Windows icon fix** — The Windows icon now displays BrowserOS logo correctly
- **Agent & UI improvements** — Various agent loop fixes and UI polish
---
## v0.33.0
<sub>December 18, 2025</sub>
- **OpenAI-compatible provider support** — Connect any OpenAI-compatible API endpoint
- **Multi-window & multi-profile agent support** — Agent now works across multiple windows and browser profiles
- **MCP server reliability** — Fixed connection drops and improved stability
- **Agent reliability improvements** — General stability fixes
---
## v0.32.0
<sub>December 12, 2025</sub>
A complete revamp of BrowserOS.
**New features:**
- **New Agent** — Completely rebuilt agent: faster, smarter, and more reliable
- **Agent Per Tab** — Run multiple agents in different tabs simultaneously
description: "A developer-focused comparison of BrowserOS MCP and Chrome DevTools MCP for browser automation"
---
Both BrowserOS MCP and [Chrome DevTools MCP](https://github.com/ChromeDevTools/chrome-devtools-mcp) give AI agents control over a browser via the Model Context Protocol. But they're built for different scopes. Chrome DevTools MCP focuses on debugging and inspection, while BrowserOS MCP is a complete browser automation and app integration platform.
This page breaks down the differences for developers evaluating which to use with Claude Code, Gemini CLI, Cursor, or any MCP client.
BrowserOS MCP gives you a broader automation surface: browser control, content extraction, file operations, and 40+ app integrations through a single connection. Debugging and performance tools are coming soon to BrowserOS MCP, which will close the remaining gap with Chrome DevTools MCP. For most AI agent workflows, BrowserOS MCP already covers more ground out of the box.
description: "How BrowserOS Cowork compares to Claude Cowork for getting real work done with AI"
---
Both BrowserOS Cowork and [Claude Cowork](https://claude.com/product/cowork) let an AI agent work with your local files autonomously. You describe a task, step away, and come back to completed work. They share a similar file toolkit under the hood. The key difference is what else each product can do. BrowserOS Cowork runs inside a real browser with full web access and 40+ app integrations. Claude Cowork runs inside an isolated VM with professional document generation.
This page compares both products so you can decide which fits your workflow.
---
## At a Glance
| | **BrowserOS Cowork** | **Claude Cowork** |
|---|---|---|
| **Runs in** | Your real browser | Claude Desktop app (VM) |
| **File tools** | Read, write, edit, search, organize | Read, write, edit, search, organize |
| **Pricing** | Free (bring your own AI key) | Requires paid Claude subscription |
| **Platform** | Any OS with BrowserOS | macOS, Windows x64 |
---
## Feature Comparison
### File Operations
Both products provide a comparable set of file tools: you can read, write, edit, search, and organize files in either. This is table stakes for the category.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| Read and view files | Yes | Yes |
| Create and save new files | Yes | Yes |
| Edit specific parts of a file | Yes | Yes |
| Search inside files for text | Yes | Yes |
| Find files by name or pattern | Yes | Yes |
| List and browse folders | Yes | Yes |
| Run commands/scripts | Yes | Yes |
| Break work into parallel subtasks | Coming soon | Built-in sub-agents |
<Note>
The file tools are largely equivalent. The real differentiator is what else each product can do beyond file operations.
</Note>
### Working with the Web
This is the biggest difference. BrowserOS Cowork runs inside a real browser with your existing logins and sessions.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| Open and navigate websites | Yes | No |
| Click buttons, fill forms, type text | Yes | No |
| Take screenshots of web pages | Yes | No |
| Extract content from web pages | Yes | No |
| Save pages as PDF | Yes | No |
| Download files from the web | Yes | No |
| Access sites where you're logged in | Yes (your real browser session) | No |
| Manage tabs, windows, and bookmarks | Yes | No |
| Search your browsing history | Yes | No |
Claude Cowork has no browser access. If your task involves anything on the web, whether that's researching, filling out forms, grabbing content from a site, or checking on a web app, you need BrowserOS.
### Connected Apps
BrowserOS connects to 40+ services directly. Claude Cowork has a handful of connectors.
| Service | BrowserOS Cowork | Claude Cowork |
|---------|:---:|:---:|
| Gmail | Yes | Yes |
| Google Drive | Yes | Yes |
| Google Calendar | Yes | Limited |
| Slack | Yes | No |
| GitHub | Yes | No |
| Linear / Jira / Asana | Yes | No |
| Notion | Yes | No |
| Figma | Yes | No |
| Salesforce / HubSpot | Yes | No |
| Shopify / Stripe | Yes | No |
| 30+ more services | Yes | No |
### Document Generation
Claude Cowork has an edge when it comes to creating polished office documents.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| HTML and Markdown files | Yes | Yes |
| CSV and data files | Yes | Yes |
| Excel with working formulas | No | Yes |
| PowerPoint presentations | No | Yes |
| Formatted Word documents | No | Yes |
---
## How They Work
<Tabs>
<Tab title="BrowserOS Cowork">
BrowserOS Cowork runs inside the browser. The agent has access to your real browser session (cookies, logins, extensions) and a sandboxed folder on your computer.
- Works in your real browser with your existing logins
- File access sandboxed to the folder you select
- 40+ app integrations via OAuth
- Connect from any AI tool (Claude Code, Gemini CLI, Cursor, etc.)
- Uses whatever AI model you choose
</Tab>
<Tab title="Claude Cowork">
Claude Cowork runs in an isolated virtual machine on your desktop via the Claude Desktop app.
- Runs in a secure VM, isolated from your main system
- Comes pre-loaded with Python, Node.js, Ruby, and common tools
</Tab>
</Tabs>
---
## Where Claude Cowork Shines
- **Professional documents**: Create Excel spreadsheets with working formulas, PowerPoint presentations, and formatted Word documents
- **Parallel subtasks**: Automatically breaks complex work into smaller tasks that run at the same time
- **Stronger isolation**: Runs in a full virtual machine, giving you OS-level separation from your main system
- **Zero setup**: Works out of the box in the Claude Desktop app with pre-installed tools and languages
---
## Where BrowserOS Cowork Shines
- **Full browser access**: Navigate websites, fill forms, click buttons, take screenshots, and extract content from any page. Claude Cowork cannot touch the web.
- **Your real logins**: Because it runs in your actual browser, the agent can access sites where you're already logged in: dashboards, internal tools, social media, banking portals, anything.
- **40+ app integrations**: Gmail, Slack, GitHub, Calendar, Notion, Linear, Figma, Salesforce, and more. All accessible in the same session as your file work. Claude Cowork has about 4 connectors.
- **Pick your AI model**: Use Claude, GPT-5, Gemini, Kimi K2.5, or a local model. Claude Cowork only works with Claude.
- **Full internet access**: Your agent can visit any website. Claude Cowork's VM is restricted to a short list of allowed sites.
- **Free**: BrowserOS is free. Just bring your own AI API key. Claude Cowork requires a paid Claude subscription.
| | **BrowserOS Cowork** | **Claude Cowork** |
|---|---|---|
| Security model | Folder-level sandbox | VM isolation |
| Platform | Any OS | macOS, Windows x64 |
| Pricing | Free + API key | Paid subscription |
Both products handle file operations equally well. The choice comes down to what else you need. If your work touches the web, connected apps, or you want to choose your own AI model, BrowserOS Cowork gives you that. If you need polished office documents and prefer a fully isolated desktop experience, Claude Cowork is a good fit.
description: "How BrowserOS compares to OpenClaw for everyday AI assistance"
---
[OpenClaw](https://openclaw.ai/) is an open-source personal AI assistant that runs on your machine and connects through messaging apps like WhatsApp, Telegram, Slack, and Discord. It is a powerful tool for technical users who want a self-hosted, always-on AI agent.
BrowserOS takes a different approach. Instead of running a background server that you message through chat apps, BrowserOS puts the AI assistant directly inside your browser, where most of your work already happens. No terminal setup, no daemon management, no Node.js required.
This comparison is for users deciding which tool fits their needs.
## At a Glance
| | **BrowserOS** | **OpenClaw** |
|---|---|---|
| **What it is** | AI-powered browser with built-in assistant | Self-hosted AI agent you message through chat apps |
| **Setup** | Download and open | Install via npm, run onboarding wizard, configure daemon |
| **Technical skill needed** | None | Comfortable with terminal and Node.js |
| **Interface** | Built into your browser | WhatsApp, Telegram, Slack, Discord, iMessage, and 15+ more |
| **Personality** | SOUL.md (inspired by OpenClaw's original concept) | SOUL.md (originated the concept) |
| **LLM support** | 11+ providers including local models (Ollama, LM Studio) | Multiple providers with failover routing |
| **Runs on** | macOS, Windows, Linux | macOS, Windows, Linux (+ iOS/Android companion apps) |
| **Authentication** | OAuth or API key depending on the service | API keys, OAuth, pairing codes per channel |
| **Open source** | Yes (AGPL-3.0) | Yes (MIT) |
## Where BrowserOS Shines
### No technical setup required
OpenClaw requires Node.js 22+, npm installation, a terminal-based onboarding wizard, daemon configuration (launchd or systemd), and channel pairing for each messaging platform. If something goes wrong, you need `openclaw doctor` to diagnose issues.
BrowserOS is a browser. Download it, open it, and start talking to the assistant. There is no daemon to manage, no services to keep running, and no terminal needed.
### Browser automation built in
BrowserOS gives the assistant full control of your browser with 53 tools: clicking buttons, filling forms, navigating between pages, taking screenshots, managing tabs, organizing bookmarks, searching history, and more. The assistant sees what you see and can interact with any website you are logged into.
OpenClaw has browser automation through a dedicated Chrome instance with CDP, but it runs as a separate process rather than being integrated into the browser you are already using. With BrowserOS, the assistant works directly in your browsing session with all your cookies, logins, and open tabs.
### 40+ app integrations built in
BrowserOS connects to Gmail, Google Calendar, Slack, Notion, GitHub, Linear, Jira, Figma, Salesforce, Stripe, and 30+ more services out of the box. Most services connect through OAuth (one-click sign-in), while some require an API key. Either way, the assistant detects when an app is not connected and walks you through the setup right in the conversation.
OpenClaw uses a skills system where integrations are community-built plugins. Some popular services have skills available, but connecting a new service often means finding the right skill, installing it, and configuring credentials manually.
### Works where you already are
Most of your work happens in a browser. BrowserOS puts the assistant right there, so it can see the page you are on, interact with web apps, and pull data from your open tabs. There is no context-switching between a chat app and your browser.
OpenClaw's approach of messaging through WhatsApp or Telegram is clever for mobile use, but when you are at your computer working in a browser, having the assistant inside that browser is more natural and more capable.
## Where OpenClaw Shines
### Messaging app access
OpenClaw connects to 20+ messaging platforms including WhatsApp, Telegram, Signal, iMessage, Discord, Slack, Microsoft Teams, and more. You can message your assistant from your phone or any chat app without opening a specific application. This is ideal if you want AI help on the go through apps you already have open.
BrowserOS is a desktop browser. To use the assistant, you need to be in BrowserOS.
### Always-on background agent
OpenClaw runs as a daemon on your machine, processing tasks even when you are not actively chatting. It supports cron jobs, webhooks, and Gmail Pub/Sub for automated triggers. It can wake up, do something, and report back through your messaging app.
BrowserOS has [scheduled tasks](/features/scheduled-tasks) that run automations on a schedule, but the browser needs to be running. OpenClaw's daemon approach is more suited for server-like always-on operation.
### Mobile companion apps
OpenClaw offers iOS and Android companion apps with camera access, voice input, screen recording, and device-level actions (notifications, contacts, calendar, SMS). This extends the assistant to your phone in a way that BrowserOS cannot currently match.
### Agent-to-agent communication
OpenClaw supports multi-session agent coordination where agents can discover each other, read transcripts, and send messages between sessions. This is useful for complex workflows where multiple specialized agents collaborate.
### Self-modifying skills
OpenClaw agents can write and install their own skills during a conversation. If the assistant does not have a capability, it can create one on the fly. This makes it extremely flexible for power users who want the agent to extend itself.
## Feature Comparison
### App Integrations
| Service | BrowserOS | OpenClaw |
|---------|-----------|----------|
| Gmail | Built-in (OAuth) | Skill + API setup |
| Google Calendar | Built-in (OAuth) | Skill + API setup |
<CardGroup cols={2}>
<Card title="Choose BrowserOS if you..." icon="browser">
- Want an AI assistant without any technical setup
- Do most of your work in a browser
- Need browser automation (filling forms, clicking buttons, extracting data)
- Want 40+ app integrations that connect with one click
- Prefer a visual interface over terminal commands
</Card>
<Card title="Choose OpenClaw if you..." icon="terminal">
- Want to message your AI from WhatsApp, Telegram, or Signal
- Need an always-on agent that runs 24/7 as a background service
- Are comfortable with Node.js and terminal-based setup
- Want mobile companion apps for on-the-go access
- Need agents that can write their own extensions
</Card>
</CardGroup>
## Using Both Together
BrowserOS and OpenClaw are not mutually exclusive. Some users run OpenClaw as their always-on mobile assistant (accessible through WhatsApp or Telegram) while using BrowserOS as their desktop browser for work that involves web apps, browser automation, and visual tasks. The two tools complement each other rather than compete directly.
description: "Let's build the best open-source browser!"
icon: "code-branch"
description: "Guide to contributing to BrowserOS"
---
Hey there! Thanks for your interest in BrowserOS. Whether you're fixing bugs, adding features, improving docs, or just poking around the code, we're glad you're here.
BrowserOS is a monorepo with two main parts:
- **Agent** — The AI features, UI, and browser automation (TypeScript/React)
- **Browser** — The custom Chromium build (C++/Python)
Most contributors work on the Agent since it's much easier to set up and iterate on.
You can contribute to BrowserOS in many ways! Whether you want to build features or help out in other ways, we appreciate all contributions.
<Tabs>
<Tab title="🐛 Report Bugs">
Found a bug? [Open an issue](https://github.com/browseros-ai/BrowserOS/issues/new) with:
| Command | Description |
|---------|-------------|
| `bun run build:server` | Build server for production |
| `bun run build:agent` | Build agent extension |
| `bun run build:ext` | Build controller extension |
| `bun run test` | Run tests |
| `bun run lint` | Check with Biome |
| `bun run typecheck` | TypeScript check |
<Step title="Set Up Environment">
```bash
cp .env.example .env
```
Edit `.env` and add your `LITELLM_API_KEY`
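A minimal `.env` sketch (the key value is a placeholder; `BROWSEROS_TRUSTED_ORIGINS` is optional and only needed if additional origins should be allowed to call the agent server):

```bash
# .env — local development settings (placeholder values)
LITELLM_API_KEY=sk-your-key-here

# Optional: comma-separated origins allowed to call the agent server
BROWSEROS_TRUSTED_ORIGINS=http://localhost:5173
```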
</Step>
---
<Step title="Build the Extension">
```bash
yarn build:dev # One-time build
```
</Step>
</Steps>
#### Load in BrowserOS
<Steps>
<Step title="Open Extensions Page">
Navigate to `chrome://extensions/`
</Step>
<Step title="Enable Developer Mode">
Toggle **Developer mode** in the top right
</Step>
<Step title="Load Unpacked Extension">
Click **Load unpacked** and select `packages/browseros-agent/dist/`
</Step>
<Step title="Open Agent Panel">
Click the Agent icon in the extensions toolbar to open the agent panel
</Step>
</Steps>
## Path 2: Browser Development
Only go down this path if you're working on Chromium-level features like patches to the browser itself.
**Requirements:**
- ~100GB disk space
- 16GB+ RAM recommended
- 3+ hours for first build
<Note>
For detailed setup, architecture, and code standards, see the [Agent Contributing Guide](https://github.com/BrowserOS-ai/BrowserOS/blob/main/packages/browseros-agent/CONTRIBUTING.md).
</Note>
</Accordion>
**1. Clone Chromium source**
Follow the official [Chromium: Get the Code](https://www.chromium.org/developers/how-tos/get-the-code/) guide. This sets up `depot_tools` and fetches the ~100GB Chromium source tree, which typically takes 2-3 hours depending on your internet speed.
Note the path where you clone it (e.g., `~/chromium/src`).
**2. Install UV and dependencies**
```bash
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh

# Once you have Chromium checked out, navigate to our build system
cd packages/browseros
```
**3. Build the debug version (for development)**
The built binary will be located in the `out/Default_x64/` directory.
</Tab>
</Tabs>
<Tip>
The `--user-data-dir` flag is useful for creating isolated test profiles during development.
</Tip>
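For example, launching the built binary with a throwaway profile might look like this (the binary name and path are assumptions; adjust to your platform's build output):

```bash
# Launch the debug build with an isolated test profile
./out/Default_x64/browseros --user-data-dir=/tmp/browseros-dev-profile
```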
#### Troubleshooting
<Accordion title="Build fails with missing dependencies">
- Make sure you've followed all prerequisite steps from the Chromium build guide
- Ensure Xcode is up to date (macOS)
- Verify all required packages are installed (Linux)
- Check Visual Studio installation (Windows)
</Accordion>
<Accordion title="Out of disk space">
Chromium requires significant disk space (~100GB). Ensure you have enough free space before starting the build. You can use `df -h` on Unix systems or check Disk Management on Windows.
</Accordion>
<Accordion title="Build takes too long">
- Use ccache to speed up rebuilds
- Consider using a machine with more CPU cores
- Build only the components you need for development
- Use the debug build for faster compilation times
</Accordion>
</Accordion>
## 4. Making Your First Contribution
Open a PR on GitHub with:
- **Clear title** in conventional commit format
- **Description** explaining what changed and why
- **Screenshots/videos** for UI changes
- **Link to related issues** (e.g., "Fixes #123")
### Sign the CLA
On your first PR, our bot will ask you to sign the Contributor License Agreement:
<Steps>
<Step title="Read the CLA">
Read the [CLA document](https://github.com/BrowserOS-ai/BrowserOS/blob/main/CLA.md)
</Step>
<Step title="Sign via Comment">
Comment on your PR:
```
I have read the CLA Document and I hereby sign the CLA
```
</Step>
<Step title="Automatic Recording">
The bot will record your signature (one-time thing)
description: "BrowserOS supports full ad blocking with uBlock Origin"
---
BrowserOS supports full ad blocking through [uBlock Origin](https://ublockorigin.com/), the most powerful open-source ad blocker available — the full extension, not the watered-down "Lite" version.
## Why BrowserOS?
Chrome [killed support](https://developer.chrome.com/docs/extensions/develop/migrate/mv2-deprecation-timeline) for uBlock Origin by phasing out Manifest V2 extensions. The only option left on Chrome is "uBlock Origin Lite," a significantly weaker version that can't use advanced filtering rules.
**BrowserOS re-enabled full Manifest V2 support**, so you can install and run the original uBlock Origin at full power — the same extension Chrome no longer allows.
Install the full uBlock Origin extension from the Chrome Web Store. Works on BrowserOS out of the box.
</Card>
## BrowserOS vs Chrome
We ran both browsers through [adblock.turtlecute.org](https://adblock.turtlecute.org/), a test that measures how effectively a browser blocks ads and tracking scripts.
<CardGroup cols={2}>
<Card title="BrowserOS — 68%">
<img src="/images/adblock-browseros.png" alt="BrowserOS blocking 68% of ads" />
</Card>
<Card title="Chrome — 7%">
<img src="/images/adblock-chrome.png" alt="Chrome blocking only 7% of ads" />
</Card>
</CardGroup>
Out of 133 ad-related requests:
- **BrowserOS** blocked 91 (68%)
- **Chrome** blocked 9 (7%)
That's roughly **10x more protection** with zero configuration.
## What This Means
Fewer ads means faster page loads, less bandwidth usage, and significantly reduced tracking. BrowserOS handles this natively so you can focus on browsing.
description: "Connect your own AI models to BrowserOS"
---
BrowserOS includes a default AI model you can use right away, but it has strict rate limits. For the best experience, bring your own API keys or run models locally.
See how to connect your own LLM in under a minute:
Already paying for ChatGPT Pro, GitHub Copilot, or Qwen Code? Connect your existing account to BrowserOS with a single sign-in — no API keys, no extra cost.
Sign in with your Qwen account. Access Qwen 3 Coder with a 1 million token context window.
</Card>
</CardGroup>
---
## Which Model Should I Use?
| Mode | What works | Recommendation |
|------|------------|----------------|
| **Chat Mode** | Any model, including local | Ollama or Gemini Flash |
| **Agent Mode** | Cloud models only | Claude Opus 4.5, GPT-5, or Kimi K2.5 (open source) |
<Warning>
**Local LLMs aren't powerful for most agentic tasks yet.** They're great for Chat — asking questions about a page, summarizing, etc. But agent tasks need strong reasoning to click the right elements and handle multi-step workflows. Use Claude Opus 4.5, GPT-5, or Kimi K2.5 for agents.
</Warning>
---
## Kimi K2.5 — In Partnership with Moonshot AI
{/* <img src="/images/moonshot-partnership-banner.png" alt="BrowserOS x Moonshot AI" className="rounded-xl" /> */}
BrowserOS has partnered with [Moonshot AI](https://www.kimi.com) to bring **Kimi K2.5** as a first-class provider. Kimi K2.5 is now the **recommended model** in BrowserOS and is set as the default provider.
For a limited time, BrowserOS users get **extended usage limits** powered by Kimi K2.5. This means you can use the AI agent, chat, and other AI-powered features with increased limits at no cost.
<CardGroup cols={2}>
<Card title="Open Source" icon="code-branch">
Fully open-source model you can inspect and trust.
</Card>
<Card title="Multimodal" icon="image">
Supports images out of the box, including screenshots and visual context.
</Card>
<Card title="Great for Agents" icon="robot">
Strong reasoning for browser automation, form filling, and multi-step workflows.
</Card>
<Card title="Affordable" icon="piggy-bank">
Excellent agentic performance at a fraction of the cost of other frontier models.
</Card>
</CardGroup>
<div id="moonshot" />
### Why Kimi K2.5?
Kimi K2.5 offers excellent performance for agentic tasks at a fraction of the cost of other frontier models. It supports images, has a 128,000 token context window, and delivers strong results on browser automation tasks. Combined with BrowserOS's open-source agent framework, this makes for a powerful and affordable AI browsing experience.
### Bring Your Own Kimi API Key
You can also bring your own Kimi API key if you want to use Kimi K2.5 beyond the extended usage period, or if you want your own dedicated limits.
**Get your API key:**
1. Go to [platform.moonshot.ai](https://platform.moonshot.ai) and create an account
2. Navigate to the **API keys** section in your dashboard
3. Click **Create new API key** and copy the key
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the **Moonshot AI** card
3. Enter your API key (it will be encrypted and stored locally on your machine)
4. The model is pre-configured to `kimi-k2.5` with a 128,000 context window
5. Click **Save**
<Tip>
The base URL for the Kimi API (`https://api.moonshot.ai/v1`) is pre-filled automatically when you select the Moonshot AI provider template.
</Tip>
---
## Cloud Providers
Connect to powerful AI models using your API keys. Your keys stay on your machine — requests go directly to the provider.
<AccordionGroup>
<div id="gemini" />
<Accordion title="Gemini (Free)" icon="google">
Gemini Flash is fast and free. Google gives you 20 requests per minute at no cost.
**Get your API key:**
1. Go to [aistudio.google.com](https://aistudio.google.com)
2. Click **Get API key** in the sidebar
3. Click **Create API key** and copy it

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Gemini card
3. Set **Model ID** to `gemini-2.5-flash` (or `gemini-2.5-pro`, `gemini-3-pro-preview`, `gemini-3-flash-preview`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `1000000`
NVIDIA's [build.nvidia.com](https://build.nvidia.com/models) hosts 80+ models — including GLM 5.1, MiniMax M2.7, GPT-OSS-120B, Qwen 3.5, Mistral, and Nemotron — behind a **free OpenAI-compatible API endpoint**. Great for chatting, prototyping, and personal projects.
**Get your API key:**
1. Go to [build.nvidia.com/models](https://build.nvidia.com/models) and sign in with a free NVIDIA developer account
2. Pick any model tagged **Free Endpoint** (e.g. [`minimaxai/minimax-m2.7`](https://build.nvidia.com/minimaxai/minimax-m2.7), [`z-ai/glm-5.1`](https://build.nvidia.com/z-ai/glm-5.1), [`qwen/qwen3.5-122b-a10b`](https://build.nvidia.com/qwen/qwen3.5-122b-a10b))
3. Click **Get API Key** on the model page and copy the `nvapi-...` key
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the **OpenAI Compatible** card
3. Set **Base URL** to `https://integrate.api.nvidia.com/v1`
4. Set **Model ID** to a model from the catalog (e.g. `minimaxai/minimax-m2.7`, `z-ai/glm-5.1`, `qwen/qwen3.5-122b-a10b`)
5. Paste your NVIDIA API key
6. Set **Context Window** based on the model (most are `128000` or higher)
7. Click **Save**
<Tip>
NVIDIA's free endpoints share GPU capacity across all developers, so throughput is slower than a paid API. They're best for Chat Mode, exploring new open-source models, and personal projects. For production agent workloads, use a paid provider like Claude or Kimi.
</Tip>
</Accordion>
<div id="claude" />
<Accordion title="Claude (Best for Agents)" icon="message-bot">
Claude Opus 4.5 gives the best results for Agent Mode.
**Get your API key:**
1. Go to [console.anthropic.com](https://console.anthropic.com/dashboard)
2. Click **API keys** in the sidebar
3. Click **Create Key** and copy it

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Anthropic card
3. Set **Model ID** to `claude-opus-4-5-20251101` (or `claude-sonnet-4-5-20250929`, `claude-haiku-4-5-20251001`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
1. Go to [openrouter.ai](https://openrouter.ai) and sign up
2. Go to [openrouter.ai/keys](https://openrouter.ai/keys) and create a key
**Pick a model:**
Go to [openrouter.ai/models](https://openrouter.ai/models) and copy the model ID you want (e.g., `anthropic/claude-opus-4.5`, `google/gemini-2.5-flash`).
Use OpenAI models hosted on your own Azure subscription for enterprise compliance and data residency.
**Prerequisites:**
1. An Azure subscription with access to [Azure OpenAI Service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/OpenAI)
2. A deployed model (e.g., GPT-4o) in your Azure OpenAI resource
**Get your credentials:**
1. Go to [portal.azure.com](https://portal.azure.com) → **Azure OpenAI** resource
2. Navigate to **Keys and Endpoint**
3. Copy **Key 1** and your **Endpoint URL**
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Azure card
3. Set **Base URL** to your Azure endpoint (e.g., `https://your-resource.openai.azure.com/openai/deployments/your-deployment`)
4. Set **Model ID** to your deployment name
5. Paste your API key
6. Check **Supports Images**, set **Context Window** to `128000`
7. Click **Save**
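To see how the Base URL and deployment name combine into the request that actually gets sent, here is a small sketch. The resource and deployment names mirror the placeholders above, and the `api-version` value is an assumption; check your Azure resource for the version you target.

```shell
# Assemble the full Azure OpenAI chat-completions URL from the Base URL above.
# Resource/deployment names are placeholders; api-version is an assumed GA value.
BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
API_VERSION="2024-02-01"
echo "$BASE_URL/chat/completions?api-version=$API_VERSION"
# A raw request would add your key as a header:
# curl "$BASE_URL/chat/completions?api-version=$API_VERSION" \
#   -H "api-key: $AZURE_OPENAI_KEY" -H "Content-Type: application/json" \
#   -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

Note that Azure routes requests by deployment name in the URL, which is why BrowserOS asks for your deployment name as the Model ID.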
</Accordion>
<div id="bedrock" />
<Accordion title="AWS Bedrock" icon="aws">
Access Claude, Llama, and other models through your AWS account with IAM-based authentication.
**Prerequisites:**
1. An AWS account with [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) enabled
2. Model access granted in the Bedrock console for your desired models
**Get your credentials:**
1. Go to the [AWS Console](https://console.aws.amazon.com) → **IAM**
2. Create or use an existing access key with Bedrock permissions
3. Note your **Access Key ID**, **Secret Access Key**, and **Region**
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the AWS Bedrock card
3. Set **Base URL** to your Bedrock endpoint (region-specific)
4. Set **Model ID** to the Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`)
5. Paste your credentials
6. Check **Supports Images**, set **Context Window** to `200000`
7. Click **Save**
</Accordion>
<div id="openai-compatible" />
<Accordion title="OpenAI Compatible" icon="plug">
Connect to any provider that implements the OpenAI-compatible API format (e.g., Together AI, Fireworks, Groq, Perplexity).
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI Compatible card
3. Set **Base URL** to the provider's API endpoint
4. Set **Model ID** to the model you want to use
5. Paste your API key
6. Set **Supports Images** and **Context Window** based on the model
7. Click **Save**
<Tip>
Most newer AI providers support the OpenAI-compatible API format. Check your provider's docs for the base URL and available model IDs.
</Tip>
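Under the hood, every provider behind this card speaks the same chat-completions request shape. A minimal sketch, with a made-up base URL and model ID:

```shell
# Build the JSON body any OpenAI-compatible /chat/completions route accepts.
# BASE_URL and MODEL_ID are placeholders for your provider's real values.
BASE_URL="https://api.example.com/v1"
MODEL_ID="example/model-name"
BODY='{"model": "'"$MODEL_ID"'", "messages": [{"role": "user", "content": "Hello"}]}'
echo "$BODY"
# To send it (requires your provider's real key):
# curl "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

If a request in this shape returns a completion, the provider will work with the OpenAI Compatible card.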
</Accordion>
</AccordionGroup>
---
## Local Models
<Card title="Local Model Guide" icon="server" href="/features/local-models">
Run AI completely offline with Ollama or LM Studio. Includes recommended models, context length setup, and configuration steps.
</Card>
---
## Switching Between Models
Use the model switcher in the Assistant panel to change providers anytime. The default provider is highlighted.

<Tip>
Use local models for sensitive work data. Switch to Claude for agent tasks that need complex reasoning.
description: "Use your ChatGPT subscription to power BrowserOS"
---
Connect your ChatGPT Pro or Plus subscription to BrowserOS and access GPT-5 Codex, GPT-5.4, and the full lineup of OpenAI's most advanced models — with up to 400K context. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, ChatGPT will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `gpt-5.2` | 200K |
| `gpt-5.1-codex` | 400K |
| `gpt-5.1-codex-max` | 400K |
| `gpt-5.1-codex-mini` | 400K |
| `gpt-5.1` | 200K |
<Info>
ChatGPT Pro subscribers have access to the full model lineup. ChatGPT Plus subscribers can access a subset of models depending on their plan. The available models will be shown automatically after you connect.
</Info>
<Tip>
The Codex models (e.g., `gpt-5.3-codex`) are optimized for code and reasoning tasks — ideal for complex browser automation workflows that involve form filling, data extraction, and multi-step navigation.
</Tip>
## Reasoning Settings
ChatGPT Pro includes additional settings for models that support reasoning:
- **Reasoning Effort** — Control how much the model "thinks" before responding. Options: none, low, medium, high.
- **Reasoning Summary** — Choose how reasoning is displayed. Options: auto, concise, detailed.
These settings are available in the provider configuration after connecting.
## Disconnecting
To disconnect your OpenAI account, go to **Settings**, find the ChatGPT Plus/Pro provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Connect 40+ apps to BrowserOS so the assistant can work with your email, calendar, projects, and more"
---
Connect your favorite apps to BrowserOS and let the assistant work across all of them. Read emails, check your calendar, create tasks, post messages, manage files, and more, all through natural conversation.
BrowserOS uses the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) to connect your apps. You do not need to install anything or manage API keys. Just sign in once and the assistant handles the rest.
## Smart App Connection
When you ask the assistant to do something that needs an app you have not connected yet, it shows an interactive card right in the conversation. You can connect the app with one click or choose to skip it. No need to set things up in advance.
<Steps>
<Step title="You make a request">
Ask the assistant something like "What's on my calendar today?" or "Send an email to Sarah."
</Step>
<Step title="A connection card appears">
The assistant detects the app is not connected and shows a card explaining why connecting it would help. You get two choices: **Connect** or **Do it manually**.
</Step>
<Step title="You connect or skip">
- **Connect**: Opens a sign-in page. Authorize the app and the assistant continues with full integration access.
- **Do it manually**: The assistant skips the integration and navigates to the app's website directly using browser automation.
</Step>
<Step title="The assistant continues">
Once connected, the app stays linked for all future conversations. If you chose to skip, the assistant remembers and will not ask again.
</Step>
</Steps>
{/* <Frame caption="The assistant detects an unconnected app and shows a connection card">
<img src="/images/connect-apps-smart-connection.png" alt="Smart app connection prompt in chat" />
</Frame> */}
See [Smart Nudges](/features/smart-nudges#app-connection) for more details on how connection suggestions work.
You can also connect apps ahead of time from the sidebar if you prefer.
## Connect from the Sidebar
<Steps>
<Step title="Open Connect Apps">
Click **Connect Apps** in the sidebar.
</Step>
<Step title="Add an app">
Click **Add built-in app** and select the app you want.
</Step>
<Step title="Sign in">
Complete the OAuth sign-in when prompted.
</Step>
</Steps>
<Frame caption="Connected apps show a green 'Authenticated' badge">
- Create a new Linear issue for the homepage redesign
- What are my open tasks in Jira?
- Move the "Launch campaign" task to complete in Asana
- Add a comment to the latest ClickUp task
</Accordion>
<Accordion title="Documents" icon="cube">
- Add "Review Q4 report" to my Notion tasks database
- Create a new page in my Projects database for the website redesign
- What are my open tasks in Notion?
- Update the status of the "Launch campaign" task to complete
</Accordion>
</AccordionGroup>
## Cross-App Workflows
The real power of connected apps is combining them in a single request. The assistant can pull data from one app and use it in another without you switching between tabs.
<CardGroup cols={2}>
<Card title="Email to task" icon="envelope">
"Find action items in my latest emails and add them to my Notion tasks"
</Card>
<Card title="Meeting prep" icon="calendar">
"Check my calendar for tomorrow, then draft an email to John summarizing what we're meeting about"
</Card>
<Card title="Bug triage" icon="bug">
"Test the checkout flow on our staging site, file a Linear issue if anything is broken, and post a summary to #engineering on Slack"
</Card>
<Card title="Sales pipeline" icon="chart-line">
"Pull my open deals from Salesforce and create a summary spreadsheet in Google Sheets"
</Card>
<Card title="Content roundup" icon="newspaper">
"Check the latest pull requests on our main repo and post a daily summary to #dev-updates on Slack"
</Card>
<Card title="Expense tracking" icon="receipt">
"Find all receipts in my Gmail from this month and organize them in a Google Sheet"
</Card>
</CardGroup>
## Add a Custom MCP Server
You can connect any MCP-compatible server that exposes an SSE endpoint.
1. Go to **Settings > Connected Apps**
2. Click **Add custom app**
3. Enter your server URL (e.g., `http://localhost:8000/sse`) and give it a name
Custom servers appear alongside built-in apps and work the same way.
<Tip>
MCP has a growing ecosystem of servers. Browse [MCP servers on GitHub](https://github.com/modelcontextprotocol/servers) to find integrations for databases, APIs, and more.
</Tip>
### Connect to OAuth-Protected Remote Servers
Some remote MCP servers (like Atlassian Jira, GitHub, etc.) require OAuth authentication. Use [mcp-remote](https://www.npmjs.com/package/mcp-remote) and [supergateway](https://github.com/supercorp-ai/supergateway) to handle the OAuth flow locally:
description: "Give the agent controlled access to local files and commands alongside browser automation"
---
Cowork lets you describe a complex task and have the agent handle it end-to-end. It combines browser automation with local file operations: research on the web, then save reports directly to your folder. Read code, edit files, run shell commands, and search through your project, all in the same session as your browser tasks.
Here's what it looks like to give the agent access to your local files:
Without Cowork, the agent can only interact with browser tabs. With Cowork enabled, it gains full access to a folder on your machine through 7 filesystem tools:
Read a file from the filesystem. Returns text content with line numbers, or image data for image files (PNG, JPG, GIF, WEBP, BMP, SVG, ICO). Supports pagination through large files with `offset` and `limit` parameters.
| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | string (required) | File path relative to working directory |
| `offset` | number (optional) | Starting line number (1-indexed) |
| `limit` | number (optional) | Max lines to read |
Responses are capped at 2000 lines or 50KB per request.
Make a targeted edit by replacing an exact string match. If the exact match fails, a whitespace-tolerant fuzzy match is attempted. Preserves original line endings (CRLF, CR, LF) and BOM.
| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | string (required) | File path relative to working directory |
| `old_string` | string (required) | Exact text to find |
| `new_string` | string (required) | Replacement text |
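As an illustration, the parameters above map onto a tool call like the following; the file path and strings are invented:

```json
{
  "path": "src/config.ts",
  "old_string": "const RETRY_LIMIT = 3",
  "new_string": "const RETRY_LIMIT = 5"
}
```

Because `old_string` must match exactly (with fuzzy matching only as a fallback), include enough surrounding text to make the match unambiguous within the file.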
description: "Use your GitHub Copilot subscription to power BrowserOS"
---
Connect your GitHub Copilot subscription to BrowserOS and access 19+ models — including Claude, GPT-5, and Gemini — through a single GitHub sign-in. No API keys needed.
<Info>
**Free tier** includes GPT-5 Mini, Claude Haiku 4.5, GPT-4o, and GPT-4.1. **Copilot Pro** ($10/month) unlocks Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3 Pro, GPT-5.4, and more.
</Info>
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**2.** Click **USE** on the **GitHub Copilot** card. A device code will appear — copy it, then click the link to open GitHub's device authorization page.
**5.** Once authorized, GitHub Copilot will appear as a provider in your settings. Select a model and start using it.
## Available Models
### Free Tier
| Model | Context Window |
|-------|---------------|
| `gpt-5-mini` | 128K |
| `claude-haiku-4.5` | 128K |
| `gpt-4o` | 64K |
| `gpt-4.1` | 64K |
### Copilot Pro / Pro+
| Model | Context Window |
|-------|---------------|
| `claude-sonnet-4.6` | 200K |
| `claude-opus-4.6` | 200K |
| `gemini-2.5-pro` | 1M |
| `gemini-3-pro-preview` | 1M |
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `grok-code-fast-1` | 128K |
<Tip>
GitHub Copilot is the most versatile provider — one subscription gives you access to models from OpenAI, Anthropic, Google, and xAI. Great if you want to switch between models for different tasks.
</Tip>
## Disconnecting
To disconnect your GitHub account, go to **Settings**, find the GitHub Copilot provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Access ChatGPT, Claude, and Gemini from any webpage with one click"
---
BrowserOS puts AI chat at your fingertips. Open a chat panel on any webpage to ask questions with full page context, or compare responses across multiple LLMs side-by-side.
description: "Run AI models locally with Ollama or LM Studio for free, private, offline use"
---
BrowserOS works great with local models for Chat Mode. Run models completely offline — your data never leaves your machine.
## Context Length
<Warning>
**Ollama defaults to 4,096 tokens of context — this is too low for BrowserOS.** Below 15K tokens, the context overflows and the agent gets stuck in a loop constantly trying to recover. Only Chat Mode will work at low context lengths. Set at least **15,000–20,000 tokens** for local models to function properly.
</Warning>
Set context length when starting Ollama:
```bash
OLLAMA_CONTEXT_LENGTH=20000 ollama serve
```
<Info>
Increasing context length uses more VRAM. Run `ollama ps` to check your current allocation. See the [Ollama context length docs](https://docs.ollama.com/context-length) for more details.
</Info>
---
## Setup
<Tabs>
<Tab title="Ollama" icon="terminal">
The easiest way to run models locally.
<Steps>
<Step title="Install Ollama">
Download from [ollama.com](https://ollama.com) and install it.
</Step>
<Step title="Pull a model">
```bash
ollama pull qwen/qwen3-4b
```
</Step>
<Step title="Start Ollama with higher context">
```bash
OLLAMA_CONTEXT_LENGTH=20000 ollama serve
```
</Step>
<Step title="Configure in BrowserOS">
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Ollama card
3. Set **Model ID** to `qwen/qwen3-4b`
4. Set **Context Window** to `20000`
5. Click **Save**

</Step>
</Steps>
</Tab>
<Tab title="LM Studio" icon="desktop">
Nice GUI if you don't want to use the terminal.
<Steps>
<Step title="Install LM Studio">
Download from [lmstudio.ai](https://lmstudio.ai) and install it.
</Step>
<Step title="Load a model">
Open LM Studio → **Developer** tab → load a model. It runs a server at `http://localhost:1234/v1/`.
description: "Your assistant remembers what matters across every conversation"
---
The BrowserOS assistant has long-term memory. It remembers your name, your projects, the tools you use, and things that came up in past conversations. You do not need to repeat yourself. The assistant builds up knowledge about you over time and uses it to give better, more relevant answers.
## How Memory Works
Memory is automatic. As you chat, the assistant saves important facts and observations to local files on your machine. Before responding in future conversations, it searches these files to recall relevant context.
<CardGroup cols={2}>
<Card title="Remembers you" icon="user">
Your name, job, location, projects, and preferences are stored permanently and recalled whenever relevant.
Useful details from each conversation are saved as daily notes and kept for 30 days.
</Card>
<Card title="Searches before answering" icon="magnifying-glass">
The assistant proactively searches its memory before responding, so it can reference things you have mentioned before.
</Card>
<Card title="Stays on your machine" icon="hard-drive">
All memory files are plain Markdown stored locally. Memory is never uploaded to the cloud, even with Sync to Cloud enabled.
</Card>
</CardGroup>
## Two Types of Memory
BrowserOS uses a two-tier memory system to keep important facts separate from session notes.
### Core Memory
Core memory holds permanent facts about you. Things like your name, where you work, what projects you are working on, the tools and languages you use, and people you mention regularly. These facts persist forever and are never automatically deleted.
Core memory lives in a single file called `CORE.md`. When the assistant learns something new about you, it reads the existing core memory, merges the new fact in, and saves the updated file.
**Examples of what goes in core memory:**
- Your name and role
- Company and team
- Projects you are working on
- Tools, languages, and frameworks you use
- People you mention often
- Long-term preferences ("I prefer TypeScript over JavaScript")
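A `CORE.md` built from facts like these might look as follows; the names, projects, and exact layout are invented for illustration, since the assistant writes the file in whatever structure it finds useful:

```markdown
# Core Memory

- Name: Sarah, senior engineer at Acme Corp
- Projects: Atlas (internal dashboard rewrite)
- Stack: TypeScript, React, Postgres
- People: John (PM), UX design review every Tuesday
- Preferences: prefers TypeScript over JavaScript, dark mode everywhere
```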
### Daily Memory
Daily memory holds session notes, observations, and recent events. Each day gets its own file (e.g., `2026-03-07.md`), and entries are timestamped so the assistant can see when things happened.
Daily memories automatically expire after **30 days**. If something keeps coming up, the assistant promotes it to core memory so it is not lost.
**Examples of what goes in daily memory:**
- Tasks you worked on today
- Decisions made during a conversation
- Temporary context ("meeting with Sarah moved to Thursday")
- Research findings from a browsing session
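A daily file holding entries like these might look as follows; the timestamps and contents are invented for illustration:

```markdown
# 2026-03-07

- 09:14 Researched Postgres vs MongoDB for Atlas; decided on Postgres
- 11:02 Meeting with Sarah moved to Thursday
- 15:40 Drafted the Q1 launch checklist
```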
## Memory in Action
You do not need to tell the assistant to remember things. It picks up on important details naturally. But you can also be explicit:
Just mention something in conversation and the assistant decides whether to save it:
- "I'm working on a project called Atlas at Acme Corp" -> saved to core memory
- "We decided to go with Postgres instead of MongoDB" -> saved to daily memory
- "My name is Sarah" -> saved to core memory
</Accordion>
<Accordion title="Ask it to remember" icon="bookmark">
Be explicit when you want something remembered:
- "Remember that our staging URL is staging.example.com"
- "Save this: the design review happens every Tuesday at 2pm"
- "Remember that I prefer dark mode in all my tools"
</Accordion>
<Accordion title="Ask it to recall" icon="rotate-left">
The assistant searches memory automatically, but you can also ask directly:
- "What do you remember about the Atlas project?"
- "What did we discuss yesterday?"
- "Do you know my team members' names?"
</Accordion>
<Accordion title="Ask it to forget" icon="eraser">
You can ask the assistant to remove specific memories:
- "Forget my phone number"
- "Remove the note about the staging URL"
- "Clear what you know about Project X"
</Accordion>
</AccordionGroup>
## Where Memory Lives
All memory files are stored locally on your machine in the BrowserOS data folder:
| File | Path | Purpose |
|------|------|---------|
| **Core memory** | `~/.browseros/memory/CORE.md` | Permanent facts about you |
| **Daily notes** | `~/.browseros/memory/2026-03-07.md` | Session notes, auto-expire after 30 days |
## Memory vs SOUL.md
BrowserOS separates what the assistant **knows** from how it **behaves**. These are two different systems that work together.
<Columns cols={2}>
<Card title="Memory" icon="brain">
**Facts about you and the world.** Your name, projects, preferences, recent events. Stored in CORE.md and daily files.
</Card>
<Card title="SOUL.md" icon="heart">
**How the assistant acts.** Personality, tone, communication style, boundaries. Stored in a single SOUL.md file. See [SOUL.md](/features/soul) for details.
</Card>
</Columns>
When the assistant learns that you work at Acme Corp, that goes in memory. When it learns that you prefer bullet points over paragraphs, that goes in SOUL.md. This separation means the assistant can change its personality without losing knowledge about you, and vice versa.
## Privacy
<Columns cols={2}>
<Card title="Never leaves your machine" icon="lock">
Memory files live on your machine and are never uploaded to any server. Even with Sync to Cloud enabled, memory stays local.
</Card>
<Card title="You control what is remembered" icon="toggle-on">
Ask the assistant to forget anything at any time. You can also directly edit or delete the memory files.
</Card>
<Card title="Plain text files" icon="file-lines">
Memory is stored as readable Markdown. No hidden databases or encrypted blobs. You can inspect everything.
</Card>
<Card title="30-day auto-cleanup" icon="clock">
Daily notes are automatically deleted after 30 days. Only facts you have promoted to core memory persist.
description: "Use your Qwen Code account to power BrowserOS"
---
Connect your Qwen Code account to BrowserOS and access Alibaba's coding models with up to a **1 million token context window** — the largest of any provider we support. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, Qwen Code will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `coder-model` | 1M |
| `qwen3-coder-plus` | 1M |
| `qwen3-coder-flash` | 1M |
| `qwen3.5-plus` | 1M |
<Tip>
Qwen Code's 1 million token context window is ideal for tasks that involve long documents, entire documentation sites, or working across many browser tabs simultaneously — the agent can hold everything in context at once.
</Tip>
## Disconnecting
To disconnect your Qwen account, go to **Settings**, find the Qwen Code provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Run the BrowserOS agent automatically on a schedule"
---
Scheduled Tasks let you run the BrowserOS agent automatically, whether that's daily, every few hours, or every few minutes. Write a prompt once, set a schedule, and let the agent handle it on autopilot.
Watch how to set up a scheduled task from scratch:
Runs once a day at a specific time you choose (e.g., every morning at 8:00 AM).
</Card>
<Card title="Hourly" icon="clock">
Runs every N hours (e.g., every 2 hours, every 6 hours). Set an interval from 1 to 24 hours.
</Card>
<Card title="Minutes" icon="stopwatch">
Runs every N minutes (e.g., every 15 minutes, every 30 minutes). Set an interval from 1 to 60 minutes.
</Card>
</CardGroup>
## Example Use Cases
<AccordionGroup>
<Accordion title="Morning briefing" icon="sun">
> Every morning at 8am, check my Google Calendar and send me a summary of today's events. For each meeting, do a quick Google search on the attendees and include their LinkedIn summary.
> Check my Google Calendar for tomorrow's meetings, then post a summary to my Slack channel, and create a Notion page with prep notes for each meeting.
</Accordion>
</AccordionGroup>
Your scheduled task prompts can be as complex as you want. If you have [connected apps](/features/connect-mcps) like Google Calendar, Slack, Notion, or Gmail, your scheduled tasks can work across all of them.
## Viewing Results
When a scheduled task runs, you can see the results in two places:
- **New Tab page**: Results show up right on your new tab
- **Scheduled Tasks page**: View the full run history for each task
- **Test** a task manually without waiting for the next scheduled run
- **Retry** a failed task
- **Cancel** a task that is currently running
## How It Works
<Steps>
<Step title="Task triggers on schedule">
BrowserOS uses your browser's built-in alarm system to trigger tasks at the right time. If your laptop was closed at the scheduled time, the task runs as soon as you open BrowserOS again.
</Step>
<Step title="Background window opens">
A hidden browser window opens automatically. The task runs there so it never interrupts whatever you are working on. You will not see anything happen on screen.
</Step>
<Step title="Agent executes your prompt">
The agent runs your prompt with full access to browser automation and any connected apps. It can navigate pages, fill forms, extract data, and interact with your services.
</Step>
<Step title="Results are saved">
When the task finishes, the result is saved and appears on your New Tab page and in the task's run history. The hidden window closes automatically.
</Step>
</Steps>
<Note>
BrowserOS needs to be open for scheduled tasks to run. Tasks have a 10-minute timeout. If a task takes longer than that, it will be marked as failed and you can retry it.
</Note>
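The trigger and timeout behaviour described above can be sketched as a small model. This is illustrative Python, not BrowserOS source; the names `ScheduledTask`, `should_run`, and `run_status` are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

TASK_TIMEOUT = timedelta(minutes=10)  # runs longer than this are marked failed

@dataclass
class ScheduledTask:
    name: str
    next_run: datetime  # when the alarm should fire

def should_run(task: ScheduledTask, now: datetime) -> bool:
    """A due task fires -- including one missed while the laptop was closed,
    which fires as soon as the browser reopens."""
    return now >= task.next_run

def run_status(started: datetime, finished: datetime) -> str:
    """A run over the 10-minute budget is marked failed (and can be retried)."""
    return "failed" if finished - started > TASK_TIMEOUT else "succeeded"

now = datetime(2024, 5, 1, 9, 30)
task = ScheduledTask("Morning briefing", next_run=datetime(2024, 5, 1, 8, 0))
assert should_run(task, now)  # the missed 8:00 run fires on reopen at 9:30
```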
## Cloud Sync
If you are signed in, your scheduled task configurations sync across devices. Create a task on your laptop and it appears on your desktop. Edits sync both ways, and conflicts are resolved automatically using timestamps.
Only the schedule setup syncs (name, prompt, schedule type, and timing). Task run results and output stay on the device where the task ran.
See [Sync to Cloud](/features/sync-to-cloud) for more details.
## Privacy
<Columns cols={2}>
<Card title="Runs locally" icon="house-laptop">
All tasks run on your machine in a hidden browser window. Nothing is sent to external servers.
</Card>
<Card title="Full control" icon="toggle-on">
Enable, disable, edit, or delete any task at any time. You decide what runs and when.
</Card>
</Columns>
description: "Teach your BrowserOS agent new abilities with reusable, custom instructions"
---
Skills let you teach the BrowserOS agent how to handle specific tasks. Each skill is a set of instructions written in plain Markdown that the agent loads when it recognizes a matching task. Think of skills as recipes: you write the steps once, and the agent follows them whenever that type of task comes up.
BrowserOS implements the open [Agent Skills specification](https://agentskills.io/specification), so skills you create are portable across any AI agent that supports the standard.
## How Skills Work
<Steps>
<Step title="You create a skill">
Give it a name, a short description of when to use it, and write the instructions in Markdown.
</Step>
<Step title="The agent sees the skill catalog">
When a conversation starts, the agent loads a list of all your enabled skills with their names and descriptions.
</Step>
<Step title="The agent matches a task">
When your request matches a skill's description, the agent loads that skill's full instructions and follows them.
</Step>
</Steps>
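The catalog-then-load flow above can be sketched as follows. The `Skill` shape and the `catalog`/`load` helpers are hypothetical names that illustrate the two-stage loading, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str  # always visible to the agent in the catalog
    content: str      # full instructions, loaded only after a match
    enabled: bool = True

def catalog(skills):
    """Step 2: only names and descriptions of enabled skills enter the prompt."""
    return [{"name": s.name, "description": s.description} for s in skills if s.enabled]

def load(skills, name):
    """Step 3: pull the full instructions once the agent matches a task."""
    return next(s.content for s in skills if s.name == name)

skills = [
    Skill("morning-status-report",
          "When the user wants to read status updates from work",
          "Always look for updates in 3 sources..."),
    Skill("pdf-processing", "Use when the user mentions PDFs",
          "When extracting text from a PDF...", enabled=False),
]
assert len(catalog(skills)) == 1  # the disabled skill never reaches the prompt
```

This keeps the prompt small: only the short descriptions cost context until a skill actually matches.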
## Creating a Skill
<Steps>
<Step title="Open Skills settings">
Click **Skills** in the sidebar.
</Step>
<Step title="Click New Skill">
Click the **New Skill** button to open the creation form.
</Step>
<Step title="Fill in the details">
- **Name**: A short, descriptive name (e.g., "Morning Status Report")
- **Description**: Tell the agent when to use this skill. Be specific. For example: "When the user wants to read status updates from work across Notion, Linear, and Slack"
- **Content**: Write your instructions in Markdown. Include step-by-step directions, examples, and edge cases.
</Step>
<Step title="Save and enable">
Click **Create**. The skill is enabled by default and will be available to the agent immediately.
</Step>
</Steps>
<Tip>
Write your description like a trigger. The agent uses it to decide whether to activate the skill. A good description says both **what** the skill does and **when** to use it.
</Tip>
## Example Skills
<AccordionGroup>
<Accordion title="Morning status report">
**Description:** When the user wants to read status updates from work
**Instructions:**
```markdown
Always look for updates in 3 sources:
1. **Notion** - Check the team updates page for any new entries from today
2. **Linear** - Look at issues assigned to the user that were updated in the last 24 hours
3. **Slack** - Check the #team-updates and #engineering channels for unread messages
Summarize everything in a single report grouped by source.
If a source has no updates, say so.
```
</Accordion>
<Accordion title="PDF processing">
**Description:** Extract text and tables from PDF files, fill PDF forms, and merge multiple PDFs. Use when the user mentions PDFs, forms, or document extraction.
**Instructions:**
```markdown
When extracting text from a PDF:
1. Download or open the PDF in the browser
2. Use the page content tool to extract visible text
3. Preserve table structure using Markdown tables
4. If the PDF has multiple pages, process each page
When filling a PDF form:
- Ask the user for the values if not provided
- Fill each field carefully and confirm before submitting
See references/FORMS.md for common form templates.
```
</Accordion>
<Accordion title="Code review checklist">
**Description:** When the user asks to review code, a pull request, or wants feedback on code quality
**Instructions:**
```markdown
Follow this checklist for every code review:
1. Check for security issues (XSS, injection, hardcoded secrets)
2. Look for performance problems (N+1 queries, unnecessary re-renders)
3. Verify error handling is present and meaningful
4. Check that naming is clear and consistent
5. Look for missing tests for new logic
Format your review as a list of findings with severity: Critical, Warning, or Suggestion.
Always start with what the code does well.
```
</Accordion>
</AccordionGroup>
## Managing Skills
From the Skills page, you can:
- **Enable or disable** a skill using the toggle switch. Disabled skills are not loaded by the agent.
- **Edit** a skill's name, description, or instructions by clicking the edit icon.
- **Delete** a skill by clicking the trash icon. This removes the skill permanently.
## Skill File Format
Under the hood, each skill is stored as a `SKILL.md` file following the [Agent Skills specification](https://agentskills.io/specification):
```markdown
---
name: morning-status-report
description: When the user wants to read status updates from work
metadata:
  display-name: Morning Status Report
  enabled: "true"
---
Always look for updates in 3 sources:
1. Notion - Check the team updates page
2. Linear - Look at assigned issues updated in the last 24 hours
3. Slack - Check #team-updates and #engineering channels
Summarize everything in a single report grouped by source.
```
The file uses YAML frontmatter for metadata and Markdown for the instructions.
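A minimal sketch of splitting such a file, assuming only the flat `key: value` frontmatter lines shown above. `parse_skill` is a hypothetical helper; a real implementation would use a YAML parser and handle the nested `metadata` block.

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a SKILL.md document into (frontmatter fields, Markdown body)."""
    _, frontmatter, body = text.split("---", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        if ":" in line and not line.startswith(" "):  # skip nested metadata lines
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    return fields, body.strip()

doc = """---
name: morning-status-report
description: When the user wants to read status updates from work
---
Always look for updates in 3 sources...
"""
fields, body = parse_skill(doc)
assert fields["name"] == "morning-status-report"
```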
<Tip>
Move detailed references to separate files. The agent loads them only when needed, saving context space.
</Tip>
<Note>
Skills follow the open [Agent Skills specification](https://agentskills.io/specification). Skills you create in BrowserOS work with any agent that supports the standard.
</Note>
description: "BrowserOS suggests app connections and task scheduling at the right moment"
---
Smart Nudges are context-aware suggestions that appear as interactive cards during a conversation. The agent detects opportunities to connect an app or schedule a task, and shows you a card at the right moment. You decide whether to act on it or skip it.
There are two types of nudges: **App Connection** and **Schedule Suggestion**.
## App Connection
When you ask the agent to do something that involves an external app (like sending an email or checking your calendar), it checks whether that app is connected. If it is not, the agent shows a connection card before starting the task.
<Steps>
<Step title="You make a request">
For example: "Send Sarah an email with the meeting notes."
</Step>
<Step title="The agent detects an unconnected app">
Gmail is not connected yet, so the agent cannot send emails through the integration.
</Step>
<Step title="A connection card appears">
The card explains why connecting the app would help and gives you two choices: **Connect** or **Do it manually**.
</Step>
<Step title="You choose">
- **Connect**: Opens a sign-in page for the app. Once you authorize, the agent continues with full integration access.
- **Do it manually**: The agent skips the integration and uses browser automation instead (navigates to the website directly).
</Step>
</Steps>
### What happens after you choose
<CardGroup cols={2}>
<Card title="Connected" icon="circle-check">
The app is added to your connected list. The agent uses the integration for this and all future conversations. You can manage connected apps in [Connect Apps](/features/connect-mcps).
</Card>
<Card title="Declined" icon="forward">
The agent remembers your choice and will not ask about this app again. It uses browser automation to complete the task instead.
</Card>
</CardGroup>
<Tip>
If you declined an app but change your mind later, you can connect it anytime from the [Connect Apps](/features/connect-mcps) settings page.
</Tip>
### Supported apps
The agent can suggest connections for all 40+ built-in integrations, including Gmail, Google Calendar, Slack, Notion, GitHub, Linear, Jira, Figma, Salesforce, and many more. See [Connect Apps](/features/connect-mcps) for the full list.
## Schedule Suggestion
After the agent completes a task that could run on a recurring schedule, it shows a scheduling card. This helps you turn one-time tasks into automated routines without leaving the conversation.
<Steps>
<Step title="The agent completes a task">
For example: "Here are the top 5 tech headlines from today."
</Step>
<Step title="The agent recognizes a schedulable task">
News gathering, price monitoring, report building, data tracking, and similar tasks that do not need your real-time input are good candidates.
</Step>
<Step title="A scheduling card appears">
The card suggests a name and schedule. For example: "Run this automatically? 'Morning News Briefing' - daily at 09:00."
</Step>
<Step title="You choose">
- **Schedule this task**: Opens the Scheduled Tasks page with the details pre-filled. Review and confirm to create the task.
- **Maybe later**: Dismisses the card. You can always create the scheduled task manually later.
</Step>
</Steps>
### You can also ask directly
You do not have to wait for the agent to suggest it. Just tell the agent you want to schedule the task.
description: "Give your AI assistant a personality that grows with you"
---
Every time you start a new conversation, the BrowserOS assistant reads a file called `SOUL.md`. This file defines who the assistant is: how it talks, what it prioritizes, and how it behaves. Over time, it evolves based on your interactions, making the assistant feel less like a tool and more like _your_ assistant.
## What is SOUL.md?
SOUL.md is a plain text file that lives on your machine. It contains your assistant's personality, tone, communication style, rules, and boundaries.
Think of it as a personal guide the assistant reads before every conversation. It shapes how the assistant responds to you, not what it knows. Facts about you (your name, projects, preferences) are stored separately in [memory](#soul-vs-memory).
<Tip>
The SOUL.md concept was pioneered by [OpenClaw](https://openclaw.ai/) and inspired by [soul.md](https://soul.md/), which explore the idea of giving AI systems a persistent identity through written documents. BrowserOS builds on this concept with a file that the assistant can read and rewrite on its own.
</Tip>
## How It Works
When you first use BrowserOS, the assistant starts with a simple default personality:
> _Be genuinely helpful. Have opinions when asked. Be resourceful before asking. Earn trust through competence._
As you chat, the assistant picks up on how you like to communicate. If you prefer direct answers, it notices. If you set a boundary ("never send emails without asking me first"), it writes that into SOUL.md. Over time, the file becomes a reflection of how you and your assistant work together.
<Steps>
<Step title="First conversation">
The assistant starts with a default template. It watches for cues about your preferred style, tone, and boundaries.
</Step>
<Step title="The assistant learns your style">
Based on your interactions, the assistant rewrites SOUL.md to reflect your preferences. It will briefly tell you when it makes a change.
</Step>
<Step title="Every future conversation">
The assistant reads the updated SOUL.md before responding, so your preferences carry over across sessions.
</Step>
</Steps>
You do not need to write or edit SOUL.md yourself. The assistant handles it. But you can always view it or ask the assistant to change it.
## Viewing Your SOUL.md
Open **Agent Soul** from the sidebar to see what your assistant's personality file looks like right now. The page shows the current contents of SOUL.md in a read-only viewer.
## Shaping the Personality
You do not need to edit the file directly. Just talk to your assistant. Here are some ways to shape its personality:
<CardGroup cols={2}>
<Card title="Set the tone" icon="comment">
"Be more casual and direct. Skip the formalities."
</Card>
<Card title="Add a boundary" icon="shield">
"Never post to Slack or send emails without confirming with me first."
</Card>
<Card title="Change the personality" icon="masks-theater">
"Be more opinionated. If you think my approach is wrong, say so."
</Card>
<Card title="Start fresh" icon="rotate">
"Reset your personality to the default."
</Card>
</CardGroup>
The assistant will update SOUL.md based on your instructions and let you know what changed.
## Where SOUL.md Lives
SOUL.md is stored locally on your machine, inside the BrowserOS data folder:
| Operating System | Path |
|-----------------|------|
| **macOS** | `~/.browseros/SOUL.md` |
| **Windows** | `%APPDATA%/.browseros/SOUL.md` |
| **Linux** | `~/.browseros/SOUL.md` |
The file is plain Markdown, limited to 150 lines. You can open it in any text editor if you want to make manual edits, though we recommend letting the assistant manage it through conversation.
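The path table and the 150-line cap can be sketched like this. `soul_path` and `clamp` are hypothetical names, and the Windows branch assumes `%APPDATA%` resolves as shown in the table.

```python
import os
import sys
from pathlib import Path

MAX_LINES = 150  # SOUL.md is capped at 150 lines

def soul_path() -> Path:
    """Resolve the per-OS SOUL.md location from the table above."""
    if sys.platform == "win32":
        base = Path(os.environ.get("APPDATA", str(Path.home())))
    else:  # macOS and Linux both use the home directory
        base = Path.home()
    return base / ".browseros" / "SOUL.md"

def clamp(text: str) -> str:
    """Enforce the 150-line limit by dropping everything past it."""
    return "\n".join(text.splitlines()[:MAX_LINES])

assert soul_path().name == "SOUL.md"
```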
## SOUL vs Memory
BrowserOS keeps personality and knowledge separate on purpose.
<Columns cols={2}>
<Card title="SOUL.md" icon="heart">
**How the assistant behaves.** Personality, tone, communication style, rules, and boundaries. One file, updated by rewriting the whole thing.
</Card>
<Card title="Memory" icon="brain">
**What the assistant knows about you.** Your name, projects, tools, preferences, and recent events. Stored as core facts and daily notes.
</Card>
</Columns>
When the assistant learns that you prefer bullet points over paragraphs, that goes in SOUL.md. When it learns that you work at Acme Corp on a project called Atlas, that goes in memory.
This separation means the assistant can have a consistent personality even when its factual knowledge changes, and vice versa.
## Example SOUL.md
Here is what an evolved SOUL.md might look like after a few conversations:
```markdown
# SOUL.md
## Personality
- Direct and concise. No filler phrases.
- Have opinions and share them when relevant.
- Use humor sparingly but naturally.
## Communication Style
- Default to bullet points for lists and options.
- Keep status updates to one or two lines.
- When explaining something technical, use analogies.
## Boundaries
- Never send emails or post messages without explicit confirmation.
- Do not make purchases or financial transactions.
- Ask before modifying any file outside the current project.
## Preferences
- When researching, prioritize primary sources over summaries.
- For code tasks, prefer simple solutions over clever ones.
- Always explain trade-offs when suggesting approaches.
```
Your SOUL.md will look different because it is shaped by your conversations. No two are the same.
description: "Sign in to sync your conversations, settings, and automations across all your devices"
---
Sign in to BrowserOS and your data follows you everywhere. Your conversations, AI model settings, and scheduled tasks sync automatically to the cloud so you never lose your setup.
## Why Sign In?
Without an account, everything stays on one device. Sign in and your data is backed up and available wherever you use BrowserOS.
<CardGroup cols={2}>
<Card title="Pick up anywhere" icon="laptop-mobile">
Open BrowserOS on a new device and your conversations, model settings, and scheduled tasks are already there.
</Card>
<Card title="Never lose your history" icon="clock-rotate-left">
Chat history is saved to the cloud automatically. Clear your browser data or switch machines and everything is still available.
</Card>
<Card title="Settings follow you" icon="sliders">
Set up your AI models once. Your provider configurations sync across devices so you never re-enter the same setup twice.
</Card>
<Card title="Automations stay in sync" icon="arrows-rotate">
Create a scheduled task on your laptop and it appears on your desktop. Edits sync both ways.
</Card>
</CardGroup>
## How to Sign In
<Steps>
<Step title="Open a new tab">
Open a new tab in BrowserOS to see the home page.
</Step>
<Step title="Click Sign In">
Click **Sign In** in the sidebar to open the login page.
</Step>
<Step title="Choose your sign-in method">
Enter your email for a magic link, or sign in with Google.
</Step>
<Step title="Verify and you're in">
Click the link in your email (or complete Google sign-in). BrowserOS starts syncing your data immediately.
</Step>
</Steps>
<Tip>
Magic link sign-in means you never need to create or remember a password. Just enter your email and click the link.
</Tip>
## What Gets Synced
<AccordionGroup>
<Accordion title="Conversations" icon="messages">
Your full chat history syncs to the cloud as you go. Every message is saved in real time so you can pick up any conversation on another device. Locally, BrowserOS keeps your 50 most recent conversations. In the cloud, there is no limit.
</Accordion>
<Accordion title="AI model settings" icon="microchip">
Your configured LLM providers (OpenAI, Anthropic, Google, Moonshot, Azure, Bedrock, and others) sync across devices. This includes the model name, provider type, base URL, temperature, and context window settings.
**Your API keys are never synced.** Sensitive credentials like API keys, access keys, and session tokens stay on the device where you entered them. You will need to re-enter API keys on each new device.
</Accordion>
<Accordion title="Scheduled tasks" icon="clock">
Your scheduled task configurations sync in both directions. Create a task on one device, edit it on another, and changes are merged automatically using timestamps to resolve conflicts. Only the schedule setup syncs (name, prompt, schedule type, and timing). Task run results and output stay on the device where the task ran.
</Accordion>
<Accordion title="Profile" icon="user">
Your name, profile picture, and account preferences sync across devices. Information you provide during onboarding (role, company) is also saved to your profile.
</Accordion>
</AccordionGroup>
## What Stays Local
Some settings are device-specific and do not sync to the cloud:
- **API keys and secrets** for LLM providers
- **Memory** (core facts and daily notes)
- **SOUL.md** (assistant personality)
- **Theme** (light/dark mode)
- **Workspace folder** selection
- **Connected MCP servers**
- **Workflows**
- **Scheduled task results** (run output stays on the device where the task ran)
This is intentional. Sensitive credentials never leave your device, memory and personality files stay private, and display preferences can differ between machines.
## How Sync Works
BrowserOS uses a local-first approach. Your data is always saved on your device first, then synced to the cloud in the background.
<Steps>
<Step title="Local save">
Every action (sending a message, adding a provider, creating a task) is saved locally first. BrowserOS works fully offline.
</Step>
<Step title="Background sync">
When you are signed in, changes are automatically pushed to the cloud. New chat messages sync in real time. Provider and task changes sync whenever they are updated.
</Step>
<Step title="Restore on new devices">
When you sign in on a new device, BrowserOS pulls your conversations, model settings, scheduled tasks, and profile from the cloud and merges them with any local data.
</Step>
</Steps>
<Note>
If the same scheduled task is edited on two devices before they sync, BrowserOS keeps the version with the most recent timestamp.
</Note>
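The timestamp-based conflict resolution can be sketched as a last-write-wins merge. `TaskConfig` and `merge` are illustrative names, not BrowserOS internals.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TaskConfig:
    task_id: str
    prompt: str
    updated_at: datetime  # the conflict-resolution key

def merge(local, remote):
    """Last-write-wins: if both devices edited a task, keep the newer timestamp."""
    merged = dict(local)
    for task_id, theirs in remote.items():
        ours = merged.get(task_id)
        if ours is None or theirs.updated_at > ours.updated_at:
            merged[task_id] = theirs
    return merged

local = {"t1": TaskConfig("t1", "daily news at 8am", datetime(2024, 5, 1, 9, 0))}
remote = {
    "t1": TaskConfig("t1", "daily news at 9am", datetime(2024, 5, 1, 10, 0)),
    "t2": TaskConfig("t2", "weekly price check", datetime(2024, 5, 1, 8, 0)),
}
merged = merge(local, remote)
assert merged["t1"].prompt == "daily news at 9am"  # remote edit is newer
```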
## Security
<Columns cols={2}>
<Card title="API keys never leave your device" icon="key">
Sensitive credentials like API keys, access keys, and tokens are excluded from cloud sync entirely.
</Card>
</Columns>
description: "Control your browser and 40+ apps from Claude Code, OpenClaw, Gemini CLI, or any MCP client"
---
BrowserOS is the best browser for AI coding agents. It comes with a built-in MCP server that gives your AI agent **full browser control** and **direct access to 40+ external services** — Gmail, Slack, GitHub, Google Calendar, Linear, Notion, and more — all through a single MCP connection.
<Note>
Unlike Chrome DevTools MCP which requires setting up debug profiles and running separate servers, BrowserOS MCP works out of the box. Just copy the URL from settings and connect.
</Note>
## Why Use BrowserOS with Claude Code?
<CardGroup cols={2}>
<Card title="Agentic Coding" icon="code">
Claude tests your web app, reads console errors, and fixes the code — all in one loop.
</Card>
<Card title="40+ App Integrations" icon="grid-2">
Gmail, Slack, GitHub, Jira, Notion, Google Sheets, and more — accessible directly from your AI agent.
</Card>
<Card title="Data Extraction" icon="download">
Extract your LinkedIn profile, tweets, or any authenticated page content.
</Card>
<Card title="Task Automation" icon="repeat">
Fill forms, navigate multi-step workflows, and automate repetitive browser tasks.
</Card>
<Card title="53+ MCP Tools" icon="wrench">
Full browser control: tabs, navigation, clicks, typing, screenshots, bookmarks, history, tab groups, and window management.
</Card>
<Card title="Zero Config Auth" icon="lock">
Connect external services via OAuth — credentials are managed securely, never stored in BrowserOS.
</Card>
</CardGroup>
<Tip>
Wondering how BrowserOS MCP compares to Chrome DevTools MCP or other browser automation tools? See our [detailed feature comparison](/comparisons/chrome-devtools-mcp) covering 53 browser tools, 40+ app integrations, and why BrowserOS MCP gives developers more out of the box.
</Tip>
## Getting Started
<Steps>
<Step title="Open BrowserOS Settings">
Navigate to `chrome://browseros/mcp` or click **Settings** → **BrowserOS as MCP** in the sidebar.
</Step>
<Step title="Copy the MCP URL">
Copy the Server URL shown on the page (e.g., `http://127.0.0.1:9239/mcp`).
<img src="/images/features--browseros-mcp-settings.png" alt="BrowserOS MCP settings page showing Server URL" />
</Step>
<Step title="Connect your MCP client">
Use the tabs below to connect your preferred client.
</Step>
</Steps>
<Tabs>
<Tab title="Claude Code">
Add BrowserOS to Claude Code:
```bash
claude mcp add --transport http browseros <mcp_url> --scope user
# Example: claude mcp add --transport http browseros http://127.0.0.1:9239/mcp --scope user
```
Start Claude Code and try it:
```bash
claude
> Open amazon.com in BrowserOS
```
<Tip>
Run `claude --dangerously-skip-permissions` to skip confirmation prompts for each browser action.
</Tip>
To remove later:
```bash
claude mcp remove browseros --scope user
```
</Tab>
<Tab title="Gemini CLI">
Add BrowserOS to Gemini CLI:
```bash
gemini mcp add local-server <mcp_url> --transport http --scope user
```
</Tab>
</Tabs>
## Browser Tools
<AccordionGroup>
<Accordion title="History" icon="clock-rotate-left">
| Tool | Description |
|------|-------------|
| `search_history` | Search browser history by text query |
| `get_recent_history` | Get the most recent history items |
| `delete_history_url` | Delete a specific URL from history |
| `delete_history_range` | Delete history within a time range |
</Accordion>
</AccordionGroup>
---
## 40+ External App Integrations
BrowserOS connects your AI agent directly to the tools you already use — no separate MCP servers to install or configure. Everything is accessible through the same BrowserOS MCP connection.
### How It Works
<Steps>
<Step title="Agent calls an external service tool">
Your AI agent calls a tool like `gmail_search_messages` through the BrowserOS MCP.
</Step>
<Step title="OAuth login (first time only)">
If this is your first time using that service, BrowserOS opens an OAuth login page in the browser. Log in and authorize access.
</Step>
<Step title="Tool executes and returns results">
Once authenticated, the tool runs and returns results to your agent. Future calls to the same service work automatically — no re-authentication needed.
</Step>
</Steps>
<Note>
Your credentials are managed securely via OAuth and are **never stored in BrowserOS**. Tokens are refreshed transparently, and you can revoke access at any time from the service provider.
</Note>
description: "Move your tabs to the side for a cleaner, more organized browsing experience"
---
BrowserOS supports vertical tabs — a side panel that lists all your open tabs along the left edge of the browser window. Instead of shrinking tab titles into a cramped horizontal strip, vertical tabs give each tab its own full-width row so you can read titles at a glance, even with dozens of tabs open.
## Why Vertical Tabs?
Modern screens are wide, not tall. A horizontal tab bar wastes vertical space you could use for content, and tabs quickly become unreadable as they shrink. Vertical tabs solve both problems:
<CardGroup cols={2}>
<Card title="Read every tab title" icon="text">
Tabs stack vertically with full-width labels, so you always know what is open — no squinting at favicons.
</Card>
<Card title="Handle many tabs" icon="layer-group">
Open 30, 50, or 100 tabs without the strip becoming unusable. The side panel scrolls naturally.
</Card>
<Card title="Reclaim screen space" icon="expand">
The horizontal tab bar disappears, giving web pages more room on widescreen monitors.
</Card>
<Card title="Stay organized" icon="folder-tree">
Combine vertical tabs with tab groups to visually separate work, research, and personal browsing.
</Card>
</CardGroup>
## Enabling Vertical Tabs
Toggle vertical tabs on or off from the Customization settings page.
<Steps>
<Step title="Open Settings">
Go to `chrome://browseros/settings` in the address bar.
</Step>
<Step title="Go to Customization">
In the left sidebar, select **Customization**.
</Step>
<Step title="Toggle Use Vertical Tabs">
Flip the **Use Vertical Tabs** switch to on. The browser immediately moves your tabs to a side panel.
</Step>
</Steps>
<Frame caption="Enable vertical tabs in Settings > Customization">
<img src="/images/features--vertical-tabs-setting.png" alt="Vertical tabs toggle in BrowserOS Customization settings" />
</Frame>
To switch back, return to the same setting and turn the toggle off. Your tabs move back to the horizontal strip instantly.
## How It Works
When vertical tabs are enabled, the tab strip relocates from the top of the window to a collapsible side panel on the left. Each tab is displayed as a row showing the page favicon and full title.
- **Click** a tab row to switch to it.
- **Right-click** a tab for the standard context menu (pin, mute, close, move to group).
- **Drag** tabs up or down to reorder them, or drag them into and out of tab groups.
- The panel can be **collapsed** to show only favicons, freeing up even more horizontal space.
## Vertical Tabs + Tab Groups
Vertical tabs pair naturally with [tab groups](/features/workflows). Groups appear as collapsible sections in the side panel, making it easy to keep projects separate and fold away tabs you are not actively using.
description: "Build reliable, repeatable browser automations with a visual graph builder"
---
Workflows let you turn complex browser tasks into reliable, reusable automations. Instead of hoping the agent figures out the right steps each time, you define the exact sequence—and run it whenever you need.
Build a workflow when:
- **Reliability matters** — The task needs to work the same way every time
- **Steps are complex** — Multiple pages, loops, conditionals, or parallel actions
- **You'll repeat it** — Run the same automation daily, weekly, or on-demand
For quick, one-off tasks, the regular agent works well. For serious automation, build a workflow.
## Creating Your First Workflow
<img src="/features/workflow/workflows-page.png" alt="Access Workflows from the sidebar or create a new workflow" />
1. Open the **Workflows** page from the sidebar
2. Click **+ New Workflow**
3. Describe what you want in the chat panel
Try this example—copy and paste it to create a workflow that fills out forms from spreadsheet data:
```
Navigate to the spreadsheet https://dub.sh/browseros/test-spreadsheet. Get the contact information and fill it out in the form https://dub.sh/browseros/test-form for each entry in the spreadsheet. Feel free to parallelize this, but ensure all entries are filled.
```
The workflow agent will generate a visual graph representing each step. You can refine the workflow by chatting further—ask it to add steps, handle edge cases, or adjust the logic.
<img src="/features/workflow/sample-workflow.png" alt="Generated workflow graph with parallel execution" />
4. Click **Test Workflow** to run it and verify it works
5. Click **Save Changes** to keep it for later
## Running Workflows
From the Workflows page, you can:
- **Run** — Execute the workflow immediately
- **Edit** — Open the graph builder to refine steps
- **Delete** — Remove workflows you no longer need
## Example Use Cases
**Data entry automation**
> Read contacts from a Google Sheet and submit each one to a web form—automatically handling pagination and parallel submissions.
**LinkedIn outreach**
> Visit each profile from a list, check if they match your criteria, and send a personalized connection request.
**Price monitoring**
> Check prices across multiple e-commerce sites, extract the data, and compile it into a spreadsheet.
**Bulk unsubscribes**
> Go through your Gmail, find subscription emails, and click unsubscribe on each one.
## Feedback
Workflows is a new feature. If you'd like to see scheduling support, sharing, or other capabilities, [open a GitHub issue](https://github.com/browseros-ai/BrowserOS/issues) with your request.