The model picker in NewProviderDialog rendered inline, which caused the
dialog to resize, and lacked keyboard navigation. Replace it with a Popover +
Command (shadcn Combobox) pattern and add fuse.js for fuzzy search.
- Replace custom ModelPickerList with Popover + Command dropdown
- Add fuse.js for fuzzy model search (replaces string.includes)
- Add MODEL_SELECTED_EVENT and AI_PROVIDER_UPDATED_EVENT analytics
- Enrich PROVIDER_SELECTED_EVENT with model_id in chat sessions
* fix: add refresh indicator to chat history when fetching latest conversations
Show a non-blocking "Fetching latest conversations" indicator at the top
of the history list while the cached data is being refreshed. Users can
still interact with the cached conversation list during the refresh.
* perf: reduce chat history query payload — fetch last 2 messages instead of 5
The conversation list only displays the last user message as a preview.
Fetching 5 messages per conversation was wasteful — each message contains
the full UIMessage object (tool calls, reasoning, etc.) multiplied by
50 conversations per page. Reduced to the last 2, which is sufficient to
find the last user message in a user→assistant exchange.
* perf: use first+DESC instead of last+ASC to push LIMIT down to SQL
PostGraphile's `last: N` doesn't map to SQL LIMIT — it uses a padded
LIMIT 10 and slices in application code. Changing to `first: 2` with
ORDER_INDEX_DESC generates a true SQL LIMIT 2, reducing rows scanned
from 500 to 100 per page (50 conversations × 2 vs 10 messages each).
No UX impact — extractLastUserMessage() filters by role regardless
of message order.
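The order-independence claim can be sketched as a pure helper; the names and message shape here are hypothetical, assuming each message carries a `role` field:

```typescript
// Hypothetical sketch: extraction filters by role rather than position,
// so it works for both ASC and DESC message order. With ORDER_INDEX_DESC
// and `first: 2`, the newest messages arrive first, so the first
// role === "user" hit is the latest user message.
interface Message {
  role: "user" | "assistant";
  text: string;
}

function extractLastUserMessage(
  messages: Message[],
  descending: boolean,
): Message | undefined {
  const newestFirst = descending ? messages : [...messages].reverse();
  return newestFirst.find((m) => m.role === "user");
}
```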
* chore: update react query packages
* feat: replace localforage with idb-keyval
* feat: isolate new-tab agent navigation from origin tab
Add origin-aware navigation isolation so the agent never navigates
away from the new-tab chat UI. This is a two-layer defense:
1. Prompt adaptation: When origin is 'newtab', the system prompt's
execution and tool-selection sections are rewritten to prohibit
navigating the active tab and default all lookups to new_page.
2. Tool-level guards: navigate_page and close_page reject attempts
to act on the origin tab when in newtab mode, returning an error
that teaches the agent to self-correct.
The client now sends an `origin` field ('sidepanel' | 'newtab')
instead of injecting a soft NEWTAB_SYSTEM_PROMPT that LLMs could
ignore. Backwards compatible — defaults to 'sidepanel'.
Closes TKT-592, addresses TKT-564
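The tool-level guard can be sketched as a pure check; the names here are hypothetical, and the real tool wiring is richer:

```typescript
// Hypothetical sketch of the navigate_page guard: reject attempts to
// navigate the origin tab when the session started from the new-tab page.
type Origin = "sidepanel" | "newtab";

interface Session {
  origin?: Origin; // older clients omit this field
  originTabId: number;
}

// Returns an error string (fed back to the agent so it can self-correct)
// or null when navigation is allowed.
function guardNavigatePage(session: Session, targetTabId: number): string | null {
  const origin = session.origin ?? "sidepanel"; // backwards compatible default
  if (origin === "newtab" && targetTabId === session.originTabId) {
    return "Cannot navigate the new-tab chat page; open the URL in a new page instead.";
  }
  return null;
}
```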
* test: add newtab origin navigation guard tests
- 14 new prompt tests verifying the system prompt adapts correctly
for newtab vs sidepanel origin (execution rules, tool selection table,
absence of conflicting single-tab guidance)
- 6 new integration tests for navigate_page and close_page guards:
rejects origin tab in newtab mode, allows non-origin tabs, allows
all tabs in sidepanel mode, backwards compatible with no session
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
* fix: broaden connection error detection for main page and sidepanel
The connection error check required both "Failed to fetch" AND
"127.0.0.1" in the error message. On the main page, the browser
only produces "Failed to fetch" without the IP, so users saw a
generic "Something went wrong" instead of the troubleshooting link.
Broaden detection to also match "localhost" and bare "Failed to fetch"
errors that don't contain an external URL. Also pass providerType in
NewTabChat so provider-specific errors render correctly.
Closes #526
* fix: simplify connection error detection
All chat requests go through the local BrowserOS agent server, so any
"Failed to fetch" error is always a local connection issue. Remove the
unnecessary 127.0.0.1/localhost/URL checks.
* fix: pass providerType to agentUrlError ChatError instances
- Change dialog width from sm:max-w-2xl (672px) to sm:w-[70vw] sm:max-w-4xl
so it takes 70% of viewport width, capped at 896px
- Add overflow-x-auto on table wrappers so wide tables scroll horizontally
instead of being clipped
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: integrate models.dev for dynamic LLM provider/model data (#TKT-657)
Replace hardcoded model lists with data sourced from models.dev so new
providers and models appear automatically when the community adds them.
- Add build script (scripts/generate-models.ts) that fetches models.dev/api.json
and outputs a compact JSON with 10 providers and 520 models
- Replace hardcoded MODELS_DATA (50 models) with dynamic models.dev lookups
- Add searchable model combobox (Popover + Command) replacing plain Select dropdown
- Enrich provider templates with models.dev metadata (context window, image support)
- Keep chatgpt-pro, qwen-code, browseros, openai-compatible as hardcoded providers
* fix: address review — remove ollama-cloud mapping, fix default models, remove dead code
- Remove ollama from PROVIDER_MAP (ollama-cloud has cloud models, not local)
- Add ollama to CUSTOM_PROVIDER_MODELS with empty list (users type custom IDs)
- Update defaultModelIds to ones that exist in models.dev data:
openrouter → anthropic/claude-sonnet-4.5
lmstudio → openai/gpt-oss-20b
bedrock → anthropic.claude-sonnet-4-6
- Remove dead isCustomModel export
- Regenerate models-dev-data.json (9 providers, 486 models)
* fix: model suggestion list focus/dismiss behavior
- List only opens when input is focused or user types
- Clicking a model selects it and closes the list
- Clicking outside (blur) dismisses the list
- onMouseDown preventDefault on list items prevents blur race condition
* refactor: extract ModelPickerList component with proper open/close UX
- Collapsed state: Select-like trigger showing selected model + chevron
- Expanded state: search input + scrollable filtered list, inline
- Click outside or Escape to close, Enter to submit custom model
- Extracted as separate component (reduces dialog nesting, testable)
- No more setTimeout hacks for blur handling
* chore: remove plan doc from repo
* feat: improve rate limit UX, usage page, and provider selector
- Show "Add your own provider for unlimited usage" CTA when BrowserOS
credits are exhausted or daily limit is reached
- Fix credit exhaustion detection to match actual error message
- Improve Usage page: remove disabled Add Credits button, add "Coming
soon" badge, add "Want unlimited usage?" section linking to providers
- Add "+ Add Provider" button at bottom of chat provider selector dropdown
* fix: use asChild pattern for Button+anchor in usage page
Replace nested <a><Button> (invalid HTML) with Button asChild
pattern per shadcn/ui convention.
* feat: UI improvements for OAuth dialog, provider badges, and events docs
- Replace OAuth device code toast with a proper Dialog showing the code
prominently with a copy button (GitHub Copilot, Qwen Code, ChatGPT Pro)
- Add "New" badge on provider template cards for ChatGPT Plus/Pro,
GitHub Copilot, and Qwen Code with orange border highlight
- Add events.md documenting all analytics events across the platform
* fix: add verificationUri to DeviceCodeDialog for popup-blocked fallback
Add verificationUri to PendingDeviceCode interface and pass it from
both handleClientAuth and handleServerAuth. Render a fallback "Open
verification page" link in DeviceCodeDialog so users can navigate
to the auth page if the popup was blocked.
- Add MCP promo banner on AI providers page with "New" badge and
"66+ tools" highlight, linking to /settings/mcp
- Add Quick Setup section on MCP settings page with copy-paste
commands for Claude Code, Gemini CLI, Codex, Claude Desktop, OpenClaw
- Consolidate MCP settings: move restart button inline with server URL,
remove separate MCP Server Settings card
- Add analytics event for promo banner clicks
* fix: prevent deleted scheduled tasks from reappearing after sync
When a scheduled task was deleted, the sync function would see the
remote job missing locally and re-add it, undoing the delete. Fix by
tracking pending deletions in storage so the sync function deletes
them from the backend instead of re-adding them locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use read-modify-write for pending deletions to prevent concurrent clobber
Re-read pendingDeletionStorage before write-back and only remove
resolved IDs, preserving any new entries added by concurrent
removeJob calls during the sync's network I/O.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
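The read-modify-write merge can be sketched as a pure function over ID lists; the function name is hypothetical:

```typescript
// Hypothetical sketch: after the sync's network I/O, re-read the stored
// pending-deletion list and remove only the IDs this sync actually
// resolved. IDs added concurrently by removeJob calls survive.
function mergePendingDeletions(
  latestStored: string[], // re-read from storage after network I/O
  resolvedIds: string[],  // deletions this sync pushed to the backend
): string[] {
  const resolved = new Set(resolvedIds);
  return latestStored.filter((id) => !resolved.has(id));
}
```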
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add voice recording UI with waveform overlay to new tab search bar
Add a microphone button to the NewTab search bar that opens a fullscreen
recording overlay powered by react-voice-visualizer. The overlay shows a
real-time waveform visualization during recording, recording time, and a
stop button. On completion, the audio is transcribed via the existing
gateway endpoint and the transcript auto-navigates to inline chat.
Changes:
- Extract transcribeAudio() to shared lib/voice/transcribe-audio.ts
- Add VoiceRecordingOverlay component with react-voice-visualizer
- Add Mic button to NewTab search bar
- Track analytics via existing NEWTAB_VOICE_* events
- Handle cancel (backdrop click) vs submit (stop button) correctly
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review comments for voice recording overlay
- Reset processingRef on transcription error to prevent stuck state
- Use stable callback refs to prevent useEffect re-runs from inline
arrow function props (fixes timer reset and unnecessary re-processing)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: replace voice overlay with inline sidepanel-style voice UI
Remove react-voice-visualizer dependency and VoiceRecordingOverlay.
Instead use the same inline voice pattern as the sidepanel ChatInput:
- Waveform bars replace the search input during recording
- Mic/stop/loading button states in the search bar
- Transcript populates the search input on completion
- Voice error shown inline below the search bar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add Qwen Code as OAuth LLM provider with refactored OAuth hooks
Add Alibaba Qwen Code as a third OAuth provider using Device Code flow
with PKCE. Free tier: 2,000 requests/day, up to 1M token context.
Refactoring:
- Extract useOAuthProviderFlow hook (eliminates ~180 lines of duplicated
OAuth logic from AISettingsPage for ChatGPT Pro + Copilot + Qwen)
- Extract resolveOAuthConfig in config.ts (shared resolver for all OAuth
providers, parameterized by provider name, default model, refresh flag)
- Generalize token-manager device code flow to support PKCE
(code_challenge/code_verifier) and form-urlencoded content type
New code:
- Qwen Code provider config with PKCE + form encoding flags
- Provider factories (both provider.ts and provider-factory.ts)
- Extension UI (template card, models, analytics, dialog)
* fix: use portal.qwen.ai as API base URL for OAuth tokens
DashScope (dashscope.aliyuncs.com) expects Alibaba Cloud API keys,
not OAuth tokens from chat.qwen.ai. The correct endpoint for OAuth
Bearer tokens is portal.qwen.ai/v1.
* fix: correct Qwen Code model IDs and context windows
- coder-model (1M context): a virtual alias that routes to the best model
- qwen3-coder-plus (1M): was incorrectly 131K
- qwen3-coder-flash (1M): new, speed-optimized variant
- qwen3.5-plus (1M): was incorrectly 1048576 (power-of-two vs decimal)
- Removed qwen3-coder-next (local/self-hosted, not available via OAuth)
- Default model changed to coder-model (auto-routes server-side)
* fix: move Qwen device code request to extension (bypasses WAF)
Alibaba WAF blocks server-side requests to chat.qwen.ai. Move the
initial device code request to the extension (browser context with
cookies), then hand off the deviceCode + codeVerifier to the server
for background polling via new POST /oauth/:provider/poll endpoint.
* fix: persist OAuth flow-started flag in sessionStorage
The flowStartedRef was lost when the component remounted (e.g. user
navigated to onboarding then back to settings). Use sessionStorage
to persist the flag so auto-create works after navigation.
* revert: remove sessionStorage for OAuth flow flag
Revert to simple useRef pattern matching the original ChatGPT Pro
implementation. The auto-create works when the user stays on the
AI settings page during auth.
* revert: move Qwen back to server-side device code flow
WAF block was temporary (rate-limiting), not permanent. Server-side
fetch to chat.qwen.ai now works. Reverted client-side device code
approach — Qwen now uses the same clean server-side flow as Copilot.
Removed: clientSideDeviceCode config, startClientSideDeviceCode(),
POST /oauth/:provider/poll endpoint, startDeviceCodePolling().
* feat: add WAF detection, rate-limit protection, and token storage endpoint
- Detect WAF captcha responses (HTML instead of JSON) in device code
request and token polling, with user-friendly error messages
- Add 30s cooldown on "USE" button to prevent rapid clicks triggering WAF
- WAF-blocked poll requests silently retry instead of aborting
- Add POST /oauth/:provider/token endpoint for storing externally-provided
tokens (useful for future fallback flows)
- Add storeTokens() method to OAuthTokenManager
- Pass server error messages through to extension toast notifications
* refactor: remove 30s cooldown, simplify OAuth hook
The hook is now identical for all providers — server handles retries
via activeDeviceFlows.delete(). Removed flowStartedAtRef cooldown
that was blocking legitimate retries.
* feat: client-side OAuth for Copilot and Qwen Code
Move device code OAuth flow to the extension for GitHub Copilot and
Qwen Code. The extension makes requests using Chrome's network stack,
which bypasses Alibaba WAF TLS fingerprint detection that blocks
server-side Bun/Node.js fetch.
New files:
- client-oauth.ts: Client-side device code + PKCE + token polling
Changes:
- useOAuthProviderFlow: handleClientAuth() for providers with clientAuth
config, handleServerAuth() for others (ChatGPT Pro)
- AISettingsPage: clientAuth config for Copilot and Qwen Code
- WAF detection: opens provider site for captcha solving on block
Server-side device code flow preserved as fallback (token-manager.ts,
providers.ts). Token storage via POST /oauth/:provider/token endpoint.
* fix: export OAuthProviderFlowConfig type, fix typecheck errors
- Export OAuthProviderFlowConfig interface so AISettingsPage can use it
instead of duplicating the type inline
- Fix string | null → string | undefined for agentServerUrl parameter
Add CHATGPT_PRO_SUPPORT and GITHUB_COPILOT_SUPPORT feature flags gated
on minServerVersion 0.0.77. Hide template cards and provider type
dropdown options when the server doesn't support the OAuth endpoints.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add model selector to newtab search bar
Add AI provider/model selector button to the newtab homepage footer bar,
matching the existing button aesthetics (Workspace, Tabs, Apps). Reuses
ChatProviderSelector popover from sidepanel. Users can now see and change
their AI provider before starting a conversation from the newtab page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: clean up newtab footer with icon-only buttons
Reduce visual clutter in the search bar footer by converting Provider,
Workspace, and Tabs buttons to compact icon-only buttons (8x8). Text
labels and chevron indicators are removed — native title tooltips
provide discoverability on hover. Apps button on the right keeps its
text label per user preference.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add hover-expand labels to newtab footer icon buttons
Replace static title tooltips with smooth hover-expand animation —
buttons show icon-only by default, text label slides out on hover
via max-w transition. Gives a clean compact look while keeping
labels discoverable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: revert workspace/tabs to full text, keep provider hover-expand only
Restore full text labels for Workspace and Tabs buttons. Only the
provider selector uses the compact icon + hover-expand pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: simplify provider selector to plain icon button
Remove hover-expand animation, use a simple icon-only button with
native title tooltip for the provider selector.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GitHub Copilot as OAuth-based LLM provider
Add GitHub Copilot as a second OAuth provider using the Device Code flow
(RFC 8628). Users authenticate via github.com/login/device, and the server
polls for token completion. Supports 25+ models through a single Copilot
subscription.
Key changes:
- Device Code OAuth flow in token manager (poll with safety margin)
- Custom fetch wrapper injecting Copilot headers + vision detection
- Provider factory using createOpenAICompatible for Chat Completions API
- Extension UI with template card, auto-create on auth, and disconnect
* fix: address PR review comments for GitHub Copilot OAuth
- Validate device code response for error fields (GitHub can return 200
with error payload)
- Store empty refreshToken instead of access token for GitHub tokens
- Add closeButton to Toaster for dismissing device code toast
* fix: add github-copilot to agent provider factory
The chat route uses a separate provider-factory.ts (agent layer) from the
test-provider route (llm/provider.ts). Added createGitHubCopilotFactory
to the agent factory so chat works with GitHub Copilot.
* fix: add github-copilot to provider icons, models, and dialog
- Add Github icon from lucide-react to providerIcons map
- Add 8 Copilot models (GPT-4o, Claude, Gemini, Grok) to models.ts
- Add github-copilot to NewProviderDialog zod enum, validation skip,
canTest check, and OAuth credential message
* fix: reorder copilot models with free-tier models first
Put models available on Copilot Free at the top (gpt-4o, gpt-4.1,
gpt-5-mini, claude-haiku-4.5, grok-code-fast-1), followed by
premium models that require paid Copilot subscription.
* fix: set correct 64K context window for Copilot models
Copilot API enforces a 64K input token limit regardless of the
underlying model's native context window. Updated all model entries
and the default template to 64000 so compaction triggers correctly.
* fix: use actual per-model prompt limits from Copilot /models API
Queried api.githubcopilot.com/models for real max_prompt_tokens values.
GPT-4o/4.1 have 64K, Claude/gpt-5-mini have 128K, GPT-5.x have 272K.
Also updated model list to match what's actually available on the API
(e.g. claude-sonnet-4.6 instead of 4.5, added gpt-5.4/5.2-codex).
* feat: resize images for Copilot using VS Code's algorithm
Large screenshots cause 413 errors on Copilot's API. Resize images
following VS Code's approach: max 2048px longest side, 768px shortest
side, re-encode as JPEG at 75% quality. Uses sharp for server-side
image processing.
* fix: address all Greptile P1 review comments
- Add .catch() on fire-and-forget pollDeviceCode to prevent unhandled
rejection crashes (Node 15+)
- Add deduplication guard (activeDeviceFlows Set) to prevent concurrent
device code flows for the same provider
- Add runtime validation of server response in frontend before calling
window.open() and showing toast
- Remove dead GITHUB_DEVICE_VERIFICATION constant from urls.ts
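The deduplication guard plus the fire-and-forget `.catch()` can be sketched together; the shape is hypothetical:

```typescript
// Hypothetical sketch: at most one in-flight device code flow per
// provider, with .catch() so a fire-and-forget rejection cannot crash
// the process (unhandled rejections are fatal on Node 15+).
const activeDeviceFlows = new Set<string>();

function startDeviceFlow(provider: string, poll: () => Promise<void>): boolean {
  if (activeDeviceFlows.has(provider)) return false; // already polling
  activeDeviceFlows.add(provider);
  poll()
    .catch((err) => console.error(`device flow for ${provider} failed`, err))
    .finally(() => activeDeviceFlows.delete(provider));
  return true;
}
```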
* fix: upgrade biome to 2.4.8, fix all lint errors, and address review bugs
- Upgrade biome from 2.4.5 to 2.4.8 (matches CI) and migrate configs
- Fix image resize: only re-encode when dimensions actually change
- Fix device code polling: retry on transient network errors instead of aborting
- Allow restarting device code flow (clear old flow instead of throwing 500)
- Fix pre-existing noNonNullAssertion and noExplicitAny lint errors globally
* fix: address Greptile P2 review — image resize and config guard
- Fix early-return guard: check max/min sides against their respective
limits (MAX_LONG_SIDE/MAX_SHORT_SIDE) instead of both against SHORT
- Preserve PNG alpha: detect hasAlpha and keep PNG format instead of
unconditionally converting to lossy JPEG
- Keep browserosId guard in resolveGitHubCopilotConfig consistent with
ChatGPT Pro pattern (safety check that caller context is valid)
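The dimension math, with the corrected early-return guard, can be sketched as follows; the constants come from the commits above, the function name is hypothetical, and the actual re-encoding uses sharp:

```typescript
// Hypothetical sketch of the VS Code-style resize math: longest side
// capped at 2048px, shortest at 768px, aspect ratio preserved.
const MAX_LONG_SIDE = 2048;
const MAX_SHORT_SIDE = 768;

function targetDimensions(
  width: number,
  height: number,
): { width: number; height: number } {
  const long = Math.max(width, height);
  const short = Math.min(width, height);
  // Early-return guard: each side is checked against its own limit,
  // so an image within both limits is never re-encoded.
  if (long <= MAX_LONG_SIDE && short <= MAX_SHORT_SIDE) {
    return { width, height };
  }
  // Scale by whichever constraint is tighter.
  const scale = Math.min(MAX_LONG_SIDE / long, MAX_SHORT_SIDE / short);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```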
* feat: update Copilot models to full list from pricing page, default to gpt-5-mini
Added all 23 models from GitHub Copilot pricing page. Ordered with
free-tier models first (gpt-5-mini, claude-haiku-4.5), then premium.
Changed the default from gpt-4o to gpt-5-mini since it's unlimited on the
Pro plan and has 128K context (vs gpt-4o's 64K limit).
* fix(skills): read-only view mode for built-in skills
- SkillCard shows Eye icon + "View" for built-in, Pencil + "Edit" for user
- SkillDialog in read-only mode: disabled fields, no toolbar on markdown
editor, "View Skill" title, "Close" button, no "Update Skill"
- Hide tip section in read-only mode
* fix(skills): use react-markdown for read-only skill view
Replace MDXEditor with react-markdown for viewing built-in skills.
MDXEditor chokes on code fences, angle brackets, and image syntax
causing content truncation. react-markdown handles standard markdown
correctly with no rendering issues.
* Revert "feat: convert settings to popup dialog (#477)"
This reverts commit 42aa0ff1ef.
* fix: address review feedback for PR #498
- Remove erroneous SETTINGS_PAGE_VIEWED_EVENT tracking from SidebarLayout
(was firing on every non-settings page navigation)
- Fix mobile settings sidebar not closing on route change by merging
setMobileOpen(false) into the pathname-dependent analytics useEffect
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: select text and pass to sidepanel
* fix: lint issues
* fix: persist selection across tabs
* fix: review comments
* fix: change when the selection is cleared
* feat: sanitize url
* fix(skills): UI section separation and fix find-alternatives rendering
- Split skills page into "My Skills" (user) and "BrowserOS Skills" (built-in) sections
- Fix find-alternatives SKILL.md — replace angle bracket placeholders with curly
braces to prevent MDXEditor from parsing them as JSX and rendering empty content
* fix(skills): bump find-alternatives to v1.1 for CDN sync
* feat: updated chat ui from homepage
* fix: vertical scroll
* fix: horizontal scroll issue
* fix: lint issues
* fix: header width
* fix: message input from home to chat
* feat: created sidebar header support in new tab chat
* fix: remove history from new tab chat
* fix: remove the shared element transition
* fix: lint issues
* fix: review comments
* fix: defer the sendMessage callback
* fix: all code concerns
* fix: preserve state of chat on homepage
* fix: review comments
* fix(skills): separate built-in and user skills into distinct directories
- Move built-in skills to ~/.browseros/skills/builtin/, user skills stay in root
- Unify seed + sync into single syncBuiltinSkills() function, delete seed.ts
- Preserve user's enabled/disabled state during remote sync version updates
- Add catalog reconciliation — remove built-in skills dropped from remote catalog
- Fallback to bundled defaults per-skill when remote sync fails
- One-time migration moves existing default skills from root to builtin/
- Add builtIn field to SkillMeta, determined by directory (not metadata)
- UI shows "Built-in" badge, hides delete button for built-in skills
- Reject deletion of built-in skills in service layer
- Check both dirs for ID collision on skill creation
* fix(skills): address review — dedup by id, guard applyEnabled regex
- loader.ts: deduplication now keys on skill.id (directory slug) not
skill.name (display name), preventing silent drops on name collision
- remote-sync.ts: applyEnabled checks if regex matched before writing,
logs warning if remote content lacks an enabled field
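The id-keyed deduplication can be sketched as follows; the interface is a minimal stand-in for the real SkillMeta:

```typescript
// Hypothetical sketch: dedupe on skill.id (the directory slug, which is
// unique) rather than skill.name (the display name, which may collide),
// so two differently installed skills with the same name are both kept.
interface SkillMeta {
  id: string;   // directory slug, unique
  name: string; // display name, may collide
}

function dedupeSkills(skills: SkillMeta[]): SkillMeta[] {
  const seen = new Map<string, SkillMeta>();
  for (const skill of skills) {
    if (!seen.has(skill.id)) seen.set(skill.id, skill); // first wins
  }
  return [...seen.values()];
}
```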
* fix(skills): reconciliation preserves bundled defaults, delete returns 403
- reconcileRemovedSkills now keeps DEFAULT_SKILLS IDs in the safe set,
preventing delete-then-reinstall cycle that lost enabled:false state
- DELETE /skills/:id returns 403 for built-in skills instead of 500
* refactor(skills): simplify syncBuiltinSkills to single clean pass
Build content map (bundled + remote), iterate once, preserve enabled,
reconcile deletions. Removes 7 helper functions, 70 lines of code.
* refactor(skills): extract syncOneSkill, patch content before writing
- syncBuiltinSkills is now 15 lines: build map, iterate, clean up
- syncOneSkill: flat, patches enabled state before writing (single write)
- setEnabled: pure function for content patching
- removeObsoleteSkills: extracted from inline block
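The setEnabled content patch can be sketched as a pure function; the exact frontmatter format is an assumption here, and the regex guard mirrors the earlier review fix:

```typescript
// Hypothetical sketch: patch the `enabled:` line in skill content before
// writing, so the sync needs a single write per skill. Only writes when
// the field actually matched (guard from review), otherwise warns and
// returns the content unchanged.
function setEnabled(content: string, enabled: boolean): string {
  const pattern = /^enabled:\s*(true|false)\s*$/m;
  if (!pattern.test(content)) {
    console.warn("skill content lacks an enabled field; leaving as-is");
    return content;
  }
  return content.replace(pattern, `enabled: ${enabled}`);
}
```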
* feat: convert settings page to popup dialog, move workflows to main nav
Replace the dedicated settings page layout (SettingsSidebarLayout) with a
modal dialog (SettingsDialog) that opens on top of the current page. Settings
are now accessible via a dialog triggered from the main sidebar, eliminating
the confusing dual-sidebar navigation pattern.
- Create SettingsDialog with tabbed left panel and content area
- Move Workflows into main sidebar navigation (feature-gated)
- Remove /settings/* routes (except /settings/survey)
- Delete SettingsSidebarLayout and SettingsSidebar components
- Update backward compatibility redirects
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: setup new urls for the dialog box
* fix: dialog close button
* fix: settings analytics
* fix: address review comments
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add ChatGPT Pro OAuth as LLM provider
Adds OAuth 2.0 (Authorization Code + PKCE) flow so users can authenticate
with their ChatGPT Pro subscription to power BrowserOS's agent, matching
the pattern used by Codex CLI, OpenCode, and Pi.
Server:
- OAuth token lifecycle (PKCE, exchange, refresh, SQLite storage)
- Dedicated callback server on port 1455 (Codex client ID registration)
- Codex fetch wrapper routing API calls to chatgpt.com/backend-api
- Config resolution + provider factories for all code paths (chat, test, refine)
Extension:
- ChatGPT Pro template card with OAuth flow trigger
- Status polling hook + auto-create provider on auth success
- Model list with Codex-supported models (gpt-5.x-codex family)
* fix: address Greptile PR review comments
- Wire OAuth callback server stop handle into onShutdown (P1: port 1455 leak)
- Guard against missing refresh token + clear stale tokens on failed refresh (P1)
- Add logger.warn to silent catch in codex-fetch body mutation
- Document JWT trust assumption in parseAccessTokenClaims
- Source model ID from provider template instead of hard-coding
* simplify: remove unnecessary OAuth shutdown wiring and useCallback
- Revert OAuthHandle interface — callback server port releases on process exit
- Remove stopCallbackServer from shutdown flow (dead code)
- Remove all useCallback from useOAuthStatus per CLAUDE.md guidance
* style: add readonly modifiers and braces per TS style guide
* docs: add E2E test screenshots for ChatGPT Pro OAuth
* fix: strip item IDs from Codex requests to fix multi-turn conversations
* fix: preserve function_call_output IDs in Codex requests
* fix: resolve Codex store=false + tool-use incompatibility
- Pass providerOptions { openai: { store: false } } to ToolLoopAgent
so the AI SDK inlines content instead of using item_reference
- Strip item IDs and previous_response_id in codex-fetch (safety net)
- Use .responses() model (Codex only speaks Responses API format)
* fix: remove non-Codex model gpt-5.2 from chatgpt-pro model list
* fix: strip unsupported Codex params and update model list
- Strip temperature, max_tokens, top_p from Codex requests (unsupported)
- Add all available Codex models including gpt-5.4, gpt-5.2, gpt-5.1
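The parameter scrubbing can be sketched as a small pure function; the name is hypothetical (the real stripping lives in the codex-fetch wrapper):

```typescript
// Hypothetical sketch: the Codex endpoint rejects sampling/limit params
// that other OpenAI endpoints accept, so drop them before sending.
const UNSUPPORTED_CODEX_PARAMS = ["temperature", "max_tokens", "top_p"] as const;

function stripUnsupportedCodexParams(
  body: Record<string, unknown>,
): Record<string, unknown> {
  const cleaned = { ...body };
  for (const key of UNSUPPORTED_CODEX_PARAMS) {
    delete cleaned[key];
  }
  return cleaned;
}
```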
* chore: remove screenshots containing email
* feat: enable reasoning events for ChatGPT Pro Codex models
* chore: set reasoning effort to high for ChatGPT Pro
* feat: add configurable reasoning effort and summary for ChatGPT Pro
- Add reasoningEffort (none/low/medium/high) and reasoningSummary
(auto/concise/detailed) dropdowns in the Edit Provider dialog
- Pass through extension → chat request → agent config → providerOptions
- Defaults: effort=high, summary=auto
* fix: strip max_output_tokens from Codex requests (fixes compaction)
* fix: address Greptile P1 issues
- Fix default model fallback: gpt-4o → gpt-5.3-codex (Codex endpoint)
- Clear stale tokens on refresh failure (prevents infinite retry loop)
- Only auto-create provider after explicit OAuth flow, not on page load
- Add catch block to auto-create effect with error toast
* feat: add "Rewrite with AI" prompt refinement for scheduled tasks
Add a lightweight /refine-prompt endpoint that uses generateText to
rewrite rough scheduled task prompts into clear, actionable instructions.
The UI adds a sparkle-icon button next to the Prompt label in the
NewScheduledTaskDialog with loading state, undo support, and disabled
state when the textarea is empty.
* fix: clear stale undo ref on dialog re-open and pass providerId to refinePrompt
- Reset originalPromptRef when dialog opens and on form submit to
prevent stale "Undo rewrite" button on re-open
- Accept optional providerId in refinePrompt() so the form's selected
provider is used for refinement instead of always the system default
* fix: hide undo rewrite link while refinement is in flight
* fix: reset isRefining state on dialog re-open
* fix: ignore stale refine-prompt responses after dialog re-open
Use a request generation counter so that if the dialog is closed and
re-opened while a rewrite is in flight, the stale response is silently
discarded instead of overwriting the fresh form state.
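The generation-counter pattern can be sketched as a standalone class; this is hypothetical, as the real code holds the counter in a React ref:

```typescript
// Hypothetical sketch: each request takes a generation number, and a
// response is applied only if its generation still matches the current
// one. Bumping the counter on dialog open invalidates anything still
// in flight from a previous session.
class RequestGeneration {
  private current = 0;

  // Call on dialog open and when starting a new request.
  next(): number {
    return ++this.current;
  }

  isCurrent(generation: number): boolean {
    return generation === this.current;
  }
}
```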
* fix: invalidate stale refine requests on dialog reopen and rename to kebab-case
- Increment refineRequestIdRef on dialog open so in-flight requests
from a previous session are discarded when they complete
- Rename refinePrompt.ts to refine-prompt.ts per CLAUDE.md file naming
* feat: add voice input to agent chat sidebar
Allow users to record voice and transcribe to text in the chat input.
Mic button shows when input is empty, waveform visualizer during recording,
transcription via OpenAI (llm.browseros.com/api/transcribe).
- Extract shared useVoiceInput hook to lib/voice/
- Time-domain waveform bars that bounce per-frequency-band
- Bar height capped to fit input container
- Analytics events for recording lifecycle
* fix: address review — add fetch timeout, await stopRecording, deduplicate VoiceInputState
- Add AbortSignal.timeout(30s) to transcription fetch
- Await stopRecording() and track analytics after completion
- Export VoiceInputState from useVoiceInput, import in consumers
* fix: await startRecording before tracking, narrow SurveyChat effect deps
- Await startRecording() so analytics only fires after mic permission granted
- Narrow SurveyChat useEffect dependency from [voice] to [voice.transcript, voice.isTranscribing]
* fix: analytics only tracks on success, clean up stream on failure, type API response
- startRecording returns boolean; track(RECORDING_STARTED) only fires on success
- Catch block cleans up MediaStream tracks and AudioContext on partial failure
- Type transcription API response with TranscribeResponse interface
* fix: keep mic button always visible alongside send button
Mic and send are now separate buttons, both always visible.
Mic is disabled while AI is streaming. Send is disabled during
recording/transcribing. Buttons are no longer absolutely positioned
inside the textarea — they sit beside it in the flex row.
* fix: keep mic button always visible inside input alongside send
Both mic and send buttons are always visible inside the input field,
positioned on the right side (ChatGPT-style). Mic is disabled while
AI is streaming. Send is disabled during recording/transcribing.
* fix: remove unreachable CSS branch in recording waveform div
* feat: add per-task LLM provider selection for scheduled tasks
Allow users to choose which AI provider a scheduled task runs with,
using the same ChatProviderSelector component from the new-tab page.
Falls back to the global default provider when none is selected or
if the selected provider has been deleted.
* fix: lint issues
* chore: updated to latest schema.graphql file
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* fix: fallback to default BrowserOS provider when provider is null
When the extension first loads, provider config is loaded async from
storage. If a chat request fires before loading completes (race
condition), provider is null and the server receives provider: undefined,
causing a Zod validation error. This adds a fallback to
createDefaultBrowserOSProvider() in both chat paths (sidepanel and
scheduled tasks) so provider.type is always defined.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: fallback to first provider when default provider ID is stale
When defaultProviderId in storage doesn't match any loaded provider
(e.g. after Kimi/Moonshot rollout), selectedProvider was null causing
provider: undefined in chat requests. Now falls back to providers[0].
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: repair stale defaultProviderId in storage on load
When the stored default provider ID doesn't match any loaded provider,
write back the corrected ID (providers[0].id) to storage so it doesn't
silently persist across sessions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
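The full resolution chain from these three fixes can be sketched as one pure function; the names are hypothetical:

```typescript
// Hypothetical sketch of provider resolution: stored default id ->
// first loaded provider (stale id) -> built-in BrowserOS default
// (nothing loaded yet because of the async storage race).
interface Provider {
  id: string;
  type: string;
}

function resolveProvider(
  providers: Provider[],
  defaultProviderId: string | null,
  createDefault: () => Provider,
): Provider {
  const byId = providers.find((p) => p.id === defaultProviderId);
  if (byId) return byId;
  if (providers.length > 0) return providers[0]; // stale id fallback
  return createDefault(); // storage not loaded yet
}
```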
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>