* feat(llm): add MiniMax providers for Chinese and international users
* fix(llm): patch P2 bugs
* fix(agent): correct MiniMax base URL handling and enforce API key validation
* fix(agent): add minimax entry to PROVIDER_DISPLAY_NAMES
The Record<ProviderType, string> map in ChatError.tsx was missing
the new minimax key added in this PR, causing a typecheck failure.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: krish-mm <112251957+krish-mm@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(credits): move credits fetch to extension side using install_id
Extension now reads `browseros.metrics_install_id` pref directly and fetches
credits from `llm.browseros.com` without going through the bundled server.
Unblocks the referral submit flow in prod without requiring a BrowserOS
binary release.
- Revert `/credits` route change that added `browserosId` to the response.
- Add `getOrCreateBrowserosId()` helper reading from BrowserOS prefs.
- Add `CREDITS_GATEWAY` to shared EXTERNAL_URLS.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor(credits): drop fallback UUID, read install_id directly
Extension only runs inside BrowserOS, so the prefs API is always available.
The chrome.storage fallback was dead code that would generate a ghost ID
diverging from the server's install_id anyway. Rename the helper to match
its simpler contract.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(credits): guard against empty install_id pref
Address Greptile P1 — throw instead of silently fetching `/credits/null`
when `browseros.metrics_install_id` is unset. Fails loudly so the broken
state is observable rather than masquerading as a credits outage.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
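A minimal sketch of the helper's final shape after the rename and guard above; the pref-reading API and the route shape are assumptions based on these commit messages, not the shipped code.
```ts
// Hypothetical sketch -- the prefs API and /credits route are assumed.
declare const browserOSPrefs: { get(name: string): Promise<string | undefined> };
const CREDITS_GATEWAY = 'https://llm.browseros.com'; // CREDITS_GATEWAY in shared EXTERNAL_URLS

async function getInstallId(): Promise<string> {
  // The extension only runs inside BrowserOS, so the prefs API is always available.
  const installId = await browserOSPrefs.get('browseros.metrics_install_id');
  if (!installId) {
    // Fail loudly instead of silently fetching /credits/null.
    throw new Error('browseros.metrics_install_id is unset');
  }
  return installId;
}

async function fetchCredits(): Promise<unknown> {
  const res = await fetch(`${CREDITS_GATEWAY}/credits/${await getInstallId()}`);
  if (!res.ok) throw new Error(`credits fetch failed: ${res.status}`);
  return res.json();
}
```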
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(agent): declare @browseros/shared as workspace dependency
The agent app imports @browseros/shared/constants/urls in
lib/referral/submit-referral.ts but never declared the package in its
dependencies, so vite failed to resolve the import during dev.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(referral): cap daily referral earnings at 500 credits
Block tweet submissions client-side once the user's balance reaches
500 to prevent unlimited credit farming via repeated shares.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
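A trivial sketch of the client-side cap; the threshold is from the commit, the function and parameter names are illustrative.
```ts
const DAILY_REFERRAL_CAP = 500; // from the commit above

// Illustrative guard called before allowing a tweet submission.
function canSubmitReferral(creditBalance: number): boolean {
  return creditBalance < DAILY_REFERRAL_CAP;
}
```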
* feat(referral): randomize tweet variations for Twitter share
Replace the single hardcoded share text with 10 feature-specific
variations (agent mode, chat, scheduled tasks, connect apps, cowork,
workflows, memory, skills, local models, ad blocking) and pick one at
random each time the share button is clicked.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
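A sketch of the randomized share text; the actual variation strings live in the extension, so placeholders stand in here. getShareOnTwitterUrl() is the helper named in the follow-up fix below.
```ts
// Placeholder texts -- the real 10 feature-specific variations are in the extension.
const TWEET_VARIATIONS = [
  'Agent mode in @browserOS_ai handles my browsing for me...',
  'Scheduled tasks in @browserOS_ai run while I sleep...',
  // ...8 more feature-specific variations
];

export function getShareOnTwitterUrl(): string {
  const text = TWEET_VARIATIONS[Math.floor(Math.random() * TWEET_VARIATIONS.length)];
  return `https://twitter.com/intent/tweet?text=${encodeURIComponent(text)}`;
}
```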
* fix(referral): regenerate share URL on click
Previously getShareOnTwitterUrl() was evaluated once at render time as
a static href, so every click produced the same tweet variation. Move
the call into onClick so a new random variation is picked each time.
Addresses Greptile P1 review on PR #737.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
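A sketch of the fix, assuming the getShareOnTwitterUrl() helper sketched above: compute the URL inside onClick so each click draws a fresh variation, rather than baking one into a static href at render time.
```tsx
declare function getShareOnTwitterUrl(): string;

export function ShareOnTwitterButton() {
  // URL generated per click, not once per render.
  return (
    <button onClick={() => window.open(getShareOnTwitterUrl(), '_blank', 'noopener')}>
      Share on Twitter
    </button>
  );
}
```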
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(agent): clarify upstream provider rate-limit errors
When a non-BrowserOS provider (OpenAI, Anthropic, OpenRouter, etc.)
returned a 429, ChatError rendered the retry-wrapped message
"Failed after 3 attempts. Last error: The usage limit has been reached"
with a generic "Something went wrong" title, leading users to blame
BrowserOS for throttling imposed by their configured upstream.
Detect upstream 429s in parseErrorMessage, show the provider name in
the title ("OpenAI rate limit reached"), strip the retry prefix,
render the raw upstream message, and add clarifying subtext that
names the provider and explicitly excludes BrowserOS. Skip the
BrowserOS-specific ShareForCredits / survey / upgrade affordances on
this path — they do not apply.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review comments
- Tighten 429 pattern to \b429\b so it only matches the standalone
status code, not incidental substrings (model IDs, paths, etc.).
- Unwrap JSON-encoded provider error bodies on the upstream-rate-limit
path so users see the human-readable message instead of raw JSON.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
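A hedged sketch of the upstream-429 branch; the real parseErrorMessage differs, but both review fixes are shown: the \b429\b word-boundary match and JSON-body unwrapping.
```ts
const RETRY_PREFIX = /^Failed after \d+ attempts\. Last error:\s*/;

// Illustrative stand-in for the upstream-429 path in parseErrorMessage.
function parseUpstream429(raw: string, providerName: string) {
  const message = raw.replace(RETRY_PREFIX, ''); // strip the retry wrapper
  // \b429\b matches only the standalone status code, not substrings
  // inside model IDs or URL paths.
  if (!/\b429\b/.test(message)) return null;
  let detail = message;
  try {
    // Unwrap JSON-encoded provider bodies so users see readable text.
    detail = JSON.parse(message)?.error?.message ?? message;
  } catch {
    // not JSON -- keep the raw upstream message
  }
  return {
    title: `${providerName} rate limit reached`,
    detail,
    subtext: `This limit is imposed by ${providerName}, not BrowserOS.`,
  };
}
```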
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(referral): show share rules and lower default daily limit fallback
Surface the three referral validation rules (must mention @browserOS_ai,
posted within last 30 minutes, single-use) directly in the ShareForCredits
UI so users understand submission requirements before pasting a tweet link.
Also align the UsagePage daily-limit fallback (used while credits load) with
the gateway default of 50.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(usage): handle credit balance exceeding daily limit
The "Credits used today" stat was computed as `dailyLimit - credits`,
which goes negative once a referral bonus pushes the balance above the
daily cap (e.g. balance 294 with cap 100 showed "-194 of 100"). Clamp
the math to zero and surface a separate "Bonus credits" stat when the
balance exceeds the daily allowance.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
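The clamped arithmetic, as a sketch with illustrative names:
```ts
function usageStats(credits: number, dailyLimit: number) {
  // dailyLimit - credits went negative once a referral bonus pushed the
  // balance above the cap (e.g. 294 credits with a 100 cap -> "-194 of 100").
  const usedToday = Math.max(0, dailyLimit - credits);
  // Surfaced as a separate "Bonus credits" stat when balance > allowance.
  const bonusCredits = Math.max(0, credits - dailyLimit);
  return { usedToday, bonusCredits };
}
```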
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: add Twitter share referral UI and expose browserosId
When credits are exhausted, users now see a "Share on Twitter" CTA with
a pre-filled tweet URL and an input to paste their tweet link. Reusable
ShareForCredits component used in both ChatError and UsagePage. Server's
GET /credits now includes browserosId for the extension to pass to the
referral service.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: rebuild chat session on provider change
* fix: address Greptile review comments
- Move referral service URL to EXTERNAL_URLS
- Guard submitReferral on !response.ok
- Remove stale TODO comment
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add uBlock Origin install info to getting started and ad-blocking pages
Chrome dropped support for the full uBlock Origin extension — highlight
that BrowserOS brings it back and make it easy to install from both the
getting started guide and the dedicated ad-blocking page.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: revert Kimi partnership UI, restore daily limit survey
Remove Kimi/Moonshot AI partnership branding from the rate limit
banner, provider card, provider templates, and LLM hub. Restore
the original survey CTA on daily limit errors. Moonshot AI remains
as a regular provider template without the "Recommended" badge.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address Greptile review comments
- Guard survey CTA with !isCreditsExhausted to avoid showing it for
credits-exhausted users who already see "View Usage & Billing"
- Remove dead kimi-launch feature flag files (kimi-launch.ts,
useKimiLaunch.ts)
- Remove unused KIMI_RATE_LIMIT analytics events
- Remove VITE_PUBLIC_KIMI_LAUNCH from env schema and .env.example
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: keep custom model entry visible despite cmdk auto-scroll
The merged PR (#661) injected custom entries into filteredModels, but
cmdk auto-scrolls to its first selected CommandItem, pushing the custom
entry out of view. Fix by using forceMount on a separate CommandGroup
and resetting scroll to top on every keystroke via requestAnimationFrame.
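A sketch of the two-part fix using the shadcn/ui Command wrappers this PR series relies on; the state, ref, and rendering details are illustrative.
```tsx
import { useRef, useState } from 'react';
import {
  Command, CommandGroup, CommandInput, CommandItem, CommandList,
} from '@/components/ui/command'; // assumed shadcn wrapper path

export function ModelCombobox() {
  const [query, setQuery] = useState('');
  const listRef = useRef<HTMLDivElement>(null);
  return (
    <Command shouldFilter={false}>
      <CommandInput
        value={query}
        onValueChange={(value) => {
          setQuery(value);
          // cmdk auto-scrolls to its first selected item; reset to top on
          // every keystroke so the custom entry stays in view.
          requestAnimationFrame(() => listRef.current?.scrollTo({ top: 0 }));
        }}
      />
      <CommandList ref={listRef}>
        {/* forceMount keeps this group rendered regardless of filtering */}
        <CommandGroup forceMount>
          {query && <CommandItem value={query}>Use "{query}"</CommandItem>}
        </CommandGroup>
        {/* fuzzy-search suggestions follow */}
      </CommandList>
    </Command>
  );
}
```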
* feat: show custom model ID as first option in model selector
When typing in the model dropdown, the user's exact input now appears as the
first selectable row, followed by fuzzy search suggestions. This makes entering
custom model IDs intuitive — previously the option was hidden behind a
zero-results-only Enter shortcut that fuzzy search almost always prevented.
* fix: correct is_custom_model flag and prevent duplicate analytics events
- Use modelInfoList check instead of hardcoding is_custom_model: true in
the Enter key handler
- Add stopPropagation to prevent cmdk's root keydown handler from also
firing onSelect, which caused duplicate MODEL_SELECTED_EVENT emissions
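A sketch of the Enter handler with both fixes folded in; the list and select helpers are illustrative stand-ins.
```ts
declare const modelInfoList: Array<{ id: string }>; // known models
declare function selectModel(id: string, meta: { is_custom_model: boolean }): void;

function onInputKeyDown(e: KeyboardEvent, query: string): void {
  if (e.key !== 'Enter' || !query) return;
  e.preventDefault();
  // Without this, cmdk's root keydown handler also fires onSelect on the
  // highlighted item, emitting MODEL_SELECTED_EVENT twice.
  e.stopPropagation();
  // Derive the flag from the known-model list instead of hardcoding true.
  selectModel(query, { is_custom_model: !modelInfoList.some((m) => m.id === query) });
}
```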
* refactor: replace inline model picker with Popover + Command combobox
The model picker in NewProviderDialog rendered inline, causing dialog
resizing, and lacked keyboard navigation. Replace it with a Popover +
Command (shadcn Combobox) pattern and add fuse.js for fuzzy search.
- Replace custom ModelPickerList with Popover + Command dropdown
- Add fuse.js for fuzzy model search (replaces string.includes)
- Add MODEL_SELECTED_EVENT and AI_PROVIDER_UPDATED_EVENT analytics
- Enrich PROVIDER_SELECTED_EVENT with model_id in chat sessions
* fix: add refresh indicator to chat history when fetching latest conversations
Show a non-blocking "Fetching latest conversations" indicator at the top
of the history list while the cached data is being refreshed. Users can
still interact with the cached conversation list during the refresh.
* perf: reduce chat history query payload — fetch last 2 messages instead of 5
The conversation list only displays the last user message as a preview.
Fetching 5 messages per conversation was wasteful — each message contains
the full UIMessage object (tool calls, reasoning, etc.) multiplied by
50 conversations per page. Reduced to last 2 which is sufficient to
find the last user message in a user→assistant exchange.
* perf: use first+DESC instead of last+ASC to push LIMIT down to SQL
PostGraphile's `last: N` doesn't map to SQL LIMIT — it uses a padded
LIMIT 10 and slices in application code. Changing to `first: 2` with
ORDER_INDEX_DESC generates a true SQL LIMIT 2, reducing rows scanned
from 500 to 100 per page (50 conversations × 2 messages instead of 10).
No UX impact — extractLastUserMessage() filters by role regardless
of message order.
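Roughly the query-shape change, sketched with assumed field names (the real PostGraphile schema will differ):
```ts
const CONVERSATIONS_QUERY = /* GraphQL */ `
  query ConversationList {
    conversations(first: 50) {
      nodes {
        id
        # first: 2 + DESC compiles to a true SQL LIMIT 2; last: 2 would
        # over-fetch with a padded LIMIT and slice in application code.
        messages(first: 2, orderBy: ORDER_INDEX_DESC) {
          nodes { role content }
        }
      }
    }
  }
`;
```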
* chore: update react query packages
* feat: replace localforage with idb-keyval
* feat: isolate new-tab agent navigation from origin tab
Add origin-aware navigation isolation so the agent never navigates
away from the new-tab chat UI. This is a two-layer defense:
1. Prompt adaptation: When origin is 'newtab', the system prompt's
execution and tool-selection sections are rewritten to prohibit
navigating the active tab and default all lookups to new_page.
2. Tool-level guards: navigate_page and close_page reject attempts
to act on the origin tab when in newtab mode, returning an error
that teaches the agent to self-correct.
The client now sends an `origin` field ('sidepanel' | 'newtab')
instead of injecting a soft NEWTAB_SYSTEM_PROMPT that LLMs could
ignore. Backwards compatible — defaults to 'sidepanel'.
Closes TKT-592, addresses TKT-564
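A sketch of the layer-2 tool guard; the session shape and error text are illustrative, not the shipped implementation.
```ts
interface AgentSession {
  origin?: 'sidepanel' | 'newtab'; // absent for old clients -> 'sidepanel'
  originTabId: number;
}

// Called by navigate_page and close_page before acting on a tab.
function assertNotOriginTab(session: AgentSession, targetTabId: number): void {
  if (session.origin === 'newtab' && targetTabId === session.originTabId) {
    // The error doubles as instruction so the agent self-corrects.
    throw new Error(
      'This tab hosts the new-tab chat UI; open the URL in a new page instead.'
    );
  }
}
```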
* test: add newtab origin navigation guard tests
- 14 new prompt tests verifying the system prompt adapts correctly
for newtab vs sidepanel origin (execution rules, tool selection table,
absence of conflicting single-tab guidance)
- 6 new integration tests for navigate_page and close_page guards:
rejects origin tab in newtab mode, allows non-origin tabs, allows
all tabs in sidepanel mode, backwards compatible with no session
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
* fix: broaden connection error detection for main page and sidepanel
The connection error check required both "Failed to fetch" AND
"127.0.0.1" in the error message. On the main page, the browser
only produces "Failed to fetch" without the IP, so users saw a
generic "Something went wrong" instead of the troubleshooting link.
Broaden detection to also match "localhost" and bare "Failed to fetch"
errors that don't contain an external URL. Also pass providerType in
NewTabChat so provider-specific errors render correctly.
Closes #526
* fix: simplify connection error detection
All chat requests go through the local BrowserOS agent server, so any
"Failed to fetch" error is always a local connection issue. Remove the
unnecessary 127.0.0.1/localhost/URL checks.
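The simplified check, as a sketch (function name illustrative):
```ts
// Every chat request goes through the local BrowserOS agent server, so any
// fetch-level failure is by definition a local connection issue.
function isConnectionError(error: unknown): boolean {
  return error instanceof Error && error.message.includes('Failed to fetch');
}
```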
* fix: pass providerType to agentUrlError ChatError instances
* fix: widen dialog and make wide tables scroll horizontally
- Change dialog width from sm:max-w-2xl (672px) to sm:w-[70vw] sm:max-w-4xl
so it takes 70% of viewport width, capped at 896px
- Add overflow-x-auto on table wrappers so wide tables scroll horizontally
instead of being clipped
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: integrate models.dev for dynamic LLM provider/model data (#TKT-657)
Replace hardcoded model lists with data sourced from models.dev so new
providers and models appear automatically when the community adds them.
- Add build script (scripts/generate-models.ts) that fetches models.dev/api.json
and outputs a compact JSON with 10 providers and 520 models
- Replace hardcoded MODELS_DATA (50 models) with dynamic models.dev lookups
- Add searchable model combobox (Popover + Command) replacing plain Select dropdown
- Enrich provider templates with models.dev metadata (context window, image support)
- Keep chatgpt-pro, qwen-code, browseros, openai-compatible as hardcoded providers
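A compressed sketch of what scripts/generate-models.ts plausibly does; the api.json field names and the compact output shape here are assumptions, not the real schema.
```ts
import { writeFile } from 'node:fs/promises';

async function generateModels(): Promise<void> {
  const res = await fetch('https://models.dev/api.json');
  if (!res.ok) throw new Error(`models.dev fetch failed: ${res.status}`);
  const raw = (await res.json()) as Record<string, any>;

  // Keep only the fields the extension needs; field names are guesses.
  const compact = Object.fromEntries(
    Object.entries(raw).map(([providerId, provider]) => [
      providerId,
      Object.values(provider.models ?? {}).map((m: any) => ({
        id: m.id,
        contextWindow: m.limit?.context,
        supportsImages: Boolean(m.modalities?.input?.includes('image')),
      })),
    ]),
  );
  await writeFile('models-dev-data.json', JSON.stringify(compact));
}

generateModels();
```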
* fix: address review — remove ollama-cloud mapping, fix default models, remove dead code
- Remove ollama from PROVIDER_MAP (ollama-cloud has cloud models, not local)
- Add ollama to CUSTOM_PROVIDER_MODELS with empty list (users type custom IDs)
- Update defaultModelIds to ones that exist in models.dev data:
openrouter → anthropic/claude-sonnet-4.5
lmstudio → openai/gpt-oss-20b
bedrock → anthropic.claude-sonnet-4-6
- Remove dead isCustomModel export
- Regenerate models-dev-data.json (9 providers, 486 models)
* fix: model suggestion list focus/dismiss behavior
- List only opens when input is focused or user types
- Clicking a model selects it and closes the list
- Clicking outside (blur) dismisses the list
- onMouseDown preventDefault on list items prevents blur race condition
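The blur-race trick in isolation: mousedown fires before the input's blur, so preventing its default keeps focus (and the list) alive until the click's select handler runs.
```tsx
function SuggestionItem({ label, onSelect }: { label: string; onSelect: () => void }) {
  return (
    <li
      onMouseDown={(e) => e.preventDefault()} // don't blur the input first
      onClick={onSelect} // now fires while the list is still open
    >
      {label}
    </li>
  );
}
```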
* refactor: extract ModelPickerList component with proper open/close UX
- Collapsed state: Select-like trigger showing selected model + chevron
- Expanded state: search input + scrollable filtered list, inline
- Click outside or Escape to close, Enter to submit custom model
- Extracted as separate component (reduces dialog nesting, testable)
- No more setTimeout hacks for blur handling
* chore: remove plan doc from repo
* feat: improve rate limit UX, usage page, and provider selector
- Show "Add your own provider for unlimited usage" CTA when BrowserOS
credits are exhausted or daily limit is reached
- Fix credit exhaustion detection to match actual error message
- Improve Usage page: remove disabled Add Credits button, add "Coming
soon" badge, add "Want unlimited usage?" section linking to providers
- Add "+ Add Provider" button at bottom of chat provider selector dropdown
* fix: use asChild pattern for Button+anchor in usage page
Replace nested <a><Button> (invalid HTML) with Button asChild
pattern per shadcn/ui convention.
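The pattern in miniature (href illustrative):
```tsx
import { Button } from '@/components/ui/button';

export function ProvidersLink() {
  // Button renders its child <a> directly -- no <a> nested in a <button>.
  return (
    <Button asChild>
      <a href="/settings/ai">Add your own provider</a>
    </Button>
  );
}
```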
* feat: UI improvements for OAuth dialog, provider badges, and events docs
- Replace OAuth device code toast with a proper Dialog showing the code
prominently with a copy button (GitHub Copilot, Qwen Code, ChatGPT Pro)
- Add "New" badge on provider template cards for ChatGPT Plus/Pro,
GitHub Copilot, and Qwen Code with orange border highlight
- Add events.md documenting all analytics events across the platform
* fix: add verificationUri to DeviceCodeDialog for popup-blocked fallback
Add verificationUri to PendingDeviceCode interface and pass it from
both handleClientAuth and handleServerAuth. Render a fallback "Open
verification page" link in DeviceCodeDialog so users can navigate
to the auth page if the popup was blocked.
* feat: add MCP promo banner and Quick Setup section
- Add MCP promo banner on AI providers page with "New" badge and
"66+ tools" highlight, linking to /settings/mcp
- Add Quick Setup section on MCP settings page with copy-paste
commands for Claude Code, Gemini CLI, Codex, Claude Desktop, OpenClaw
- Consolidate MCP settings: move restart button inline with server URL,
remove separate MCP Server Settings card
- Add analytics event for promo banner clicks
* fix: prevent deleted scheduled tasks from reappearing after sync
When a scheduled task was deleted, the sync function would see the
remote job missing locally and re-add it, undoing the delete. Fix by
tracking pending deletions in storage so the sync function deletes
them from the backend instead of re-adding them locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use read-modify-write for pending deletions to prevent concurrent clobber
Re-read pendingDeletionStorage before write-back and only remove
resolved IDs, preserving any new entries added by concurrent
removeJob calls during the sync's network I/O.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
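A sketch of the read-modify-write pattern; the storage API names are illustrative.
```ts
declare const pendingDeletionStorage: {
  get(): Promise<string[]>;
  set(ids: string[]): Promise<void>;
};

async function flushPendingDeletions(deleteRemote: (id: string) => Promise<void>) {
  const snapshot = await pendingDeletionStorage.get();
  const resolved: string[] = [];
  for (const id of snapshot) {
    await deleteRemote(id); // network I/O: removeJob may run concurrently here
    resolved.push(id);
  }
  // Re-read before write-back and only drop the IDs this sync resolved,
  // preserving entries added by concurrent removeJob calls.
  const current = await pendingDeletionStorage.get();
  await pendingDeletionStorage.set(current.filter((id) => !resolved.includes(id)));
}
```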
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add voice recording UI with waveform overlay to new tab search bar
Add a microphone button to the NewTab search bar that opens a fullscreen
recording overlay powered by react-voice-visualizer. The overlay shows a
real-time waveform visualization during recording, recording time, and a
stop button. On completion, the audio is transcribed via the existing
gateway endpoint and the transcript auto-navigates to inline chat.
Changes:
- Extract transcribeAudio() to shared lib/voice/transcribe-audio.ts
- Add VoiceRecordingOverlay component with react-voice-visualizer
- Add Mic button to NewTab search bar
- Track analytics via existing NEWTAB_VOICE_* events
- Handle cancel (backdrop click) vs submit (stop button) correctly
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review comments for voice recording overlay
- Reset processingRef on transcription error to prevent stuck state
- Use stable callback refs to prevent useEffect re-runs from inline
arrow function props (fixes timer reset and unnecessary re-processing)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: replace voice overlay with inline sidepanel-style voice UI
Remove react-voice-visualizer dependency and VoiceRecordingOverlay.
Instead use the same inline voice pattern as the sidepanel ChatInput:
- Waveform bars replace the search input during recording
- Mic/stop/loading button states in the search bar
- Transcript populates the search input on completion
- Voice error shown inline below the search bar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add Qwen Code as OAuth LLM provider with refactored OAuth hooks
Add Alibaba Qwen Code as a third OAuth provider using Device Code flow
with PKCE. Free tier: 2,000 requests/day, up to 1M token context.
Refactoring:
- Extract useOAuthProviderFlow hook (eliminates ~180 lines of duplicated
OAuth logic from AISettingsPage for ChatGPT Pro + Copilot + Qwen)
- Extract resolveOAuthConfig in config.ts (shared resolver for all OAuth
providers, parameterized by provider name, default model, refresh flag)
- Generalize token-manager device code flow to support PKCE
(code_challenge/code_verifier) and form-urlencoded content type
New code:
- Qwen Code provider config with PKCE + form encoding flags
- Provider factories (both provider.ts and provider-factory.ts)
- Extension UI (template card, models, analytics, dialog)
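For reference, a minimal PKCE pair per RFC 7636 using Web Crypto; this is the standard construction the generalized device code flow would use, not BrowserOS-specific code.
```ts
function base64url(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

async function createPkcePair(): Promise<{ verifier: string; challenge: string }> {
  const verifier = base64url(crypto.getRandomValues(new Uint8Array(32)));
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(verifier));
  // code_challenge = BASE64URL(SHA256(code_verifier)), method S256
  return { verifier, challenge: base64url(new Uint8Array(digest)) };
}
```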
* fix: use portal.qwen.ai as API base URL for OAuth tokens
DashScope (dashscope.aliyuncs.com) expects Alibaba Cloud API keys,
not OAuth tokens from chat.qwen.ai. The correct endpoint for OAuth
Bearer tokens is portal.qwen.ai/v1.
* fix: correct Qwen Code model IDs and context windows
- coder-model (1M context): virtual alias that routes to best model
- qwen3-coder-plus (1M): was incorrectly 131K
- qwen3-coder-flash (1M): new, speed-optimized variant
- qwen3.5-plus (1M): was the power-of-two 1048576 instead of the decimal 1,000,000
- Removed qwen3-coder-next (local/self-hosted, not available via OAuth)
- Default model changed to coder-model (auto-routes server-side)
* fix: move Qwen device code request to extension (bypasses WAF)
Alibaba WAF blocks server-side requests to chat.qwen.ai. Move the
initial device code request to the extension (browser context with
cookies), then hand off the deviceCode + codeVerifier to the server
for background polling via new POST /oauth/:provider/poll endpoint.
* fix: persist OAuth flow-started flag in sessionStorage
The flowStartedRef was lost when the component remounted (e.g. user
navigated to onboarding then back to settings). Use sessionStorage
to persist the flag so auto-create works after navigation.
* revert: remove sessionStorage for OAuth flow flag
Revert to simple useRef pattern matching the original ChatGPT Pro
implementation. The auto-create works when the user stays on the
AI settings page during auth.
* revert: move Qwen back to server-side device code flow
WAF block was temporary (rate-limiting), not permanent. Server-side
fetch to chat.qwen.ai now works. Reverted client-side device code
approach — Qwen now uses the same clean server-side flow as Copilot.
Removed: clientSideDeviceCode config, startClientSideDeviceCode(),
POST /oauth/:provider/poll endpoint, startDeviceCodePolling().
* feat: add WAF detection, rate-limit protection, and token storage endpoint
- Detect WAF captcha responses (HTML instead of JSON) in device code
request and token polling, with user-friendly error messages
- Add 30s cooldown on "USE" button to prevent rapid clicks triggering WAF
- WAF-blocked poll requests silently retry instead of aborting
- Add POST /oauth/:provider/token endpoint for storing externally-provided
tokens (useful for future fallback flows)
- Add storeTokens() method to OAuthTokenManager
- Pass server error messages through to extension toast notifications
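A sketch of the WAF heuristic: captcha interstitials come back as HTML where the OAuth endpoints normally return JSON. The exact checks are illustrative.
```ts
async function parseOAuthResponse(res: Response): Promise<unknown> {
  const body = await res.text();
  const looksLikeHtml =
    res.headers.get('content-type')?.includes('text/html') ||
    body.trimStart().startsWith('<');
  if (looksLikeHtml) {
    // Surfaced to the extension as a user-friendly toast message.
    throw new Error('Request was blocked by a captcha check. Please retry shortly.');
  }
  return JSON.parse(body);
}
```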
* refactor: remove 30s cooldown, simplify OAuth hook
The hook is now identical for all providers — server handles retries
via activeDeviceFlows.delete(). Removed flowStartedAtRef cooldown
that was blocking legitimate retries.
* feat: client-side OAuth for Copilot and Qwen Code
Move device code OAuth flow to the extension for GitHub Copilot and
Qwen Code. The extension makes requests using Chrome's network stack,
which bypasses Alibaba WAF TLS fingerprint detection that blocks
server-side Bun/Node.js fetch.
New files:
- client-oauth.ts: Client-side device code + PKCE + token polling
Changes:
- useOAuthProviderFlow: handleClientAuth() for providers with clientAuth
config, handleServerAuth() for others (ChatGPT Pro)
- AISettingsPage: clientAuth config for Copilot and Qwen Code
- WAF detection: opens provider site for captcha solving on block
Server-side device code flow preserved as fallback (token-manager.ts,
providers.ts). Token storage via POST /oauth/:provider/token endpoint.
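A sketch of the dispatch inside useOAuthProviderFlow; handleClientAuth/handleServerAuth are named above, but the clientAuth config fields shown here are assumptions.
```ts
interface OAuthProviderFlowConfig {
  provider: string;
  // Present only for providers that authenticate in the extension.
  clientAuth?: { clientId: string; deviceCodeUrl: string; tokenUrl: string };
}

declare function handleClientAuth(c: OAuthProviderFlowConfig): Promise<void>;
declare function handleServerAuth(c: OAuthProviderFlowConfig): Promise<void>;

async function startAuth(config: OAuthProviderFlowConfig): Promise<void> {
  if (config.clientAuth) {
    // Chrome's network stack -- not fingerprinted like server-side Bun/Node fetch.
    await handleClientAuth(config);
  } else {
    await handleServerAuth(config); // ChatGPT Pro keeps the server-side flow
  }
}
```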
* fix: export OAuthProviderFlowConfig type, fix typecheck errors
- Export OAuthProviderFlowConfig interface so AISettingsPage can use it
instead of duplicating the type inline
- Fix string | null → string | undefined for agentServerUrl parameter
* feat: gate OAuth provider UI behind server version feature flags
Add CHATGPT_PRO_SUPPORT and GITHUB_COPILOT_SUPPORT feature flags gated
on minServerVersion 0.0.77. Hide template cards and provider type
dropdown options when the server doesn't support the OAuth endpoints.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add model selector to newtab search bar
Add AI provider/model selector button to the newtab homepage footer bar,
matching the existing button aesthetics (Workspace, Tabs, Apps). Reuses
ChatProviderSelector popover from sidepanel. Users can now see and change
their AI provider before starting a conversation from the newtab page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: clean up newtab footer with icon-only buttons
Reduce visual clutter in the search bar footer by converting Provider,
Workspace, and Tabs buttons to compact icon-only buttons (8x8). Text
labels and chevron indicators are removed — native title tooltips
provide discoverability on hover. Apps button on the right keeps its
text label per user preference.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add hover-expand labels to newtab footer icon buttons
Replace static title tooltips with smooth hover-expand animation —
buttons show icon-only by default, text label slides out on hover
via max-w transition. Gives a clean compact look while keeping
labels discoverable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: revert workspace/tabs to full text, keep provider hover-expand only
Restore full text labels for Workspace and Tabs buttons. Only the
provider selector uses the compact icon + hover-expand pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: simplify provider selector to plain icon button
Remove hover-expand animation, use a simple icon-only button with
native title tooltip for the provider selector.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GitHub Copilot as OAuth-based LLM provider
Add GitHub Copilot as a second OAuth provider using the Device Code flow
(RFC 8628). Users authenticate via github.com/login/device, and the server
polls for token completion. Supports 25+ models through a single Copilot
subscription.
Key changes:
- Device Code OAuth flow in token manager (poll with safety margin)
- Custom fetch wrapper injecting Copilot headers + vision detection
- Provider factory using createOpenAICompatible for Chat Completions API
- Extension UI with template card, auto-create on auth, and disconnect
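A simplified sketch of the RFC 8628 polling loop with the safety margin mentioned above; endpoint details and payload shape are pared down.
```ts
async function pollForToken(
  tokenUrl: string,
  params: { client_id: string; device_code: string },
  intervalSec: number,
  expiresInSec: number,
): Promise<string> {
  const deadline = Date.now() + expiresInSec * 1000;
  const delayMs = (intervalSec + 1) * 1000; // +1s safety margin on the server's interval
  while (Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, delayMs));
    const res = await fetch(tokenUrl, {
      method: 'POST',
      headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
      body: JSON.stringify({
        ...params,
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
      }),
    });
    const data = await res.json();
    if (data.access_token) return data.access_token;
    if (data.error && data.error !== 'authorization_pending' && data.error !== 'slow_down') {
      throw new Error(`device flow failed: ${data.error}`);
    }
  }
  throw new Error('device code expired before authorization');
}
```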
* fix: address PR review comments for GitHub Copilot OAuth
- Validate device code response for error fields (GitHub can return 200
with error payload)
- Store empty refreshToken instead of access token for GitHub tokens
- Add closeButton to Toaster for dismissing device code toast
* fix: add github-copilot to agent provider factory
The chat route uses a separate provider-factory.ts (agent layer) from the
test-provider route (llm/provider.ts). Added createGitHubCopilotFactory
to the agent factory so chat works with GitHub Copilot.
* fix: add github-copilot to provider icons, models, and dialog
- Add Github icon from lucide-react to providerIcons map
- Add 8 Copilot models (GPT-4o, Claude, Gemini, Grok) to models.ts
- Add github-copilot to NewProviderDialog zod enum, validation skip,
canTest check, and OAuth credential message
* fix: reorder copilot models with free-tier models first
Put models available on Copilot Free at the top (gpt-4o, gpt-4.1,
gpt-5-mini, claude-haiku-4.5, grok-code-fast-1), followed by
premium models that require paid Copilot subscription.
* fix: set correct 64K context window for Copilot models
Copilot API enforces a 64K input token limit regardless of the
underlying model's native context window. Updated all model entries
and the default template to 64000 so compaction triggers correctly.
* fix: use actual per-model prompt limits from Copilot /models API
Queried api.githubcopilot.com/models for real max_prompt_tokens values.
GPT-4o/4.1 have 64K, Claude/gpt-5-mini have 128K, GPT-5.x have 272K.
Also updated model list to match what's actually available on the API
(e.g. claude-sonnet-4.6 instead of 4.5, added gpt-5.4/5.2-codex).
* feat: resize images for Copilot using VS Code's algorithm
Large screenshots cause 413 errors on Copilot's API. Resize images
following VS Code's approach: max 2048px longest side, 768px shortest
side, re-encode as JPEG at 75% quality. Uses sharp for server-side
image processing.
* fix: address all Greptile P1 review comments
- Add .catch() on fire-and-forget pollDeviceCode to prevent unhandled
rejection crashes (Node 15+)
- Add deduplication guard (activeDeviceFlows Set) to prevent concurrent
device code flows for the same provider
- Add runtime validation of server response in frontend before calling
window.open() and showing toast
- Remove dead GITHUB_DEVICE_VERIFICATION constant from urls.ts
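The first two guards, sketched together (as of this commit; a later fix below relaxes the dedup to allow restarts):
```ts
const activeDeviceFlows = new Set<string>();

function startDeviceFlow(provider: string, pollDeviceCode: () => Promise<void>): void {
  if (activeDeviceFlows.has(provider)) {
    throw new Error(`device code flow already in progress for ${provider}`);
  }
  activeDeviceFlows.add(provider);
  // Fire-and-forget, but with .catch(): an unhandled rejection would crash
  // the process on Node 15+.
  pollDeviceCode()
    .catch((err) => console.error(`device code polling failed for ${provider}`, err))
    .finally(() => activeDeviceFlows.delete(provider));
}
```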
* fix: upgrade biome to 2.4.8, fix all lint errors, and address review bugs
- Upgrade biome from 2.4.5 to 2.4.8 (matches CI) and migrate configs
- Fix image resize: only re-encode when dimensions actually change
- Fix device code polling: retry on transient network errors instead of aborting
- Allow restarting device code flow (clear old flow instead of throwing 500)
- Fix pre-existing noNonNullAssertion and noExplicitAny lint errors globally
* fix: address Greptile P2 review — image resize and config guard
- Fix early-return guard: check max/min sides against their respective
limits (MAX_LONG_SIDE/MAX_SHORT_SIDE) instead of both against SHORT
- Preserve PNG alpha: detect hasAlpha and keep PNG format instead of
unconditionally converting to lossy JPEG
- Keep browserosId guard in resolveGitHubCopilotConfig consistent with
ChatGPT Pro pattern (safety check that caller context is valid)
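A sketch of the resize path with both P2 fixes folded in (per-limit early-return guard, PNG alpha preserved); the constants follow the VS Code algorithm described above, but the real helper may differ.
```ts
import sharp from 'sharp';

const MAX_LONG_SIDE = 2048;
const MAX_SHORT_SIDE = 768;

async function resizeForCopilot(input: Buffer): Promise<Buffer> {
  const image = sharp(input);
  const { width = 0, height = 0, hasAlpha } = await image.metadata();
  const longSide = Math.max(width, height);
  const shortSide = Math.min(width, height);

  // Check each side against its own limit; only re-encode when needed.
  if (longSide <= MAX_LONG_SIDE && shortSide <= MAX_SHORT_SIDE) return input;

  const scale = Math.min(MAX_LONG_SIDE / longSide, MAX_SHORT_SIDE / shortSide);
  const resized = image.resize({
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  });
  // Preserve alpha as PNG instead of flattening to lossy JPEG.
  return hasAlpha ? resized.png().toBuffer() : resized.jpeg({ quality: 75 }).toBuffer();
}
```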
* feat: update Copilot models to full list from pricing page, default to gpt-5-mini
Added all 23 models from GitHub Copilot pricing page. Ordered with
free-tier models first (gpt-5-mini, claude-haiku-4.5), then premium.
Changed default from gpt-4o to gpt-5-mini since it's unlimited on
Pro plan and has 128K context (vs gpt-4o's 64K limit).
* fix(skills): read-only view mode for built-in skills
- SkillCard shows Eye icon + "View" for built-in, Pencil + "Edit" for user
- SkillDialog in read-only mode: disabled fields, no toolbar on markdown
editor, "View Skill" title, "Close" button, no "Update Skill"
- Hide tip section in read-only mode
* fix(skills): use react-markdown for read-only skill view
Replace MDXEditor with react-markdown for viewing built-in skills.
MDXEditor chokes on code fences, angle brackets, and image syntax
causing content truncation. react-markdown handles standard markdown
correctly with no rendering issues.
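The view/edit split in miniature; component and prop names are illustrative.
```tsx
import ReactMarkdown from 'react-markdown';
import { MDXEditor } from '@mdxeditor/editor';

function SkillContent({ markdown, isBuiltIn }: { markdown: string; isBuiltIn: boolean }) {
  // Built-in skills are view-only: react-markdown handles code fences,
  // angle brackets, and image syntax that MDXEditor truncates.
  return isBuiltIn ? (
    <ReactMarkdown>{markdown}</ReactMarkdown>
  ) : (
    <MDXEditor markdown={markdown} />
  );
}
```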
* Revert "feat: convert settings to popup dialog (#477)"
This reverts commit 42aa0ff1ef.
* fix: address review feedback for PR #498
- Remove erroneous SETTINGS_PAGE_VIEWED_EVENT tracking from SidebarLayout
(was firing on every non-settings page navigation)
- Fix mobile settings sidebar not closing on route change by merging
setMobileOpen(false) into the pathname-dependent analytics useEffect
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: select text and pass to sidepanel
* fix: lint issues
* fix: persist selection across tabs
* fix: review comments
* fix: change when the selection is cleared
* feat: sanitize url
* fix(skills): UI section separation and fix find-alternatives rendering
- Split skills page into "My Skills" (user) and "BrowserOS Skills" (built-in) sections
- Fix find-alternatives SKILL.md — replace angle bracket placeholders with curly
braces to prevent MDXEditor from parsing them as JSX and rendering empty content
* fix(skills): bump find-alternatives to v1.1 for CDN sync