Compare commits

...

11 Commits

Author SHA1 Message Date
Nikhil Sonti
3db011fd7a feat: change agent extension theme from orange to pink
Swap all OKLCH hue values from orange (~41-47) to pink (350) in the
CSS custom properties. Updates primary, ring, chart, sidebar, and
accent-orange variables for both light and dark modes. Also updates
the glow-once keyframe hardcoded rgba values to pink.
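The hue swap described above could be scripted as a one-off codemod; this sketch is an assumption based on the commit description (the regex, the 41–47 orange range, and the `toPink` name are all hypothetical — the actual change may well have been done by hand):

```typescript
// Hypothetical codemod: rewrite the hue (3rd component) of oklch() colors
// from the orange range (~41-47) to pink (350), per the commit description.
const toPink = (css: string): string =>
  css.replace(
    /oklch\(([\d.]+%?)\s+([\d.]+)\s+(4[1-7](?:\.\d+)?)\)/g,
    'oklch($1 $2 350)',
  )

console.log(toPink('--primary: oklch(0.72 0.15 45);'))
// --primary: oklch(0.72 0.15 350);
```

Hardcoded rgba values in keyframes, as the message notes, would need a separate pass since they don't match the oklch() pattern.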

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 00:36:12 -07:00
Felarof
42aa0ff1ef feat: convert settings to popup dialog (#477)
* feat: convert settings page to popup dialog, move workflows to main nav

Replace the dedicated settings page layout (SettingsSidebarLayout) with a
modal dialog (SettingsDialog) that opens on top of the current page. Settings
are now accessible via a dialog triggered from the main sidebar, eliminating
the confusing dual-sidebar navigation pattern.

- Create SettingsDialog with tabbed left panel and content area
- Move Workflows into main sidebar navigation (feature-gated)
- Remove /settings/* routes (except /settings/survey)
- Delete SettingsSidebarLayout and SettingsSidebar components
- Update backward compatibility redirects

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: setup new urls for the dialog box

* fix: dialog close button

* fix: settings analytics

* fix: address review comments

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
2026-03-18 23:26:13 +05:30
shivammittal274
4000f094f6 Feat/chatgpt pro polish (#484)
* fix: ChatGPT Pro UI polish — fix undefined display and add icon

- Fix "gpt-5.3-codex · undefined" — hide baseUrl when not set
- Add OpenAI icon for chatgpt-pro provider in icon map

* chore: rename ChatGPT Pro to ChatGPT Plus/Pro (supports both plans)

* chore: remove accidentally committed files
2026-03-18 22:51:22 +05:30
shivammittal274
151be81cee fix: ChatGPT Pro UI polish — fix undefined display and add icon (#483)
- Fix "gpt-5.3-codex · undefined" — hide baseUrl when not set
- Add OpenAI icon for chatgpt-pro provider in icon map
2026-03-18 22:23:28 +05:30
shivammittal274
46a8326140 feat: add ChatGPT Pro OAuth as LLM provider (#476)
* feat: add ChatGPT Pro OAuth as LLM provider

Adds OAuth 2.0 (Authorization Code + PKCE) flow so users can authenticate
with their ChatGPT Pro subscription to power BrowserOS's agent, matching
the pattern used by Codex CLI, OpenCode, and Pi.

Server:
- OAuth token lifecycle (PKCE, exchange, refresh, SQLite storage)
- Dedicated callback server on port 1455 (Codex client ID registration)
- Codex fetch wrapper routing API calls to chatgpt.com/backend-api
- Config resolution + provider factories for all code paths (chat, test, refine)

Extension:
- ChatGPT Pro template card with OAuth flow trigger
- Status polling hook + auto-create provider on auth success
- Model list with Codex-supported models (gpt-5.x-codex family)
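The Authorization Code + PKCE handshake this commit implements boils down to a verifier/challenge pair (RFC 7636). A minimal sketch using Node's crypto — the function name is hypothetical, not the server's actual helper:

```typescript
import { createHash, randomBytes } from 'node:crypto'

// PKCE (RFC 7636): the client keeps a random verifier secret and sends only
// its SHA-256 hash (base64url-encoded) as code_challenge in the authorize
// request; the later token exchange sends the plain verifier so the server
// can confirm the hash and rule out an intercepted authorization code.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString('base64url') // 43 url-safe chars
  const challenge = createHash('sha256').update(verifier).digest('base64url')
  return { verifier, challenge }
}
```

The dedicated callback server on port 1455 would receive the authorization code after login and perform the exchange with the verifier.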

* fix: address Greptile PR review comments

- Wire OAuth callback server stop handle into onShutdown (P1: port 1455 leak)
- Guard against missing refresh token + clear stale tokens on failed refresh (P1)
- Add logger.warn to silent catch in codex-fetch body mutation
- Document JWT trust assumption in parseAccessTokenClaims
- Source model ID from provider template instead of hard-coding

* simplify: remove unnecessary OAuth shutdown wiring and useCallback

- Revert OAuthHandle interface — callback server port releases on process exit
- Remove stopCallbackServer from shutdown flow (dead code)
- Remove all useCallback from useOAuthStatus per CLAUDE.md guidance

* style: add readonly modifiers and braces per TS style guide

* docs: add E2E test screenshots for ChatGPT Pro OAuth

* fix: strip item IDs from Codex requests to fix multi-turn conversations

* fix: preserve function_call_output IDs in Codex requests

* fix: resolve Codex store=false + tool-use incompatibility

- Pass providerOptions { openai: { store: false } } to ToolLoopAgent
  so the AI SDK inlines content instead of using item_reference
- Strip item IDs and previous_response_id in codex-fetch (safety net)
- Use .responses() model (Codex only speaks Responses API format)

* fix: remove non-Codex model gpt-5.2 from chatgpt-pro model list

* fix: strip unsupported Codex params and update model list

- Strip temperature, max_tokens, top_p from Codex requests (unsupported)
- Add all available Codex models including gpt-5.4, gpt-5.2, gpt-5.1
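Stripping unsupported params can happen in the fetch wrapper before the body goes out. A sketch assuming the request body is plain JSON; the field names come from the commit messages (including the later max_output_tokens fix), the function name is hypothetical:

```typescript
// Codex's backend rejects standard sampling params, so the fetch wrapper
// deletes them from the serialized request body before forwarding.
const UNSUPPORTED = ['temperature', 'max_tokens', 'top_p', 'max_output_tokens']

function stripUnsupportedCodexParams(rawBody: string): string {
  const body = JSON.parse(rawBody) as Record<string, unknown>
  for (const key of UNSUPPORTED) {
    delete body[key]
  }
  return JSON.stringify(body)
}
```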

* chore: remove screenshots containing email

* feat: enable reasoning events for ChatGPT Pro Codex models

* chore: set reasoning effort to high for ChatGPT Pro

* feat: add configurable reasoning effort and summary for ChatGPT Pro

- Add reasoningEffort (none/low/medium/high) and reasoningSummary
  (auto/concise/detailed) dropdowns in the Edit Provider dialog
- Pass through extension → chat request → agent config → providerOptions
- Defaults: effort=high, summary=auto

* fix: strip max_output_tokens from Codex requests (fixes compaction)

* fix: address Greptile P1 issues

- Fix default model fallback: gpt-4o → gpt-5.3-codex (Codex endpoint)
- Clear stale tokens on refresh failure (prevents infinite retry loop)
- Only auto-create provider after explicit OAuth flow, not on page load
- Add catch block to auto-create effect with error toast
2026-03-18 22:07:43 +05:30
Dani Akash
4b18723a21 fix: undo shortcut in rewrite button (#472)
* fix: undo shortcut in rewrite button

* fix: address reviews
2026-03-18 07:04:48 +05:30
Nikhil
4909927c03 chore: bump PATCH and OFFSET (#479) 2026-03-17 17:41:45 -07:00
Nikhil
22c5e85707 chore: bump server version (#478) 2026-03-17 17:12:23 -07:00
shivammittal274
59b00a6837 feat: remote skill download and auto-sync (#468)
* feat: add remote skill download and auto-sync

Download default skills from remote catalog on first setup with
bundled fallback when offline. Background sync every 45 minutes
checks for new/updated skills without overwriting user-customized
ones. Tracks installed defaults via content hashes in a local
manifest file.

* feat: make skills catalog URL configurable and add generation script

Add SKILLS_CATALOG_URL env var (following CODEGEN_SERVICE_URL pattern)
with fallback to the default constant. Add script to generate
catalog.json from bundled defaults for static hosting.

* feat: add R2 upload script and use cdn.browseros.com for catalog URL

Add upload-skills-catalog.ts that generates and uploads catalog.json
to Cloudflare R2 (same infra as existing build artifacts). Update
default catalog URL to cdn.browseros.com/skills/v1/catalog.json.

* test: add E2E tests for remote skill sync against live CDN

* fix: address code review findings — security, validation, DRY

- Add path traversal protection via safeSkillDir in writeSkillFile
  and readSkillContent (reuses existing validation from service.ts)
- Add runtime type guards for catalog JSON and manifest JSON parsing
- Fix seedFromRemote to return false on partial failure so bundled
  fallback kicks in
- Add per-skill error handling in syncRemoteSkills so one bad skill
  doesn't crash the entire sync
- Wire stopSkillSync into Application.stop() shutdown path
- Extract version from frontmatter in seedFromBundled instead of
  hardcoding '1.0'
- Consolidate duplicated logic: reuse installSkill/writeSkillFile/
  contentHash/saveManifest from remote-sync.ts in seed.ts
- Extract shared catalog generation into scripts/catalog-utils.ts
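The path traversal guard mentioned above can be sketched as resolving the target against the skills root and rejecting anything that escapes it. The safeSkillDir name comes from the commit; the implementation here is an assumption:

```typescript
import path from 'node:path'

// Reject skill IDs like '../../etc' that would resolve outside the skills
// root once joined; only paths still under the root survive.
function safeSkillDir(root: string, skillId: string): string {
  const base = path.resolve(root)
  const resolved = path.resolve(base, skillId)
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`unsafe skill id: ${skillId}`)
  }
  return resolved
}
```

Comparing resolved paths (rather than scanning the ID for ".." substrings) also catches encoded or nested traversal tricks.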

* test: add flow tests for all four sync scenarios against live CDN

* refactor: remove redundant scripts and inline catalog generation

Drop generate-skills-catalog.ts, catalog-utils.ts, and
e2e-remote-sync.test.ts (covered by flows.test.ts). Inline
catalog generation into upload-skills-catalog.ts.

* test: add full E2E server flow test against live CDN

Tests all 7 steps of the real server lifecycle: fresh seed from CDN,
no-op sync, user edit preservation, skill reinstall, custom skill
protection, background timer firing, and second startup skip.

* chore: remove e2e-server-flow test

* fix: address Greptile review — entry validation, size limit, DRY, no-op saves

- Validate individual skill entries in catalog (id, version, content
  must all be strings) not just the top-level shape
- Add 1MB response size limit on catalog fetch to prevent resource
  exhaustion from compromised/misconfigured CDN
- Skip manifest save when sync cycle had no changes (avoids
  unnecessary disk I/O every 45 minutes)
- Share extractVersion via remote-sync.ts export, remove duplicate
  from seed.ts
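Per-entry validation of the catalog JSON amounts to a runtime type guard. A sketch with an entry shape of { id, version, content } inferred from the commit message — the actual schema may carry more fields:

```typescript
type CatalogEntry = { id: string; version: string; content: string }

// Narrow unknown JSON to a CatalogEntry only when every required field is
// a string — rejecting partially-shaped entries up front instead of
// crashing mid-sync on a malformed one.
function isCatalogEntry(value: unknown): value is CatalogEntry {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.id === 'string' &&
    typeof v.version === 'string' &&
    typeof v.content === 'string'
  )
}
```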

* fix: prevent bundled fallback from overwriting partial remote seeds

When seedFromRemote partially fails, the bundled fallback now skips
skills already in the manifest (installed by the partial remote
seed). Also adds Content-Length early check before downloading the
full catalog response body.

* fix: run sync immediately on startup, not just on interval

Previously the first sync fired 45 minutes after boot. Now
startSkillSync runs one sync immediately so returning users
get skill updates right away.

* refactor: simplify sync — remote always wins, remove manifest

Remote catalog is the source of truth. If a skill exists in the
catalog, its version is compared against local frontmatter and
overwritten when newer. No manifest file, no content hashes.

User-created skills (IDs not in catalog) are never touched.
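The "remote always wins" rule reduces to a version comparison between the catalog entry and the local frontmatter. A sketch assuming simple dotted numeric versions like '1.0' (the real comparison logic isn't shown in this diff):

```typescript
// Returns true when the remote catalog version is strictly newer than the
// locally installed one, comparing dotted numeric segments left to right.
function isNewer(remote: string, local: string): boolean {
  const a = remote.split('.').map(Number)
  const b = local.split('.').map(Number)
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (a[i] ?? 0) - (b[i] ?? 0)
    if (diff !== 0) return diff > 0
  }
  return false
}
```

A sync pass would overwrite a local skill only when isNewer(catalog, local) holds, and skip any skill ID absent from the catalog entirely.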

* fix: skip bundled skills already installed by partial remote seed

* chore: remove unreliable Content-Length check

* chore: remove size limit checks, fetch timeout is sufficient
2026-03-17 21:40:45 +05:30
Nikhil
44af9aea6d fix: clean-up old scripts (#474)
* fix: remove old scripts

* fix: remove vscode
2026-03-17 08:56:55 -07:00
Nikhil
1779e1e7bd fix: create user-data dir if missing (#473) 2026-03-17 08:30:39 -07:00
60 changed files with 2197 additions and 4249 deletions

File diff suppressed because it is too large

View File

@@ -1,182 +0,0 @@
import {
ArrowLeft,
BookOpen,
Bot,
Compass,
GitBranch,
MessageSquare,
Palette,
RotateCcw,
Search,
Server,
} from 'lucide-react'
import type { FC } from 'react'
import { NavLink } from 'react-router'
import { ThemeToggle } from '@/components/elements/theme-toggle'
import { Feature } from '@/lib/browseros/capabilities'
import { useCapabilities } from '@/lib/browseros/useCapabilities'
import { cn } from '@/lib/utils'
type BaseNavItem = {
name: string
icon: typeof Bot
feature?: Feature
}
type InternalNavItem = BaseNavItem & {
href?: never
to: string
}
type ExternalNavItem = BaseNavItem & {
href: string
to?: never
}
type NavItem = InternalNavItem | ExternalNavItem
type NavSection = {
label: string
items: NavItem[]
}
function isExternalNavItem(item: NavItem): item is ExternalNavItem {
return 'href' in item
}
const getNavLinkClassName = (isActive: boolean) =>
cn(
'flex h-9 items-center gap-2 overflow-hidden whitespace-nowrap rounded-md px-3 font-medium text-sm transition-colors hover:bg-sidebar-accent hover:text-sidebar-accent-foreground',
isActive && 'bg-sidebar-accent text-sidebar-accent-foreground',
)
const getSectionClassName = (index: number) =>
cn(index > 0 && 'mt-3 border-t pt-3')
const sectionLabelClassName =
'mb-2 px-3 font-semibold text-[10px] text-muted-foreground uppercase tracking-[0.18em]'
const primarySettingsSections: NavSection[] = [
{
label: 'Provider Settings',
items: [
{ name: 'BrowserOS AI', to: '/settings/ai', icon: Bot },
{
name: 'Chat & Council Provider',
to: '/settings/chat',
icon: MessageSquare,
},
{ name: 'Search Provider', to: '/settings/search', icon: Search },
],
},
{
label: 'Other',
items: [
{
name: 'Customize BrowserOS',
to: '/settings/customization',
icon: Palette,
feature: Feature.CUSTOMIZATION_SUPPORT,
},
{ name: 'BrowserOS as MCP', to: '/settings/mcp', icon: Server },
{
name: 'Workflows',
to: '/workflows',
icon: GitBranch,
feature: Feature.WORKFLOW_SUPPORT,
},
],
},
]
const helpItems: NavItem[] = [
{ name: 'Docs', href: 'https://docs.browseros.com/', icon: BookOpen },
{ name: 'Features', to: '/onboarding/features', icon: Compass },
{ name: 'Revisit Onboarding', to: '/onboarding', icon: RotateCcw },
]
export const SettingsSidebar: FC = () => {
const { supports } = useCapabilities()
const filteredSections = primarySettingsSections
.map((section) => ({
...section,
items: section.items.filter(
(item) => !item.feature || supports(item.feature),
),
}))
.filter((section) => section.items.length > 0)
const filteredHelpItems = helpItems.filter(
(item) => !item.feature || supports(item.feature),
)
const renderNavItem = (item: NavItem) => {
const Icon = item.icon
if (isExternalNavItem(item)) {
return (
<a
key={item.href}
href={item.href}
target="_blank"
rel="noopener noreferrer"
className={getNavLinkClassName(false)}
>
<Icon className="size-4 shrink-0" />
<span className="truncate">{item.name}</span>
</a>
)
}
return (
<NavLink
key={item.to}
to={item.to}
end
className={({ isActive }) => getNavLinkClassName(isActive)}
>
<Icon className="size-4 shrink-0" />
<span className="truncate">{item.name}</span>
</NavLink>
)
}
const renderSection = (section: NavSection, index: number) => (
<div key={section.label} className={getSectionClassName(index)}>
<div className={sectionLabelClassName}>{section.label}</div>
<nav className="space-y-1">{section.items.map(renderNavItem)}</nav>
</div>
)
return (
<div className="flex h-full w-64 flex-col border-r bg-sidebar text-sidebar-foreground">
<div className="flex h-14 items-center justify-between border-b px-2">
<NavLink
to="/home"
className="flex h-9 items-center gap-2 overflow-hidden whitespace-nowrap rounded-md px-3 font-medium text-sm transition-colors hover:bg-sidebar-accent hover:text-sidebar-accent-foreground"
>
<ArrowLeft className="size-4 shrink-0" />
<span className="truncate">Back</span>
</NavLink>
<ThemeToggle
className="mr-1 h-8 w-8 shrink-0"
iconClassName="h-4 w-4"
/>
</div>
<div className="flex flex-1 flex-col overflow-y-auto overflow-x-hidden p-2">
<div className="mb-2 px-3 font-semibold text-muted-foreground text-xs uppercase tracking-wider">
Settings
</div>
<div>{filteredSections.map(renderSection)}</div>
<div className="mt-auto pt-4">
<div className={sectionLabelClassName}>Help</div>
<nav className="space-y-1">
{filteredHelpItems.map(renderNavItem)}
</nav>
</div>
</div>
</div>
)
}

View File

@@ -1,6 +1,7 @@
import {
Brain,
CalendarClock,
GitBranch,
Home,
PlugZap,
Settings,
@@ -17,6 +18,7 @@ import {
} from '@/components/ui/tooltip'
import { Feature } from '@/lib/browseros/capabilities'
import { useCapabilities } from '@/lib/browseros/useCapabilities'
import { useOpenSettings } from '@/lib/settings/useOpenSettings'
import { cn } from '@/lib/utils'
interface SidebarNavigationProps {
@@ -25,9 +27,10 @@ interface SidebarNavigationProps {
type NavItem = {
name: string
to: string
to?: string
icon: typeof Home
feature?: Feature
action?: 'settings'
}
const primaryNavItems: NavItem[] = [
@@ -39,6 +42,12 @@ const primaryNavItems: NavItem[] = [
feature: Feature.MANAGED_MCP_SUPPORT,
},
{ name: 'Scheduled Tasks', to: '/scheduled', icon: CalendarClock },
{
name: 'Workflows',
to: '/workflows',
icon: GitBranch,
feature: Feature.WORKFLOW_SUPPORT,
},
{
name: 'Skills',
to: '/home/skills',
@@ -57,14 +66,19 @@ const primaryNavItems: NavItem[] = [
icon: Sparkles,
feature: Feature.SOUL_SUPPORT,
},
{ name: 'Settings', to: '/settings/ai', icon: Settings },
{ name: 'Settings', icon: Settings, action: 'settings' },
]
const navItemClassName =
'flex h-9 items-center gap-2 overflow-hidden whitespace-nowrap rounded-md px-3 font-medium text-sm transition-colors hover:bg-sidebar-accent hover:text-sidebar-accent-foreground'
export const SidebarNavigation: FC<SidebarNavigationProps> = ({
expanded = true,
}) => {
const location = useLocation()
const openSettings = useOpenSettings()
const { supports } = useCapabilities()
const isSettingsActive = location.pathname.startsWith('/settings')
const filteredItems = primaryNavItems.filter(
(item) => !item.feature || supports(item.feature),
@@ -76,16 +90,52 @@ export const SidebarNavigation: FC<SidebarNavigationProps> = ({
<nav className="space-y-1">
{filteredItems.map((item) => {
const Icon = item.icon
const isActive =
item.to === '/settings/ai'
? location.pathname.startsWith('/settings')
: location.pathname === item.to
// Settings is a button that opens the dialog
if (item.action === 'settings') {
const settingsButton = (
<button
type="button"
onClick={() => openSettings()}
className={cn(
navItemClassName,
'w-full',
isSettingsActive &&
'bg-sidebar-accent text-sidebar-accent-foreground',
)}
>
<Icon className="size-4 shrink-0" />
<span
className={cn(
'truncate transition-opacity duration-200',
expanded ? 'opacity-100' : 'opacity-0',
)}
>
{item.name}
</span>
</button>
)
if (!expanded) {
return (
<Tooltip key="settings">
<TooltipTrigger asChild>{settingsButton}</TooltipTrigger>
<TooltipContent side="right">{item.name}</TooltipContent>
</Tooltip>
)
}
return <div key="settings">{settingsButton}</div>
}
// Regular nav items use NavLink
const itemPath = item.to ?? '/home'
const isActive = location.pathname === itemPath
const navItem = (
<NavLink
to={item.to}
to={itemPath}
className={cn(
'flex h-9 items-center gap-2 overflow-hidden whitespace-nowrap rounded-md px-3 font-medium text-sm transition-colors hover:bg-sidebar-accent hover:text-sidebar-accent-foreground',
navItemClassName,
isActive &&
'bg-sidebar-accent text-sidebar-accent-foreground',
)}

View File

@@ -1,5 +1,13 @@
import type { FC } from 'react'
import { HashRouter, Navigate, Route, Routes, useParams } from 'react-router'
import {
HashRouter,
type Location,
Navigate,
Route,
Routes,
useLocation,
useParams,
} from 'react-router'
import { NewTab } from '../newtab/index/NewTab'
import { NewTabLayout } from '../newtab/layout/NewTabLayout'
@@ -8,23 +16,18 @@ import { OnboardingDemo } from '../onboarding/demo/OnboardingDemo'
import { FeaturesPage } from '../onboarding/features/Features'
import { Onboarding } from '../onboarding/index/Onboarding'
import { StepsLayout } from '../onboarding/steps/StepsLayout'
import { AISettingsPage } from './ai-settings/AISettingsPage'
import { ConnectMCP } from './connect-mcp/ConnectMCP'
import { CreateGraphWrapper } from './create-graph/CreateGraphWrapper'
import { CustomizationPage } from './customization/CustomizationPage'
import { SurveyPage } from './jtbd-agent/SurveyPage'
import { AuthLayout } from './layout/AuthLayout'
import { SettingsSidebarLayout } from './layout/SettingsSidebarLayout'
import { SidebarLayout } from './layout/SidebarLayout'
import { LlmHubPage } from './llm-hub/LlmHubPage'
import { LoginPage } from './login/LoginPage'
import { LogoutPage } from './login/LogoutPage'
import { MagicLinkCallback } from './login/MagicLinkCallback'
import { MCPSettingsPage } from './mcp-settings/MCPSettingsPage'
import { MemoryPage } from './memory/MemoryPage'
import { ProfilePage } from './profile/ProfilePage'
import { ScheduledTasksPage } from './scheduled-tasks/ScheduledTasksPage'
import { SearchProviderPage } from './search-provider/SearchProviderPage'
import { SettingsDialog } from './settings-dialog/SettingsDialog'
import { SkillsPage } from './skills/SkillsPage'
import { SoulPage } from './soul/SoulPage'
import { WorkflowsPageWrapper } from './workflows/WorkflowsPageWrapper'
@@ -60,12 +63,29 @@ const OptionsRedirect: FC = () => {
return <Navigate to={newPath} replace />
}
export const App: FC = () => {
/** Redirect direct /settings/:tab visits so the dialog has a background page */
const SettingsRedirect: FC = () => {
const { tab } = useParams()
return (
<Navigate
to={`/settings/${tab || 'ai'}`}
state={{ backgroundLocation: { pathname: '/home' } }}
replace
/>
)
}
const AppRoutes: FC = () => {
const location = useLocation()
const surveyParams = getSurveyParams()
const backgroundLocation = (
location.state as { backgroundLocation?: Location } | null
)?.backgroundLocation
return (
<HashRouter>
<Routes>
<>
<Routes location={backgroundLocation || location}>
{/* Public auth routes */}
<Route element={<AuthLayout />}>
<Route path="login" element={<LoginPage />} />
@@ -91,18 +111,14 @@ export const App: FC = () => {
<Route path="scheduled" element={<ScheduledTasksPage />} />
</Route>
{/* Settings with dedicated sidebar */}
<Route element={<SettingsSidebarLayout />}>
<Route path="settings">
<Route index element={<Navigate to="/settings/ai" replace />} />
<Route path="ai" element={<AISettingsPage key="ai" />} />
<Route path="chat" element={<LlmHubPage />} />
<Route path="mcp" element={<MCPSettingsPage />} />
<Route path="customization" element={<CustomizationPage />} />
<Route path="search" element={<SearchProviderPage />} />
<Route path="survey" element={<SurveyPage {...surveyParams} />} />
</Route>
</Route>
{/* Survey page - standalone */}
<Route
path="settings/survey"
element={<SurveyPage {...surveyParams} />}
/>
{/* Direct /settings/:tab access without background location — redirect with one */}
<Route path="settings/:tab?" element={<SettingsRedirect />} />
{/* Full-screen without sidebar */}
<Route path="workflows/create-graph" element={<CreateGraphWrapper />} />
@@ -138,6 +154,19 @@ export const App: FC = () => {
{/* Fallback to home */}
<Route path="*" element={<Navigate to="/home" replace />} />
</Routes>
</HashRouter>
{/* Modal overlay — renders settings dialog on top of background page */}
{backgroundLocation && (
<Routes>
<Route path="settings/:tab?" element={<SettingsDialog />} />
</Routes>
)}
</>
)
}
export const App: FC = () => (
<HashRouter>
<AppRoutes />
</HashRouter>
)

View File

@@ -1,5 +1,5 @@
import { useQueryClient } from '@tanstack/react-query'
import { type FC, useMemo, useState } from 'react'
import { type FC, useEffect, useMemo, useRef, useState } from 'react'
import { toast } from 'sonner'
import {
AlertDialog,
@@ -13,14 +13,24 @@ import {
} from '@/components/ui/alert-dialog'
import { useSessionInfo } from '@/lib/auth/sessionStorage'
import { useAgentServerUrl } from '@/lib/browseros/useBrowserOSProviders'
import {
CHATGPT_PRO_OAUTH_COMPLETED_EVENT,
CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT,
CHATGPT_PRO_OAUTH_STARTED_EVENT,
} from '@/lib/constants/analyticsEvents'
import { GetProfileIdByUserIdDocument } from '@/lib/conversations/graphql/uploadConversationDocument'
import { getQueryKeyFromDocument } from '@/lib/graphql/getQueryKeyFromDocument'
import { useGraphqlMutation } from '@/lib/graphql/useGraphqlMutation'
import { useGraphqlQuery } from '@/lib/graphql/useGraphqlQuery'
import type { ProviderTemplate } from '@/lib/llm-providers/providerTemplates'
import {
getProviderTemplate,
type ProviderTemplate,
} from '@/lib/llm-providers/providerTemplates'
import { testProvider } from '@/lib/llm-providers/testProvider'
import type { LlmProviderConfig } from '@/lib/llm-providers/types'
import { useLlmProviders } from '@/lib/llm-providers/useLlmProviders'
import { useOAuthStatus } from '@/lib/llm-providers/useOAuthStatus'
import { track } from '@/lib/metrics/track'
import { ConfiguredProvidersList } from './ConfiguredProvidersList'
import {
DeleteRemoteLlmProviderDocument,
@@ -101,12 +111,69 @@ export const AISettingsPage: FC = () => {
null,
)
// OAuth status for ChatGPT Plus/Pro
const {
status: chatgptProStatus,
startPolling: startChatGPTProPolling,
disconnect: disconnectChatGPTPro,
} = useOAuthStatus('chatgpt-pro')
// Track whether user explicitly started an OAuth flow this session
const oauthFlowStartedRef = useRef(false)
// Auto-create provider only when user actively completed OAuth,
// not on passive page load when server has old tokens
// biome-ignore lint/correctness/useExhaustiveDependencies: intentional — only trigger on auth status change
useEffect(() => {
if (!chatgptProStatus?.authenticated) return
if (!oauthFlowStartedRef.current) return
const exists = providers.some((p) => p.type === 'chatgpt-pro')
if (exists) return
const now = Date.now()
try {
const template = getProviderTemplate('chatgpt-pro')
saveProvider({
id: `chatgpt-pro-${now}`,
type: 'chatgpt-pro',
name: `ChatGPT Plus/Pro${chatgptProStatus.email ? ` (${chatgptProStatus.email})` : ''}`,
modelId: template?.defaultModelId ?? 'gpt-5.3-codex',
supportsImages: template?.supportsImages ?? true,
contextWindow: template?.contextWindow ?? 400000,
temperature: 0.2,
createdAt: now,
updatedAt: now,
})
track(CHATGPT_PRO_OAUTH_COMPLETED_EVENT, {
email: chatgptProStatus.email,
})
toast.success('ChatGPT Plus/Pro Connected', {
description: chatgptProStatus.email
? `Authenticated as ${chatgptProStatus.email}`
: 'Successfully authenticated with ChatGPT Plus/Pro',
})
} catch (err) {
toast.error('Failed to create ChatGPT Plus/Pro provider', {
description: err instanceof Error ? err.message : 'Unknown error',
})
} finally {
oauthFlowStartedRef.current = false
}
}, [chatgptProStatus?.authenticated])
const handleAddProvider = () => {
setTemplateValues(undefined)
setIsNewDialogOpen(true)
}
const handleUseTemplate = (template: ProviderTemplate) => {
// OAuth providers: trigger OAuth flow instead of opening form dialog
if (template.id === 'chatgpt-pro') {
handleStartChatGPTProOAuth()
return
}
setTemplateValues({
type: template.id,
name: template.name,
@@ -119,6 +186,27 @@ export const AISettingsPage: FC = () => {
setIsNewDialogOpen(true)
}
const handleStartChatGPTProOAuth = () => {
if (!agentServerUrl) {
toast.error('Server not available', {
description: 'Cannot start OAuth flow without server connection.',
})
return
}
oauthFlowStartedRef.current = true
const extensionSettingsUrl = chrome.runtime.getURL('app.html#/ai-settings')
const startUrl = `${agentServerUrl}/oauth/chatgpt-pro/start?redirect=${encodeURIComponent(extensionSettingsUrl)}`
window.open(startUrl, '_blank')
// Start polling for OAuth completion
startChatGPTProPolling()
track(CHATGPT_PRO_OAUTH_STARTED_EVENT)
toast.info('Authenticating with ChatGPT Plus/Pro', {
description: 'Complete the login in the opened tab.',
})
}
const handleEditProvider = (provider: LlmProviderConfig) => {
setEditingProvider(provider)
setIsEditDialogOpen(true)
@@ -130,6 +218,11 @@ export const AISettingsPage: FC = () => {
const confirmDeleteProvider = async () => {
if (providerToDelete) {
// Clear OAuth tokens on server for OAuth-based providers
if (providerToDelete.type === 'chatgpt-pro') {
await disconnectChatGPTPro()
track(CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT)
}
await deleteProvider(providerToDelete.id)
deleteRemoteProviderMutation.mutate({ rowId: providerToDelete.id })
setProviderToDelete(null)

View File

@@ -61,6 +61,7 @@ const providerTypeEnum = z.enum([
'lmstudio',
'bedrock',
'browseros',
'chatgpt-pro',
])
/**
@@ -84,6 +85,9 @@ export const providerFormSchema = z
secretAccessKey: z.string().optional(),
region: z.string().optional(),
sessionToken: z.string().optional(),
// ChatGPT Pro (Codex)
reasoningEffort: z.enum(['none', 'low', 'medium', 'high']).optional(),
reasoningSummary: z.enum(['auto', 'concise', 'detailed']).optional(),
})
.superRefine((data, ctx) => {
// Azure: require either resourceName or baseUrl
@@ -127,6 +131,10 @@ export const providerFormSchema = z
})
}
}
// ChatGPT Pro: no credentials needed (server-managed OAuth)
else if (data.type === 'chatgpt-pro') {
// No validation needed — OAuth tokens are on the server
}
// Other providers: require baseUrl
else if (!data.baseUrl) {
ctx.addIssue({
@@ -209,6 +217,8 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
secretAccessKey: initialValues?.secretAccessKey || '',
region: initialValues?.region || '',
sessionToken: initialValues?.sessionToken || '',
reasoningEffort: initialValues?.reasoningEffort || 'high',
reasoningSummary: initialValues?.reasoningSummary || 'auto',
},
})
@@ -301,6 +311,8 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
secretAccessKey: initialValues.secretAccessKey || '',
region: initialValues.region || '',
sessionToken: initialValues.sessionToken || '',
reasoningEffort: initialValues.reasoningEffort || 'high',
reasoningSummary: initialValues.reasoningSummary || 'auto',
})
setIsCustomModel(false)
}
@@ -326,6 +338,8 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
secretAccessKey: '',
region: '',
sessionToken: '',
reasoningEffort: 'high',
reasoningSummary: 'auto',
})
setIsCustomModel(false)
}
@@ -363,6 +377,9 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
const canTest = (): boolean => {
if (!watchedModelId) return false
// ChatGPT Pro: always testable (server has the OAuth token)
if (watchedType === 'chatgpt-pro') return true
if (watchedType === 'azure') {
return !!(watchedResourceName || watchedBaseUrl) && !!watchedApiKey
}
@@ -444,6 +461,76 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
}
const renderProviderSpecificFields = () => {
// ChatGPT Pro: OAuth credentials + Codex reasoning settings
if (watchedType === 'chatgpt-pro') {
return (
<>
<div className="rounded-lg border border-green-200 bg-green-50 p-3 text-green-700 text-sm dark:border-green-800 dark:bg-green-950 dark:text-green-300">
Credentials are managed via OAuth. No API key needed.
</div>
<div className="grid gap-4 sm:grid-cols-2">
<FormField
control={form.control}
name="reasoningEffort"
render={({ field }) => (
<FormItem>
<FormLabel>Reasoning Effort</FormLabel>
<Select
onValueChange={field.onChange}
value={field.value || 'high'}
>
<FormControl>
<SelectTrigger className="w-full">
<SelectValue />
</SelectTrigger>
</FormControl>
<SelectContent>
<SelectItem value="none">None</SelectItem>
<SelectItem value="low">Low</SelectItem>
<SelectItem value="medium">Medium</SelectItem>
<SelectItem value="high">High</SelectItem>
</SelectContent>
</Select>
<FormDescription>
How much the model thinks before responding
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="reasoningSummary"
render={({ field }) => (
<FormItem>
<FormLabel>Reasoning Summary</FormLabel>
<Select
onValueChange={field.onChange}
value={field.value || 'auto'}
>
<FormControl>
<SelectTrigger className="w-full">
<SelectValue />
</SelectTrigger>
</FormControl>
<SelectContent>
<SelectItem value="auto">Auto</SelectItem>
<SelectItem value="concise">Concise</SelectItem>
<SelectItem value="detailed">Detailed</SelectItem>
</SelectContent>
</Select>
<FormDescription>
Detail level of visible thinking steps
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
</div>
</>
)
}
if (watchedType === 'azure') {
return (
<>

View File

@@ -104,7 +104,9 @@ export const ProviderCard: FC<ProviderCardProps> = ({
</>
)
) : (
`${provider.modelId} · ${provider.baseUrl}`
provider.baseUrl
? `${provider.modelId} · ${provider.baseUrl}`
: provider.modelId
)}
</p>
</div>

View File

@@ -23,6 +23,7 @@ export interface ModelsData {
bedrock: ModelInfo[]
browseros: ModelInfo[]
moonshot: ModelInfo[]
'chatgpt-pro': ModelInfo[]
}
/**
@@ -90,6 +91,16 @@ export const MODELS_DATA: ModelsData = {
],
bedrock: [],
browseros: [{ modelId: 'browseros-auto', contextLength: 200000 }],
'chatgpt-pro': [
{ modelId: 'gpt-5.4', contextLength: 400000 },
{ modelId: 'gpt-5.3-codex', contextLength: 400000 },
{ modelId: 'gpt-5.2-codex', contextLength: 400000 },
{ modelId: 'gpt-5.2', contextLength: 200000 },
{ modelId: 'gpt-5.1-codex', contextLength: 400000 },
{ modelId: 'gpt-5.1-codex-max', contextLength: 400000 },
{ modelId: 'gpt-5.1-codex-mini', contextLength: 400000 },
{ modelId: 'gpt-5.1', contextLength: 200000 },
],
}
/**

View File

@@ -1,68 +0,0 @@
import { Menu } from 'lucide-react'
import type { FC } from 'react'
import { useEffect, useState } from 'react'
import { Outlet, useLocation } from 'react-router'
import { SettingsSidebar } from '@/components/sidebar/SettingsSidebar'
import { Button } from '@/components/ui/button'
import { Sheet, SheetContent } from '@/components/ui/sheet'
import { useIsMobile } from '@/hooks/use-mobile'
import { SETTINGS_PAGE_VIEWED_EVENT } from '@/lib/constants/analyticsEvents'
import { track } from '@/lib/metrics/track'
import { RpcClientProvider } from '@/lib/rpc/RpcClientProvider'
export const SettingsSidebarLayout: FC = () => {
const location = useLocation()
const isMobile = useIsMobile()
const [mobileOpen, setMobileOpen] = useState(false)
useEffect(() => {
track(SETTINGS_PAGE_VIEWED_EVENT, { page: location.pathname })
}, [location.pathname])
useEffect(() => {
setMobileOpen(false)
}, [])
if (isMobile) {
return (
<RpcClientProvider>
<div className="flex min-h-screen flex-col bg-background">
<header className="flex h-14 shrink-0 items-center gap-2 border-b px-4">
<Button
variant="ghost"
size="icon"
className="-ml-1 size-7"
onClick={() => setMobileOpen(true)}
>
<Menu className="size-4" />
</Button>
<span className="font-semibold">Settings</span>
</header>
<main className="flex-1 overflow-y-auto">
<div className="mx-auto max-w-4xl px-4 py-8 sm:px-6 lg:px-8">
<Outlet />
</div>
</main>
<Sheet open={mobileOpen} onOpenChange={setMobileOpen}>
<SheetContent side="left" className="w-72 p-0">
<SettingsSidebar />
</SheetContent>
</Sheet>
</div>
</RpcClientProvider>
)
}
return (
<RpcClientProvider>
<div className="flex h-screen bg-background">
<SettingsSidebar />
<main className="flex-1 overflow-y-auto">
<div className="mx-auto max-w-4xl px-4 py-8 sm:px-6 lg:px-8">
<Outlet />
</div>
</main>
</div>
</RpcClientProvider>
)
}


@@ -1,20 +1,17 @@
import { Menu } from 'lucide-react'
import type { FC } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { Outlet, useLocation } from 'react-router'
import { Outlet } from 'react-router'
import { AppSidebar } from '@/components/sidebar/AppSidebar'
import { Button } from '@/components/ui/button'
import { Sheet, SheetContent } from '@/components/ui/sheet'
import { ShortcutsDialog } from '@/entrypoints/newtab/index/ShortcutsDialog'
import { useIsMobile } from '@/hooks/use-mobile'
import { SETTINGS_PAGE_VIEWED_EVENT } from '@/lib/constants/analyticsEvents'
import { track } from '@/lib/metrics/track'
import { RpcClientProvider } from '@/lib/rpc/RpcClientProvider'
const COLLAPSE_DELAY = 150
export const SidebarLayout: FC = () => {
const location = useLocation()
const isMobile = useIsMobile()
const [sidebarOpen, setSidebarOpen] = useState(false)
const [mobileOpen, setMobileOpen] = useState(false)
@@ -25,10 +22,6 @@ export const SidebarLayout: FC = () => {
setShortcutsDialogOpen(true)
}, [])
useEffect(() => {
track(SETTINGS_PAGE_VIEWED_EVENT, { page: location.pathname })
}, [location.pathname])
useEffect(() => {
setMobileOpen(false)
}, [])


@@ -117,6 +117,7 @@ export const NewScheduledTaskDialog: FC<NewScheduledTaskDialogProps> = ({
const [isRefining, setIsRefining] = useState(false)
const originalPromptRef = useRef<string | null>(null)
const refineRequestIdRef = useRef(0)
const isProgrammaticChange = useRef(false)
// Load providers from storage
useEffect(() => {
@@ -179,6 +180,24 @@ export const NewScheduledTaskDialog: FC<NewScheduledTaskDialogProps> = ({
type: p.type,
}))
// Replace textarea content via execCommand so the browser's native undo
// stack (Cmd+Z / Ctrl+Z) records the change. Falls back to form.setValue
// if the textarea element can't be found.
const setQueryWithUndo = (value: string) => {
const textarea = document.querySelector(
'textarea[name="query"]',
) as HTMLTextAreaElement
if (textarea) {
isProgrammaticChange.current = true
textarea.focus()
textarea.select()
document.execCommand('insertText', false, value)
isProgrammaticChange.current = false
} else {
form.setValue('query', value)
}
}
const handleRefinePrompt = async () => {
const currentQuery = form.getValues('query').trim()
const currentName = form.getValues('name').trim()
@@ -195,7 +214,7 @@ export const NewScheduledTaskDialog: FC<NewScheduledTaskDialogProps> = ({
providerId: form.getValues('providerId'),
})
if (requestId !== refineRequestIdRef.current) return
form.setValue('query', refined)
setQueryWithUndo(refined)
track(SCHEDULED_TASK_PROMPT_REFINED_EVENT)
} catch {
if (requestId !== refineRequestIdRef.current) return
@@ -210,7 +229,7 @@ export const NewScheduledTaskDialog: FC<NewScheduledTaskDialogProps> = ({
const handleUndoRefine = () => {
if (originalPromptRef.current !== null) {
form.setValue('query', originalPromptRef.current)
setQueryWithUndo(originalPromptRef.current)
originalPromptRef.current = null
}
}
@@ -291,7 +310,10 @@ export const NewScheduledTaskDialog: FC<NewScheduledTaskDialogProps> = ({
{...field}
onChange={(e) => {
field.onChange(e)
if (originalPromptRef.current !== null) {
if (
!isProgrammaticChange.current &&
originalPromptRef.current !== null
) {
originalPromptRef.current = null
}
}}


@@ -0,0 +1,225 @@
import {
BookOpen,
Bot,
Compass,
MessageSquare,
Palette,
RotateCcw,
Search,
Server,
X,
} from 'lucide-react'
import type { FC } from 'react'
import { useEffect } from 'react'
import {
type Location,
useLocation,
useNavigate,
useParams,
} from 'react-router'
import { Dialog, DialogContent, DialogTitle } from '@/components/ui/dialog'
import { Feature } from '@/lib/browseros/capabilities'
import { useCapabilities } from '@/lib/browseros/useCapabilities'
import { SETTINGS_PAGE_VIEWED_EVENT } from '@/lib/constants/analyticsEvents'
import { track } from '@/lib/metrics/track'
import { cn } from '@/lib/utils'
import { AISettingsPage } from '../ai-settings/AISettingsPage'
import { CustomizationPage } from '../customization/CustomizationPage'
import { LlmHubPage } from '../llm-hub/LlmHubPage'
import { MCPSettingsPage } from '../mcp-settings/MCPSettingsPage'
import { SearchProviderPage } from '../search-provider/SearchProviderPage'
type SettingsTab = {
id: string
name: string
icon: typeof Bot
feature?: Feature
component: FC
}
const settingsTabs: SettingsTab[] = [
{ id: 'ai', name: 'BrowserOS AI', icon: Bot, component: AISettingsPage },
{
id: 'chat',
name: 'Chat & Council Provider',
icon: MessageSquare,
component: LlmHubPage,
},
{
id: 'search',
name: 'Search Provider',
icon: Search,
component: SearchProviderPage,
},
{
id: 'customization',
name: 'Customize BrowserOS',
icon: Palette,
feature: Feature.CUSTOMIZATION_SUPPORT,
component: CustomizationPage,
},
{
id: 'mcp',
name: 'BrowserOS as MCP',
icon: Server,
component: MCPSettingsPage,
},
]
type HelpItem = {
name: string
icon: typeof Bot
href?: string
to?: string
}
const helpItems: HelpItem[] = [
{ name: 'Docs', href: 'https://docs.browseros.com/', icon: BookOpen },
{ name: 'Features', to: '/onboarding/features', icon: Compass },
{ name: 'Revisit Onboarding', to: '/onboarding', icon: RotateCcw },
]
export const SettingsDialog: FC = () => {
const { tab } = useParams<{ tab?: string }>()
const location = useLocation()
const navigate = useNavigate()
const { supports } = useCapabilities()
const backgroundLocation = (
location.state as { backgroundLocation?: Location } | null
)?.backgroundLocation
const visibleTabs = settingsTabs.filter(
(tabDef) => !tabDef.feature || supports(tabDef.feature),
)
const activeTab = visibleTabs.find((t) => t.id === tab) ? tab : 'ai'
useEffect(() => {
track(SETTINGS_PAGE_VIEWED_EVENT, { page: `settings/${activeTab}` })
}, [activeTab])
const handleClose = () => {
if (backgroundLocation) {
const target =
backgroundLocation.pathname +
(backgroundLocation.search || '') +
(backgroundLocation.hash || '')
navigate(target, { replace: true })
} else {
navigate('/home', { replace: true })
}
}
const handleTabChange = (tabId: string) => {
navigate(`/settings/${tabId}`, {
state: { backgroundLocation },
replace: true,
})
}
const handleHelpNavigation = (to: string) => {
navigate(to, { replace: true })
}
const activeTabConfig = visibleTabs.find((t) => t.id === activeTab)
const ActiveComponent = activeTabConfig?.component ?? AISettingsPage
return (
<Dialog
open
onOpenChange={(open) => {
if (!open) handleClose()
}}
>
<DialogContent
className="flex h-[85vh] max-h-[85vh] w-full flex-col gap-0 overflow-hidden p-0 sm:max-w-4xl"
showCloseButton={false}
>
<DialogTitle className="sr-only">Settings</DialogTitle>
<div className="flex h-full min-h-0">
{/* Left panel - tab navigation */}
<div className="flex w-52 shrink-0 flex-col border-r bg-muted/30">
<div className="px-4 pt-5 pb-3">
<span className="font-semibold text-muted-foreground text-xs uppercase tracking-wider">
Settings
</span>
</div>
<nav className="flex-1 space-y-0.5 overflow-y-auto px-2">
{visibleTabs.map((tabDef) => {
const Icon = tabDef.icon
return (
<button
key={tabDef.id}
type="button"
onClick={() => handleTabChange(tabDef.id)}
className={cn(
'flex w-full items-center gap-2 rounded-md px-3 py-2 font-medium text-sm transition-colors hover:bg-accent hover:text-accent-foreground',
activeTab === tabDef.id &&
'bg-accent text-accent-foreground',
)}
>
<Icon className="size-4 shrink-0" />
<span className="truncate">{tabDef.name}</span>
</button>
)
})}
</nav>
{/* Help section */}
<div className="border-t px-2 py-2">
<div className="mb-1 px-3 font-semibold text-[10px] text-muted-foreground uppercase tracking-[0.18em]">
Help
</div>
{helpItems.map((item) => {
const Icon = item.icon
if (item.href) {
return (
<a
key={item.name}
href={item.href}
target="_blank"
rel="noopener noreferrer"
className="flex w-full items-center gap-2 rounded-md px-3 py-2 font-medium text-sm transition-colors hover:bg-accent hover:text-accent-foreground"
>
<Icon className="size-4 shrink-0" />
<span className="truncate">{item.name}</span>
</a>
)
}
return (
<button
key={item.name}
type="button"
onClick={() => handleHelpNavigation(item.to ?? '/home')}
className="flex w-full items-center gap-2 rounded-md px-3 py-2 font-medium text-sm transition-colors hover:bg-accent hover:text-accent-foreground"
>
<Icon className="size-4 shrink-0" />
<span className="truncate">{item.name}</span>
</button>
)
})}
</div>
</div>
{/* Right panel - settings content */}
<div className="flex flex-1 flex-col overflow-hidden">
<div className="flex justify-end px-4 pt-3">
<button
type="button"
onClick={handleClose}
className="rounded-sm opacity-70 ring-offset-background transition-opacity hover:opacity-100"
>
<X className="size-4" />
<span className="sr-only">Close</span>
</button>
</div>
<div className="styled-scrollbar flex-1 overflow-y-auto px-6 pb-6">
<ActiveComponent />
</div>
</div>
</div>
</DialogContent>
</Dialog>
)
}


@@ -308,6 +308,9 @@ export const useChatSession = (options?: ChatSessionOptions) => {
secretAccessKey: provider?.secretAccessKey,
region: provider?.region,
sessionToken: provider?.sessionToken,
// ChatGPT Pro (Codex)
reasoningEffort: provider?.reasoningEffort,
reasoningSummary: provider?.reasoningSummary,
browserContext,
userSystemPrompt:
options?.origin === 'newtab'


@@ -29,6 +29,18 @@ export const CONVERSATION_RESET_EVENT = 'ui.conversation.reset'
/** @public */
export const AI_PROVIDER_ADDED_EVENT = 'settings.ai_provider.added'
/** @public */
export const CHATGPT_PRO_OAUTH_STARTED_EVENT =
'settings.chatgpt_pro.oauth_started'
/** @public */
export const CHATGPT_PRO_OAUTH_COMPLETED_EVENT =
'settings.chatgpt_pro.oauth_completed'
/** @public */
export const CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT =
'settings.chatgpt_pro.oauth_disconnected'
/** @public */
export const HUB_PROVIDER_ADDED_EVENT = 'settings.hub_provider.added'


@@ -32,6 +32,7 @@ const providerIconMap: Record<ProviderType, IconComponent | null> = {
bedrock: Bedrock,
browseros: null,
moonshot: Kimi,
'chatgpt-pro': OpenAI,
}
interface ProviderIconProps {


@@ -20,6 +20,15 @@ export interface ProviderTemplate {
* @public
*/
export const providerTemplates: ProviderTemplate[] = [
{
id: 'chatgpt-pro',
name: 'ChatGPT Plus/Pro',
defaultBaseUrl: 'https://chatgpt.com/backend-api',
defaultModelId: 'gpt-5.3-codex',
supportsImages: true,
contextWindow: 400000,
setupGuideUrl: 'https://docs.browseros.com/features/chatgpt-pro-oauth',
},
{
id: 'moonshot',
name: 'Moonshot AI',
@@ -129,6 +138,7 @@ export const providerTemplates: ProviderTemplate[] = [
* @public
*/
export const providerTypeOptions: { value: ProviderType; label: string }[] = [
{ value: 'chatgpt-pro', label: 'ChatGPT Plus/Pro' },
{ value: 'moonshot', label: 'Moonshot AI' },
{ value: 'anthropic', label: 'Anthropic' },
{ value: 'openai', label: 'OpenAI' },
@@ -157,6 +167,7 @@ export const getProviderTemplate = (
* Auto-fills when user selects a provider type
*/
export const DEFAULT_BASE_URLS: Record<ProviderType, string> = {
'chatgpt-pro': 'https://chatgpt.com/backend-api',
moonshot: 'https://api.moonshot.ai/v1',
anthropic: 'https://api.anthropic.com/v1',
openai: 'https://api.openai.com/v1',


@@ -14,6 +14,7 @@ export type ProviderType =
| 'bedrock'
| 'browseros'
| 'moonshot'
| 'chatgpt-pro'
/**
* LLM Provider configuration
@@ -56,6 +57,10 @@ export interface LlmProviderConfig {
region?: string
/** AWS session token (for temporary STS credentials) */
sessionToken?: string
// ChatGPT Pro (Codex) fields
reasoningEffort?: 'none' | 'low' | 'medium' | 'high'
reasoningSummary?: 'auto' | 'concise' | 'detailed'
}
/**


@@ -0,0 +1,89 @@
import { useEffect, useRef, useState } from 'react'
import { getAgentServerUrl } from '@/lib/browseros/helpers'
interface OAuthStatus {
authenticated: boolean
email?: string
provider: string
}
interface UseOAuthStatusReturn {
status: OAuthStatus | null
isPolling: boolean
startPolling: () => void
stopPolling: () => void
refresh: () => Promise<OAuthStatus | null>
disconnect: () => Promise<void>
}
export function useOAuthStatus(provider: string): UseOAuthStatusReturn {
const [status, setStatus] = useState<OAuthStatus | null>(null)
const [isPolling, setIsPolling] = useState(false)
const pollIntervalRef = useRef<ReturnType<typeof setInterval> | null>(null)
const pollTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null)
async function fetchStatus(): Promise<OAuthStatus | null> {
try {
const serverUrl = await getAgentServerUrl()
const res = await fetch(`${serverUrl}/oauth/${provider}/status`)
if (!res.ok) return null
const data = (await res.json()) as OAuthStatus
setStatus(data)
return data
} catch {
return null
}
}
function stopPolling() {
if (pollIntervalRef.current) clearInterval(pollIntervalRef.current)
if (pollTimeoutRef.current) clearTimeout(pollTimeoutRef.current)
pollIntervalRef.current = null
pollTimeoutRef.current = null
setIsPolling(false)
}
function startPolling() {
stopPolling()
setIsPolling(true)
pollIntervalRef.current = setInterval(async () => {
const result = await fetchStatus()
if (result?.authenticated) {
stopPolling()
}
}, 2_000)
pollTimeoutRef.current = setTimeout(stopPolling, 300_000)
}
async function disconnect() {
try {
const serverUrl = await getAgentServerUrl()
await fetch(`${serverUrl}/oauth/${provider}`, { method: 'DELETE' })
setStatus({ authenticated: false, provider })
} catch {
// Best-effort disconnect
}
}
// Initial status check on mount
// biome-ignore lint/correctness/useExhaustiveDependencies: only run on mount
useEffect(() => {
fetchStatus()
}, [])
// Cleanup on unmount
useEffect(() => {
return () => stopPolling()
}, [])
return {
status,
isPolling,
startPolling,
stopPolling,
refresh: fetchStatus,
disconnect,
}
}
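The hook above follows a common poll-until-done shape: check status on an interval (here every 2 s), stop on success, and give up after a hard timeout (here 5 min). A minimal framework-free sketch of that pattern — `pollUntil` and its parameter names are illustrative, not part of this codebase:

```typescript
// Illustrative sketch of the polling pattern used by useOAuthStatus:
// run `check` every `intervalMs`, resolve true on the first success,
// and resolve false once `timeoutMs` has elapsed.
async function pollUntil(
  check: () => Promise<boolean>,
  intervalMs: number,
  timeoutMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if (await check()) return true
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  return false
}
```

The React version above adds the extra concern of cancelling both the interval and the timeout on unmount, which is why it tracks them in refs and exposes an explicit `stopPolling`.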


@@ -0,0 +1,21 @@
import { useCallback } from 'react'
import { useLocation, useNavigate } from 'react-router'
/**
* Hook to open the settings dialog from anywhere in the app.
* Uses React Router's background location pattern so the dialog
* overlays the current page without unmounting it.
*/
export function useOpenSettings() {
const location = useLocation()
const navigate = useNavigate()
return useCallback(
(tab = 'ai') => {
navigate(`/settings/${tab}`, {
state: { backgroundLocation: location },
})
},
[location, navigate],
)
}


@@ -48,8 +48,8 @@
--card-foreground: oklch(0.141 0.005 285.823);
--popover: oklch(0.99 0.001 85);
--popover-foreground: oklch(0.141 0.005 285.823);
--primary: oklch(0.646 0.222 41.116);
--primary-foreground: oklch(0.98 0.016 73.684);
--primary: oklch(0.646 0.222 350);
--primary-foreground: oklch(0.98 0.016 350);
--secondary: oklch(0.97 0.002 85);
--secondary-foreground: oklch(0.21 0.006 285.885);
--muted: oklch(0.97 0.002 85);
@@ -60,24 +60,24 @@
--destructive-foreground: oklch(0.99 0 0);
--border: oklch(0.92 0.004 286.32);
--input: oklch(0.92 0.004 286.32);
--ring: oklch(0.75 0.183 55.934);
--chart-1: oklch(0.837 0.128 66.29);
--chart-2: oklch(0.705 0.213 47.604);
--chart-3: oklch(0.646 0.222 41.116);
--chart-4: oklch(0.553 0.195 38.402);
--chart-5: oklch(0.47 0.157 37.304);
--ring: oklch(0.75 0.15 350);
--chart-1: oklch(0.837 0.1 350);
--chart-2: oklch(0.705 0.17 350);
--chart-3: oklch(0.646 0.2 350);
--chart-4: oklch(0.553 0.17 350);
--chart-5: oklch(0.47 0.14 350);
--sidebar: oklch(0.98 0.002 85);
--sidebar-foreground: oklch(0.141 0.005 285.823);
--sidebar-primary: oklch(0.646 0.222 41.116);
--sidebar-primary-foreground: oklch(0.98 0.016 73.684);
--sidebar-primary: oklch(0.646 0.222 350);
--sidebar-primary-foreground: oklch(0.98 0.016 350);
--sidebar-accent: oklch(0.92 0.004 85);
--sidebar-accent-foreground: oklch(0.21 0.006 285.885);
--sidebar-border: oklch(0.92 0.004 286.32);
--sidebar-ring: oklch(0.75 0.183 55.934);
--sidebar-ring: oklch(0.75 0.15 350);
/* Custom accent color for BrowserOS branding */
--accent-orange: oklch(0.6781 0.1663 43.21);
/* Added brighter orange variant for shimmer animation */
--accent-orange-bright: oklch(0.7531 0.1963 43.21);
--accent-orange: oklch(0.6781 0.1663 350);
/* Brighter pink variant for shimmer animation */
--accent-orange-bright: oklch(0.7531 0.1963 350);
}
.dark {
@@ -87,8 +87,8 @@
--card-foreground: oklch(0.92 0 0);
--popover: oklch(0.28 0.01 265);
--popover-foreground: oklch(0.92 0 0);
--primary: oklch(0.705 0.213 47.604);
--primary-foreground: oklch(0.98 0.016 73.684);
--primary: oklch(0.705 0.17 350);
--primary-foreground: oklch(0.98 0.016 350);
--secondary: oklch(0.32 0.01 265);
--secondary-foreground: oklch(0.92 0 0);
--muted: oklch(0.32 0.01 265);
@@ -99,20 +99,20 @@
--destructive-foreground: oklch(0.92 0 0);
--border: oklch(0.38 0.01 265);
--input: oklch(0.38 0.01 265);
--ring: oklch(0.408 0.123 38.172);
--chart-1: oklch(0.837 0.128 66.29);
--chart-2: oklch(0.705 0.213 47.604);
--chart-3: oklch(0.646 0.222 41.116);
--chart-4: oklch(0.553 0.195 38.402);
--chart-5: oklch(0.47 0.157 37.304);
--ring: oklch(0.408 0.1 350);
--chart-1: oklch(0.837 0.1 350);
--chart-2: oklch(0.705 0.17 350);
--chart-3: oklch(0.646 0.2 350);
--chart-4: oklch(0.553 0.17 350);
--chart-5: oklch(0.47 0.14 350);
--sidebar: oklch(0.25 0.01 265);
--sidebar-foreground: oklch(0.92 0 0);
--sidebar-primary: oklch(0.705 0.213 47.604);
--sidebar-primary: oklch(0.705 0.17 350);
--sidebar-primary-foreground: oklch(0.92 0 0);
--sidebar-accent: oklch(0.32 0.01 265);
--sidebar-accent-foreground: oklch(0.92 0 0);
--sidebar-border: oklch(0.38 0.01 265);
--sidebar-ring: oklch(0.408 0.123 38.172);
--sidebar-ring: oklch(0.408 0.1 350);
}
@theme inline {
@@ -155,7 +155,7 @@
--color-sidebar-border: var(--sidebar-border);
--color-sidebar-ring: var(--sidebar-ring);
--color-accent-orange: var(--accent-orange);
/* Added brighter orange to theme */
/* Brighter pink variant for theme */
--color-accent-orange-bright: var(--accent-orange-bright);
/* Custom keyframe animations for hero section */
@@ -261,9 +261,9 @@
}
50% {
text-shadow:
0 0 20px rgba(214, 123, 68, 0.6),
0 0 40px rgba(214, 123, 68, 0.4),
0 0 60px rgba(214, 123, 68, 0.2);
0 0 20px rgba(219, 112, 147, 0.6),
0 0 40px rgba(219, 112, 147, 0.4),
0 0 60px rgba(219, 112, 147, 0.2);
transform: scale(1.02);
}
100% {

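The theme change above keeps each token's lightness roughly intact and swaps the hue term (~38–66 for the orange range → 350 for pink). A hedged sketch of that transformation as a string helper — `withHue` is a hypothetical utility written for illustration, not code from this commit:

```typescript
// Hypothetical helper: replace the hue component of each
// `oklch(L C H)` value while preserving lightness and chroma.
function withHue(value: string, hue: number): string {
  return value.replace(
    /oklch\(\s*([\d.]+)\s+([\d.]+)\s+[\d.]+\s*\)/g,
    (_match, l, c) => `oklch(${l} ${c} ${hue})`,
  )
}

console.log(withHue('oklch(0.646 0.222 41.116)', 350))
// → oklch(0.646 0.222 350)
```

Note that the actual commit also reduces chroma on several tokens (e.g. the dark-mode primary goes from 0.213 to 0.17), so a pure hue swap is only an approximation of the change.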

@@ -1,6 +1,6 @@
{
"name": "@browseros/server",
"version": "0.0.75",
"version": "0.0.76",
"description": "BrowserOS server",
"type": "module",
"main": "./src/index.ts",


@@ -4,6 +4,7 @@ import type {
LanguageModelV3Middleware,
} from '@ai-sdk/provider'
import { AGENT_LIMITS } from '@browseros/shared/constants/limits'
import { LLM_PROVIDERS } from '@browseros/shared/schemas/llm'
import type { BrowserContext } from '@browseros/shared/schemas/browser-context'
import {
type LanguageModel,
@@ -189,13 +190,29 @@ export class AiSdkAgent {
),
})
// Create the ToolLoopAgent
// Codex requires store=false — tell the SDK to inline content
// instead of using item_reference (which fails with store=false)
const isChatGPTPro =
config.resolvedConfig.provider === LLM_PROVIDERS.CHATGPT_PRO
const agent = new ToolLoopAgent({
model,
instructions,
tools,
stopWhen: [stepCountIs(AGENT_LIMITS.MAX_TURNS)],
prepareStep,
...(isChatGPTPro && {
providerOptions: {
openai: {
store: false,
reasoningEffort:
config.resolvedConfig.reasoningEffort || 'high',
reasoningSummary:
config.resolvedConfig.reasoningSummary || 'auto',
include: ['reasoning.encrypted_content'],
},
},
}),
})
logger.info('Agent session created (v2)', {


@@ -7,6 +7,7 @@ import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
import { LLM_PROVIDERS } from '@browseros/shared/schemas/llm'
import { createOpenRouter } from '@openrouter/ai-sdk-provider'
import type { LanguageModel } from 'ai'
import { createCodexFetch } from '../lib/clients/oauth/codex-fetch'
import { logger } from '../lib/logger'
import { createOpenRouterCompatibleFetch } from '../lib/openrouter-fetch'
import type { ResolvedAgentConfig } from './types'
@@ -148,6 +149,17 @@ function createMoonshotFactory(
})
}
function createChatGPTProFactory(
config: ResolvedAgentConfig,
): (modelId: string) => unknown {
if (!config.apiKey)
throw new Error('ChatGPT Plus/Pro requires OAuth authentication')
return createOpenAI({
apiKey: config.apiKey,
fetch: createCodexFetch(config.accountId) as typeof globalThis.fetch,
}).responses
}
const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
[LLM_PROVIDERS.ANTHROPIC]: createAnthropicFactory,
[LLM_PROVIDERS.OPENAI]: createOpenAIFactory,
@@ -160,6 +172,7 @@ const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
[LLM_PROVIDERS.BROWSEROS]: createBrowserOSFactory,
[LLM_PROVIDERS.OPENAI_COMPATIBLE]: createOpenAICompatibleFactory,
[LLM_PROVIDERS.MOONSHOT]: createMoonshotFactory,
[LLM_PROVIDERS.CHATGPT_PRO]: createChatGPTProFactory,
}
export function createLanguageModel(

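Both the agent and chat paths register the new provider the same way: a plain record mapping provider ids to factory functions, looked up at dispatch time. A minimal sketch of that registry pattern, with illustrative names and string stand-ins for the real model objects:

```typescript
// Illustrative factory registry mirroring the PROVIDER_FACTORIES shape.
// The provider ids and returned "model handles" are stand-ins.
type ModelFactory = (modelId: string) => string

const FACTORIES: Record<string, ModelFactory> = {
  openai: (modelId) => `openai:${modelId}`,
  'chatgpt-pro': (modelId) => `codex:${modelId}`,
}

function createModel(provider: string, modelId: string): string {
  const factory = FACTORIES[provider]
  if (!factory) throw new Error(`Unknown provider: ${provider}`)
  return factory(modelId)
}
```

Adding a provider is then a one-line registry entry plus its factory function, which is exactly the shape of the `CHATGPT_PRO` additions in this diff.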

@@ -30,6 +30,9 @@ export interface ResolvedAgentConfig {
accessKeyId?: string
secretAccessKey?: string
sessionToken?: string
accountId?: string
reasoningEffort?: string
reasoningSummary?: string
contextWindowSize?: number
userSystemPrompt?: string
workingDir: string


@@ -0,0 +1,58 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*
* OAuth routes for subscription-based LLM provider authentication.
*/
import { Hono } from 'hono'
import { getOAuthProvider } from '../../lib/clients/oauth/providers'
import type { OAuthTokenManager } from '../../lib/clients/oauth/token-manager'
import { logger } from '../../lib/logger'
interface OAuthRouteDeps {
tokenManager: OAuthTokenManager
}
export function createOAuthRoutes(deps: OAuthRouteDeps) {
const { tokenManager } = deps
return new Hono()
.get('/:provider/start', async (c) => {
const providerId = c.req.param('provider')
const redirectBackUrl = c.req.query('redirect')
const provider = getOAuthProvider(providerId)
if (!provider) {
return c.text(`Unknown OAuth provider: ${providerId}`, 400)
}
try {
const authUrl = await tokenManager.generateAuthorizationUrl(
providerId,
redirectBackUrl,
)
return c.redirect(authUrl)
} catch (error) {
logger.error('Failed to start OAuth flow', {
provider: providerId,
error: error instanceof Error ? error.message : String(error),
})
return c.text('Failed to start authentication. Please try again.', 500)
}
})
.get('/:provider/status', (c) => {
const providerId = c.req.param('provider')
const status = tokenManager.getStatus(providerId)
return c.json(status)
})
.delete('/:provider', (c) => {
const providerId = c.req.param('provider')
tokenManager.deleteTokens(providerId)
logger.info('OAuth tokens deleted', { provider: providerId })
return c.json({ success: true })
})
}


@@ -10,7 +10,11 @@ import { testProviderConnection } from '../../lib/clients/llm/test-provider'
import { logger } from '../../lib/logger'
import { AgentLLMConfigSchema } from '../types'
export function createProviderRoutes() {
interface ProviderRouteDeps {
browserosId?: string
}
export function createProviderRoutes(deps: ProviderRouteDeps = {}) {
return new Hono().post(
'/',
zValidator('json', AgentLLMConfigSchema),
@@ -22,7 +26,7 @@ export function createProviderRoutes() {
model: config.model,
})
const result = await testProviderConnection(config)
const result = await testProviderConnection(config, deps.browserosId)
logger.info('Provider test result', {
provider: config.provider,


@@ -10,7 +10,11 @@ const RefinePromptRequestSchema = AgentLLMConfigSchema.extend({
name: z.string().min(1, 'Task name cannot be empty'),
})
export function createRefinePromptRoutes() {
interface RefinePromptRouteDeps {
browserosId?: string
}
export function createRefinePromptRoutes(deps: RefinePromptRouteDeps = {}) {
return new Hono().post(
'/',
zValidator('json', RefinePromptRequestSchema),
@@ -23,7 +27,11 @@ export function createRefinePromptRoutes() {
taskName: name,
})
const result = await refinePrompt(llmConfig, { prompt, name })
const result = await refinePrompt(
llmConfig,
{ prompt, name },
deps.browserosId,
)
logger.info('Refine prompt result', {
provider: llmConfig.provider,


@@ -15,6 +15,8 @@ import { cors } from 'hono/cors'
import type { ContentfulStatusCode } from 'hono/utils/http-status'
import { HttpAgentError } from '../agent/errors'
import { KlavisClient } from '../lib/clients/klavis/klavis-client'
import { initializeOAuth } from '../lib/clients/oauth'
import { getDb } from '../lib/db'
import { logger } from '../lib/logger'
import { createChatRoutes } from './routes/chat'
import { createGraphRoutes } from './routes/graph'
@@ -22,6 +24,7 @@ import { createHealthRoute } from './routes/health'
import { createKlavisRoutes } from './routes/klavis'
import { createMcpRoutes } from './routes/mcp'
import { createMemoryRoutes } from './routes/memory'
import { createOAuthRoutes } from './routes/oauth'
import { createProviderRoutes } from './routes/provider'
import { createRefinePromptRoutes } from './routes/refine-prompt'
import { createSdkRoutes } from './routes/sdk'
@@ -75,6 +78,11 @@ export async function createHttpServer(config: HttpServerConfig) {
const { onShutdown } = config
// Initialize OAuth token manager + callback server (port released on process exit)
const tokenManager = browserosId
? initializeOAuth(getDb(), browserosId)
: null
// Connect Klavis proxy (non-blocking: browser tools still work if this fails)
let klavisProxy: KlavisProxyHandle | null = null
if (browserosId) {
@@ -113,8 +121,16 @@ export async function createHttpServer(config: HttpServerConfig) {
.route('/soul', createSoulRoutes())
.route('/memory', createMemoryRoutes())
.route('/skills', createSkillsRoutes())
.route('/test-provider', createProviderRoutes())
.route('/refine-prompt', createRefinePromptRoutes())
.route('/test-provider', createProviderRoutes({ browserosId }))
.route('/refine-prompt', createRefinePromptRoutes({ browserosId }))
.route(
'/oauth',
tokenManager
? createOAuthRoutes({ tokenManager })
: new Hono().all('/*', (c) =>
c.json({ error: 'OAuth not available' }, 503),
),
)
.route('/klavis', createKlavisRoutes({ browserosId: browserosId || '' }))
.route(
'/mcp',


@@ -54,6 +54,9 @@ export class ChatService {
accessKeyId: llmConfig.accessKeyId,
secretAccessKey: llmConfig.secretAccessKey,
sessionToken: llmConfig.sessionToken,
accountId: llmConfig.accountId,
reasoningEffort: request.reasoningEffort,
reasoningSummary: request.reasoningSummary,
contextWindowSize: request.contextWindowSize,
userSystemPrompt: request.userSystemPrompt,
workingDir,


@@ -19,6 +19,7 @@ export const INLINED_ENV = {
CODEGEN_SERVICE_URL: process.env.CODEGEN_SERVICE_URL,
POSTHOG_API_KEY: process.env.POSTHOG_API_KEY,
BROWSEROS_CONFIG_URL: process.env.BROWSEROS_CONFIG_URL,
SKILLS_CATALOG_URL: process.env.SKILLS_CATALOG_URL,
} as const
export const REQUIRED_FOR_PRODUCTION = [


@@ -10,19 +10,57 @@ import { LLM_PROVIDERS, type LLMConfig } from '@browseros/shared/schemas/llm'
import { INLINED_ENV } from '../../../env'
import { logger } from '../../logger'
import { fetchBrowserOSConfig, getLLMConfigFromProvider } from '../gateway'
import { getOAuthTokenManager } from '../oauth'
import type { ResolvedLLMConfig } from './types'
export async function resolveLLMConfig(
config: LLMConfig,
browserosId?: string,
): Promise<ResolvedLLMConfig> {
if (config.provider !== LLM_PROVIDERS.BROWSEROS) {
if (!config.model) {
throw new Error(`model is required for ${config.provider} provider`)
}
return config as ResolvedLLMConfig
// ChatGPT Pro: resolve OAuth token from server-side storage
if (config.provider === LLM_PROVIDERS.CHATGPT_PRO) {
return resolveChatGPTProConfig(config, browserosId)
}
// BrowserOS gateway: fetch config from remote service
if (config.provider === LLM_PROVIDERS.BROWSEROS) {
return resolveBrowserOSConfig(config, browserosId)
}
// All other providers: passthrough with model validation
if (!config.model) {
throw new Error(`model is required for ${config.provider} provider`)
}
return config as ResolvedLLMConfig
}
async function resolveChatGPTProConfig(
config: LLMConfig,
browserosId?: string,
): Promise<ResolvedLLMConfig> {
const tokenManager = getOAuthTokenManager()
if (!tokenManager || !browserosId) {
throw new Error('Not authenticated with ChatGPT Plus/Pro. Please login first.')
}
const tokens = await tokenManager.refreshIfExpired('chatgpt-pro')
if (!tokens) {
throw new Error('Not authenticated with ChatGPT Plus/Pro. Please login first.')
}
return {
...config,
model: config.model || 'gpt-5.3-codex',
apiKey: tokens.accessToken,
upstreamProvider: 'openai',
accountId: tokens.accountId,
}
}
async function resolveBrowserOSConfig(
config: LLMConfig,
browserosId?: string,
): Promise<ResolvedLLMConfig> {
const configUrl = INLINED_ENV.BROWSEROS_CONFIG_URL
if (!configUrl) {
throw new Error(


@@ -16,6 +16,7 @@ import { LLM_PROVIDERS } from '@browseros/shared/schemas/llm'
import { createOpenRouter } from '@openrouter/ai-sdk-provider'
import type { LanguageModel } from 'ai'
import { logger } from '../../logger'
import { createCodexFetch } from '../oauth/codex-fetch'
import { createOpenRouterCompatibleFetch } from '../../openrouter-fetch'
import type { ResolvedLLMConfig } from './types'
@@ -134,6 +135,15 @@ function createMoonshotModel(config: ResolvedLLMConfig): LanguageModel {
})(config.model)
}
function createChatGPTProModel(config: ResolvedLLMConfig): LanguageModel {
if (!config.apiKey)
throw new Error('ChatGPT Plus/Pro requires OAuth authentication')
return createOpenAI({
apiKey: config.apiKey,
fetch: createCodexFetch(config.accountId) as typeof globalThis.fetch,
}).responses(config.model)
}
const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
[LLM_PROVIDERS.ANTHROPIC]: createAnthropicModel,
[LLM_PROVIDERS.OPENAI]: createOpenAIModel,
@@ -146,6 +156,7 @@ const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
[LLM_PROVIDERS.BROWSEROS]: createBrowserOSModel,
[LLM_PROVIDERS.OPENAI_COMPATIBLE]: createOpenAICompatibleModel,
[LLM_PROVIDERS.MOONSHOT]: createMoonshotModel,
[LLM_PROVIDERS.CHATGPT_PRO]: createChatGPTProModel,
}
export function createLLMProvider(config: ResolvedLLMConfig): LanguageModel {


@@ -1,6 +1,6 @@
import { TIMEOUTS } from '@browseros/shared/constants/timeouts'
import type { LLMConfig } from '@browseros/shared/schemas/llm'
import { generateText } from 'ai'
import { streamText } from 'ai'
import { resolveLLMConfig } from './config'
import { createLLMProvider } from './provider'
@@ -38,18 +38,21 @@ Write it as a natural instruction — like telling a capable assistant what to d
export async function refinePrompt(
llmConfig: RefinePromptConfig,
request: RefinePromptRequest,
browserosId?: string,
): Promise<RefinePromptResult> {
try {
const resolvedConfig = await resolveLLMConfig(llmConfig)
const resolvedConfig = await resolveLLMConfig(llmConfig, browserosId)
const model = createLLMProvider(resolvedConfig)
const response = await generateText({
// streamText works for all providers including Codex (which requires streaming)
const stream = streamText({
model,
system: buildSystemPrompt(request.name),
messages: [{ role: 'user', content: request.prompt }],
abortSignal: AbortSignal.timeout(TIMEOUTS.REFINE_PROMPT),
})
const refined = (await stream.text)?.trim()
const refined = response.text?.trim()
if (!refined) {
return { success: false, message: 'Provider returned an empty response' }
}

View File

@@ -6,7 +6,7 @@
import { TIMEOUTS } from '@browseros/shared/constants/timeouts'
import type { LLMConfig } from '@browseros/shared/schemas/llm'
import { generateText } from 'ai'
import { streamText } from 'ai'
import { resolveLLMConfig } from './config'
import { createLLMProvider } from './provider'
@@ -25,20 +25,22 @@ const TEST_PROMPT = "Respond with exactly: 'ok'"
export async function testProviderConnection(
config: ProviderTestConfig,
browserosId?: string,
): Promise<ProviderTestResult> {
const startTime = performance.now()
try {
const resolvedConfig = await resolveLLMConfig(config)
const resolvedConfig = await resolveLLMConfig(config, browserosId)
const model = createLLMProvider(resolvedConfig)
const response = await generateText({
// streamText works for all providers including Codex (which requires streaming)
const stream = streamText({
model,
messages: [{ role: 'user', content: TEST_PROMPT }],
abortSignal: AbortSignal.timeout(TIMEOUTS.TEST_PROVIDER),
})
const text = await stream.text
const responseTime = Math.round(performance.now() - startTime)
const text = response.text
if (text) {
const preview = text.length > 100 ? `${text.slice(0, 100)}...` : text

View File

@@ -11,4 +11,5 @@ import type { LLMConfig } from '@browseros/shared/schemas/llm'
export interface ResolvedLLMConfig extends LLMConfig {
model: string
upstreamProvider?: string
accountId?: string
}

View File

@@ -0,0 +1,97 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*
* Temporary HTTP server on port 1455 for OAuth callbacks.
* OpenAI's OAuth requires redirect_uri to use this specific port
* (matching the Codex CLI client ID registration).
*/
import { OAUTH_CALLBACK_PORT } from '@browseros/shared/constants/ports'
import { logger } from '../../logger'
import type { OAuthTokenManager } from './token-manager'
export function startOAuthCallbackServer(
tokenManager: OAuthTokenManager,
): { stop: () => void } {
const server = Bun.serve({
port: OAUTH_CALLBACK_PORT,
hostname: '127.0.0.1',
fetch: async (req) => {
const url = new URL(req.url)
if (url.pathname !== '/auth/callback') {
return new Response('Not found', { status: 404 })
}
const code = url.searchParams.get('code')
const state = url.searchParams.get('state')
const error = url.searchParams.get('error')
if (error) {
const description =
url.searchParams.get('error_description') || error
logger.warn('OAuth callback received error', { error, description })
return htmlResponse(errorPage(description))
}
if (!code || !state) {
return htmlResponse(errorPage('Missing authorization code or state'))
}
try {
await tokenManager.handleCallback(code, state)
// Always show success page — chrome-extension:// redirects are blocked by Chromium.
// The extension polls /oauth/:provider/status and detects auth automatically.
return htmlResponse(successPage())
} catch (err) {
logger.error('OAuth callback failed', {
error: err instanceof Error ? err.message : String(err),
})
return htmlResponse(
errorPage(
err instanceof Error ? err.message : 'Authentication failed',
),
)
}
},
})
logger.info('OAuth callback server started', { port: OAUTH_CALLBACK_PORT })
return {
stop: () => {
server.stop()
logger.info('OAuth callback server stopped')
},
}
}
function htmlResponse(html: string): Response {
return new Response(html, {
headers: { 'Content-Type': 'text/html; charset=utf-8' },
})
}
function successPage(): string {
return `<!DOCTYPE html>
<html><head><title>BrowserOS - Authentication Successful</title>
<style>body{font-family:system-ui;display:flex;align-items:center;justify-content:center;height:100vh;margin:0;background:#f8f9fa}
.card{text-align:center;padding:2rem;background:white;border-radius:12px;box-shadow:0 2px 8px rgba(0,0,0,0.1)}
h1{color:#22c55e;font-size:1.5rem}p{color:#6b7280}</style></head>
<body><div class="card"><h1>Authentication Successful</h1><p>You can close this tab and return to BrowserOS.</p></div></body></html>`
}
function errorPage(message: string): string {
const escaped = message
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
return `<!DOCTYPE html>
<html><head><title>BrowserOS - Authentication Failed</title>
<style>body{font-family:system-ui;display:flex;align-items:center;justify-content:center;height:100vh;margin:0;background:#f8f9fa}
.card{text-align:center;padding:2rem;background:white;border-radius:12px;box-shadow:0 2px 8px rgba(0,0,0,0.1)}
h1{color:#ef4444;font-size:1.5rem}p{color:#6b7280}</style></head>
<body><div class="card"><h1>Authentication Failed</h1><p>${escaped}</p><p>Please close this tab and try again.</p></div></body></html>`
}
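The error page above interpolates an OAuth `error_description` (attacker-influenceable via the callback query string) into HTML, so the three characters that can open markup are escaped first. A minimal sketch reproducing that escape in isolation:

```typescript
// Reproduces the escaping done inside errorPage(): replace '&' first so
// already-escaped entities are not double-mangled, then '<' and '>'.
function escapeHtml(message: string): string {
  return message
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
}

escapeHtml('<script>alert(1)</script>')
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```

This is enough for text interpolated into element content; attribute or URL contexts would need additional escaping.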

View File

@@ -0,0 +1,71 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*/
import { logger } from '../../logger'
const CODEX_API_ENDPOINT = 'https://chatgpt.com/backend-api/codex/responses'
export function createCodexFetch(accountId?: string) {
return (input: RequestInfo | URL, init?: RequestInit) => {
let inputUrl: string
if (typeof input === 'string') {
inputUrl = input
} else if (input instanceof URL) {
inputUrl = input.toString()
} else if (input instanceof Request) {
inputUrl = input.url
} else {
inputUrl = String(input)
}
const parsed = new URL(inputUrl)
const shouldRewrite =
parsed.pathname.includes('/v1/responses') ||
parsed.pathname.includes('/chat/completions')
const url = shouldRewrite ? new URL(CODEX_API_ENDPOINT) : parsed
const headers = new Headers(init?.headers as HeadersInit)
if (accountId) {
headers.set('ChatGPT-Account-Id', accountId)
}
headers.set('originator', 'browseros')
headers.set('OpenAI-Beta', 'responses=experimental')
let body = init?.body
if (shouldRewrite && body && typeof body === 'string') {
try {
const json = JSON.parse(body)
json.stream = true
json.store = false
delete json.previous_response_id
delete json.temperature
delete json.max_tokens
delete json.max_output_tokens
delete json.top_p
if (!json.instructions) {
json.instructions = 'You are a helpful assistant.'
}
// Strip item IDs — Codex doesn't persist items with store=false.
// The SDK should already inline content (via providerOptions store=false),
// but this is a safety net matching OpenCode's approach.
if (Array.isArray(json.input)) {
for (const item of json.input) {
if ('id' in item) {
delete item.id
}
}
}
body = JSON.stringify(json)
} catch (err) {
logger.warn('Failed to inject Codex-required fields into request body', {
error: err instanceof Error ? err.message : String(err),
})
}
}
return fetch(url, { ...init, headers, body })
}
}
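The URL/body rewrite above can be sketched as a standalone function. This is a simplified illustration of `createCodexFetch`'s logic, not the exported API: it keeps only the endpoint predicate and the body fields the Codex backend requires, and drops the header and item-ID handling.

```typescript
// Sketch: redirect OpenAI responses/chat-completions calls to the Codex
// backend and force the request shape it expects (streaming on, no
// server-side storage, no sampling params).
const CODEX_API_ENDPOINT = 'https://chatgpt.com/backend-api/codex/responses'

function rewriteForCodex(
  inputUrl: string,
  body: string,
): { url: string; body: string } {
  const parsed = new URL(inputUrl)
  const shouldRewrite =
    parsed.pathname.includes('/v1/responses') ||
    parsed.pathname.includes('/chat/completions')
  if (!shouldRewrite) return { url: inputUrl, body }
  const json = JSON.parse(body)
  json.stream = true // Codex only supports streaming responses
  json.store = false // do not persist items server-side
  delete json.temperature // sampling params are rejected by the endpoint
  return { url: CODEX_API_ENDPOINT, body: JSON.stringify(json) }
}

const { url, body } = rewriteForCodex(
  'https://api.openai.com/v1/responses',
  JSON.stringify({ model: 'gpt-5.3-codex', temperature: 0.7 }),
)
// url is now the Codex backend; body has stream/store forced and
// temperature removed
```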

View File

@@ -0,0 +1,26 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*/
import type { Database } from 'bun:sqlite'
import { startOAuthCallbackServer } from './callback-server'
import { OAuthTokenManager } from './token-manager'
import { OAuthTokenStore } from './token-store'
let tokenManager: OAuthTokenManager | null = null
export function initializeOAuth(
db: Database,
browserosId: string,
): OAuthTokenManager {
const store = new OAuthTokenStore(db)
tokenManager = new OAuthTokenManager(store, browserosId)
startOAuthCallbackServer(tokenManager)
return tokenManager
}
export function getOAuthTokenManager(): OAuthTokenManager | null {
return tokenManager
}

View File

@@ -0,0 +1,41 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*/
import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
export interface OAuthProviderConfig {
id: string
name: string
clientId: string
authEndpoint: string
tokenEndpoint: string
scopes: string[]
extraAuthParams?: Record<string, string>
upstreamLLMProvider: string
}
export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
'chatgpt-pro': {
id: 'chatgpt-pro',
name: 'ChatGPT Plus/Pro',
clientId: 'app_EMoamEEZ73f0CkXaXp7hrann',
authEndpoint: EXTERNAL_URLS.OPENAI_AUTH,
tokenEndpoint: EXTERNAL_URLS.OPENAI_TOKEN,
scopes: ['openid', 'profile', 'email', 'offline_access'],
extraAuthParams: {
id_token_add_organizations: 'true',
codex_cli_simplified_flow: 'true',
originator: 'browseros',
},
upstreamLLMProvider: 'openai',
},
}
export function getOAuthProvider(
providerId: string,
): OAuthProviderConfig | undefined {
return OAUTH_PROVIDERS[providerId]
}

View File

@@ -0,0 +1,279 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*/
import { OAUTH_CALLBACK_PORT } from '@browseros/shared/constants/ports'
import { TIMEOUTS } from '@browseros/shared/constants/timeouts'
import { logger } from '../../logger'
import { getOAuthProvider } from './providers'
import type { OAuthTokenStore, StoredOAuthTokens } from './token-store'
interface PendingOAuthFlow {
provider: string
codeVerifier: string
state: string
redirectBackUrl?: string
createdAt: number
}
interface OAuthTokenResponse {
access_token: string
refresh_token?: string
expires_in: number
id_token?: string
}
export class OAuthTokenManager {
private readonly pendingFlows = new Map<string, PendingOAuthFlow>()
private readonly refreshLocks = new Map<string, Promise<StoredOAuthTokens | null>>()
constructor(
private readonly store: OAuthTokenStore,
private readonly browserosId: string,
) {}
async generateAuthorizationUrl(
providerId: string,
redirectBackUrl?: string,
): Promise<string> {
const provider = getOAuthProvider(providerId)
if (!provider) throw new Error(`Unknown OAuth provider: ${providerId}`)
const codeVerifier = generateCodeVerifier()
const codeChallenge = await generateCodeChallenge(codeVerifier)
const state = generateRandomState()
this.pendingFlows.set(state, {
provider: providerId,
codeVerifier,
state,
redirectBackUrl,
createdAt: Date.now(),
})
this.cleanExpiredFlows()
const redirectUri = buildRedirectUri()
const params = new URLSearchParams({
response_type: 'code',
client_id: provider.clientId,
redirect_uri: redirectUri,
code_challenge: codeChallenge,
code_challenge_method: 'S256',
scope: provider.scopes.join(' '),
state,
...provider.extraAuthParams,
})
return `${provider.authEndpoint}?${params.toString()}`
}
async handleCallback(
code: string,
state: string,
): Promise<{ tokens: StoredOAuthTokens; redirectBackUrl?: string }> {
const flow = this.pendingFlows.get(state)
if (!flow) throw new Error('Invalid or expired OAuth state')
if (Date.now() - flow.createdAt > TIMEOUTS.OAUTH_FLOW_TTL) {
this.pendingFlows.delete(state)
throw new Error('OAuth flow expired. Please try again.')
}
const provider = getOAuthProvider(flow.provider)
if (!provider) throw new Error(`Unknown OAuth provider: ${flow.provider}`)
const redirectUri = buildRedirectUri()
const tokenResponse = await fetch(provider.tokenEndpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'authorization_code',
client_id: provider.clientId,
code,
redirect_uri: redirectUri,
code_verifier: flow.codeVerifier,
}),
})
if (!tokenResponse.ok) {
const error = await tokenResponse.text()
logger.error('OAuth token exchange failed', {
status: tokenResponse.status,
error,
})
throw new Error(`Token exchange failed: ${tokenResponse.status}`)
}
const data = (await tokenResponse.json()) as OAuthTokenResponse
if (!data.refresh_token) {
logger.warn('OAuth token response missing refresh_token — token refresh will not be available', {
provider: flow.provider,
})
}
const { accountId, email } = parseAccessTokenClaims(data.access_token)
const tokens: StoredOAuthTokens = {
accessToken: data.access_token,
refreshToken: data.refresh_token ?? '',
expiresAt: Date.now() + data.expires_in * 1000,
email,
accountId,
}
this.store.upsertTokens(this.browserosId, flow.provider, tokens)
this.pendingFlows.delete(state)
logger.info('OAuth authentication successful', {
provider: flow.provider,
email,
})
return { tokens, redirectBackUrl: flow.redirectBackUrl }
}
// Mutex-protected refresh: concurrent callers share one in-flight refresh
async refreshIfExpired(provider: string): Promise<StoredOAuthTokens | null> {
const tokens = this.store.getTokens(this.browserosId, provider)
if (!tokens) return null
if (Date.now() < tokens.expiresAt - TIMEOUTS.OAUTH_TOKEN_EXPIRY_BUFFER) {
return tokens
}
// If a refresh is already in progress, await it instead of starting another
const existing = this.refreshLocks.get(provider)
if (existing) return existing
const refreshPromise = this.executeRefresh(provider, tokens)
this.refreshLocks.set(provider, refreshPromise)
try {
return await refreshPromise
} finally {
this.refreshLocks.delete(provider)
}
}
private async executeRefresh(
provider: string,
tokens: StoredOAuthTokens,
): Promise<StoredOAuthTokens> {
if (!tokens.refreshToken) {
this.store.deleteTokens(this.browserosId, provider)
throw new Error(`${provider} session expired (no refresh token). Please re-login.`)
}
const providerConfig = getOAuthProvider(provider)
if (!providerConfig) {
throw new Error(`Unknown OAuth provider: ${provider}`)
}
logger.debug('Refreshing OAuth token', { provider })
const response = await fetch(providerConfig.tokenEndpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'refresh_token',
client_id: providerConfig.clientId,
refresh_token: tokens.refreshToken,
}),
})
if (!response.ok) {
logger.error('OAuth token refresh failed', {
provider,
status: response.status,
})
this.store.deleteTokens(this.browserosId, provider)
const providerName = providerConfig.name
throw new Error(`${providerName} session expired. Please re-login.`)
}
const data = (await response.json()) as OAuthTokenResponse
const { accountId, email } = parseAccessTokenClaims(data.access_token)
const refreshed: StoredOAuthTokens = {
accessToken: data.access_token,
refreshToken: data.refresh_token ?? tokens.refreshToken,
expiresAt: Date.now() + data.expires_in * 1000,
email: email ?? tokens.email,
accountId: accountId ?? tokens.accountId,
}
this.store.upsertTokens(this.browserosId, provider, refreshed)
return refreshed
}
getStatus(provider: string) {
return this.store.getStatus(this.browserosId, provider)
}
deleteTokens(provider: string): void {
this.store.deleteTokens(this.browserosId, provider)
}
private cleanExpiredFlows(): void {
const now = Date.now()
for (const [state, flow] of this.pendingFlows) {
if (now - flow.createdAt > TIMEOUTS.OAUTH_FLOW_TTL) {
this.pendingFlows.delete(state)
}
}
}
}
function buildRedirectUri(): string {
return `http://localhost:${OAUTH_CALLBACK_PORT}/auth/callback`
}
function generateCodeVerifier(): string {
const bytes = crypto.getRandomValues(new Uint8Array(32))
return base64UrlEncode(bytes)
}
async function generateCodeChallenge(verifier: string): Promise<string> {
const encoder = new TextEncoder()
const digest = await crypto.subtle.digest('SHA-256', encoder.encode(verifier))
return base64UrlEncode(new Uint8Array(digest))
}
function generateRandomState(): string {
const bytes = crypto.getRandomValues(new Uint8Array(16))
return base64UrlEncode(bytes)
}
function base64UrlEncode(bytes: Uint8Array): string {
const base64 = btoa(String.fromCharCode(...bytes))
return base64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
}
// Extracts claims without signature verification — safe because the token
// comes directly from OpenAI's HTTPS token endpoint. Do not reuse for
// caller-supplied or externally-sourced tokens.
function parseAccessTokenClaims(accessToken: string): {
accountId?: string
email?: string
} {
try {
const parts = accessToken.split('.')
if (parts.length !== 3) return {}
const payload = JSON.parse(
atob(parts[1].replace(/-/g, '+').replace(/_/g, '/')),
)
const authClaims = payload['https://api.openai.com/auth']
const profileClaims = payload['https://api.openai.com/profile']
return {
accountId:
authClaims?.chatgpt_account_id ??
payload.chatgpt_account_id ??
payload.account_id,
email:
profileClaims?.email ??
payload.email,
}
} catch {
return {}
}
}
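The PKCE helpers at the bottom of this file implement the S256 transform: `code_challenge = base64url(SHA-256(verifier))` with padding stripped. A synchronous sketch using `node:crypto` instead of WebCrypto (the in-tree code uses `crypto.subtle`), checked against the RFC 7636 Appendix B test vector:

```typescript
import { createHash } from 'node:crypto'

// S256 PKCE transform: base64url-encoded SHA-256 of the ASCII verifier.
// Node's 'base64url' digest encoding already omits '=' padding and uses
// the '-'/'_' alphabet, matching base64UrlEncode above.
function codeChallengeS256(verifier: string): string {
  return createHash('sha256').update(verifier).digest('base64url')
}

const challenge = codeChallengeS256(
  'dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk', // RFC 7636 Appendix B verifier
)
// → 'E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM'
```

The server later recomputes this hash from the `code_verifier` sent during token exchange, which is what lets a public client prove it initiated the flow without a client secret.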

View File

@@ -0,0 +1,99 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*
* SQLite storage for OAuth tokens.
*/
import type { Database } from 'bun:sqlite'
export interface StoredOAuthTokens {
accessToken: string
refreshToken: string
expiresAt: number
email?: string
accountId?: string
}
export interface OAuthStatus {
authenticated: boolean
email?: string
provider: string
}
export class OAuthTokenStore {
constructor(private readonly db: Database) {}
upsertTokens(
browserosId: string,
provider: string,
tokens: StoredOAuthTokens,
): void {
const stmt = this.db.prepare(`
INSERT INTO oauth_tokens (browseros_id, provider, access_token, refresh_token, expires_at, email, account_id, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))
ON CONFLICT (browseros_id, provider) DO UPDATE SET
access_token = excluded.access_token,
refresh_token = excluded.refresh_token,
expires_at = excluded.expires_at,
email = excluded.email,
account_id = excluded.account_id,
updated_at = datetime('now')
`)
stmt.run(
browserosId,
provider,
tokens.accessToken,
tokens.refreshToken,
tokens.expiresAt,
tokens.email ?? null,
tokens.accountId ?? null,
)
}
getTokens(browserosId: string, provider: string): StoredOAuthTokens | null {
const row = this.db
.prepare(
'SELECT access_token, refresh_token, expires_at, email, account_id FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
)
.get(browserosId, provider) as {
access_token: string
refresh_token: string
expires_at: number
email: string | null
account_id: string | null
} | null
if (!row) return null
return {
accessToken: row.access_token,
refreshToken: row.refresh_token,
expiresAt: row.expires_at,
email: row.email ?? undefined,
accountId: row.account_id ?? undefined,
}
}
deleteTokens(browserosId: string, provider: string): void {
this.db
.prepare(
'DELETE FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
)
.run(browserosId, provider)
}
getStatus(browserosId: string, provider: string): OAuthStatus {
const row = this.db
.prepare(
'SELECT email FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
)
.get(browserosId, provider) as { email: string | null } | null
return {
authenticated: row !== null,
email: row?.email ?? undefined,
provider,
}
}
}

View File

@@ -21,7 +21,22 @@ CREATE TABLE IF NOT EXISTS identity (
created_at TEXT NOT NULL DEFAULT (datetime('now'))
)`
const OAUTH_TOKENS_TABLE = `
CREATE TABLE IF NOT EXISTS oauth_tokens (
browseros_id TEXT NOT NULL,
provider TEXT NOT NULL,
access_token TEXT NOT NULL,
refresh_token TEXT NOT NULL,
expires_at INTEGER NOT NULL,
email TEXT,
account_id TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now')),
PRIMARY KEY (browseros_id, provider)
)`
export function initSchema(db: Database): void {
db.exec(RATE_LIMITER_TABLE)
db.exec(IDENTITY_TABLE)
db.exec(OAUTH_TOKENS_TABLE)
}

View File

@@ -28,6 +28,7 @@ import { fetchDailyRateLimit } from './lib/rate-limiter/fetch-config'
import { RateLimiter } from './lib/rate-limiter/rate-limiter'
import { Sentry } from './lib/sentry'
import { seedSoulTemplate } from './lib/soul'
import { startSkillSync, stopSkillSync } from './skills/remote-sync'
import { seedDefaultSkills } from './skills/seed'
import { registry } from './tools/registry'
import { VERSION } from './version'
@@ -112,12 +113,14 @@ export class Application {
)
this.logStartupSummary(controllerServerStarted)
startSkillSync()
metrics.log('http_server.started', { version: VERSION })
}
stop(reason?: string): void {
logger.info('Shutting down server...', { reason })
stopSkillSync()
// Immediate exit without graceful shutdown. Chromium may kill us on update/restart,
// and we need to free the port instantly so the HTTP port doesn't keep switching.

View File

@@ -0,0 +1,173 @@
import { mkdir, readFile, writeFile } from 'node:fs/promises'
import { join } from 'node:path'
import { TIMEOUTS } from '@browseros/shared/constants/timeouts'
import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
import { INLINED_ENV } from '../env'
import { getSkillsDir } from '../lib/browseros-dir'
import { logger } from '../lib/logger'
import { safeSkillDir } from './service'
import type { RemoteSkillCatalog, RemoteSkillEntry } from './types'
let syncTimer: ReturnType<typeof setInterval> | null = null
export function extractVersion(content: string): string {
const match = content.match(/^\s*version:\s*["']?([^"'\n]+)["']?/m)
return match?.[1]?.trim() || '1.0'
}
function isValidSkillEntry(entry: unknown): entry is RemoteSkillEntry {
if (typeof entry !== 'object' || entry === null) return false
const e = entry as Record<string, unknown>
return (
typeof e.id === 'string' &&
typeof e.version === 'string' &&
typeof e.content === 'string'
)
}
function isValidCatalog(data: unknown): data is RemoteSkillCatalog {
if (typeof data !== 'object' || data === null) return false
const d = data as Record<string, unknown>
return (
typeof d.version === 'number' &&
Array.isArray(d.skills) &&
d.skills.every(isValidSkillEntry)
)
}
function getCatalogUrl(): string {
return INLINED_ENV.SKILLS_CATALOG_URL || EXTERNAL_URLS.SKILLS_CATALOG
}
export async function fetchRemoteCatalog(): Promise<RemoteSkillCatalog | null> {
try {
const response = await fetch(getCatalogUrl(), {
signal: AbortSignal.timeout(TIMEOUTS.SKILLS_FETCH),
})
if (!response.ok) {
logger.warn('Failed to fetch remote skill catalog', {
status: response.status,
})
return null
}
const data: unknown = await response.json()
if (!isValidCatalog(data)) {
logger.warn('Remote skill catalog has invalid format')
return null
}
return data
} catch (err) {
logger.debug('Remote skill catalog unavailable', {
error: err instanceof Error ? err.message : String(err),
})
return null
}
}
async function getLocalVersion(skillId: string): Promise<string | null> {
try {
const safeDir = safeSkillDir(skillId)
const content = await readFile(join(safeDir, 'SKILL.md'), 'utf-8')
return extractVersion(content)
} catch {
return null
}
}
export async function writeSkillFile(
skillId: string,
content: string,
): Promise<void> {
const safeDir = safeSkillDir(skillId)
await mkdir(safeDir, { recursive: true })
await writeFile(join(safeDir, 'SKILL.md'), content)
}
export async function syncRemoteSkills(): Promise<{
installed: number
updated: number
}> {
const result = { installed: 0, updated: 0 }
const catalog = await fetchRemoteCatalog()
if (!catalog) return result
for (const remoteSkill of catalog.skills) {
try {
const localVersion = await getLocalVersion(remoteSkill.id)
if (!localVersion) {
await writeSkillFile(remoteSkill.id, remoteSkill.content)
result.installed++
continue
}
if (localVersion === remoteSkill.version) {
continue
}
await writeSkillFile(remoteSkill.id, remoteSkill.content)
result.updated++
} catch (err) {
logger.warn('Failed to sync skill', {
id: remoteSkill.id,
error: err instanceof Error ? err.message : String(err),
})
}
}
return result
}
export async function seedFromRemote(): Promise<boolean> {
const catalog = await fetchRemoteCatalog()
if (!catalog || catalog.skills.length === 0) return false
let seeded = 0
for (const skill of catalog.skills) {
try {
await writeSkillFile(skill.id, skill.content)
seeded++
} catch (err) {
logger.warn('Failed to seed remote skill', {
id: skill.id,
error: err instanceof Error ? err.message : String(err),
})
}
}
if (seeded > 0) {
logger.info(`Seeded ${seeded}/${catalog.skills.length} skills from remote catalog`)
}
return seeded === catalog.skills.length
}
async function runSync(): Promise<void> {
try {
const { installed, updated } = await syncRemoteSkills()
if (installed > 0 || updated > 0) {
logger.info('Remote skill sync completed', { installed, updated })
}
} catch (err) {
logger.warn('Skill sync failed', {
error: err instanceof Error ? err.message : String(err),
})
}
}
export function startSkillSync(): void {
if (syncTimer) return
runSync()
syncTimer = setInterval(runSync, TIMEOUTS.SKILLS_SYNC_INTERVAL)
syncTimer.unref()
}
export function stopSkillSync(): void {
if (syncTimer) {
clearInterval(syncTimer)
syncTimer = null
}
}
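The sync decision above hinges entirely on `extractVersion`: a skill is installed when no local copy exists and overwritten whenever the frontmatter version string differs. The function is reproduced here to show what the regex tolerates:

```typescript
// Copied from remote-sync: first 'version:' line wins, quotes are optional,
// and anything unparseable falls back to '1.0'.
function extractVersion(content: string): string {
  const match = content.match(/^\s*version:\s*["']?([^"'\n]+)["']?/m)
  return match?.[1]?.trim() || '1.0'
}

extractVersion('metadata:\n  version: "2.0"\n') // quoted YAML value → '2.0'
extractVersion('version: 1.5\n') // unquoted → '1.5'
extractVersion('# no frontmatter at all') // fallback → '1.0'
```

Note the comparison in `syncRemoteSkills` is plain string inequality, not semver ordering, so a remote `version` that is *older* than the local one still triggers an overwrite (as the flow tests below exercise with a faked `0.9`).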

View File

@@ -1,8 +1,9 @@
import { mkdir, readdir, writeFile } from 'node:fs/promises'
import { readdir, stat } from 'node:fs/promises'
import { join } from 'node:path'
import { getSkillsDir } from '../lib/browseros-dir'
import { logger } from '../lib/logger'
import { DEFAULT_SKILLS } from './defaults'
import { seedFromRemote, writeSkillFile } from './remote-sync'
async function hasExistingSkills(skillsDir: string): Promise<boolean> {
try {
@@ -13,16 +14,27 @@ async function hasExistingSkills(skillsDir: string): Promise<boolean> {
}
}
async function skillExists(skillsDir: string, id: string): Promise<boolean> {
try {
await stat(join(skillsDir, id, 'SKILL.md'))
return true
} catch {
return false
}
}
export async function seedDefaultSkills(): Promise<void> {
const skillsDir = getSkillsDir()
if (await hasExistingSkills(skillsDir)) return
const remoteSucceeded = await seedFromRemote()
if (remoteSucceeded) return
let seeded = 0
for (const skill of DEFAULT_SKILLS) {
if (await skillExists(skillsDir, skill.id)) continue
try {
const targetDir = join(skillsDir, skill.id)
await mkdir(targetDir, { recursive: true })
await writeFile(join(targetDir, 'SKILL.md'), skill.content)
await writeSkillFile(skill.id, skill.content)
seeded++
} catch (err) {
logger.warn('Failed to seed skill', {
@@ -33,6 +45,6 @@ export async function seedDefaultSkills(): Promise<void> {
}
if (seeded > 0) {
logger.info(`Seeded ${seeded} default skills`)
logger.info(`Seeded ${seeded} default skills (bundled)`)
}
}

View File

@@ -19,8 +19,7 @@ export function slugify(name: string): string {
.replace(/^-|-$/g, '')
}
// Prevents path traversal — ensures resolved path stays inside skills directory
function safeSkillDir(id: string): string {
export function safeSkillDir(id: string): string {
const skillsDir = getSkillsDir()
const resolved = resolve(skillsDir, id)
if (!resolved.startsWith(`${skillsDir}${sep}`)) {
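The guard being exported here can be sketched without the rest of the module. POSIX paths and a stand-in base directory are assumed; the real code derives the base from `getSkillsDir()`:

```typescript
import { resolve, sep } from 'node:path'

// Traversal guard: resolve the id against the base directory and reject
// anything whose resolved path escapes it (e.g. '../../etc/passwd').
// The trailing separator check also rejects the base directory itself.
function safeDir(baseDir: string, id: string): string {
  const resolved = resolve(baseDir, id)
  if (!resolved.startsWith(`${baseDir}${sep}`)) {
    throw new Error(`Unsafe skill id: ${id}`)
  }
  return resolved
}

safeDir('/data/skills', 'summarize-page') // ok: stays inside the base
// safeDir('/data/skills', '../../etc/passwd') // throws: escapes the base
```

Making this `export`ed (as the diff does) lets `remote-sync` reuse the same check before writing catalog-supplied skill ids to disk.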

View File

@@ -38,3 +38,15 @@ export type CreateSkillInput = {
export type UpdateSkillInput = Partial<CreateSkillInput> & {
enabled?: boolean
}
export type RemoteSkillEntry = {
id: string
version: string
content: string
}
export type RemoteSkillCatalog = {
version: number
skills: RemoteSkillEntry[]
}

View File

@@ -0,0 +1,90 @@
/**
* E2E flow tests against live CDN.
*/
import { afterAll, beforeAll, describe, it, mock } from 'bun:test'
import assert from 'node:assert'
import { mkdir, readdir, readFile, rm, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
let testDir: string
mock.module('../../src/lib/browseros-dir', () => ({
getSkillsDir: () => testDir,
}))
mock.module('../../src/env', () => ({
INLINED_ENV: {
SKILLS_CATALOG_URL: 'https://cdn.browseros.com/skills/v1/catalog.json',
},
}))
const { seedFromRemote, syncRemoteSkills } =
await import('../../src/skills/remote-sync')
async function listSkills(): Promise<string[]> {
const entries = await readdir(testDir)
return entries.filter((e) => !e.startsWith('.')).sort()
}
beforeAll(async () => {
testDir = join(tmpdir(), `flow-test-${Date.now()}`)
await mkdir(testDir, { recursive: true })
})
afterAll(async () => {
await rm(testDir, { recursive: true, force: true })
})
describe('Flow tests against live CDN', () => {
it('seeds all skills from CDN on fresh install', async () => {
const result = await seedFromRemote()
assert.strictEqual(result, true)
const skills = await listSkills()
assert.strictEqual(skills.length, 12)
})
it('sync does nothing when already up to date', async () => {
const result = await syncRemoteSkills()
assert.strictEqual(result.installed, 0)
assert.strictEqual(result.updated, 0)
})
it('remote overwrites local edits when version differs', async () => {
const skillPath = join(testDir, 'summarize-page', 'SKILL.md')
const original = await readFile(skillPath, 'utf-8')
// User edits the file AND we fake a version mismatch
const edited = original.replace(/version: "1.0"/, 'version: "0.9"') + '\n## My Notes\n'
await writeFile(skillPath, edited)
const result = await syncRemoteSkills()
assert.ok(result.updated >= 1)
const afterSync = await readFile(skillPath, 'utf-8')
assert.ok(!afterSync.includes('My Notes'))
})
it('installs skill deleted locally', async () => {
await rm(join(testDir, 'save-page'), { recursive: true })
const result = await syncRemoteSkills()
assert.strictEqual(result.installed, 1)
const content = await readFile(join(testDir, 'save-page', 'SKILL.md'), 'utf-8')
assert.ok(content.includes('name: save-page'))
})
it('user-created skill is never touched', async () => {
const customDir = join(testDir, 'my-workflow')
await mkdir(customDir, { recursive: true })
const custom = '---\nname: my-workflow\ndescription: custom\n---\n# Mine\n'
await writeFile(join(customDir, 'SKILL.md'), custom)
await syncRemoteSkills()
const afterSync = await readFile(join(customDir, 'SKILL.md'), 'utf-8')
assert.strictEqual(afterSync, custom)
})
})

View File

@@ -0,0 +1,247 @@
import { afterEach, beforeEach, describe, it, mock, spyOn } from 'bun:test'
import assert from 'node:assert'
import { mkdtemp, readFile, rm, writeFile, mkdir } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import type { RemoteSkillCatalog } from '../../src/skills/types'
let testDir: string
const mockGetSkillsDir = mock(() => testDir)
mock.module('../../src/lib/browseros-dir', () => ({
getSkillsDir: mockGetSkillsDir,
}))
const { fetchRemoteCatalog, syncRemoteSkills, seedFromRemote } =
await import('../../src/skills/remote-sync')
function makeCatalog(
skills: { id: string; version: string; content: string }[],
): RemoteSkillCatalog {
return { version: 1, skills }
}
const SKILL_V1 = `---
name: test-skill
description: A test skill
metadata:
display-name: Test Skill
enabled: "true"
version: "1.0"
---
# Test Skill
Do the thing.
`
const SKILL_V2 = `---
name: test-skill
description: A test skill (updated)
metadata:
display-name: Test Skill
enabled: "true"
version: "2.0"
---
# Test Skill v2
Do the thing better.
`
beforeEach(async () => {
testDir = await mkdtemp(join(tmpdir(), 'skill-sync-'))
})
afterEach(async () => {
await rm(testDir, { recursive: true, force: true })
mock.restore()
})
describe('fetchRemoteCatalog', () => {
it('returns null on network failure', async () => {
const spy = spyOn(globalThis, 'fetch').mockRejectedValue(new Error('offline'))
assert.strictEqual(await fetchRemoteCatalog(), null)
spy.mockRestore()
})
it('returns null on non-ok response', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response('Not Found', { status: 404 }),
)
assert.strictEqual(await fetchRemoteCatalog(), null)
spy.mockRestore()
})
it('returns catalog on success', async () => {
const catalog = makeCatalog([{ id: 'test', version: '1.0', content: 'hello' }])
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(catalog), { status: 200 }),
)
assert.deepStrictEqual(await fetchRemoteCatalog(), catalog)
spy.mockRestore()
})
it('returns null for invalid catalog shape', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify({ skills: 'not-an-array' }), { status: 200 }),
)
assert.strictEqual(await fetchRemoteCatalog(), null)
spy.mockRestore()
})
it('returns null when skill entries have invalid shape', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(
JSON.stringify({ version: 1, skills: [{ id: 123, version: '1.0', content: null }] }),
{ status: 200 },
),
)
assert.strictEqual(await fetchRemoteCatalog(), null)
spy.mockRestore()
})
})
describe('syncRemoteSkills', () => {
it('returns zeros when remote is unavailable', async () => {
const spy = spyOn(globalThis, 'fetch').mockRejectedValue(new Error('offline'))
const result = await syncRemoteSkills()
assert.deepStrictEqual(result, { installed: 0, updated: 0 })
spy.mockRestore()
})
it('installs new skills that do not exist locally', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'new-skill', version: '1.0', content: SKILL_V1 },
])), { status: 200 }),
)
const result = await syncRemoteSkills()
assert.strictEqual(result.installed, 1)
const content = await readFile(join(testDir, 'new-skill', 'SKILL.md'), 'utf-8')
assert.strictEqual(content, SKILL_V1)
spy.mockRestore()
})
it('updates skill when remote has newer version', async () => {
await mkdir(join(testDir, 'test-skill'), { recursive: true })
await writeFile(join(testDir, 'test-skill', 'SKILL.md'), SKILL_V1)
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'test-skill', version: '2.0', content: SKILL_V2 },
])), { status: 200 }),
)
const result = await syncRemoteSkills()
assert.strictEqual(result.updated, 1)
const content = await readFile(join(testDir, 'test-skill', 'SKILL.md'), 'utf-8')
assert.strictEqual(content, SKILL_V2)
spy.mockRestore()
})
it('overwrites user-edited skill when remote has newer version', async () => {
await mkdir(join(testDir, 'test-skill'), { recursive: true })
await writeFile(join(testDir, 'test-skill', 'SKILL.md'), SKILL_V1 + '\n## My Notes\n')
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'test-skill', version: '2.0', content: SKILL_V2 },
])), { status: 200 }),
)
const result = await syncRemoteSkills()
assert.strictEqual(result.updated, 1)
const content = await readFile(join(testDir, 'test-skill', 'SKILL.md'), 'utf-8')
assert.strictEqual(content, SKILL_V2)
assert.ok(!content.includes('My Notes'))
spy.mockRestore()
})
it('skips when version matches', async () => {
await mkdir(join(testDir, 'test-skill'), { recursive: true })
await writeFile(join(testDir, 'test-skill', 'SKILL.md'), SKILL_V1)
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'test-skill', version: '1.0', content: SKILL_V1 },
])), { status: 200 }),
)
const result = await syncRemoteSkills()
assert.strictEqual(result.installed, 0)
assert.strictEqual(result.updated, 0)
spy.mockRestore()
})
it('does not touch user-created skills not in catalog', async () => {
await mkdir(join(testDir, 'my-custom'), { recursive: true })
const custom = '---\nname: my-custom\ndescription: mine\nmetadata:\n version: "1.0"\n---\n# Mine\n'
await writeFile(join(testDir, 'my-custom', 'SKILL.md'), custom)
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'other-skill', version: '1.0', content: SKILL_V1 },
])), { status: 200 }),
)
await syncRemoteSkills()
const content = await readFile(join(testDir, 'my-custom', 'SKILL.md'), 'utf-8')
assert.strictEqual(content, custom)
spy.mockRestore()
})
it('rejects path traversal in skill ids', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: '../../etc/evil', version: '1.0', content: SKILL_V1 },
])), { status: 200 }),
)
const result = await syncRemoteSkills()
assert.strictEqual(result.installed, 0)
spy.mockRestore()
})
})
describe('seedFromRemote', () => {
it('returns false when remote is unavailable', async () => {
const spy = spyOn(globalThis, 'fetch').mockRejectedValue(new Error('offline'))
assert.strictEqual(await seedFromRemote(), false)
spy.mockRestore()
})
it('seeds all skills from remote', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'skill-a', version: '1.0', content: SKILL_V1 },
{ id: 'skill-b', version: '1.0', content: SKILL_V2 },
])), { status: 200 }),
)
assert.strictEqual(await seedFromRemote(), true)
const content = await readFile(join(testDir, 'skill-a', 'SKILL.md'), 'utf-8')
assert.strictEqual(content, SKILL_V1)
spy.mockRestore()
})
it('returns false for empty catalog', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([])), { status: 200 }),
)
assert.strictEqual(await seedFromRemote(), false)
spy.mockRestore()
})
it('returns false on partial failure', async () => {
const spy = spyOn(globalThis, 'fetch').mockResolvedValue(
new Response(JSON.stringify(makeCatalog([
{ id: 'good-skill', version: '1.0', content: SKILL_V1 },
{ id: '../../traversal', version: '1.0', content: 'evil' },
])), { status: 200 }),
)
assert.strictEqual(await seedFromRemote(), false)
spy.mockRestore()
})
})


@@ -167,7 +167,7 @@
},
"apps/server": {
"name": "@browseros/server",
"version": "0.0.75",
"version": "0.0.76",
"bin": {
"browseros-server": "./src/index.ts",
},


@@ -40,4 +40,7 @@ export const DEV_PORTS = {
extension: 9310,
} as const
// OAuth callback port — must match the redirect_uri registered with OpenAI's Codex client ID
export const OAUTH_CALLBACK_PORT = 1455
export type Ports = typeof DEFAULT_PORTS


@@ -32,6 +32,8 @@ export const TIMEOUTS = {
// External API calls
KLAVIS_FETCH: 30_000,
SKILLS_FETCH: 15_000,
SKILLS_SYNC_INTERVAL: 45 * 60_000,
// Navigation/DOM
NAVIGATION: 10_000,
@@ -46,6 +48,12 @@ export const TIMEOUTS = {
WS_HEARTBEAT_TIMEOUT: 5_000,
WS_CONNECTION_TIMEOUT: 10_000,
WS_REQUEST_TIMEOUT: 30_000,
// OAuth
OAUTH_FLOW_TTL: 300_000,
OAUTH_TOKEN_EXPIRY_BUFFER: 300_000,
OAUTH_POLL_INTERVAL: 2_000,
OAUTH_POLL_TIMEOUT: 300_000,
} as const
export type TimeoutKey = keyof typeof TIMEOUTS


@@ -10,4 +10,7 @@ export const EXTERNAL_URLS = {
KLAVIS_PROXY: 'https://llm.browseros.com/klavis',
POSTHOG_DEFAULT: 'https://us.i.posthog.com',
CODEGEN_SERVICE: 'https://graph.browseros.com',
OPENAI_AUTH: 'https://auth.openai.com/oauth/authorize',
OPENAI_TOKEN: 'https://auth.openai.com/oauth/token',
SKILLS_CATALOG: 'https://cdn.browseros.com/skills/v1/catalog.json',
} as const


@@ -24,6 +24,7 @@ export const LLM_PROVIDERS = {
BROWSEROS: 'browseros',
OPENAI_COMPATIBLE: 'openai-compatible',
MOONSHOT: 'moonshot',
CHATGPT_PRO: 'chatgpt-pro',
} as const
/**
@@ -42,6 +43,7 @@ export const LLMProviderSchema: z.ZodEnum<
'browseros',
'openai-compatible',
'moonshot',
'chatgpt-pro',
]
> = z.enum([
LLM_PROVIDERS.ANTHROPIC,
@@ -55,6 +57,7 @@ export const LLMProviderSchema: z.ZodEnum<
LLM_PROVIDERS.BROWSEROS,
LLM_PROVIDERS.OPENAI_COMPATIBLE,
LLM_PROVIDERS.MOONSHOT,
LLM_PROVIDERS.CHATGPT_PRO,
])
export type LLMProvider = z.infer<typeof LLMProviderSchema>
@@ -73,6 +76,8 @@ export const LLMConfigSchema: z.ZodObject<{
accessKeyId: z.ZodOptional<z.ZodString>
secretAccessKey: z.ZodOptional<z.ZodString>
sessionToken: z.ZodOptional<z.ZodString>
reasoningEffort: z.ZodOptional<z.ZodEnum<['none', 'low', 'medium', 'high']>>
reasoningSummary: z.ZodOptional<z.ZodEnum<['auto', 'concise', 'detailed']>>
}> = z.object({
provider: LLMProviderSchema,
model: z.string().optional(),
@@ -85,6 +90,9 @@ export const LLMConfigSchema: z.ZodObject<{
accessKeyId: z.string().optional(),
secretAccessKey: z.string().optional(),
sessionToken: z.string().optional(),
// ChatGPT Pro (Codex)
reasoningEffort: z.enum(['none', 'low', 'medium', 'high']).optional(),
reasoningSummary: z.enum(['auto', 'concise', 'detailed']).optional(),
})
export type LLMConfig = z.infer<typeof LLMConfigSchema>


@@ -0,0 +1,71 @@
import { readdir, readFile, stat } from 'node:fs/promises'
import { join } from 'node:path'
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'
import type { RemoteSkillCatalog, RemoteSkillEntry } from '../apps/server/src/skills/types'
const DEFAULTS_DIR = join(import.meta.dir, '../apps/server/src/skills/defaults')
const R2_KEY = 'skills/v1/catalog.json'
function extractVersion(content: string): string {
const match = content.match(/^\s*version:\s*["']?([^"'\n]+)["']?/m)
return match?.[1]?.trim() || '1.0'
}
async function generateCatalog(): Promise<RemoteSkillCatalog> {
const entries = await readdir(DEFAULTS_DIR)
const skills: RemoteSkillEntry[] = []
for (const entry of entries) {
const entryPath = join(DEFAULTS_DIR, entry)
const info = await stat(entryPath)
if (!info.isDirectory()) continue
const skillPath = join(entryPath, 'SKILL.md')
try {
const content = await readFile(skillPath, 'utf-8')
skills.push({ id: entry, version: extractVersion(content), content })
} catch {
console.error(`Skipping ${entry}: no SKILL.md found`)
}
}
skills.sort((a, b) => a.id.localeCompare(b.id))
return { version: 1, skills }
}
function requireEnv(name: string): string {
const value = process.env[name]
if (!value) {
console.error(`Missing required env var: ${name}`)
process.exit(1)
}
return value
}
const accountId = requireEnv('R2_ACCOUNT_ID')
const accessKeyId = requireEnv('R2_ACCESS_KEY_ID')
const secretAccessKey = requireEnv('R2_SECRET_ACCESS_KEY')
const bucket = requireEnv('R2_BUCKET')
const client = new S3Client({
region: 'auto',
endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
credentials: { accessKeyId, secretAccessKey },
})
const catalog = await generateCatalog()
const body = JSON.stringify(catalog, null, 2)
console.log(`Generated catalog with ${catalog.skills.length} skills`)
await client.send(
new PutObjectCommand({
Bucket: bucket,
Key: R2_KEY,
Body: body,
ContentType: 'application/json',
CacheControl: 'public, max-age=300',
}),
)
console.log(`Uploaded to R2: ${bucket}/${R2_KEY}`)


@@ -58,6 +58,9 @@ func runWatch(cmd *cobra.Command, args []string) error {
userDataDir = dir
proc.LogMsgf(proc.TagInfo, "Created fresh profile: %s", userDataDir)
} else {
if err := os.MkdirAll(userDataDir, 0o755); err != nil {
return fmt.Errorf("creating user-data dir: %w", err)
}
proc.LogMsg(proc.TagInfo, "Killing processes on preferred ports...")
proc.KillPorts(defaultPorts)
proc.LogMsg(proc.TagInfo, "Ports cleared")


@@ -1,4 +1,4 @@
BROWSEROS_MAJOR=0
BROWSEROS_MINOR=43
BROWSEROS_BUILD=0
BROWSEROS_PATCH=1
BROWSEROS_PATCH=2


@@ -1,39 +0,0 @@
#!/usr/bin/env python3
"""
Save clipboard image to a specified path.
Usage: python scripts/save_clipboard.py <output_path>
"""
import sys
import os

try:
    from PIL import ImageGrab
except ImportError:
    print("Installing Pillow...")
    import subprocess
    subprocess.check_call([sys.executable, "-m", "pip", "install", "Pillow", "-q"])
    from PIL import ImageGrab


def main():
    if len(sys.argv) != 2:
        print("Usage: python scripts/save_clipboard.py <output_path>")
        print("Example: python scripts/save_clipboard.py docs/images/screenshot.png")
        sys.exit(1)

    output_path = sys.argv[1]

    # Ensure directory exists
    os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)

    # Grab from clipboard
    img = ImageGrab.grabclipboard()
    if img is None:
        print("❌ No image in clipboard. Copy an image first (Cmd+C).")
        sys.exit(1)

    img.save(output_path)
    print(f"✅ Saved to {output_path}")


if __name__ == "__main__":
    main()


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
DIR="packages/browseros-agent"
BRANCH="${1:-main}"
git -C "$DIR" fetch origin "$BRANCH" --tags
git -C "$DIR" checkout -q "$BRANCH"
git -C "$DIR" pull -q --ff-only origin "$BRANCH"
NEW_SHA=$(git -C "$DIR" rev-parse --short HEAD)
git add "$DIR"
git commit -m "chore: sync packages/browseros-agent submodule (to $NEW_SHA)" || { echo "No changes"; exit 0; }
echo "Bumped $DIR to $NEW_SHA"