feat: add GitHub Copilot as OAuth LLM provider (#500)

* feat: add GitHub Copilot as OAuth-based LLM provider

Add GitHub Copilot as a second OAuth provider using the Device Code flow
(RFC 8628). Users authenticate via github.com/login/device, and the server
polls for token completion. Supports 25+ models through a single Copilot
subscription.

Key changes:
- Device Code OAuth flow in token manager (poll with safety margin)
- Custom fetch wrapper injecting Copilot headers + vision detection
- Provider factory using createOpenAICompatible for Chat Completions API
- Extension UI with template card, auto-create on auth, and disconnect
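The Device Code flow summarized above can be sketched roughly as follows. The two GitHub endpoints are the documented device-flow endpoints, but every function name, the polling safety margin, and the error handling here are illustrative assumptions, not the token manager's actual code:

```typescript
// Hedged sketch of the RFC 8628 Device Code flow against GitHub's endpoints.
// Names (startDeviceFlow, pollForToken, pollIntervalMs) are illustrative only.
const DEVICE_CODE_URL = 'https://github.com/login/device/code'
const TOKEN_URL = 'https://github.com/login/oauth/access_token'

interface DeviceCodeResponse {
  device_code: string
  user_code: string
  verification_uri: string
  expires_in: number // seconds until the device code expires
  interval: number // minimum seconds between polls
}

// Poll interval with a 1s safety margin so we never poll faster than allowed
function pollIntervalMs(intervalSeconds: number): number {
  return (intervalSeconds + 1) * 1000
}

async function startDeviceFlow(clientId: string): Promise<DeviceCodeResponse> {
  const res = await fetch(DEVICE_CODE_URL, {
    method: 'POST',
    headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: clientId }),
  })
  const data = (await res.json()) as DeviceCodeResponse & { error?: string }
  // GitHub can return HTTP 200 with an error payload, so validate the body
  if (data.error || !data.device_code) {
    throw new Error(data.error ?? 'Invalid device code response')
  }
  return data
}

async function pollForToken(
  clientId: string,
  flow: DeviceCodeResponse,
): Promise<string> {
  const deadline = Date.now() + flow.expires_in * 1000
  while (Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, pollIntervalMs(flow.interval)))
    const res = await fetch(TOKEN_URL, {
      method: 'POST',
      headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
      body: JSON.stringify({
        client_id: clientId,
        device_code: flow.device_code,
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
      }),
    })
    const data = (await res.json()) as { access_token?: string; error?: string }
    if (data.access_token) return data.access_token
    // 'authorization_pending' just means the user hasn't entered the code yet
    if (data.error && data.error !== 'authorization_pending') {
      throw new Error(data.error)
    }
  }
  throw new Error('Device code expired before authorization completed')
}
```

The user enters `user_code` at the `verification_uri` page while the server loops in `pollForToken` until GitHub issues the access token.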

* fix: address PR review comments for GitHub Copilot OAuth

- Validate device code response for error fields (GitHub can return 200
  with error payload)
- Store empty refreshToken instead of access token for GitHub tokens
- Add closeButton to Toaster for dismissing device code toast

* fix: add github-copilot to agent provider factory

The chat route uses a separate provider-factory.ts (agent layer) from the
test-provider route (llm/provider.ts). Added createGitHubCopilotFactory
to the agent factory so chat works with GitHub Copilot.

* fix: add github-copilot to provider icons, models, and dialog

- Add Github icon from lucide-react to providerIcons map
- Add 8 Copilot models (GPT-4o, Claude, Gemini, Grok) to models.ts
- Add github-copilot to NewProviderDialog zod enum, validation skip,
  canTest check, and OAuth credential message

* fix: reorder copilot models with free-tier models first

Put models available on Copilot Free at the top (gpt-4o, gpt-4.1,
gpt-5-mini, claude-haiku-4.5, grok-code-fast-1), followed by
premium models that require a paid Copilot subscription.

* fix: set correct 64K context window for Copilot models

Copilot API enforces a 64K input token limit regardless of the
underlying model's native context window. Updated all model entries
and the default template to 64000 so compaction triggers correctly.

* fix: use actual per-model prompt limits from Copilot /models API

Queried api.githubcopilot.com/models for real max_prompt_tokens values.
GPT-4o/4.1 have 64K, Claude/gpt-5-mini have 128K, GPT-5.x have 272K.
Also updated model list to match what's actually available on the API
(e.g. claude-sonnet-4.6 instead of 4.5, added gpt-5.4/5.2-codex).

* feat: resize images for Copilot using VS Code's algorithm

Large screenshots cause 413 errors on Copilot's API. Resize images
following VS Code's approach: max 2048px longest side, 768px shortest
side, re-encode as JPEG at 75% quality. Uses sharp for server-side
image processing.
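The dimension math described above can be sketched as a pure function; `targetDimensions` is an illustrative name under the stated limits, not the actual implementation:

```typescript
// Sketch of VS Code's resize rule: scale so the longest side fits 2048px and
// the shortest side fits 768px, preserving aspect ratio.
const MAX_LONG_SIDE = 2048
const MAX_SHORT_SIDE = 768

function targetDimensions(
  width: number,
  height: number,
): { width: number; height: number } {
  const longSide = Math.max(width, height)
  const shortSide = Math.min(width, height)
  // Pick the strictest scale factor; 1 means the image already fits
  const scale = Math.min(1, MAX_LONG_SIDE / longSide, MAX_SHORT_SIDE / shortSide)
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  }
}
```

A 4K screenshot (4096×2160) comes out at roughly 1456×768 under this rule, while images already within both limits are returned unchanged.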

* fix: address all Greptile P1 review comments

- Add .catch() on fire-and-forget pollDeviceCode to prevent unhandled
  rejection crashes (Node 15+)
- Add deduplication guard (activeDeviceFlows Set) to prevent concurrent
  device code flows for the same provider
- Add runtime validation of server response in frontend before calling
  window.open() and showing toast
- Remove dead GITHUB_DEVICE_VERIFICATION constant from urls.ts
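The first two fixes can be sketched together; `activeDeviceFlows` and `beginDeviceFlow` are hypothetical names, not the actual token-manager API:

```typescript
// Hypothetical sketch of the two P1 fixes: a Set-based deduplication guard
// plus a .catch() on the fire-and-forget poll so a rejected promise cannot
// crash the process (unhandled rejections are fatal on Node 15+).
const activeDeviceFlows = new Set<string>()

function beginDeviceFlow(
  providerId: string,
  poll: () => Promise<void>,
): boolean {
  // Deduplication: refuse a second concurrent flow for the same provider
  if (activeDeviceFlows.has(providerId)) return false
  activeDeviceFlows.add(providerId)
  poll()
    .catch((err) => {
      console.error('device code polling failed', err)
    })
    .finally(() => activeDeviceFlows.delete(providerId))
  return true
}
```

A second call for the same provider while a flow is in flight returns `false` instead of spawning a competing poll loop.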

* fix: upgrade biome to 2.4.8, fix all lint errors, and address review bugs

- Upgrade biome from 2.4.5 to 2.4.8 (matches CI) and migrate configs
- Fix image resize: only re-encode when dimensions actually change
- Fix device code polling: retry on transient network errors instead of aborting
- Allow restarting device code flow (clear old flow instead of throwing 500)
- Fix pre-existing noNonNullAssertion and noExplicitAny lint errors globally

* fix: address Greptile P2 review — image resize and config guard

- Fix early-return guard: check max/min sides against their respective
  limits (MAX_LONG_SIDE/MAX_SHORT_SIDE) instead of both against SHORT
- Preserve PNG alpha: detect hasAlpha and keep PNG format instead of
  unconditionally converting to lossy JPEG
- Keep browserosId guard in resolveGitHubCopilotConfig consistent with
  ChatGPT Pro pattern (safety check that caller context is valid)
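The corrected early-return guard amounts to checking each side against its own limit; `needsResize` is an illustrative name for the check described in the first bullet:

```typescript
// Limits from the commit message: VS Code's image caps for Copilot vision
const MAX_LONG_SIDE = 2048
const MAX_SHORT_SIDE = 768

// Corrected guard: compare the long side to MAX_LONG_SIDE and the short side
// to MAX_SHORT_SIDE (the bug compared both sides to the short-side limit).
function needsResize(width: number, height: number): boolean {
  const longSide = Math.max(width, height)
  const shortSide = Math.min(width, height)
  return longSide > MAX_LONG_SIDE || shortSide > MAX_SHORT_SIDE
}
```

With the buggy version, a 1920×1080 screenshot (long side under 2048, short side over 768) could slip past the guard unresized.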

* feat: update Copilot models to full list from pricing page, default to gpt-5-mini

Added all 23 models from GitHub Copilot pricing page. Ordered with
free-tier models first (gpt-5-mini, claude-haiku-4.5), then premium.
Changed the default from gpt-4o to gpt-5-mini since it's unlimited on
the Pro plan and has a 128K context window (vs gpt-4o's 64K limit).
Author: shivammittal274
Date: 2026-03-20 02:33:09 +05:30
Committed by: GitHub
Parent: cee9c764b1
Commit: 720baaed3e
28 changed files with 607 additions and 34 deletions

View File

@@ -1,5 +1,5 @@
 {
-  "$schema": "https://biomejs.dev/schemas/2.4.5/schema.json",
+  "$schema": "https://biomejs.dev/schemas/2.4.8/schema.json",
   "root": false,
   "extends": "//",
   "vcs": {

View File

@@ -18,6 +18,7 @@ const Toaster = ({ ...props }: ToasterProps) => {
     <Sonner
       theme={theme as ToasterProps['theme']}
      className="toaster group"
+      closeButton
       icons={{
         success: <CircleCheckIcon className="size-4" />,
         info: <InfoIcon className="size-4" />,

View File

@@ -17,6 +17,9 @@ import {
   CHATGPT_PRO_OAUTH_COMPLETED_EVENT,
   CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT,
   CHATGPT_PRO_OAUTH_STARTED_EVENT,
+  GITHUB_COPILOT_OAUTH_COMPLETED_EVENT,
+  GITHUB_COPILOT_OAUTH_DISCONNECTED_EVENT,
+  GITHUB_COPILOT_OAUTH_STARTED_EVENT,
 } from '@/lib/constants/analyticsEvents'
 import { GetProfileIdByUserIdDocument } from '@/lib/conversations/graphql/uploadConversationDocument'
 import { getQueryKeyFromDocument } from '@/lib/graphql/getQueryKeyFromDocument'
@@ -118,8 +121,16 @@ export const AISettingsPage: FC = () => {
     disconnect: disconnectChatGPTPro,
   } = useOAuthStatus('chatgpt-pro')
+  // OAuth status for GitHub Copilot
+  const {
+    status: copilotStatus,
+    startPolling: startCopilotPolling,
+    disconnect: disconnectCopilot,
+  } = useOAuthStatus('github-copilot')

   // Track whether user explicitly started an OAuth flow this session
   const oauthFlowStartedRef = useRef(false)
+  const copilotOAuthStartedRef = useRef(false)

   // Auto-create provider only when user actively completed OAuth,
   // not on passive page load when server has old tokens
@@ -162,6 +173,42 @@ export const AISettingsPage: FC = () => {
     }
   }, [chatgptProStatus?.authenticated])

+  // Auto-create GitHub Copilot provider on successful OAuth
+  // biome-ignore lint/correctness/useExhaustiveDependencies: intentional — only trigger on auth status change
+  useEffect(() => {
+    if (!copilotStatus?.authenticated) return
+    if (!copilotOAuthStartedRef.current) return
+    const exists = providers.some((p) => p.type === 'github-copilot')
+    if (exists) return
+    const now = Date.now()
+    try {
+      const template = getProviderTemplate('github-copilot')
+      saveProvider({
+        id: `github-copilot-${now}`,
+        type: 'github-copilot',
+        name: 'GitHub Copilot',
+        modelId: template?.defaultModelId ?? 'gpt-4o',
+        supportsImages: template?.supportsImages ?? true,
+        contextWindow: template?.contextWindow ?? 128000,
+        temperature: 0.2,
+        createdAt: now,
+        updatedAt: now,
+      })
+      track(GITHUB_COPILOT_OAUTH_COMPLETED_EVENT)
+      toast.success('GitHub Copilot Connected', {
+        description: 'Successfully authenticated with GitHub Copilot',
+      })
+    } catch (err) {
+      toast.error('Failed to create GitHub Copilot provider', {
+        description: err instanceof Error ? err.message : 'Unknown error',
+      })
+    } finally {
+      copilotOAuthStartedRef.current = false
+    }
+  }, [copilotStatus?.authenticated])
+
   const handleAddProvider = () => {
     setTemplateValues(undefined)
     setIsNewDialogOpen(true)
@@ -173,6 +220,10 @@ export const AISettingsPage: FC = () => {
       handleStartChatGPTProOAuth()
       return
     }
+    if (template.id === 'github-copilot') {
+      handleStartGitHubCopilotOAuth()
+      return
+    }

     setTemplateValues({
       type: template.id,
@@ -207,6 +258,47 @@ export const AISettingsPage: FC = () => {
     })
   }

+  const handleStartGitHubCopilotOAuth = async () => {
+    if (!agentServerUrl) {
+      toast.error('Server not available', {
+        description: 'Cannot start OAuth flow without server connection.',
+      })
+      return
+    }
+    copilotOAuthStartedRef.current = true
+    try {
+      // Device Code flow: get user code from server, then open GitHub
+      const res = await fetch(`${agentServerUrl}/oauth/github-copilot/start`)
+      if (!res.ok) throw new Error(`Server returned ${res.status}`)
+      const data = (await res.json()) as {
+        userCode?: string
+        verificationUri?: string
+      }
+      if (!data.userCode || !data.verificationUri) {
+        throw new Error('Invalid response from server')
+      }
+      // Open GitHub device verification page
+      window.open(data.verificationUri, '_blank')
+      // Start polling for completion
+      startCopilotPolling()
+      track(GITHUB_COPILOT_OAUTH_STARTED_EVENT)
+      toast.info(`Enter code: ${data.userCode}`, {
+        description: 'Paste this code on the GitHub page that just opened.',
+        duration: 60_000,
+      })
+    } catch (err) {
+      copilotOAuthStartedRef.current = false
+      toast.error('Failed to start GitHub Copilot authentication', {
+        description: err instanceof Error ? err.message : 'Unknown error',
+      })
+    }
+  }
+
   const handleEditProvider = (provider: LlmProviderConfig) => {
     setEditingProvider(provider)
     setIsEditDialogOpen(true)
@@ -223,6 +315,10 @@ export const AISettingsPage: FC = () => {
       await disconnectChatGPTPro()
       track(CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT)
     }
+    if (providerToDelete.type === 'github-copilot') {
+      await disconnectCopilot()
+      track(GITHUB_COPILOT_OAUTH_DISCONNECTED_EVENT)
+    }
     await deleteProvider(providerToDelete.id)
     deleteRemoteProviderMutation.mutate({ rowId: providerToDelete.id })
     setProviderToDelete(null)

View File

@@ -62,6 +62,7 @@ const providerTypeEnum = z.enum([
   'bedrock',
   'browseros',
   'chatgpt-pro',
+  'github-copilot',
 ])

 /**
@@ -131,8 +132,8 @@ export const providerFormSchema = z
       })
     }
   }
-  // ChatGPT Pro: no credentials needed (server-managed OAuth)
-  else if (data.type === 'chatgpt-pro') {
+  // OAuth providers: no credentials needed (server-managed)
+  else if (data.type === 'chatgpt-pro' || data.type === 'github-copilot') {
     // No validation needed — OAuth tokens are on the server
   }
   // Other providers: require baseUrl
@@ -377,8 +378,9 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
   const canTest = (): boolean => {
     if (!watchedModelId) return false
-    // ChatGPT Pro: always testable (server has the OAuth token)
-    if (watchedType === 'chatgpt-pro') return true
+    // OAuth providers: always testable (server has the OAuth token)
+    if (watchedType === 'chatgpt-pro' || watchedType === 'github-copilot')
+      return true

     if (watchedType === 'azure') {
       return !!(watchedResourceName || watchedBaseUrl) && !!watchedApiKey
@@ -461,6 +463,14 @@ export const NewProviderDialog: FC<NewProviderDialogProps> = ({
   }

   const renderProviderSpecificFields = () => {
+    // GitHub Copilot: OAuth credentials only
+    if (watchedType === 'github-copilot') {
+      return (
+        <div className="rounded-lg border border-green-200 bg-green-50 p-3 text-green-700 text-sm dark:border-green-800 dark:bg-green-950 dark:text-green-300">
+          Credentials are managed via GitHub OAuth. No API key needed.
+        </div>
+      )
+    }
     // ChatGPT Pro: OAuth credentials + Codex reasoning settings
     if (watchedType === 'chatgpt-pro') {
       return (

View File

@@ -24,6 +24,7 @@ export interface ModelsData {
   browseros: ModelInfo[]
   moonshot: ModelInfo[]
   'chatgpt-pro': ModelInfo[]
+  'github-copilot': ModelInfo[]
 }

 /**
@@ -101,6 +102,32 @@ export const MODELS_DATA: ModelsData = {
     { modelId: 'gpt-5.1-codex-mini', contextLength: 400000 },
     { modelId: 'gpt-5.1', contextLength: 200000 },
   ],
+  'github-copilot': [
+    // Free tier (unlimited with Pro)
+    { modelId: 'gpt-5-mini', contextLength: 128000 },
+    { modelId: 'claude-haiku-4.5', contextLength: 128000 },
+    { modelId: 'gpt-4o', contextLength: 64000 },
+    { modelId: 'gpt-4.1', contextLength: 64000 },
+    // Premium models (Pro: 300/mo, Pro+: 1500/mo)
+    { modelId: 'claude-sonnet-4.6', contextLength: 128000 },
+    { modelId: 'claude-sonnet-4.5', contextLength: 128000 },
+    { modelId: 'claude-sonnet-4', contextLength: 128000 },
+    { modelId: 'claude-opus-4.6', contextLength: 128000 },
+    { modelId: 'claude-opus-4.5', contextLength: 128000 },
+    { modelId: 'gemini-2.5-pro', contextLength: 128000 },
+    { modelId: 'gemini-3-pro-preview', contextLength: 128000 },
+    { modelId: 'gemini-3-flash-preview', contextLength: 128000 },
+    { modelId: 'gemini-3.1-pro-preview', contextLength: 128000 },
+    { modelId: 'gpt-5.4', contextLength: 272000 },
+    { modelId: 'gpt-5.4-mini', contextLength: 128000 },
+    { modelId: 'gpt-5.3-codex', contextLength: 272000 },
+    { modelId: 'gpt-5.2-codex', contextLength: 272000 },
+    { modelId: 'gpt-5.2', contextLength: 128000 },
+    { modelId: 'gpt-5.1-codex', contextLength: 128000 },
+    { modelId: 'gpt-5.1-codex-max', contextLength: 128000 },
+    { modelId: 'gpt-5.1', contextLength: 128000 },
+    { modelId: 'grok-code-fast-1', contextLength: 128000 },
+  ],
 }

 /**

View File

@@ -24,6 +24,7 @@ export const useGetUserMCPIntegrations = () => {
   const query = useQuery({
     queryKey: [INTEGRATIONS_QUERY_KEY, agentServerUrl],
+    // biome-ignore lint/style/noNonNullAssertion: guarded by enabled
     queryFn: () => getUserMCPIntegrations(agentServerUrl!),
     enabled: !!agentServerUrl,
     refetchOnWindowFocus: true,

View File

@@ -1,5 +1,6 @@
 import { AlertCircle, Eye, Pencil, Plus, Trash2, Wand2 } from 'lucide-react'
 import { type FC, useEffect, useState } from 'react'
+import Markdown from 'react-markdown'
 import { toast } from 'sonner'
 import {
   AlertDialog,
@@ -26,7 +27,6 @@ import { Label } from '@/components/ui/label'
 import { MarkdownEditor } from '@/components/ui/MarkdownEditor'
 import { Switch } from '@/components/ui/switch'
 import { Textarea } from '@/components/ui/textarea'
-import Markdown from 'react-markdown'
 import { type SkillDetail, type SkillMeta, useSkills } from './useSkills'

 const loadingSkillCards = [
@@ -330,9 +330,15 @@ const SkillCard: FC<{
         className="-ml-2 h-7 px-2 text-muted-foreground hover:bg-transparent hover:text-foreground"
       >
         {skill.builtIn ? (
-          <><Eye className="size-3.5" />View</>
+          <>
+            <Eye className="size-3.5" />
+            View
+          </>
         ) : (
-          <><Pencil className="size-3.5" />Edit</>
+          <>
+            <Pencil className="size-3.5" />
+            Edit
+          </>
         )}
       </Button>
       {!skill.builtIn ? (
@@ -408,7 +414,11 @@ const SkillDialog: FC<{
     <DialogContent className="flex max-h-[90vh] flex-col gap-0 overflow-hidden p-0 sm:max-w-5xl">
       <DialogHeader className="border-b px-6 py-5">
         <DialogTitle>
-          {readOnly ? 'View Skill' : editingSkill ? 'Edit Skill' : 'Create Skill'}
+          {readOnly
+            ? 'View Skill'
+            : editingSkill
+              ? 'Edit Skill'
+              : 'Create Skill'}
         </DialogTitle>
         <DialogDescription>
           {readOnly
@@ -472,7 +482,7 @@ const SkillDialog: FC<{
       </div>
       {readOnly ? (
-        <div className="prose prose-sm mt-4 min-h-[320px] max-w-none flex-1 overflow-y-auto rounded-md border p-4 text-sm dark:prose-invert">
+        <div className="prose prose-sm dark:prose-invert mt-4 min-h-[320px] max-w-none flex-1 overflow-y-auto rounded-md border p-4 text-sm">
           <Markdown>{content}</Markdown>
         </div>
       ) : (

View File

@@ -502,6 +502,7 @@ export const useChatSession = (options?: ChatSessionOptions) => {
     if (pending.action) {
       setTextToAction((prev) => {
         const next = new Map(prev)
+        // biome-ignore lint/style/noNonNullAssertion: guarded by if (pending.action) above
         next.set(pending.text, pending.action!)
         return next
       })

View File

@@ -41,6 +41,18 @@ export const CHATGPT_PRO_OAUTH_COMPLETED_EVENT =
 export const CHATGPT_PRO_OAUTH_DISCONNECTED_EVENT =
   'settings.chatgpt_pro.oauth_disconnected'

+/** @public */
+export const GITHUB_COPILOT_OAUTH_STARTED_EVENT =
+  'settings.github_copilot.oauth_started'
+/** @public */
+export const GITHUB_COPILOT_OAUTH_COMPLETED_EVENT =
+  'settings.github_copilot.oauth_completed'
+/** @public */
+export const GITHUB_COPILOT_OAUTH_DISCONNECTED_EVENT =
+  'settings.github_copilot.oauth_disconnected'
+
 /** @public */
 export const HUB_PROVIDER_ADDED_EVENT = 'settings.hub_provider.added'

View File

@@ -9,7 +9,7 @@ import {
   OpenAI,
   OpenRouter,
 } from '@lobehub/icons'
-import { Bot } from 'lucide-react'
+import { Bot, Github } from 'lucide-react'
 import type { FC, SVGProps } from 'react'
 import ProductLogoSvg from '@/assets/product_logo.svg'
 import type { ProviderType } from './types'
@@ -33,6 +33,7 @@ const providerIconMap: Record<ProviderType, IconComponent | null> = {
   browseros: null,
   moonshot: Kimi,
   'chatgpt-pro': OpenAI,
+  'github-copilot': Github,
 }

 interface ProviderIconProps {

View File

@@ -29,6 +29,15 @@ export const providerTemplates: ProviderTemplate[] = [
     contextWindow: 400000,
     setupGuideUrl: 'https://docs.browseros.com/features/chatgpt-pro-oauth',
   },
+  {
+    id: 'github-copilot',
+    name: 'GitHub Copilot',
+    defaultBaseUrl: 'https://api.githubcopilot.com',
+    defaultModelId: 'gpt-5-mini',
+    supportsImages: true,
+    contextWindow: 128000,
+    setupGuideUrl: 'https://docs.browseros.com/features/github-copilot-oauth',
+  },
   {
     id: 'moonshot',
     name: 'Moonshot AI',
@@ -139,6 +148,7 @@ export const providerTemplates: ProviderTemplate[] = [
  */
 export const providerTypeOptions: { value: ProviderType; label: string }[] = [
   { value: 'chatgpt-pro', label: 'ChatGPT Plus/Pro' },
+  { value: 'github-copilot', label: 'GitHub Copilot' },
   { value: 'moonshot', label: 'Moonshot AI' },
   { value: 'anthropic', label: 'Anthropic' },
   { value: 'openai', label: 'OpenAI' },
@@ -168,6 +178,7 @@ export const getProviderTemplate = (
  */
 export const DEFAULT_BASE_URLS: Record<ProviderType, string> = {
   'chatgpt-pro': 'https://chatgpt.com/backend-api',
+  'github-copilot': 'https://api.githubcopilot.com',
   moonshot: 'https://api.moonshot.ai/v1',
   anthropic: 'https://api.anthropic.com/v1',
   openai: 'https://api.openai.com/v1',

View File

@@ -15,6 +15,7 @@ export type ProviderType =
   | 'browseros'
   | 'moonshot'
   | 'chatgpt-pro'
+  | 'github-copilot'

 /**
  * LLM Provider configuration

View File

@@ -173,7 +173,9 @@ async function annotateScreenshot(
   const image = sharp(inputPath)
   const metadata = await image.metadata()
+  // biome-ignore lint/style/noNonNullAssertion: sharp metadata always has dimensions for valid images
   const imgWidth = metadata.width!
+  // biome-ignore lint/style/noNonNullAssertion: sharp metadata always has dimensions for valid images
   const imgHeight = metadata.height!

   const sx = Math.round(action.cssX * dpr)

View File

@@ -49,10 +49,13 @@ async function callMcpTool(
   const result = await Promise.race([toolPromise, timeoutPromise])
   const duration = Date.now() - start

-  if ((result as any).isError) {
+  const res = result as Record<string, unknown>
+  if (res.isError) {
+    const content = res.content as
+      | Array<{ type: string; text?: string }>
+      | undefined
     const errorText =
-      (result as any).content?.find((c: any) => c.type === 'text')?.text ||
-      'Unknown error'
+      content?.find((c) => c.type === 'text')?.text || 'Unknown error'
     return { success: false, error: errorText, duration }
   }
@@ -96,13 +99,19 @@ async function main() {
   })

   // Try structured content first
-  windowId = (result as any).structuredContent?.windowId
-  tabId = (result as any).structuredContent?.tabId
+  const createRes = result as Record<string, unknown>
+  const structured = createRes.structuredContent as
+    | Record<string, number>
+    | undefined
+  windowId = structured?.windowId ?? 0
+  tabId = structured?.tabId ?? 0

   // Fall back to parsing text
   if (!windowId || !tabId) {
-    const text =
-      (result as any).content?.find((c: any) => c.type === 'text')?.text || ''
+    const content = createRes.content as
+      | Array<{ type: string; text?: string }>
+      | undefined
+    const text = content?.find((c) => c.type === 'text')?.text || ''
     const windowMatch = text.match(/window\s+(\d+)/i)
     const tabMatch =
       text.match(/Tab ID:\s*(\d+)/i) || text.match(/tab\s+(\d+)/i)

View File

@@ -63,8 +63,8 @@
     "@ai-sdk/amazon-bedrock": "^4.0.62",
     "@ai-sdk/anthropic": "^3.0.46",
     "@ai-sdk/azure": "^3.0.31",
-    "@ai-sdk/google": "^3.0.30",
     "@ai-sdk/devtools": "^0.0.15",
+    "@ai-sdk/google": "^3.0.30",
     "@ai-sdk/mcp": "^1.0.21",
     "@ai-sdk/openai": "^3.0.30",
     "@ai-sdk/openai-compatible": "^2.0.30",
@@ -93,6 +93,7 @@
     "pino": "^9.6.0",
     "posthog-node": "^4.17.0",
     "puppeteer-core": "24.23.0",
+    "sharp": "^0.34.5",
     "ws": "^8.18.0",
     "zod": "^3.24.2",
     "zod-from-json-schema": "^0.1.0"

View File

@@ -4,10 +4,12 @@ import { createAzure } from '@ai-sdk/azure'
 import { createGoogleGenerativeAI } from '@ai-sdk/google'
 import { createOpenAI } from '@ai-sdk/openai'
 import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
+import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
 import { LLM_PROVIDERS } from '@browseros/shared/schemas/llm'
 import { createOpenRouter } from '@openrouter/ai-sdk-provider'
 import type { LanguageModel } from 'ai'
 import { createCodexFetch } from '../lib/clients/oauth/codex-fetch'
+import { createCopilotFetch } from '../lib/clients/oauth/copilot-fetch'
 import { logger } from '../lib/logger'
 import { createOpenRouterCompatibleFetch } from '../lib/openrouter-fetch'
 import type { ResolvedAgentConfig } from './types'
@@ -149,6 +151,19 @@
   })
 }

+function createGitHubCopilotFactory(
+  config: ResolvedAgentConfig,
+): (modelId: string) => unknown {
+  if (!config.apiKey)
+    throw new Error('GitHub Copilot requires OAuth authentication')
+  return createOpenAICompatible({
+    name: 'github-copilot',
+    baseURL: EXTERNAL_URLS.GITHUB_COPILOT_API,
+    apiKey: config.apiKey,
+    fetch: createCopilotFetch() as typeof globalThis.fetch,
+  })
+}
+
 function createChatGPTProFactory(
   config: ResolvedAgentConfig,
 ): (modelId: string) => unknown {
@@ -173,6 +188,7 @@ const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
   [LLM_PROVIDERS.OPENAI_COMPATIBLE]: createOpenAICompatibleFactory,
   [LLM_PROVIDERS.MOONSHOT]: createMoonshotFactory,
   [LLM_PROVIDERS.CHATGPT_PRO]: createChatGPTProFactory,
+  [LLM_PROVIDERS.GITHUB_COPILOT]: createGitHubCopilotFactory,
 }

 export function createLanguageModel(
export function createLanguageModel( export function createLanguageModel(

View File

@@ -29,6 +29,17 @@ export function createOAuthRoutes(deps: OAuthRouteDeps) {
     }

     try {
+      // Device Code flow: return JSON with user code for the extension to display
+      if (provider.authFlow === 'device-code') {
+        const result = await tokenManager.startDeviceCodeFlow(providerId)
+        return c.json({
+          userCode: result.userCode,
+          verificationUri: result.verificationUri,
+          expiresIn: result.expiresIn,
+        })
+      }
+
+      // PKCE flow: redirect to auth server
       const authUrl = await tokenManager.generateAuthorizationUrl(
         providerId,
         redirectBackUrl,

View File

@@ -17,10 +17,13 @@ export async function resolveLLMConfig(
   config: LLMConfig,
   browserosId?: string,
 ): Promise<ResolvedLLMConfig> {
-  // ChatGPT Pro: resolve OAuth token from server-side storage
+  // OAuth providers: resolve token from server-side storage
   if (config.provider === LLM_PROVIDERS.CHATGPT_PRO) {
     return resolveChatGPTProConfig(config, browserosId)
   }
+  if (config.provider === LLM_PROVIDERS.GITHUB_COPILOT) {
+    return resolveGitHubCopilotConfig(config, browserosId)
+  }

   // BrowserOS gateway: fetch config from remote service
   if (config.provider === LLM_PROVIDERS.BROWSEROS) {
@@ -61,6 +64,32 @@
   }
 }

+async function resolveGitHubCopilotConfig(
+  config: LLMConfig,
+  browserosId?: string,
+): Promise<ResolvedLLMConfig> {
+  const tokenManager = getOAuthTokenManager()
+  if (!tokenManager || !browserosId) {
+    throw new Error(
+      'Not authenticated with GitHub Copilot. Please login first.',
+    )
+  }
+  // GitHub tokens never expire — no refresh needed
+  const tokens = tokenManager.getTokens('github-copilot')
+  if (!tokens) {
+    throw new Error(
+      'Not authenticated with GitHub Copilot. Please login first.',
+    )
+  }
+  return {
+    ...config,
+    model: config.model || 'gpt-5-mini',
+    apiKey: tokens.accessToken,
+  }
+}
+
 async function resolveBrowserOSConfig(
   config: LLMConfig,
   browserosId?: string,

View File

@@ -12,12 +12,14 @@ import { createAzure } from '@ai-sdk/azure'
 import { createGoogleGenerativeAI } from '@ai-sdk/google'
 import { createOpenAI } from '@ai-sdk/openai'
 import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
+import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
 import { LLM_PROVIDERS } from '@browseros/shared/schemas/llm'
 import { createOpenRouter } from '@openrouter/ai-sdk-provider'
 import type { LanguageModel } from 'ai'
 import { logger } from '../../logger'
 import { createOpenRouterCompatibleFetch } from '../../openrouter-fetch'
 import { createCodexFetch } from '../oauth/codex-fetch'
+import { createCopilotFetch } from '../oauth/copilot-fetch'
 import type { ResolvedLLMConfig } from './types'

 type ProviderFactory = (config: ResolvedLLMConfig) => LanguageModel
@@ -135,6 +137,17 @@ function createMoonshotModel(config: ResolvedLLMConfig): LanguageModel {
   })(config.model)
 }

+function createGitHubCopilotModel(config: ResolvedLLMConfig): LanguageModel {
+  if (!config.apiKey)
+    throw new Error('GitHub Copilot requires OAuth authentication')
+  return createOpenAICompatible({
+    name: 'github-copilot',
+    baseURL: EXTERNAL_URLS.GITHUB_COPILOT_API,
+    apiKey: config.apiKey,
+    fetch: createCopilotFetch() as typeof globalThis.fetch,
+  })(config.model)
+}
+
 function createChatGPTProModel(config: ResolvedLLMConfig): LanguageModel {
   if (!config.apiKey)
     throw new Error('ChatGPT Plus/Pro requires OAuth authentication')
@@ -157,6 +170,7 @@ const PROVIDER_FACTORIES: Record<string, ProviderFactory> = {
   [LLM_PROVIDERS.OPENAI_COMPATIBLE]: createOpenAICompatibleModel,
   [LLM_PROVIDERS.MOONSHOT]: createMoonshotModel,
   [LLM_PROVIDERS.CHATGPT_PRO]: createChatGPTProModel,
+  [LLM_PROVIDERS.GITHUB_COPILOT]: createGitHubCopilotModel,
 }

 export function createLLMProvider(config: ResolvedLLMConfig): LanguageModel {

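The hunk above adds one entry to a provider-id-to-factory map. As a minimal, self-contained sketch of that dispatch pattern, with simplified stand-in types rather than the project's real `ResolvedLLMConfig`/`LanguageModel`:

```typescript
// Simplified stand-ins for the real config and model types.
interface Config {
  provider: string
  model: string
  apiKey?: string
}
type Factory = (config: Config) => string

// Each provider id maps to one factory; adding a provider is one map entry.
const FACTORIES: Record<string, Factory> = {
  'github-copilot': (c) => {
    // OAuth-only provider: the stored access token arrives as apiKey.
    if (!c.apiKey) throw new Error('GitHub Copilot requires OAuth authentication')
    return `copilot:${c.model}`
  },
  'openai-compatible': (c) => `openai-compatible:${c.model}`,
}

function createModel(config: Config): string {
  const factory = FACTORIES[config.provider]
  if (!factory) throw new Error(`Unsupported provider: ${config.provider}`)
  return factory(config)
}
```

The map keeps `createLLMProvider` itself free of per-provider branching: unknown ids fail fast, and each factory owns its own credential checks.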

@@ -0,0 +1,135 @@
/**
 * @license
 * Copyright 2025 BrowserOS
 * SPDX-License-Identifier: AGPL-3.0-or-later
 *
 * Custom fetch wrapper for GitHub Copilot API requests.
 * Injects required Copilot headers and resizes images following
 * VS Code's algorithm (max 2048px longest side, 768px shortest side).
 */
import sharp from 'sharp'
import { logger } from '../../logger'

const MAX_LONG_SIDE = 2048
const MAX_SHORT_SIDE = 768

export function createCopilotFetch() {
  return async (input: RequestInfo | URL, init?: RequestInit) => {
    const headers = new Headers(init?.headers as HeadersInit)
    headers.set('Openai-Intent', 'conversation-edits')
    headers.set('x-initiator', 'user')
    let body = init?.body
    if (body && typeof body === 'string') {
      try {
        const json = JSON.parse(body)
        if (hasImageContent(json)) {
          headers.set('Copilot-Vision-Request', 'true')
          await shrinkImages(json)
          body = JSON.stringify(json)
        }
      } catch {
        // Not JSON or resize failed, send as-is
      }
    }
    return fetch(input, { ...init, headers, body })
  }
}

function hasImageContent(body: Record<string, unknown>): boolean {
  if (!Array.isArray(body.messages)) return false
  for (const msg of body.messages) {
    if (!Array.isArray(msg?.content)) continue
    for (const part of msg.content) {
      if (part?.type === 'image_url') return true
    }
  }
  return false
}

// Resize images following VS Code's algorithm for OpenAI vision token optimization
async function shrinkImages(body: Record<string, unknown>): Promise<void> {
  if (!Array.isArray(body.messages)) return
  for (const msg of body.messages) {
    if (!Array.isArray(msg?.content)) continue
    for (const part of msg.content) {
      if (part?.type !== 'image_url' || !part.image_url) continue
      const url = part.image_url.url as string
      if (!url?.startsWith('data:')) continue
      try {
        const resized = await resizeDataUrl(url)
        if (resized) part.image_url.url = resized
      } catch (err) {
        logger.warn('Failed to resize image for Copilot', {
          error: err instanceof Error ? err.message : String(err),
        })
      }
    }
  }
}

async function resizeDataUrl(dataUrl: string): Promise<string | null> {
  const commaIdx = dataUrl.indexOf(',')
  if (commaIdx === -1) return null
  const base64Data = dataUrl.substring(commaIdx + 1)
  const buffer = Buffer.from(base64Data, 'base64')
  const image = sharp(buffer)
  const metadata = await image.metadata()
  if (!metadata.width || !metadata.height) return null
  let { width, height } = metadata
  // Skip if already within both limits (no resize step will fire)
  if (
    Math.max(width, height) <= MAX_LONG_SIDE &&
    Math.min(width, height) <= MAX_SHORT_SIDE
  ) {
    return null
  }
  // Step 1: scale longest side to 2048
  if (width > MAX_LONG_SIDE || height > MAX_LONG_SIDE) {
    const scale = MAX_LONG_SIDE / Math.max(width, height)
    width = Math.round(width * scale)
    height = Math.round(height * scale)
  }
  // Step 2: scale shortest side to 768
  const shortSide = Math.min(width, height)
  if (shortSide > MAX_SHORT_SIDE) {
    const scale = MAX_SHORT_SIDE / shortSide
    width = Math.round(width * scale)
    height = Math.round(height * scale)
  }
  // Preserve PNG for images with alpha, use JPEG otherwise
  const hasAlpha = metadata.channels === 4 || metadata.hasAlpha
  const resizedBuffer = hasAlpha
    ? await sharp(buffer)
        .resize(width, height, { fit: 'inside' })
        .png()
        .toBuffer()
    : await sharp(buffer)
        .resize(width, height, { fit: 'inside' })
        .jpeg({ quality: 75 })
        .toBuffer()
  const mime = hasAlpha ? 'image/png' : 'image/jpeg'
  const originalKB = Math.round(base64Data.length / 1024)
  const resizedB64 = resizedBuffer.toString('base64')
  const resizedKB = Math.round(resizedB64.length / 1024)
  logger.debug('Resized image for Copilot', {
    original: `${metadata.width}x${metadata.height} (${originalKB}KB)`,
    resized: `${width}x${height} (${resizedKB}KB)`,
  })
  return `data:${mime};base64,${resizedB64}`
}
}
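The two-step resize in `resizeDataUrl` is pure arithmetic once the sharp I/O is stripped away. A sketch of just that math under the same 2048/768 limits (`copilotTargetSize` is a hypothetical helper for illustration, not part of the PR):

```typescript
const MAX_LONG_SIDE = 2048
const MAX_SHORT_SIDE = 768

// Mirrors the two-step scaling above: first cap the longest side at 2048,
// then cap the shortest side at 768, rounding after each step.
function copilotTargetSize(
  width: number,
  height: number,
): { width: number; height: number } {
  // Already within both limits: no resize step fires.
  if (
    Math.max(width, height) <= MAX_LONG_SIDE &&
    Math.min(width, height) <= MAX_SHORT_SIDE
  ) {
    return { width, height }
  }
  // Step 1: scale longest side down to 2048.
  if (width > MAX_LONG_SIDE || height > MAX_LONG_SIDE) {
    const scale = MAX_LONG_SIDE / Math.max(width, height)
    width = Math.round(width * scale)
    height = Math.round(height * scale)
  }
  // Step 2: scale shortest side down to 768.
  const shortSide = Math.min(width, height)
  if (shortSide > MAX_SHORT_SIDE) {
    const scale = MAX_SHORT_SIDE / shortSide
    width = Math.round(width * scale)
    height = Math.round(height * scale)
  }
  return { width, height }
}
```

For example, a 4000x3000 screenshot is first scaled to 2048x1536 (longest side capped), then to 1024x768 (shortest side capped), while an 800x600 image passes through untouched.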


@@ -15,6 +15,7 @@ export interface OAuthProviderConfig {
  scopes: string[]
  extraAuthParams?: Record<string, string>
  upstreamLLMProvider: string
  authFlow?: 'pkce' | 'device-code'
}

export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {

@@ -32,6 +33,16 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
    },
    upstreamLLMProvider: 'openai',
  },
  'github-copilot': {
    id: 'github-copilot',
    name: 'GitHub Copilot',
    clientId: 'Ov23li8tweQw6odWQebz',
    authEndpoint: EXTERNAL_URLS.GITHUB_DEVICE_CODE,
    tokenEndpoint: EXTERNAL_URLS.GITHUB_OAUTH_TOKEN,
    scopes: ['read:user'],
    upstreamLLMProvider: 'github-copilot',
    authFlow: 'device-code',
  },
}

export function getOAuthProvider(


@@ -25,6 +25,26 @@ interface OAuthTokenResponse {
  id_token?: string
}

export interface DeviceCodeResult {
  userCode: string
  verificationUri: string
  expiresIn: number
}

interface GitHubDeviceCodeResponse {
  device_code: string
  user_code: string
  verification_uri: string
  expires_in: number
  interval: number
}

interface GitHubTokenPollResponse {
  access_token?: string
  error?: string
  interval?: number
}

export class OAuthTokenManager {
  private readonly pendingFlows = new Map<string, PendingOAuthFlow>()
  private readonly refreshLocks = new Map<

@@ -37,6 +57,8 @@ export class OAuthTokenManager {
    private readonly browserosId: string,
  ) {}

  // --- PKCE flow (ChatGPT Plus/Pro) ---

  async generateAuthorizationUrl(
    providerId: string,
    redirectBackUrl?: string,

@@ -138,16 +160,150 @@ export class OAuthTokenManager {
    return { tokens, redirectBackUrl: flow.redirectBackUrl }
  }

  // --- Device Code flow (GitHub Copilot) ---

  private readonly activeDeviceFlows = new Set<string>()

  async startDeviceCodeFlow(providerId: string): Promise<DeviceCodeResult> {
    const provider = getOAuthProvider(providerId)
    if (!provider) throw new Error(`Unknown OAuth provider: ${providerId}`)
    // Cancel any existing flow — user may be retrying
    this.activeDeviceFlows.delete(providerId)
    // Request a device code from GitHub
    const response = await fetch(provider.authEndpoint, {
      method: 'POST',
      headers: {
        Accept: 'application/json',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        client_id: provider.clientId,
        scope: provider.scopes.join(' '),
      }),
    })
    if (!response.ok) {
      throw new Error(`Failed to request device code: ${response.status}`)
    }
    const data = (await response.json()) as GitHubDeviceCodeResponse
    // GitHub can return 200 with an error payload (e.g. invalid scope)
    const dataObj = data as unknown as Record<string, unknown>
    if ('error' in dataObj) {
      throw new Error(`GitHub device code error: ${dataObj.error}`)
    }
    if (!data.device_code || !data.user_code) {
      throw new Error('Invalid device code response from GitHub')
    }
    // Start background polling with error handling
    this.activeDeviceFlows.add(providerId)
    this.pollDeviceCode(
      providerId,
      provider,
      data.device_code,
      data.interval,
      data.expires_in,
    ).finally(() => this.activeDeviceFlows.delete(providerId))
    return {
      userCode: data.user_code,
      verificationUri: data.verification_uri,
      expiresIn: data.expires_in,
    }
  }

  private async pollDeviceCode(
    providerId: string,
    provider: NonNullable<ReturnType<typeof getOAuthProvider>>,
    deviceCode: string,
    initialInterval: number,
    expiresIn: number,
  ): Promise<void> {
    let interval = initialInterval
    const deadline = Date.now() + expiresIn * 1000
    while (Date.now() < deadline) {
      // Wait before polling (interval + safety margin per OpenCode pattern)
      await sleep(interval * 1000 + TIMEOUTS.DEVICE_CODE_POLL_SAFETY_MARGIN)
      try {
        const response = await fetch(provider.tokenEndpoint, {
          method: 'POST',
          headers: {
            Accept: 'application/json',
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            client_id: provider.clientId,
            device_code: deviceCode,
            grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
          }),
        })
        const data = (await response.json()) as GitHubTokenPollResponse
        // Token received — store it and return
        if (data.access_token) {
          const tokens: StoredOAuthTokens = {
            accessToken: data.access_token,
            refreshToken: '',
            expiresAt: 0,
            email: undefined,
            accountId: undefined,
          }
          this.store.upsertTokens(this.browserosId, providerId, tokens)
          logger.info('Device code OAuth successful', { provider: providerId })
          return
        }
        // Handle polling errors per RFC 8628
        if (data.error === 'authorization_pending') continue
        if (data.error === 'slow_down') {
          interval = (data.interval ?? interval) + 5
          continue
        }
        if (data.error === 'expired_token' || data.error === 'access_denied') {
          logger.warn('Device code flow ended', {
            provider: providerId,
            error: data.error,
          })
          return
        }
        logger.warn('Unexpected device code poll response', {
          provider: providerId,
          error: data.error,
        })
        return
      } catch (err) {
        // Transient network error — loop continues to retry
        logger.warn('Device code poll request failed, retrying', {
          provider: providerId,
          error: err instanceof Error ? err.message : String(err),
        })
      }
    }
    logger.warn('Device code flow timed out', { provider: providerId })
  }

  // --- Token refresh (PKCE providers only) ---

  async refreshIfExpired(provider: string): Promise<StoredOAuthTokens | null> {
    const tokens = this.store.getTokens(this.browserosId, provider)
    if (!tokens) return null
    // GitHub Copilot tokens never expire (expiresAt = 0)
    if (tokens.expiresAt === 0) return tokens
    if (Date.now() < tokens.expiresAt - TIMEOUTS.OAUTH_TOKEN_EXPIRY_BUFFER) {
      return tokens
    }
    // If a refresh is already in progress, await it instead of starting another
    const existing = this.refreshLocks.get(provider)
    if (existing) return existing
@@ -214,6 +370,12 @@ export class OAuthTokenManager {
    return refreshed
  }

  // --- Shared ---

  getTokens(provider: string): StoredOAuthTokens | null {
    return this.store.getTokens(this.browserosId, provider)
  }

  getStatus(provider: string) {
    return this.store.getStatus(this.browserosId, provider)
  }

@@ -257,6 +419,10 @@ function base64UrlEncode(bytes: Uint8Array): string {
  return base64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
}

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

// Extracts claims without signature verification — safe because the token
// comes directly from OpenAI's HTTPS token endpoint. Do not reuse for
// caller-supplied or externally-sourced tokens.

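The error handling inside `pollDeviceCode` reduces to a small decision table over the RFC 8628 poll responses. A pure-function sketch of just that branch logic (the names `PollAction` and `nextPollAction` are illustrative, not the PR's API):

```typescript
type PollAction =
  | { kind: 'done'; accessToken: string }
  | { kind: 'wait'; interval: number }
  | { kind: 'stop'; reason: string }

interface PollResponse {
  access_token?: string
  error?: string
  interval?: number
}

// RFC 8628 §3.5: authorization_pending → keep polling at the same interval;
// slow_down → add 5 seconds; expired_token / access_denied (or anything
// unrecognized) → stop polling.
function nextPollAction(res: PollResponse, interval: number): PollAction {
  if (res.access_token) return { kind: 'done', accessToken: res.access_token }
  if (res.error === 'authorization_pending') return { kind: 'wait', interval }
  if (res.error === 'slow_down')
    return { kind: 'wait', interval: (res.interval ?? interval) + 5 }
  return { kind: 'stop', reason: res.error ?? 'unknown' }
}
```

Factoring the decision out this way makes the loop body trivial to unit-test without mocking `fetch` or timers.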

@@ -1,5 +1,5 @@
{
  "$schema": "https://biomejs.dev/schemas/2.4.8/schema.json",
  "vcs": {
    "enabled": true,
    "clientKind": "git",


@@ -6,7 +6,7 @@
"name": "browseros-monorepo", "name": "browseros-monorepo",
"devDependencies": { "devDependencies": {
"@aws-sdk/client-s3": "^3.933.0", "@aws-sdk/client-s3": "^3.933.0",
"@biomejs/biome": "2.4.5", "@biomejs/biome": "2.4.8",
"@sentry/cli": "^2.42.2", "@sentry/cli": "^2.42.2",
"@types/bun": "^1.3.5", "@types/bun": "^1.3.5",
"@types/node": "^24.3.3", "@types/node": "^24.3.3",
@@ -207,6 +207,7 @@
"pino": "^9.6.0", "pino": "^9.6.0",
"posthog-node": "^4.17.0", "posthog-node": "^4.17.0",
"puppeteer-core": "24.23.0", "puppeteer-core": "24.23.0",
"sharp": "^0.34.5",
"ws": "^8.18.0", "ws": "^8.18.0",
"zod": "^3.24.2", "zod": "^3.24.2",
"zod-from-json-schema": "^0.1.0", "zod-from-json-schema": "^0.1.0",
@@ -450,23 +451,23 @@
"@better-fetch/fetch": ["@better-fetch/fetch@1.1.21", "", {}, "sha512-/ImESw0sskqlVR94jB+5+Pxjf+xBwDZF/N5+y2/q4EqD7IARUTSpPfIo8uf39SYpCxyOCtbyYpUrZ3F/k0zT4A=="], "@better-fetch/fetch": ["@better-fetch/fetch@1.1.21", "", {}, "sha512-/ImESw0sskqlVR94jB+5+Pxjf+xBwDZF/N5+y2/q4EqD7IARUTSpPfIo8uf39SYpCxyOCtbyYpUrZ3F/k0zT4A=="],
"@biomejs/biome": ["@biomejs/biome@2.4.5", "", { "optionalDependencies": { "@biomejs/cli-darwin-arm64": "2.4.5", "@biomejs/cli-darwin-x64": "2.4.5", "@biomejs/cli-linux-arm64": "2.4.5", "@biomejs/cli-linux-arm64-musl": "2.4.5", "@biomejs/cli-linux-x64": "2.4.5", "@biomejs/cli-linux-x64-musl": "2.4.5", "@biomejs/cli-win32-arm64": "2.4.5", "@biomejs/cli-win32-x64": "2.4.5" }, "bin": { "biome": "bin/biome" } }, "sha512-OWNCyMS0Q011R6YifXNOg6qsOg64IVc7XX6SqGsrGszPbkVCoaO7Sr/lISFnXZ9hjQhDewwZ40789QmrG0GYgQ=="], "@biomejs/biome": ["@biomejs/biome@2.4.8", "", { "optionalDependencies": { "@biomejs/cli-darwin-arm64": "2.4.8", "@biomejs/cli-darwin-x64": "2.4.8", "@biomejs/cli-linux-arm64": "2.4.8", "@biomejs/cli-linux-arm64-musl": "2.4.8", "@biomejs/cli-linux-x64": "2.4.8", "@biomejs/cli-linux-x64-musl": "2.4.8", "@biomejs/cli-win32-arm64": "2.4.8", "@biomejs/cli-win32-x64": "2.4.8" }, "bin": { "biome": "bin/biome" } }, "sha512-ponn0oKOky1oRXBV+rlSaUlixUxf1aZvWC19Z41zBfUOUesthrQqL3OtiAlSB1EjFjyWpn98Q64DHelhA6jNlA=="],
"@biomejs/cli-darwin-arm64": ["@biomejs/cli-darwin-arm64@2.4.5", "", { "os": "darwin", "cpu": "arm64" }, "sha512-lGS4Nd5O3KQJ6TeWv10mElnx1phERhBxqGP/IKq0SvZl78kcWDFMaTtVK+w3v3lusRFxJY78n07PbKplirsU5g=="], "@biomejs/cli-darwin-arm64": ["@biomejs/cli-darwin-arm64@2.4.8", "", { "os": "darwin", "cpu": "arm64" }, "sha512-ARx0tECE8I7S2C2yjnWYLNbBdDoPdq3oyNLhMglmuctThwUsuzFWRKrHmIGwIRWKz0Mat9DuzLEDp52hGnrxGQ=="],
"@biomejs/cli-darwin-x64": ["@biomejs/cli-darwin-x64@2.4.5", "", { "os": "darwin", "cpu": "x64" }, "sha512-6MoH4tyISIBNkZ2Q5T1R7dLd5BsITb2yhhhrU9jHZxnNSNMWl+s2Mxu7NBF8Y3a7JJcqq9nsk8i637z4gqkJxQ=="], "@biomejs/cli-darwin-x64": ["@biomejs/cli-darwin-x64@2.4.8", "", { "os": "darwin", "cpu": "x64" }, "sha512-Jg9/PsB9vDCJlANE8uhG7qDhb5w0Ix69D7XIIc8IfZPUoiPrbLm33k2Ig3NOJ/7nb3UbesFz3D1aDKm9DvzjhQ=="],
"@biomejs/cli-linux-arm64": ["@biomejs/cli-linux-arm64@2.4.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-U1GAG6FTjhAO04MyH4xn23wRNBkT6H7NentHh+8UxD6ShXKBm5SY4RedKJzkUThANxb9rUKIPc7B8ew9Xo/cWg=="], "@biomejs/cli-linux-arm64": ["@biomejs/cli-linux-arm64@2.4.8", "", { "os": "linux", "cpu": "arm64" }, "sha512-5CdrsJct76XG2hpKFwXnEtlT1p+4g4yV+XvvwBpzKsTNLO9c6iLlAxwcae2BJ7ekPGWjNGw9j09T5KGPKKxQig=="],
"@biomejs/cli-linux-arm64-musl": ["@biomejs/cli-linux-arm64-musl@2.4.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-iqLDgpzobG7gpBF0fwEVS/LT8kmN7+S0E2YKFDtqliJfzNLnAiV2Nnyb+ehCDCJgAZBASkYHR2o60VQWikpqIg=="], "@biomejs/cli-linux-arm64-musl": ["@biomejs/cli-linux-arm64-musl@2.4.8", "", { "os": "linux", "cpu": "arm64" }, "sha512-Zo9OhBQDJ3IBGPlqHiTISloo5H0+FBIpemqIJdW/0edJ+gEcLR+MZeZozcUyz3o1nXkVA7++DdRKQT0599j9jA=="],
"@biomejs/cli-linux-x64": ["@biomejs/cli-linux-x64@2.4.5", "", { "os": "linux", "cpu": "x64" }, "sha512-NdODlSugMzTlENPTa4z0xB82dTUlCpsrOxc43///aNkTLblIYH4XpYflBbf5ySlQuP8AA4AZd1qXhV07IdrHdQ=="], "@biomejs/cli-linux-x64": ["@biomejs/cli-linux-x64@2.4.8", "", { "os": "linux", "cpu": "x64" }, "sha512-PdKXspVEaMCQLjtZCn6vfSck/li4KX9KGwSDbZdgIqlrizJ2MnMcE3TvHa2tVfXNmbjMikzcfJpuPWH695yJrw=="],
"@biomejs/cli-linux-x64-musl": ["@biomejs/cli-linux-x64-musl@2.4.5", "", { "os": "linux", "cpu": "x64" }, "sha512-NlKa7GpbQmNhZf9kakQeddqZyT7itN7jjWdakELeXyTU3pg/83fTysRRDPJD0akTfKDl6vZYNT9Zqn4MYZVBOA=="], "@biomejs/cli-linux-x64-musl": ["@biomejs/cli-linux-x64-musl@2.4.8", "", { "os": "linux", "cpu": "x64" }, "sha512-Gi8quv8MEuDdKaPFtS2XjEnMqODPsRg6POT6KhoP+VrkNb+T2ywunVB+TvOU0LX1jAZzfBr+3V1mIbBhzAMKvw=="],
"@biomejs/cli-win32-arm64": ["@biomejs/cli-win32-arm64@2.4.5", "", { "os": "win32", "cpu": "arm64" }, "sha512-EBfrTqRIWOFSd7CQb/0ttjHMR88zm3hGravnDwUA9wHAaCAYsULKDebWcN5RmrEo1KBtl/gDVJMrFjNR0pdGUw=="], "@biomejs/cli-win32-arm64": ["@biomejs/cli-win32-arm64@2.4.8", "", { "os": "win32", "cpu": "arm64" }, "sha512-LoFatS0tnHv6KkCVpIy3qZCih+MxUMvdYiPWLHRri7mhi2vyOOs8OrbZBcLTUEWCS+ktO72nZMy4F96oMhkOHQ=="],
"@biomejs/cli-win32-x64": ["@biomejs/cli-win32-x64@2.4.5", "", { "os": "win32", "cpu": "x64" }, "sha512-Pmhv9zT95YzECfjEHNl3mN9Vhusw9VA5KHY0ZvlGsxsjwS5cb7vpRnHzJIv0vG7jB0JI7xEaMH9ddfZm/RozBw=="], "@biomejs/cli-win32-x64": ["@biomejs/cli-win32-x64@2.4.8", "", { "os": "win32", "cpu": "x64" }, "sha512-vAn7iXDoUbqFXqVocuq1sMYAd33p8+mmurqJkWl6CtIhobd/O6moe4rY5AJvzbunn/qZCdiDVcveqtkFh1e7Hg=="],
"@braintree/sanitize-url": ["@braintree/sanitize-url@7.1.1", "", {}, "sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw=="], "@braintree/sanitize-url": ["@braintree/sanitize-url@7.1.1", "", {}, "sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw=="],


@@ -45,7 +45,7 @@
"homepage": "https://github.com/browseros-ai/BrowserOS#readme", "homepage": "https://github.com/browseros-ai/BrowserOS#readme",
"devDependencies": { "devDependencies": {
"@aws-sdk/client-s3": "^3.933.0", "@aws-sdk/client-s3": "^3.933.0",
"@biomejs/biome": "2.4.5", "@biomejs/biome": "2.4.8",
"@sentry/cli": "^2.42.2", "@sentry/cli": "^2.42.2",
"@types/bun": "^1.3.5", "@types/bun": "^1.3.5",
"@types/node": "^24.3.3", "@types/node": "^24.3.3",


@@ -54,6 +54,7 @@ export const TIMEOUTS = {
  OAUTH_TOKEN_EXPIRY_BUFFER: 300_000,
  OAUTH_POLL_INTERVAL: 2_000,
  OAUTH_POLL_TIMEOUT: 300_000,
  DEVICE_CODE_POLL_SAFETY_MARGIN: 3_000,
} as const

export type TimeoutKey = keyof typeof TIMEOUTS


@@ -13,4 +13,7 @@ export const EXTERNAL_URLS = {
  OPENAI_AUTH: 'https://auth.openai.com/oauth/authorize',
  OPENAI_TOKEN: 'https://auth.openai.com/oauth/token',
  SKILLS_CATALOG: 'https://cdn.browseros.com/skills/v1/catalog.json',
  GITHUB_DEVICE_CODE: 'https://github.com/login/device/code',
  GITHUB_OAUTH_TOKEN: 'https://github.com/login/oauth/access_token',
  GITHUB_COPILOT_API: 'https://api.githubcopilot.com',
} as const


@@ -25,6 +25,7 @@ export const LLM_PROVIDERS = {
  OPENAI_COMPATIBLE: 'openai-compatible',
  MOONSHOT: 'moonshot',
  CHATGPT_PRO: 'chatgpt-pro',
  GITHUB_COPILOT: 'github-copilot',
} as const

/**

@@ -44,6 +45,7 @@ export const LLMProviderSchema: z.ZodEnum<
    'openai-compatible',
    'moonshot',
    'chatgpt-pro',
    'github-copilot',
  ]
> = z.enum([
  LLM_PROVIDERS.ANTHROPIC,

@@ -58,6 +60,7 @@ export const LLMProviderSchema: z.ZodEnum<
  LLM_PROVIDERS.OPENAI_COMPATIBLE,
  LLM_PROVIDERS.MOONSHOT,
  LLM_PROVIDERS.CHATGPT_PRO,
  LLM_PROVIDERS.GITHUB_COPILOT,
])

export type LLMProvider = z.infer<typeof LLMProviderSchema>