Compare commits


7 Commits

Author SHA1 Message Date
Felarof
7eb253f45b docs(byollm): add NVIDIA free endpoint provider
Document NVIDIA's free OpenAI-compatible API at build.nvidia.com — 80+ free models including GLM 5.1, MiniMax M2.7, Qwen 3.5, Mistral, and Nemotron — wired through BrowserOS's OpenAI Compatible provider template.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 12:39:20 -07:00
Felarof
b5bbbe1aff fix(credits): move credits fetch to extension side (#740)
* fix(credits): move credits fetch to extension side using install_id

Extension now reads `browseros.metrics_install_id` pref directly and fetches
credits from `llm.browseros.com` without going through the bundled server.
Unblocks the referral submit flow in prod without requiring a BrowserOS
binary release.

- Revert `/credits` route change that added `browserosId` to the response.
- Add `getOrCreateBrowserosId()` helper reading from BrowserOS prefs.
- Add `CREDITS_GATEWAY` to shared EXTERNAL_URLS.
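The new extension-side fetch path can be sketched as follows (a minimal sketch; the exact route shape and response handling are assumptions based on the commit description):

```typescript
// Sketch of the extension-side credits fetch. CREDITS_GATEWAY mirrors the
// shared constant; the /credits/:id route shape is assumed from the commit.
const CREDITS_GATEWAY = 'https://llm.browseros.com'

function creditsUrl(browserosId: string): string {
  return `${CREDITS_GATEWAY}/credits/${browserosId}`
}

async function fetchCredits(browserosId: string): Promise<unknown> {
  const response = await fetch(creditsUrl(browserosId))
  if (!response.ok) {
    throw new Error(`Failed to fetch credits: ${response.status}`)
  }
  return response.json()
}
```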

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(credits): drop fallback UUID, read install_id directly

Extension only runs inside BrowserOS, so the prefs API is always available.
The chrome.storage fallback was dead code that would generate a ghost ID
diverging from the server's install_id anyway. Rename the helper to match
its simpler contract.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(credits): guard against empty install_id pref

Address Greptile P1 — throw instead of silently fetching `/credits/null`
when `browseros.metrics_install_id` is unset. Fails loudly so the broken
state is observable rather than masquerading as a credits outage.
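The guard described above amounts to roughly this (helper name illustrative):

```typescript
// Reject an unset or empty install_id instead of letting the request go
// out as /credits/null. Throwing makes the broken state observable.
function assertInstallId(value: unknown): string {
  if (typeof value !== 'string' || value.length === 0) {
    throw new Error('browseros.metrics_install_id is not set')
  }
  return value
}
```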

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 19:27:21 -07:00
Felarof
4f03afcac8 chore: add .auctor entries to gitignore (#738)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 18:00:20 -07:00
Felarof
6d3498c91b fix: randomized tweet variations + referral fixes (#737)
* fix(agent): declare @browseros/shared as workspace dependency

The agent app imports @browseros/shared/constants/urls in
lib/referral/submit-referral.ts but never declared the package in its
dependencies, so vite failed to resolve the import during dev.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(referral): cap daily referral earnings at 500 credits

Block tweet submissions client-side once the user's balance reaches
500 to prevent unlimited credit farming via repeated shares.
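A minimal sketch of the client-side gate (the cap value comes from the commit message; the function name is illustrative):

```typescript
// Submissions are blocked once the balance reaches the daily cap.
const MAX_DAILY_CREDITS = 500

function canSubmitReferral(currentBalance: number): boolean {
  return currentBalance < MAX_DAILY_CREDITS
}
```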

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(referral): randomize tweet variations for Twitter share

Replace the single hardcoded share text with 10 feature-specific
variations (agent mode, chat, scheduled tasks, connect apps, cowork,
workflows, memory, skills, local models, ad blocking) and pick one at
random each time the share button is clicked.
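The selection logic can be sketched as below (variation texts abbreviated to placeholders; the intent-URL shape is the standard x.com tweet intent):

```typescript
// Pick one variation at random per call and URL-encode it into a tweet
// intent link. Real variations are feature-specific share texts.
const TWEET_VARIATIONS = [
  'placeholder: agent mode variation',
  'placeholder: scheduled tasks variation',
  'placeholder: local models variation',
]

function getShareOnTwitterUrl(): string {
  const text =
    TWEET_VARIATIONS[Math.floor(Math.random() * TWEET_VARIATIONS.length)]
  return `https://x.com/intent/tweet?text=${encodeURIComponent(text)}`
}
```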

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(referral): regenerate share URL on click

Previously getShareOnTwitterUrl() was evaluated once at render time as
a static href, so every click produced the same tweet variation. Move
the call into onClick so a new random variation is picked each time.

Addresses Greptile P1 review on PR #737.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 17:09:28 -07:00
Felarof
7f2e387903 fix(agent): clarify upstream provider rate-limit errors (#734)
* fix(agent): clarify upstream provider rate-limit errors

When a non-BrowserOS provider (OpenAI, Anthropic, OpenRouter, etc.)
returned a 429, ChatError rendered the retry-wrapped message
"Failed after 3 attempts. Last error: The usage limit has been reached"
with a generic "Something went wrong" title, leading users to blame
BrowserOS for throttling imposed by their configured upstream.

Detect upstream 429s in parseErrorMessage, show the provider name in
the title ("OpenAI rate limit reached"), strip the retry prefix,
render the raw upstream message, and add clarifying subtext that
names the provider and explicitly excludes BrowserOS. Skip the
BrowserOS-specific ShareForCredits / survey / upgrade affordances on
this path — they do not apply.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile review comments

- Tighten 429 pattern to \b429\b so it only matches the standalone
  status code, not incidental substrings (model IDs, paths, etc.).
- Unwrap JSON-encoded provider error bodies on the upstream-rate-limit
  path so users see the human-readable message instead of raw JSON.
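Both fixes can be sketched as follows (a minimal sketch, not the exact implementation):

```typescript
// Word-bounded match: "HTTP 429" matches, but "gpt-4290" or a path
// segment containing 429 as a substring does not.
const RATE_LIMIT_429 = /\b429\b/

// Best-effort unwrap of a JSON-encoded provider error body; non-JSON
// input falls through to the raw string.
function unwrapProviderError(raw: string): string {
  try {
    const parsed = JSON.parse(raw)
    if (parsed?.error?.message) return parsed.error.message
  } catch {
    // not JSON -> keep the raw message
  }
  return raw
}
```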

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 16:14:45 -07:00
Felarof
fc00ed23bf feat(referral): show tweet share rules and lower default daily limit fallback (#731)
* feat(referral): show share rules and lower default daily limit fallback

Surface the three referral validation rules (the tweet must mention @browserOS_ai,
must have been posted within the last 30 minutes, and can be submitted only once)
directly in the ShareForCredits UI so users understand the requirements before
pasting a tweet link.
Also align the UsagePage daily-limit fallback (used while credits load) with
the gateway default of 50.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(usage): handle credit balance exceeding daily limit

The "Credits used today" stat was computed as `dailyLimit - credits`,
which goes negative once a referral bonus pushes the balance above the
daily cap (e.g. balance 294 with cap 100 showed "-194 of 100"). Clamp
the math to zero and surface a separate "Bonus credits" stat when the
balance exceeds the daily allowance.
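The clamped math can be sketched as a pure function (names assumed):

```typescript
// Never show negative usage; surface overflow as a separate bonus figure.
// balance 294 with cap 100 -> used 0, bonus 194.
function creditStats(credits: number, dailyLimit: number) {
  return {
    creditsUsed: Math.max(0, dailyLimit - credits),
    bonusCredits: Math.max(0, credits - dailyLimit),
  }
}
```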

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 15:34:33 -07:00
Felarof
b6d6d4eb1d feat: Twitter share referral UI for credit rewards (#729)
* feat: add Twitter share referral UI and expose browserosId

When credits are exhausted, users now see a "Share on Twitter" CTA with
a pre-filled tweet URL and an input to paste their tweet link. Reusable
ShareForCredits component used in both ChatError and UsagePage. Server's
GET /credits now includes browserosId for the extension to pass to the
referral service.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: rebuild chat session on provider change

* fix: address Greptile review comments

- Move referral service URL to EXTERNAL_URLS
- Guard submitReferral on !response.ok
- Remove stale TODO comment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 15:25:04 -07:00
15 changed files with 559 additions and 40 deletions

.gitignore vendored
View File

@@ -1,4 +1,6 @@
**/.DS_Store
**.auctor/**
.auctor.json
.gcs_entries
**/dmg
**/env

View File

@@ -131,6 +131,29 @@ Connect to powerful AI models using your API keys. Your keys stay on your machin
![Gemini config](/images/byollm--gemini-provider-config.png)
</Accordion>
<div id="nvidia" />
<Accordion title="NVIDIA (Free)" icon="microchip">
NVIDIA's [build.nvidia.com](https://build.nvidia.com/models) hosts 80+ models — including GLM 5.1, MiniMax M2.7, GPT-OSS-120B, Qwen 3.5, Mistral, and Nemotron — behind a **free OpenAI-compatible API endpoint**. Great for chatting, prototyping, and personal projects.
**Get your API key:**
1. Go to [build.nvidia.com/models](https://build.nvidia.com/models) and sign in with a free NVIDIA developer account
2. Pick any model tagged **Free Endpoint** (e.g. [`minimaxai/minimax-m2.7`](https://build.nvidia.com/minimaxai/minimax-m2.7), [`z-ai/glm-5.1`](https://build.nvidia.com/z-ai/glm-5.1), [`qwen/qwen3.5-122b-a10b`](https://build.nvidia.com/qwen/qwen3.5-122b-a10b))
3. Click **Get API Key** on the model page and copy the `nvapi-...` key
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the **OpenAI Compatible** card
3. Set **Base URL** to `https://integrate.api.nvidia.com/v1`
4. Set **Model ID** to a model from the catalog (e.g. `minimaxai/minimax-m2.7`, `z-ai/glm-5.1`, `qwen/qwen3.5-122b-a10b`)
5. Paste your NVIDIA API key
6. Set **Context Window** based on the model (most are `128000` or higher)
7. Click **Save**
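Optionally, verify the key and Base URL outside BrowserOS with a standard OpenAI-compatible chat completion request (the model ID and `nvapi-...` key below are placeholders):

```shell
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer nvapi-YOUR_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimaxai/minimax-m2.7",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```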
<Tip>
NVIDIA's free endpoints share GPU capacity across all developers, so throughput is slower than a paid API. They're best for Chat Mode, exploring new open-source models, and personal projects. For production agent workloads, use a paid provider like Claude or Kimi.
</Tip>
</Accordion>
<div id="claude" />
<Accordion title="Claude (Best for Agents)" icon="message-bot">
Claude Opus 4.5 gives the best results for Agent Mode.

View File

@@ -0,0 +1,148 @@
import { REFERRAL_LIMITS } from '@browseros/shared/constants/limits'
import { ExternalLink, Loader2, Send } from 'lucide-react'
import type { FC } from 'react'
import { useState } from 'react'
import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { useCredits, useInvalidateCredits } from '@/lib/credits/useCredits'
import {
getShareOnTwitterUrl,
submitReferral,
} from '@/lib/referral/submit-referral'
interface ShareForCreditsProps {
compact?: boolean
}
export const ShareForCredits: FC<ShareForCreditsProps> = ({ compact }) => {
const [tweetUrl, setTweetUrl] = useState('')
const [isSubmitting, setIsSubmitting] = useState(false)
const [result, setResult] = useState<{
success: boolean
message: string
} | null>(null)
const { data } = useCredits()
const invalidateCredits = useInvalidateCredits()
const credits = data?.credits ?? 0
const atDailyMax = credits >= REFERRAL_LIMITS.MAX_DAILY_CREDITS
const handleSubmit = async () => {
if (!tweetUrl.trim() || !data?.browserosId || atDailyMax) return
setIsSubmitting(true)
setResult(null)
try {
const res = await submitReferral(tweetUrl.trim(), data.browserosId)
if (res.success) {
setResult({
success: true,
message: `${res.creditsAdded ?? 200} credits added!`,
})
setTweetUrl('')
invalidateCredits()
} else {
setResult({
success: false,
message: res.reason ?? 'Submission failed. Please try again.',
})
}
} catch {
setResult({
success: false,
message: 'Network error. Please try again.',
})
} finally {
setIsSubmitting(false)
}
}
if (atDailyMax) {
return (
<div className={compact ? 'space-y-2' : 'space-y-3'}>
<p className={compact ? 'text-muted-foreground text-xs' : 'text-sm'}>
You've reached the daily cap of {REFERRAL_LIMITS.MAX_DAILY_CREDITS}{' '}
credits. Come back tomorrow to earn more!
</p>
</div>
)
}
return (
<div className={compact ? 'space-y-2' : 'space-y-3'}>
<p className={compact ? 'text-muted-foreground text-xs' : 'text-sm'}>
Share BrowserOS on Twitter to earn{' '}
{REFERRAL_LIMITS.CREDITS_PER_REFERRAL} bonus credits!
</p>
<ul className="list-disc space-y-0.5 pl-4 text-muted-foreground text-xs">
<li>
Tweet must mention <span className="font-medium">@browserOS_ai</span>
</li>
<li>Tweet must be posted within the last 30 minutes</li>
<li>Each tweet can only be submitted once</li>
<li>
Daily cap of {REFERRAL_LIMITS.MAX_DAILY_CREDITS} credits — resets at
midnight UTC
</li>
</ul>
<Button variant="outline" size="sm" className="w-full gap-2" asChild>
<a
href={getShareOnTwitterUrl()}
target="_blank"
rel="noopener noreferrer"
onClick={(e) => {
e.currentTarget.href = getShareOnTwitterUrl()
}}
>
<ExternalLink className="h-3.5 w-3.5" />
Share on Twitter
</a>
</Button>
<p className="text-muted-foreground text-xs">
Already shared? Paste your tweet link:
</p>
<div className="flex gap-2">
<Input
type="url"
placeholder="https://x.com/..."
value={tweetUrl}
onChange={(e) => setTweetUrl(e.target.value)}
className="h-8 text-xs"
disabled={isSubmitting}
/>
<Button
variant="default"
size="sm"
onClick={handleSubmit}
disabled={isSubmitting || !tweetUrl.trim()}
className="shrink-0 gap-1.5"
>
{isSubmitting ? (
<Loader2 className="h-3.5 w-3.5 animate-spin" />
) : (
<Send className="h-3.5 w-3.5" />
)}
Submit
</Button>
</div>
{result && (
<p
className={
result.success
? 'text-green-600 text-xs dark:text-green-400'
: 'text-destructive text-xs'
}
>
{result.message}
</p>
)}
</div>
)
}

View File

@@ -1,5 +1,6 @@
import { AlertCircle, Clock, Coins, CreditCard, Zap } from 'lucide-react'
import { AlertCircle, Clock, Coins, Gift, Zap } from 'lucide-react'
import type { FC } from 'react'
import { ShareForCredits } from '@/components/referral/ShareForCredits'
import { Button } from '@/components/ui/button'
import {
getCreditBarColor,
@@ -43,8 +44,10 @@ export const UsagePage: FC = () => {
}
const credits = data?.credits ?? 0
const total = data?.dailyLimit ?? 100
const total = data?.dailyLimit ?? 50
const percentage = Math.min((credits / total) * 100, 100)
const bonusCredits = Math.max(0, credits - total)
const creditsUsed = Math.max(0, total - credits)
return (
<div className="space-y-6 p-6">
@@ -95,30 +98,32 @@ export const UsagePage: FC = () => {
<div className="flex items-center gap-2.5 rounded-lg bg-muted/50 px-3 py-2.5">
<Zap className="h-4 w-4 shrink-0 text-muted-foreground" />
<div>
<p className="font-medium text-xs">Credits used today</p>
<p className="text-muted-foreground text-xs">
{total - credits} of {total}
</p>
{bonusCredits > 0 ? (
<>
<p className="font-medium text-xs">Bonus credits</p>
<p className="text-muted-foreground text-xs">
+{bonusCredits} from referrals
</p>
</>
) : (
<>
<p className="font-medium text-xs">Credits used today</p>
<p className="text-muted-foreground text-xs">
{creditsUsed} of {total}
</p>
</>
)}
</div>
</div>
</div>
</div>
<div className="rounded-xl border p-5">
<div className="flex items-center gap-3">
<CreditCard className="h-5 w-5 text-muted-foreground" />
<div>
<p className="flex items-center gap-2 font-semibold text-sm">
Need more credits?
<span className="rounded-full bg-muted px-2 py-0.5 font-medium text-[10px] text-muted-foreground uppercase tracking-wide">
Coming soon
</span>
</p>
<p className="text-muted-foreground text-xs">
Additional credit packages will be available soon
</p>
</div>
<div className="mb-4 flex items-center gap-2">
<Gift className="h-5 w-5 text-muted-foreground" />
<span className="font-semibold text-sm">Earn More Credits</span>
</div>
<ShareForCredits />
</div>
<div className="rounded-xl border border-[var(--accent-orange)]/30 bg-[var(--accent-orange)]/5 p-5">

View File

@@ -1,7 +1,9 @@
import { AlertCircle, RefreshCw } from 'lucide-react'
import type { FC } from 'react'
import { useMemo } from 'react'
import { ShareForCredits } from '@/components/referral/ShareForCredits'
import { Button } from '@/components/ui/button'
import type { ProviderType } from '@/lib/llm-providers/types'
const SURVEY_DIRECTIONS = [
'competitor',
@@ -14,6 +16,44 @@ function pickRandomDirection(): string {
return SURVEY_DIRECTIONS[Math.floor(Math.random() * SURVEY_DIRECTIONS.length)]
}
const PROVIDER_DISPLAY_NAMES: Record<ProviderType, string> = {
anthropic: 'Anthropic',
openai: 'OpenAI',
'openai-compatible': 'OpenAI-compatible',
google: 'Google',
openrouter: 'OpenRouter',
azure: 'Azure OpenAI',
ollama: 'Ollama',
lmstudio: 'LM Studio',
bedrock: 'AWS Bedrock',
browseros: 'BrowserOS',
moonshot: 'Moonshot',
'chatgpt-pro': 'ChatGPT Pro',
'github-copilot': 'GitHub Copilot',
'qwen-code': 'Qwen Code',
}
const UPSTREAM_RATE_LIMIT_PATTERNS: Array<string | RegExp> = [
'usage limit',
'rate limit',
'rate-limit',
'quota',
/\b429\b/,
'too many requests',
'insufficient_quota',
]
function getProviderDisplayName(providerType?: string): string {
if (providerType && providerType in PROVIDER_DISPLAY_NAMES) {
return PROVIDER_DISPLAY_NAMES[providerType as ProviderType]
}
return 'your provider'
}
function stripRetryPrefix(message: string): string {
return message.replace(/^Failed after \d+ attempts?\.\s*Last error:\s*/i, '')
}
interface ChatErrorProps {
error: Error
onRetry?: () => void
@@ -29,6 +69,8 @@ function parseErrorMessage(
isRateLimit?: boolean
isCreditsExhausted?: boolean
isConnectionError?: boolean
isUpstreamRateLimit?: boolean
providerName?: string
} {
const isBrowserosProvider = providerType === 'browseros'
@@ -69,6 +111,28 @@ function parseErrorMessage(
}
}
// Detect rate limits from non-BrowserOS upstream providers. Users were
// confused that a quota/429 from OpenAI/Anthropic/etc. looked like a
// BrowserOS-imposed limit.
if (!isBrowserosProvider && providerType) {
const lower = message.toLowerCase()
const matchesRateLimit = UPSTREAM_RATE_LIMIT_PATTERNS.some((p) =>
typeof p === 'string' ? lower.includes(p) : p.test(lower),
)
if (matchesRateLimit) {
let stripped = stripRetryPrefix(message).trim()
try {
const parsed = JSON.parse(stripped)
if (parsed?.error?.message) stripped = parsed.error.message
} catch {}
return {
text: stripped || message,
isUpstreamRateLimit: true,
providerName: getProviderDisplayName(providerType),
}
}
}
let text = message
try {
const parsed = JSON.parse(message)
@@ -90,8 +154,15 @@ export const ChatError: FC<ChatErrorProps> = ({
onRetry,
providerType,
}) => {
const { text, url, isRateLimit, isCreditsExhausted, isConnectionError } =
parseErrorMessage(error.message, providerType)
const {
text,
url,
isRateLimit,
isCreditsExhausted,
isConnectionError,
isUpstreamRateLimit,
providerName,
} = parseErrorMessage(error.message, providerType)
const surveyUrl = useMemo(
() =>
@@ -100,6 +171,11 @@ export const ChatError: FC<ChatErrorProps> = ({
)
const getTitle = () => {
if (isUpstreamRateLimit) {
return providerName && providerName !== 'your provider'
? `${providerName} rate limit reached`
: 'Upstream rate limit reached'
}
if (isRateLimit) return 'Daily limit reached'
if (isConnectionError) return 'Connection failed'
return 'Something went wrong'
@@ -112,6 +188,14 @@ export const ChatError: FC<ChatErrorProps> = ({
<span className="font-medium text-sm">{getTitle()}</span>
</div>
<p className="text-center text-destructive text-xs">{text}</p>
{isUpstreamRateLimit && (
<p className="text-center text-muted-foreground text-xs">
This is a limit from{' '}
<span className="font-medium">{providerName}</span>
{' — your configured model provider — not BrowserOS. Check your '}
provider's dashboard for quota, usage, or billing details.
</p>
)}
{isConnectionError && url && (
<a
href={url}
@@ -122,15 +206,22 @@ export const ChatError: FC<ChatErrorProps> = ({
View troubleshooting guide
</a>
)}
{isCreditsExhausted && url && (
<a
href={url}
target="_blank"
rel="noopener noreferrer"
className="text-muted-foreground text-xs underline hover:text-foreground"
>
View Usage & Billing
</a>
{isCreditsExhausted && (
<>
<div className="w-full border-border/50 border-t pt-3">
<ShareForCredits compact />
</div>
{url && (
<a
href={url}
target="_blank"
rel="noopener noreferrer"
className="text-muted-foreground text-xs underline hover:text-foreground"
>
View Usage & Billing
</a>
)}
</>
)}
{isRateLimit && !isCreditsExhausted && (
<p className="text-muted-foreground text-xs">

View File

@@ -0,0 +1,15 @@
import { getBrowserOSAdapter } from '@/lib/browseros/adapter'
import { BROWSEROS_PREFS } from '@/lib/browseros/prefs'
// TODO(credits-identity): temporary shim — reuses the BrowserOS metrics
// install_id as the credits/referral identifier. Replace with a dedicated
// identity module once we have one.
export async function getBrowserosId(): Promise<string> {
const adapter = getBrowserOSAdapter()
const pref = await adapter.getPref(BROWSEROS_PREFS.INSTALL_ID)
const id = pref.value
if (typeof id !== 'string' || id.length === 0) {
throw new Error('browseros.metrics_install_id is not set')
}
return id
}

View File

@@ -1,20 +1,25 @@
import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
import { useQuery, useQueryClient } from '@tanstack/react-query'
import { getAgentServerUrl } from '@/lib/browseros/helpers'
import { getBrowserosId } from './browseros-id'
export interface CreditsInfo {
credits: number
dailyLimit: number
lastResetAt?: string
browserosId?: string
}
const CREDITS_QUERY_KEY = ['credits']
async function fetchCredits(): Promise<CreditsInfo> {
const baseUrl = await getAgentServerUrl()
const response = await fetch(`${baseUrl}/credits`)
const browserosId = await getBrowserosId()
const response = await fetch(
`${EXTERNAL_URLS.CREDITS_GATEWAY}/credits/${browserosId}`,
)
if (!response.ok)
throw new Error(`Failed to fetch credits: ${response.status}`)
return response.json()
const data = (await response.json()) as CreditsInfo
return { ...data, browserosId }
}
export function useCredits() {

View File

@@ -0,0 +1,108 @@
import { EXTERNAL_URLS } from '@browseros/shared/constants/urls'
interface ReferralResult {
success: boolean
creditsAdded?: number
reason?: string
}
export async function submitReferral(
tweetUrl: string,
browserosId: string,
): Promise<ReferralResult> {
const response = await fetch(
`${EXTERNAL_URLS.REFERRAL_SERVICE}/referral/submit`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ tweetUrl, browserosId }),
},
)
if (!response.ok) {
return {
success: false,
reason: `Request failed with status ${response.status}`,
}
}
return response.json()
}
const TWEET_VARIATIONS = [
`ngl @browseros_ai is kinda wild
just type what u want in plain english and it handles the annoying web shit
forms, research, data pulls... all automated
actually works`,
`been using @browseros_ai to chat with webpages lately
summarize articles, pull data, translate stuff
all happens in the same tab
no copy/paste, no switching windows
just ask and it does it`,
`wake up to @browseros_ai having already read ur emails and calendar while u were sleeping
scheduled agents are lowkey magic`,
`ngl @browseros_ai is kinda crazy
connects gmail, slack, linear, notion + 40 other apps into one ai assistant
just talk to it in plain english and it handles cross-app workflows for u
no more switching between tabs like a psycho`,
`i use @browseros_ai to automate research
it handles the browser work and drops reports straight into local folders
no switching between tools or manually saving files
just one task instead of three`,
`been messing with @browseros_ai lately
it comes with a prebuilt MCP server and I connect it claude code or codex and it just runs things for you
set it up once, use it whenever
way better than clicking through the same shit manually every time`,
`the ai actually remembers what we talked about yesterday
no more "here's the context again" every single conversation
@browseros_ai just picks up where we left off
feels like talking to someone who actually pays attention`,
`i built a skill library for my ai agent
now when i need it to do something specific, i just load the recipe i made earlier
@browseros_ai MCP is very handy`,
`been running @browseros_ai with ollama locally
everything stays on my machine, nothing gets sent out
kinda nice not having to think about what data i'm sharing`,
`switched to @browseros_ai from chrome
blocks 10x more ads and runs full ublock origin (not the lite version)
check it out`,
]
export function getShareOnTwitterUrl(): string {
const text =
TWEET_VARIATIONS[Math.floor(Math.random() * TWEET_VARIATIONS.length)]
return `https://x.com/intent/tweet?text=${encodeURIComponent(text)}`
}

View File

@@ -20,6 +20,7 @@
"dependencies": {
"@ai-sdk/react": "^3.0.96",
"@browseros/server": "workspace:*",
"@browseros/shared": "workspace:*",
"@hookform/resolvers": "^5.2.2",
"@lobehub/icons": "^2.44.0",
"@mdxeditor/editor": "^3.52.4",

View File

@@ -11,6 +11,8 @@ export interface AgentSession {
mcpServerKey?: string
/** Workspace directory when the session was created, for change detection. */
workingDir?: string
/** LLM config used when the session was created, for provider/model changes. */
llmConfigKey?: string
}
export class SessionStore {

View File

@@ -65,6 +65,7 @@ export class ChatService {
declinedApps: request.declinedApps,
browserosId: this.deps.browserosId,
}
const llmConfigKey = this.buildLlmConfigKey(agentConfig)
let session = sessionStore.get(request.conversationId)
let isNewSession = false
@@ -144,6 +145,24 @@ export class ChatService {
}
}
// Detect provider/model/auth change mid-conversation -> rebuild session.
// The AI SDK agent captures the language model at construction time, so a
// reused session would keep calling the previous provider.
if (session && session.llmConfigKey !== llmConfigKey) {
logger.info('LLM config changed mid-conversation, rebuilding session', {
conversationId: request.conversationId,
provider: agentConfig.provider,
model: agentConfig.model,
})
session = await this.rebuildSession(
session,
request,
agentConfig,
mcpServerKey,
llmConfigKey,
)
}
if (!session) {
isNewSession = true
let hiddenPageId: number | undefined
@@ -209,6 +228,7 @@ export class ChatService {
browserContext,
mcpServerKey,
workingDir: request.userWorkingDir,
llmConfigKey,
}
sessionStore.set(request.conversationId, session)
}
@@ -341,6 +361,7 @@ export class ChatService {
request: ChatRequest,
agentConfig: ResolvedAgentConfig,
mcpServerKey: string,
llmConfigKey = this.buildLlmConfigKey(agentConfig),
): Promise<AgentSession> {
const previousMessages = session.agent.messages
await session.agent.dispose()
@@ -365,6 +386,7 @@ export class ChatService {
browserContext,
mcpServerKey,
workingDir: request.userWorkingDir,
llmConfigKey,
}
newSession.agent.messages = sanitizeMessagesForToolset(
previousMessages,
@@ -374,6 +396,26 @@ export class ChatService {
return newSession
}
private buildLlmConfigKey(config: ResolvedAgentConfig): string {
return JSON.stringify({
provider: config.provider,
model: config.model,
apiKey: config.apiKey,
baseUrl: config.baseUrl,
upstreamProvider: config.upstreamProvider,
resourceName: config.resourceName,
region: config.region,
accessKeyId: config.accessKeyId,
secretAccessKey: config.secretAccessKey,
sessionToken: config.sessionToken,
accountId: config.accountId,
reasoningEffort: config.reasoningEffort,
reasoningSummary: config.reasoningSummary,
contextWindowSize: config.contextWindowSize,
supportsImages: config.supportsImages,
})
}
private buildMcpServerKey(browserContext?: BrowserContext): string {
const managed = browserContext?.enabledMcpServers?.slice().sort() ?? []
const custom =

View File

@@ -44,11 +44,19 @@ const createAgentUIStreamResponseSpy = mock(
},
)
const resolveLLMConfigSpy = mock(async () => ({
provider: 'openai',
model: 'gpt-5',
apiKey: 'test-key',
}))
const resolveLLMConfigSpy = mock(
async (config: {
provider?: string
model?: string
apiKey?: string
baseUrl?: string
}) => ({
provider: config.provider ?? 'openai',
model: config.model ?? 'gpt-5',
apiKey: config.apiKey ?? 'test-key',
baseUrl: config.baseUrl,
}),
)
mock.module('ai', () => ({
createAgentUIStreamResponse: createAgentUIStreamResponseSpy,
@@ -288,4 +296,65 @@ describe('ChatService scheduled task hidden page lifecycle', () => {
})
expect(browser.closePage).toHaveBeenCalledWith(88)
})
it('rebuilds an existing session when the LLM provider changes', async () => {
const firstAgent = createFakeAgent()
agentToReturn = firstAgent
streamResponseHandler = async ({ onFinish }) => {
await onFinish({ messages: agentToReturn?.messages ?? [] })
return new Response('ok')
}
const browser = {
resolveTabIds: mock(async () => new Map<number, number>()),
}
const sessionStore = createSessionStore()
const service = new ChatService({
sessionStore: sessionStore as never,
klavisClient: {} as never,
browser: browser as never,
registry: {} as never,
})
const conversationId = crypto.randomUUID()
const createCallsBefore = createAgentSpy.mock.calls.length
await service.processMessage(
{
conversationId,
message: 'First message',
provider: 'browseros',
model: 'browseros-auto',
mode: 'agent',
origin: 'sidepanel',
} as never,
new AbortController().signal,
)
const secondAgent = createFakeAgent()
agentToReturn = secondAgent
await service.processMessage(
{
conversationId,
message: 'Second message',
provider: 'chatgpt-pro',
model: 'gpt-5.3-codex',
mode: 'agent',
origin: 'sidepanel',
} as never,
new AbortController().signal,
)
expect(createAgentSpy.mock.calls.length).toBe(createCallsBefore + 2)
expect(firstAgent.dispose).toHaveBeenCalledTimes(1)
expect(sessionStore.get(conversationId)?.agent).toBe(secondAgent)
const latestCreateArgs = createAgentSpy.mock.calls.at(-1)?.[0] as {
resolvedConfig: { provider: string; model: string }
}
expect(latestCreateArgs.resolvedConfig).toMatchObject({
provider: 'chatgpt-pro',
model: 'gpt-5.3-codex',
})
})
})

View File

@@ -27,6 +27,7 @@
"dependencies": {
"@ai-sdk/react": "^3.0.96",
"@browseros/server": "workspace:*",
"@browseros/shared": "workspace:*",
"@hookform/resolvers": "^5.2.2",
"@lobehub/icons": "^2.44.0",
"@mdxeditor/editor": "^3.52.4",
@@ -2210,7 +2211,7 @@
"chrome-devtools-frontend": ["chrome-devtools-frontend@1.0.1577886", "", {}, "sha512-B9hY3o/0RuVCDWNYh9YnkEbRrPUMCY+NaOgBxvZRzGvqbGSMNckkVSdO67SwWR8bm4fo/qplXbUj0cSr229V6w=="],
"chrome-devtools-mcp": ["chrome-devtools-mcp@0.20.3", "", { "bin": { "chrome-devtools-mcp": "build/src/bin/chrome-devtools-mcp.js", "chrome-devtools": "build/src/bin/chrome-devtools.js" } }, "sha512-6MlNKlKa+J1FX9w4SUnFERF4MRGWLlrnZvIJGhhsuuMPM7qUG0F4SwheRyjwl0+tsTemxMCBHiib8mXkg5j6og=="],
"chrome-devtools-mcp": ["chrome-devtools-mcp@0.21.0", "", { "bin": { "chrome-devtools-mcp": "build/src/bin/chrome-devtools-mcp.js", "chrome-devtools": "build/src/bin/chrome-devtools.js" } }, "sha512-d+iqrRmcwpRFV3Q4DRCF2LCoq+WCRU3GhISKQ9v8g+1C2Uh8upj3urkjxNO4QIjhBMIYei/VQ1OQLFceby80Og=="],
"chrome-launcher": ["chrome-launcher@1.2.0", "", { "dependencies": { "@types/node": "*", "escape-string-regexp": "^4.0.0", "is-wsl": "^2.2.0", "lighthouse-logger": "^2.0.1" }, "bin": { "print-chrome-path": "bin/print-chrome-path.cjs" } }, "sha512-JbuGuBNss258bvGil7FT4HKdC3SC2K7UAEUqiPy3ACS3Yxo3hAW6bvFpCu2HsIJLgTqxgEX6BkujvzZfLpUD0Q=="],

View File

@@ -80,3 +80,8 @@ export const CONTENT_LIMITS = {
CONSOLE_DEFAULT_LIMIT: 50,
CONSOLE_MAX_LIMIT: 200,
} as const
export const REFERRAL_LIMITS = {
MAX_DAILY_CREDITS: 500,
CREDITS_PER_REFERRAL: 200,
} as const

View File

@@ -19,4 +19,6 @@ export const EXTERNAL_URLS = {
QWEN_DEVICE_CODE: 'https://chat.qwen.ai/api/v1/oauth2/device/code',
QWEN_OAUTH_TOKEN: 'https://chat.qwen.ai/api/v1/oauth2/token',
QWEN_CODE_API: 'https://portal.qwen.ai/v1',
REFERRAL_SERVICE: 'https://browseros-referral.fly.dev',
CREDITS_GATEWAY: 'https://llm.browseros.com',
} as const