---
title: "Bring Your Own LLM"
description: "Connect your own AI models to BrowserOS"
---

BrowserOS includes a default AI model you can use right away, but it has strict rate limits. For the best experience, bring your own API keys or run models locally.
See how to connect your own LLM in under a minute:

<video
  controls
  className="w-full aspect-video rounded-xl"
  src="https://pub-80f8a01e6e8b4239ae53a7652ef85877.r2.dev/resources/feature-videos/1-bring-your-own-LLM.mov"
></video>

## Use Your Existing Subscription

Already paying for ChatGPT Pro, GitHub Copilot, or Qwen Code? Connect your existing account to BrowserOS with a single sign-in — no API keys, no extra cost.

<CardGroup cols={3}>
<Card href="/features/chatgpt-pro-oauth">
<svg fill="currentColor" fillRule="evenodd" height="24" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path d="M9.205 8.658v-2.26c0-.19.072-.333.238-.428l4.543-2.616c.619-.357 1.356-.523 2.117-.523 2.854 0 4.662 2.212 4.662 4.566 0 .167 0 .357-.024.547l-4.71-2.759a.797.797 0 00-.856 0l-5.97 3.473zm10.609 8.8V12.06c0-.333-.143-.57-.429-.737l-5.97-3.473 1.95-1.118a.433.433 0 01.476 0l4.543 2.617c1.309.76 2.189 2.378 2.189 3.948 0 1.808-1.07 3.473-2.76 4.163zM7.802 12.703l-1.95-1.142c-.167-.095-.239-.238-.239-.428V5.899c0-2.545 1.95-4.472 4.591-4.472 1 0 1.927.333 2.712.928L8.23 5.067c-.285.166-.428.404-.428.737v6.898zM12 15.128l-2.795-1.57v-3.33L12 8.658l2.795 1.57v3.33L12 15.128zm1.796 7.23c-1 0-1.927-.332-2.712-.927l4.686-2.712c.285-.166.428-.404.428-.737v-6.898l1.974 1.142c.167.095.238.238.238.428v5.233c0 2.545-1.974 4.472-4.614 4.472zm-5.637-5.303l-4.544-2.617c-1.308-.761-2.188-2.378-2.188-3.948A4.482 4.482 0 014.21 6.327v5.423c0 .333.143.571.428.738l5.947 3.449-1.95 1.118a.432.432 0 01-.476 0zm-.262 3.9c-2.688 0-4.662-2.021-4.662-4.519 0-.19.024-.38.047-.57l4.686 2.71c.286.167.571.167.856 0l5.97-3.448v2.26c0 .19-.07.333-.237.428l-4.543 2.616c-.619.357-1.356.523-2.117.523zm5.899 2.83a5.947 5.947 0 005.827-4.756C22.287 18.339 24 15.84 24 13.296c0-1.665-.713-3.282-1.998-4.448.119-.5.19-.999.19-1.498 0-3.401-2.759-5.947-5.946-5.947-.642 0-1.26.095-1.88.31A5.962 5.962 0 0010.205 0a5.947 5.947 0 00-5.827 4.757C1.713 5.447 0 7.945 0 10.49c0 1.666.713 3.283 1.998 4.448-.119.5-.19 1-.19 1.499 0 3.401 2.759 5.946 5.946 5.946.642 0 1.26-.095 1.88-.309a5.96 5.96 0 004.162 1.713z"></path></svg>
**ChatGPT Pro / Plus**

Sign in with your OpenAI account. Access GPT-5 Codex, GPT-5.4, and the full Codex lineup with up to 400K context.
</Card>
<Card href="/features/github-copilot-oauth">
<svg fill="currentColor" fillRule="evenodd" height="24" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path d="M19.245 5.364c1.322 1.36 1.877 3.216 2.11 5.817.622 0 1.2.135 1.592.654l.73.964c.21.278.323.61.323.955v2.62c0 .339-.173.669-.453.868C20.239 19.602 16.157 21.5 12 21.5c-4.6 0-9.205-2.583-11.547-4.258-.28-.2-.452-.53-.453-.868v-2.62c0-.345.113-.679.321-.956l.73-.963c.392-.517.974-.654 1.593-.654l.029-.297c.25-2.446.81-4.213 2.082-5.52 2.461-2.54 5.71-2.851 7.146-2.864h.198c1.436.013 4.685.323 7.146 2.864zm-7.244 4.328c-.284 0-.613.016-.962.05-.123.447-.305.85-.57 1.108-1.05 1.023-2.316 1.18-2.994 1.18-.638 0-1.306-.13-1.851-.464-.516.165-1.012.403-1.044.996a65.882 65.882 0 00-.063 2.884l-.002.48c-.002.563-.005 1.126-.013 1.69.002.326.204.63.51.765 2.482 1.102 4.83 1.657 6.99 1.657 2.156 0 4.504-.555 6.985-1.657a.854.854 0 00.51-.766c.03-1.682.006-3.372-.076-5.053-.031-.596-.528-.83-1.046-.996-.546.333-1.212.464-1.85.464-.677 0-1.942-.157-2.993-1.18-.266-.258-.447-.661-.57-1.108-.32-.032-.64-.049-.96-.05zm-2.525 4.013c.539 0 .976.426.976.95v1.753c0 .525-.437.95-.976.95a.964.964 0 01-.976-.95v-1.752c0-.525.437-.951.976-.951zm5 0c.539 0 .976.426.976.95v1.753c0 .525-.437.95-.976.95a.964.964 0 01-.976-.95v-1.752c0-.525.437-.951.976-.951zM7.635 5.087c-1.05.102-1.935.438-2.385.906-.975 1.037-.765 3.668-.21 4.224.405.394 1.17.657 1.995.657h.09c.649-.013 1.785-.176 2.73-1.11.435-.41.705-1.433.675-2.47-.03-.834-.27-1.52-.63-1.813-.39-.336-1.275-.482-2.265-.394zm6.465.394c-.36.292-.6.98-.63 1.813-.03 1.037.24 2.06.675 2.47.968.957 2.136 1.104 2.776 1.11h.044c.825 0 1.59-.263 1.995-.657.555-.556.765-3.187-.21-4.224-.45-.468-1.335-.804-2.385-.906-.99-.088-1.875.058-2.265.394zM12 7.615c-.24 0-.525.015-.84.044.03.16.045.336.06.526l-.001.159a2.94 2.94 0 01-.014.25c.225-.022.425-.027.612-.028h.366c.187 0 .387.006.612.028-.015-.146-.015-.277-.015-.409.015-.19.03-.365.06-.526a9.29 9.29 0 00-.84-.044z"></path></svg>
**GitHub Copilot**

Sign in with your GitHub account. Access 19+ models including Claude, GPT-5, and Gemini through one subscription.
</Card>
<Card href="/features/qwen-code-oauth">
<svg fill="currentColor" fillRule="evenodd" height="24" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path d="M12.604 1.34c.393.69.784 1.382 1.174 2.075a.18.18 0 00.157.091h5.552c.174 0 .322.11.446.327l1.454 2.57c.19.337.24.478.024.837-.26.43-.513.864-.76 1.3l-.367.658c-.106.196-.223.28-.04.512l2.652 4.637c.172.301.111.494-.043.77-.437.785-.882 1.564-1.335 2.34-.159.272-.352.375-.68.37-.777-.016-1.552-.01-2.327.016a.099.099 0 00-.081.05 575.097 575.097 0 01-2.705 4.74c-.169.293-.38.363-.725.364-.997.003-2.002.004-3.017.002a.537.537 0 01-.465-.271l-1.335-2.323a.09.09 0 00-.083-.049H4.982c-.285.03-.553-.001-.805-.092l-1.603-2.77a.543.543 0 01-.002-.54l1.207-2.12a.198.198 0 000-.197 550.951 550.951 0 01-1.875-3.272l-.79-1.395c-.16-.31-.173-.496.095-.965.465-.813.927-1.625 1.387-2.436.132-.234.304-.334.584-.335a338.3 338.3 0 012.589-.001.124.124 0 00.107-.063l2.806-4.895a.488.488 0 01.422-.246c.524-.001 1.053 0 1.583-.006L11.704 1c.341-.003.724.032.9.34zm-3.432.403a.06.06 0 00-.052.03L6.254 6.788a.157.157 0 01-.135.078H3.253c-.056 0-.07.025-.041.074l5.81 10.156c.025.042.013.062-.034.063l-2.795.015a.218.218 0 00-.2.116l-1.32 2.31c-.044.078-.021.118.068.118l5.716.008c.046 0 .08.02.104.061l1.403 2.454c.046.081.092.082.139 0l5.006-8.76.783-1.382a.055.055 0 01.096 0l1.424 2.53a.122.122 0 00.107.062l2.763-.02a.04.04 0 00.035-.02.041.041 0 000-.04l-2.9-5.086a.108.108 0 010-.113l.293-.507 1.12-1.977c.024-.041.012-.062-.035-.062H9.2c-.059 0-.073-.026-.043-.077l1.434-2.505a.107.107 0 000-.114L9.225 1.774a.06.06 0 00-.053-.031zm6.29 8.02c.046 0 .058.02.034.06l-.832 1.465-2.613 4.585a.056.056 0 01-.05.029.058.058 0 01-.05-.029L8.498 9.841c-.02-.034-.01-.052.028-.054l.216-.012 6.722-.012z"></path></svg>
**Qwen Code**

Sign in with your Qwen account. Access Qwen 3 Coder with a 1 million token context window.
</Card>
</CardGroup>

---

## Which Model Should I Use?

| Mode | What works | Recommendation |
|------|------------|----------------|
| **Chat Mode** | Any model, including local | Ollama or Gemini Flash |
| **Agent Mode** | Cloud models only | Claude Opus 4.5, GPT-5, or Kimi K2.5 (open source) |

<Warning>
**Local LLMs aren't powerful enough for most agentic tasks yet.** They're great for Chat — asking questions about a page, summarizing, etc. But agent tasks need strong reasoning to click the right elements and handle multi-step workflows. Use Claude Opus 4.5, GPT-5, or Kimi K2.5 for agents.
</Warning>

---

## Kimi K2.5 — In Partnership with Moonshot AI

{/* <img src="/images/moonshot-partnership-banner.png" alt="BrowserOS x Moonshot AI" className="rounded-xl" /> */}

BrowserOS has partnered with [Moonshot AI](https://www.kimi.com) to bring **Kimi K2.5** as a first-class provider. Kimi K2.5 is now the **recommended model** in BrowserOS and is set as the default provider.

For a limited time, BrowserOS users get **extended usage limits** powered by Kimi K2.5. This means you can use the AI agent, chat, and other AI-powered features with increased limits at no cost.

<CardGroup cols={2}>
<Card title="Open Source" icon="code-branch">
Fully open-source model you can inspect and trust.
</Card>
<Card title="Multimodal" icon="image">
Supports images out of the box, including screenshots and visual context.
</Card>
<Card title="Great for Agents" icon="robot">
Strong reasoning for browser automation, form filling, and multi-step workflows.
</Card>
<Card title="Affordable" icon="piggy-bank">
Excellent agentic performance at a fraction of the cost of other frontier models.
</Card>
</CardGroup>

<div id="moonshot" />

### Why Kimi K2.5?

Kimi K2.5 offers excellent performance for agentic tasks at a fraction of the cost of other frontier models. It supports images, has a 128,000-token context window, and delivers strong results on browser automation tasks. Combined with BrowserOS's open-source agent framework, this makes for a powerful and affordable AI browsing experience.

### Bring Your Own Kimi API Key

You can also bring your own Kimi API key if you want to use Kimi K2.5 beyond the extended usage period, or if you want your own dedicated limits.

**Get your API key:**
1. Go to [platform.moonshot.ai](https://platform.moonshot.ai) and create an account
2. Navigate to the **API keys** section in your dashboard
3. Click **Create new API key** and copy the key

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the **Moonshot AI** card
3. Enter your API key (it will be encrypted and stored locally on your machine)
4. The model is pre-configured to `kimi-k2.5` with a 128,000-token context window
5. Click **Save**
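Once saved, BrowserOS talks to Moonshot's OpenAI-compatible endpoint. If you want to sanity-check your key outside the browser first, here is a minimal sketch — the base URL and model ID come from the settings above, while the helper name and prompt are illustrative:

```python
import json

BASE_URL = "https://api.moonshot.ai/v1"  # pre-filled by the Moonshot AI template
MODEL_ID = "kimi-k2.5"                   # pre-configured model ID

def chat_payload(prompt: str, model: str = MODEL_ID) -> dict:
    """Build a minimal chat-completions body accepted by OpenAI-compatible APIs."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(chat_payload("Say hello."), indent=2))
# Sending it requires your API key, e.g. with the `requests` package:
# requests.post(f"{BASE_URL}/chat/completions",
#               headers={"Authorization": f"Bearer {api_key}"},
#               json=chat_payload("Say hello."))
```

A `200` response with a `choices` array confirms the key and base URL are good.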

<Tip>
The base URL for the Kimi API (`https://api.moonshot.ai/v1`) is pre-filled automatically when you select the Moonshot AI provider template.
</Tip>

---

## Cloud Providers

Connect to powerful AI models using your API keys. Your keys stay on your machine — requests go directly to the provider.

<AccordionGroup>
<div id="gemini" />
<Accordion title="Gemini (Free)" icon="google">
Gemini Flash is fast and free. Google gives you 20 requests per minute at no cost.

**Get your API key:**
1. Go to [aistudio.google.com](https://aistudio.google.com)
2. Click **Get API key** in the sidebar
3. Click **Create API key** and copy it

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Gemini card
3. Set **Model ID** to `gemini-2.5-flash` (or `gemini-2.5-pro`, `gemini-3-pro-preview`, `gemini-3-flash-preview`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `1000000`
6. Click **Save**
</Accordion>

<div id="claude" />
<Accordion title="Claude (Best for Agents)" icon="message-bot">
Claude Opus 4.5 gives the best results for Agent Mode.

**Get your API key:**
1. Go to [console.anthropic.com](https://console.anthropic.com/dashboard)
2. Click **API keys** in the sidebar
3. Click **Create Key** and copy it

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Anthropic card
3. Set **Model ID** to `claude-opus-4-5-20251101` (or `claude-sonnet-4-5-20250929`, `claude-haiku-4-5-20251001`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
6. Click **Save**
</Accordion>

<div id="openai" />
<Accordion title="OpenAI" icon="brain">
GPT-5 is OpenAI's most capable model for both chat and agent tasks.

**Get your API key:**
1. Go to [platform.openai.com](https://platform.openai.com)
2. Click the settings icon → **API keys**
3. Click **Create new secret key** and copy it

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI card
3. Set **Model ID** to `gpt-5` (or `gpt-5.2`, `gpt-5-mini`, `gpt-4.1`, `o4-mini`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
6. Click **Save**
</Accordion>

<div id="openrouter" />
<Accordion title="OpenRouter" icon="shuffle">
Access 500+ models through one API.

**Get your API key:**
1. Go to [openrouter.ai](https://openrouter.ai) and sign up
2. Go to [openrouter.ai/keys](https://openrouter.ai/keys) and create a key

**Pick a model:**
Go to [openrouter.ai/models](https://openrouter.ai/models) and copy the model ID you want (e.g., `anthropic/claude-opus-4.5`, `google/gemini-2.5-flash`).
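OpenRouter model IDs are `vendor/model` slugs. If you ever script against the model list — for example, to group models by vendor before setting context windows — a split like this is handy; the helper name is ours, not part of any OpenRouter API:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split an OpenRouter-style `vendor/model` slug into its two parts."""
    vendor, _, model = model_id.partition("/")
    return vendor, model

print(split_model_id("anthropic/claude-opus-4.5"))
# → ('anthropic', 'claude-opus-4.5')
```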

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenRouter card
3. Paste the model ID and your API key
4. Set **Context Window** based on the model
5. Click **Save**
</Accordion>

<div id="azure" />
<Accordion title="Azure OpenAI" icon="microsoft">
Use OpenAI models hosted in your own Azure subscription for enterprise compliance and data residency.

**Prerequisites:**
1. An Azure subscription with access to [Azure OpenAI Service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/OpenAI)
2. A deployed model (e.g., GPT-4o) in your Azure OpenAI resource

**Get your credentials:**
1. Go to [portal.azure.com](https://portal.azure.com) → your **Azure OpenAI** resource
2. Navigate to **Keys and Endpoint**
3. Copy **Key 1** and your **Endpoint URL**

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Azure card
3. Set **Base URL** to your Azure endpoint (e.g., `https://your-resource.openai.azure.com/openai/deployments/your-deployment`)
4. Set **Model ID** to your deployment name
5. Paste your API key
6. Check **Supports Images**, set **Context Window** to `128000`
7. Click **Save**
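The deployment-style Base URL in step 3 always follows the same shape, so you can assemble it from your resource and deployment names. A sketch with placeholder values (`my-resource` and `gpt-4o-deploy` are examples, not real names):

```python
def azure_base_url(resource: str, deployment: str) -> str:
    """Assemble the Azure OpenAI deployment endpoint used as the Base URL."""
    return f"https://{resource}.openai.azure.com/openai/deployments/{deployment}"

print(azure_base_url("my-resource", "gpt-4o-deploy"))
# → https://my-resource.openai.azure.com/openai/deployments/gpt-4o-deploy
```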

</Accordion>

<div id="bedrock" />
<Accordion title="AWS Bedrock" icon="aws">
Access Claude, Llama, and other models through your AWS account with IAM-based authentication.

**Prerequisites:**
1. An AWS account with [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) enabled
2. Model access granted in the Bedrock console for your desired models

**Get your credentials:**
1. Go to the [AWS Console](https://console.aws.amazon.com) → **IAM**
2. Create or use an existing access key with Bedrock permissions
3. Note your **Access Key ID**, **Secret Access Key**, and **Region**

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the AWS Bedrock card
3. Set **Base URL** to your Bedrock endpoint (region-specific)
4. Set **Model ID** to the Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`)
5. Paste your credentials
6. Check **Supports Images**, set **Context Window** to `200000`
7. Click **Save**
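Bedrock model IDs pack vendor, model name, and version into one string (`vendor.model-name:version`). A small parser makes the format explicit — it assumes the plain vendor-prefixed form shown in step 4, not region-prefixed inference profile IDs:

```python
def parse_bedrock_id(model_id: str) -> dict:
    """Split a Bedrock model ID like `anthropic.claude-3-sonnet-20240229-v1:0`."""
    vendor, _, rest = model_id.partition(".")
    name, _, version = rest.rpartition(":")
    return {"vendor": vendor, "model": name, "version": version}

print(parse_bedrock_id("anthropic.claude-3-sonnet-20240229-v1:0"))
# → {'vendor': 'anthropic', 'model': 'claude-3-sonnet-20240229-v1', 'version': '0'}
```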

</Accordion>

<div id="openai-compatible" />
<Accordion title="OpenAI Compatible" icon="plug">
Connect to any provider that implements the OpenAI-compatible API format (e.g., Together AI, Fireworks, Groq, Perplexity).

**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI Compatible card
3. Set **Base URL** to the provider's API endpoint
4. Set **Model ID** to the model you want to use
5. Paste your API key
6. Set **Supports Images** and **Context Window** based on the model
7. Click **Save**
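The same fields map onto any OpenAI-compatible provider; only the Base URL and Model ID change. A sketch of how the pieces combine into the endpoint a client actually calls — the provider entries below are illustrative, so confirm the URLs against each provider's docs:

```python
# Illustrative base URLs — verify against your provider's documentation.
PROVIDERS = {
    "groq": "https://api.groq.com/openai/v1",
    "together": "https://api.together.xyz/v1",
}

def chat_endpoint(provider: str) -> str:
    """Chat-completions endpoint derived from an OpenAI-compatible Base URL."""
    return PROVIDERS[provider].rstrip("/") + "/chat/completions"

print(chat_endpoint("groq"))
# → https://api.groq.com/openai/v1/chat/completions
```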

<Tip>
Most newer AI providers support the OpenAI-compatible API format. Check your provider's docs for the base URL and available model IDs.
</Tip>
</Accordion>
</AccordionGroup>

---

## Local Models

<Card title="Local Model Guide" icon="server" href="/features/local-models">
Run AI completely offline with Ollama or LM Studio. Includes recommended models, context length setup, and configuration steps.
</Card>

---

## Switching Between Models

Use the model switcher in the Assistant panel to change providers anytime. The default provider is highlighted.

<Tip>
Use local models for sensitive work data. Switch to Claude for agent tasks that need complex reasoning.
</Tip>