mirror of
https://github.com/browseros-ai/BrowserOS.git
synced 2026-05-13 15:46:22 +00:00
docs: update BYOLLM and onboarding page
@@ -5,238 +5,176 @@ description: "Connect your own AI models to BrowserOS"
BrowserOS includes a default AI model you can use right away, but it has strict rate limits. For the best experience, bring your own API keys or run models locally.

**Why bring your own?**

- Your API keys stay on your machine — requests go directly to the provider
- No rate limits from BrowserOS — use it as much as you want
- Run locally with Ollama for complete privacy

## Which Model Should I Use?

This is the most important thing to understand:

| Mode | What works | Recommendation |
|------|------------|----------------|
| **Chat Mode** | Any model, including local | Ollama or Gemini Flash — fast and cheap |
| **Agent Mode** | Cloud models only | Claude Opus 4.5 for best results |

<Warning>
**Local LLMs don't work for Agent Mode yet.** They're great for Chat — asking questions about a page, summarizing, etc. But agent tasks need strong reasoning to click the right elements and handle multi-step workflows. Use Claude Opus 4.5 or Sonnet 4.5 for agents.
</Warning>

**For Agent Mode, we recommend:**

- **Claude Opus 4.5** — Best quality, slower
- **Claude Sonnet 4.5** — Great quality, faster
- **Claude Haiku 4.5** or **Gemini 3 Flash** — Good and fast

---

## Cloud Providers

Connect to powerful AI models using your API keys. Your keys stay on your machine — requests go directly to the provider.

<AccordionGroup>
<Accordion title="Gemini (Free)" icon="google" defaultOpen={true}>
Gemini Flash is fast and free. Google gives you 20 requests per minute at no cost.

**Get your API key:**

1. Go to [aistudio.google.com](https://aistudio.google.com)
2. Click **Get API key** in the sidebar
3. Click **Create API key** and copy it

![Google AI Studio](/images/llm/gemini-step1.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the Gemini card
3. Set **Model ID** to `gemini-2.5-flash-preview-05-20`
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `1000000`
6. Click **Save**

![Gemini settings in BrowserOS](/images/llm/gemini-step3.png)

![Gemini configured](/images/llm/gemini.png)
</Accordion>

<Accordion title="Claude (Best for Agents)" icon="message-bot">
Claude Opus 4.5 gives the best results for Agent Mode. It's slower but handles complex tasks reliably.

**Get your API key:**

1. Go to [console.anthropic.com](https://console.anthropic.com/dashboard)
2. Click **API keys** in the sidebar
3. Click **Create Key** and copy it

![Anthropic Console](/images/llm/claude-step1.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the Anthropic card
3. Set **Model ID** to `claude-opus-4-5-20250514` (or `claude-sonnet-4-5-20250514` for faster responses)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
6. Click **Save**

![Claude settings in BrowserOS](/images/llm/claude.png)
</Accordion>

<Accordion title="OpenAI" icon="brain">
GPT-4.1 is solid for both chat and agent tasks.

**Get your API key:**

1. Go to [platform.openai.com](https://platform.openai.com)
2. Click the settings icon → **API keys**
3. Click **Create new secret key** and copy it (you won't see it again)

![OpenAI platform](/images/llm/openai-step1.png)

![Create a secret key](/images/llm/openai-step2.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI card
3. Set **Model ID** to `gpt-4.1` (or `gpt-4.1-mini` for cheaper)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `128000`
6. Click **Save**

![OpenAI settings in BrowserOS](/images/llm/openai.png)
</Accordion>

<Accordion title="OpenRouter" icon="shuffle">
Access 500+ models through one API. Good if you want to try different models.

**Get your API key:**

1. Go to [openrouter.ai](https://openrouter.ai) and sign up
2. Copy your API key from the homepage

![OpenRouter homepage](/images/llm/openrouter-step1.png)

**Pick a model:**

Go to [openrouter.ai/models](https://openrouter.ai/models) and copy the model ID you want (e.g., `anthropic/claude-opus-4.5`).

![OpenRouter models](/images/llm/openrouter-step2.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenRouter card
3. Paste the model ID and your API key
4. Set **Context Window** based on the model
5. Click **Save**

![OpenRouter settings in BrowserOS](/images/llm/openrouter.png)
</Accordion>
</AccordionGroup>
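Under the hood, every provider above is just an HTTPS endpoint that BrowserOS calls directly with your key, which is why the key never needs to leave your machine. As a rough illustration (a sketch, not BrowserOS's actual client code), here is a minimal Gemini request built with the Python standard library; the endpoint and payload shape follow Google's public `generateContent` REST API, and the model ID is the one from the settings above:

```python
import json
from urllib import request

API_KEY = "YOUR_GEMINI_API_KEY"  # the key you created in AI Studio
MODEL = "gemini-2.5-flash-preview-05-20"  # same Model ID as in BrowserOS settings

def build_gemini_request(prompt: str) -> request.Request:
    # Gemini's REST API takes the key as a query parameter; the body is a list
    # of "contents", each holding text "parts".
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{MODEL}:generateContent?key={API_KEY}"
    )
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

req = build_gemini_request("Summarize this page in one sentence.")
# request.urlopen(req) would send it; omitted so the sketch runs offline
print(req.full_url)
```

The request goes straight to `generativelanguage.googleapis.com`; no BrowserOS server sits in between.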

---

## Local Models

<Note>
**Run AI completely offline.** Local models are free, private, and your data never leaves your machine. Perfect for Chat Mode with sensitive data.
</Note>

<AccordionGroup>
<Accordion title="Ollama" icon="terminal" defaultOpen={true}>
The easiest way to run models locally.

**Setup:**

1. Download from [ollama.com](https://ollama.com)
2. Pull a model:
   ```bash
   ollama pull llama3.2
   ```
3. Start Ollama:
   ```bash
   ollama serve
   ```

![Pull a model](/images/llm/ollama-step2.png)

![Start the Ollama server](/images/llm/ollama-step3.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the Ollama card
3. Set **Model ID** to `llama3.2`
4. Click **Save**

![Ollama settings in BrowserOS](/images/llm/ollama.png)

**Recommended models:** `llama3.2`, `qwen3:8b`, `mistral`
</Accordion>

<Accordion title="LM Studio" icon="desktop">
A nice GUI if you don't want to use the terminal.

**Setup:**

1. Download from [lmstudio.ai](https://lmstudio.ai)
2. Open LM Studio → **Developer** tab → load a model
3. It runs a local server at `http://localhost:1234/v1/`

![Load a model](/images/llm/lmstudio-step1.png)

![Developer tab](/images/llm/lmstudio-step2.png)

**Add to BrowserOS:**

1. Go to `chrome://browseros/settings`
2. Click **USE** on the **OpenAI Compatible** card
3. Set **Base URL** to `http://localhost:1234/v1/`
4. Set **Model ID** to the model you loaded
5. Set **Context Window** to match your LM Studio config
6. Click **Save**

![LM Studio settings in BrowserOS](/images/llm/lmstudio.png)

![LM Studio server config](/images/llm/lmstudio-config.png)
</Accordion>
</AccordionGroup>
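Both local setups share one idea: a server on `localhost` speaking an HTTP API. The sketch below (an illustration, not BrowserOS's actual code; the model name is a hypothetical placeholder for whatever you loaded) checks whether a local server is reachable and shows the OpenAI-style chat-completions request shape that "OpenAI Compatible" refers to. The same shape with base URL `https://openrouter.ai/api/v1/` plus a `Bearer` key also works for OpenRouter:

```python
import json
from urllib import error, request

def server_is_up(url: str) -> bool:
    # Quick reachability probe: Ollama answers on http://localhost:11434/api/tags,
    # LM Studio on http://localhost:1234/v1/models.
    try:
        with request.urlopen(url, timeout=2):
            return True
    except (error.URLError, OSError):
        return False

def build_chat_request(base_url: str, model: str, user_msg: str,
                       api_key: str = "") -> request.Request:
    # OpenAI-compatible servers expose POST <base>/chat/completions.
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers usually ignore auth; cloud ones (e.g. OpenRouter) need it
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    }).encode()
    return request.Request(base_url + "chat/completions", data=body, headers=headers)

print("Ollama running:", server_is_up("http://localhost:11434/api/tags"))
req = build_chat_request("http://localhost:1234/v1/", "llama-3.2-3b-instruct", "Hello!")
print(req.full_url)  # http://localhost:1234/v1/chat/completions
```

If the probe prints `False`, start the server first (`ollama serve`, or load a model in LM Studio's Developer tab).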

---

## Switching Between Models

Use the model switcher in the Assistant panel to change providers anytime. The default provider is highlighted.

![Model switcher in the Assistant](/images/llm/model-switcher.png)

<Tip>
Use local models for sensitive work data. Switch to Claude for agent tasks that need complex reasoning.
</Tip>

@@ -1,30 +1,54 @@
---
title: "Getting Started"
description: "Set up BrowserOS in 2 minutes"
---

Welcome to BrowserOS! Let's get you set up.

<Steps>
<Step title="Import from Chrome">
Bring your bookmarks, history, and passwords from Chrome.

1. Go to `chrome://settings/importData`
2. Select **Google Chrome** and click **Import**
3. Choose **Always allow** when prompted

<Tip>
This imports everything in one click — bookmarks, passwords, history, and extensions.
</Tip>
</Step>

<Step title="Set up your AI">
BrowserOS includes a default AI model with limited daily usage. For the best experience, add your own.

**Quick option:** Get a free Gemini API key from [aistudio.google.com](https://aistudio.google.com) — 20 requests per minute at no cost.

<Card title="Configure your LLM" icon="key" href="/features/bring-your-own-llm">
Set up Gemini, Claude, OpenAI, or run models locally with Ollama
</Card>
</Step>

<Step title="Try it out">
Open any webpage and click the **Assistant** button in the toolbar.

- **Chat Mode:** Ask questions about the page
- **Agent Mode:** Describe a task and watch it execute

<Note>
For Agent Mode, use Claude Opus 4.5 or Sonnet 4.5. Local models work great for Chat but aren't powerful enough for agents yet.
</Note>
</Step>
</Steps>

## You're all set!

Explore what BrowserOS can do. This page can always be accessed again at `chrome://browseros-first-run`.

<Columns cols={2}>
<Card title="LLM Chat & Hub" icon="message" href="/features/llm-chat-hub">
Chat with AI on any page, compare responses across models
</Card>
<Card title="Use with Claude Code" icon="terminal" href="/features/use-with-claude-code">
Control BrowserOS from Claude Code or Gemini CLI
</Card>
</Columns>