diff --git a/docs/features/bring-your-own-llm.mdx b/docs/features/bring-your-own-llm.mdx
index 8b57b4a87..cdd303249 100644
--- a/docs/features/bring-your-own-llm.mdx
+++ b/docs/features/bring-your-own-llm.mdx
@@ -5,238 +5,176 @@ description: "Connect your own AI models to BrowserOS"
BrowserOS includes a default AI model you can use right away, but it has strict rate limits. For the best experience, bring your own API keys or run models locally.
-**Why bring your own?**
-- Your API keys stay on your machine — requests go directly to the provider
-- No rate limits from BrowserOS — use it as much as you want
-- Run locally with Ollama for complete privacy
-
## Which Model Should I Use?
-This is the most important thing to understand:
-
| Mode | What works | Recommendation |
|------|------------|----------------|
-| **Chat Mode** | Any model, including local | Ollama or Gemini Flash — fast and cheap |
-| **Agent Mode** | Cloud models only | Claude Opus 4.5 for best results |
+| **Chat Mode** | Any model, including local | Ollama or Gemini Flash |
+| **Agent Mode** | Cloud models only | Claude Opus 4.5 |
-Local LLMs are great for Chat Mode — asking questions about a page, summarizing, etc. But they're not powerful enough for Agent Mode yet. Agent tasks need strong reasoning to click the right elements and handle multi-step workflows.
+**Local LLMs don't work for Agent Mode yet.** They're great for Chat — asking questions about a page, summarizing, etc. But agent tasks need strong reasoning to click the right elements and handle multi-step workflows. Use Claude Opus 4.5 or Sonnet 4.5 for agents.
-**For Agent Mode, we recommend:**
-- **Claude Opus 4.5** — Best quality, slower
-- **Claude Sonnet 4.5** — Great quality, faster
-- **Claude Haiku 4.5** or **Gemini 3 Flash** — Good and fast
+---
## Cloud Providers
-
-
- Gemini 3 Flash is fast and free. Google gives you 20 requests per minute at no cost.
+Connect to powerful AI models using your API keys. Your keys stay on your machine — requests go directly to the provider.
- ### Get your API key
+
+
+ Gemini Flash is fast and free. Google gives you 20 requests per minute at no cost.
-
-
- Visit [aistudio.google.com](https://aistudio.google.com) and click **Get API key** in the sidebar.
+ **Get your API key:**
+ 1. Go to [aistudio.google.com](https://aistudio.google.com)
+ 2. Click **Get API key** in the sidebar
+ 3. Click **Create API key** and copy it
- 
-
-
- Click **Create API key**, name your project, and click **Create key**.
+ 
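+  To sanity-check the key before adding it (an optional sketch — `$GEMINI_API_KEY` is a placeholder for your real key), list the available models:
+
+  ```bash
+  # A JSON list of Gemini models means the key is valid
+  curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
+  ```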
- 
-
-
- Click on your key to copy it.
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the Gemini card
+ 3. Set **Model ID** to `gemini-2.5-flash-preview-05-20`
+ 4. Paste your API key
+ 5. Check **Supports Images**, set **Context Window** to `1000000`
+ 6. Click **Save**
- 
-
-
+ 
+
- ### Add to BrowserOS
+
+ Claude Opus 4.5 gives the best results for Agent Mode.
-
-
- Go to `chrome://browseros/settings` and click **USE** on the Gemini card.
+ **Get your API key:**
+ 1. Go to [console.anthropic.com](https://console.anthropic.com/dashboard)
+ 2. Click **API keys** in the sidebar
+ 3. Click **Create Key** and copy it
- 
-
-
- - Set **Model ID** to `gemini-2.5-flash-preview-05-20`
- - Paste your API key
- - Check **Supports Images**
- - Set **Context Window** to `1000000`
- - Click **Save**
+ 
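+  To verify the key before adding it (an optional sketch — `$ANTHROPIC_API_KEY` is a placeholder for your real key), send a minimal request:
+
+  ```bash
+  # A short test message; a JSON reply means the key is valid
+  curl -s https://api.anthropic.com/v1/messages \
+    -H "x-api-key: $ANTHROPIC_API_KEY" \
+    -H "anthropic-version: 2023-06-01" \
+    -H "content-type: application/json" \
+    -d '{"model": "claude-opus-4-5-20250514", "max_tokens": 32,
+         "messages": [{"role": "user", "content": "Hello"}]}'
+  ```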
- 
-
-
-
-
-
- Claude Opus 4.5 gives the best results for Agent Mode. It's slower but handles complex tasks reliably.
-
- ### Get your API key
-
-
-
- Visit [console.anthropic.com](https://console.anthropic.com/dashboard) and click **API keys** in the sidebar.
-
- 
-
-
- Click **Create Key**, name it, and copy the key that appears.
-
- 
-
-
-
- ### Add to BrowserOS
-
- Go to `chrome://browseros/settings`, click **USE** on the Claude card, and configure:
-
- - **Model ID**: `claude-opus-4-5-20250514` (or `claude-sonnet-4-5-20250514` for faster)
- - Paste your API key
- - Check **Supports Images**
- - Set **Context Window** to `200000`
- - Click **Save**
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the Anthropic card
+ 3. Set **Model ID** to `claude-opus-4-5-20250514`
+ 4. Paste your API key
+ 5. Check **Supports Images**, set **Context Window** to `200000`
+ 6. Click **Save**

-
+
-
+
GPT-4.1 is solid for both chat and agent tasks.
- ### Get your API key
+ **Get your API key:**
+ 1. Go to [platform.openai.com](https://platform.openai.com)
+   2. Click the settings icon → **API keys**
+ 3. Click **Create new secret key** and copy it
-
-
- Visit [platform.openai.com](https://platform.openai.com), click the settings icon, then **API keys**.
+ 
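+  To confirm the key works before adding it (an optional sketch — `$OPENAI_API_KEY` is a placeholder for your real key), list the models you can access:
+
+  ```bash
+  # A JSON list of model IDs means the key is valid
+  curl -s https://api.openai.com/v1/models \
+    -H "Authorization: Bearer $OPENAI_API_KEY"
+  ```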
- 
-
-
- Click **Create new secret key**, name it, and copy it. You won't see it again.
-
- 
-
-
-
- ### Add to BrowserOS
-
- Go to `chrome://browseros/settings`, click **USE** on the OpenAI card, and configure:
-
- - **Model ID**: `gpt-4.1` (or `gpt-4.1-mini` for cheaper)
- - Paste your API key
- - Check **Supports Images**
- - Set **Context Window** to `128000`
- - Click **Save**
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the OpenAI card
+ 3. Set **Model ID** to `gpt-4.1`
+ 4. Paste your API key
+ 5. Check **Supports Images**, set **Context Window** to `128000`
+ 6. Click **Save**

-
+
-
- OpenRouter gives you access to 500+ models through one API. Good if you want to try different models.
+
+ Access 500+ models through one API.
- ### Get your API key
+ **Get your API key:**
+ 1. Go to [openrouter.ai](https://openrouter.ai) and sign up
+ 2. Copy your API key from the homepage
- Visit [openrouter.ai](https://openrouter.ai), sign up, and grab your API key from the homepage.
-
- 
-
- ### Pick a model
-
- Go to [openrouter.ai/models](https://openrouter.ai/models) and find a model you want. Copy the model ID (like `anthropic/claude-opus-4.5`).
+ **Pick a model:**
+ Go to [openrouter.ai/models](https://openrouter.ai/models) and copy the model ID you want (e.g., `anthropic/claude-opus-4.5`).
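+  To check that your key can reach the model (an optional sketch — `$OPENROUTER_API_KEY` is a placeholder for your real key), send a test request with the model ID you copied:
+
+  ```bash
+  # Swap in whichever model ID you picked
+  curl -s https://openrouter.ai/api/v1/chat/completions \
+    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
+    -H "Content-Type: application/json" \
+    -d '{"model": "anthropic/claude-opus-4.5",
+         "messages": [{"role": "user", "content": "Hello"}]}'
+  ```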

- ### Add to BrowserOS
-
- Go to `chrome://browseros/settings`, click **USE** on the OpenRouter card, and configure:
-
- - **Model ID**: The one you copied (e.g., `anthropic/claude-opus-4.5`)
- - Paste your API key
- - Set **Context Window** based on the model
- - Click **Save**
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the OpenRouter card
+ 3. Paste the model ID and your API key
+ 4. Set **Context Window** based on the model
+ 5. Click **Save**

-
-
+
+
+
+---
## Local Models
-Local models are free and private — your data never leaves your machine. They work great for Chat Mode.
+
+**Run AI completely offline.** Local models are free, private, and your data never leaves your machine. Perfect for Chat Mode with sensitive data.
+
-
-
- Ollama is the easiest way to run models locally.
+
+
+ The easiest way to run models locally.
- ### Setup
+ **Setup:**
+ 1. Download from [ollama.com](https://ollama.com)
+ 2. Pull a model:
+ ```bash
+ ollama pull llama3.2
+ ```
+ 3. Start Ollama:
+ ```bash
+ ollama serve
+ ```
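+  With the server running, you can confirm Ollama is reachable (an optional check):
+
+  ```bash
+  # Lists the models you've pulled; confirms the server is up on the default port
+  curl -s http://localhost:11434/api/tags
+  ```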
-
-
- Download from [ollama.com](https://ollama.com) and install it.
-
-
- Open your terminal and run:
- ```bash
- ollama pull llama3.2
- ```
-
-
- ```bash
- ollama serve
- ```
+ 
- 
-
-
- Go to `chrome://browseros/settings`, click **Add Provider**, select **Ollama**, and set the model ID to `llama3.2`.
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the Ollama card
+ 3. Set **Model ID** to `llama3.2`
+ 4. Click **Save**
- 
-
-
+ 
**Recommended models:** `llama3.2`, `qwen3:8b`, `mistral`
-
+
-
- LM Studio has a nice GUI if you don't want to use the terminal.
+
+  A nice GUI if you don't want to use the terminal.
- ### Setup
+ **Setup:**
+ 1. Download from [lmstudio.ai](https://lmstudio.ai)
+ 2. Open LM Studio → **Developer** tab → load a model
+ 3. It runs a server at `http://localhost:1234/v1/`
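+  Since the server is OpenAI-compatible, you can confirm it's up before configuring BrowserOS (an optional check):
+
+  ```bash
+  # Returns the loaded model(s) if the LM Studio server is running
+  curl -s http://localhost:1234/v1/models
+  ```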
-
-
- Get it from [lmstudio.ai](https://lmstudio.ai) and install.
-
-
- Open LM Studio, go to the **Developer** tab, and load a model.
+ 
- 
-
-
- LM Studio runs a local server at `http://localhost:1234/v1/`.
+ **Add to BrowserOS:**
+ 1. Go to `chrome://browseros/settings`
+ 2. Click **USE** on the **OpenAI Compatible** card
+ 3. Set **Base URL** to `http://localhost:1234/v1/`
+ 4. Set **Model ID** to the model you loaded
+ 5. Set **Context Window** to match your LM Studio config
+ 6. Click **Save**
- 
-
-
- Go to `chrome://browseros/settings`, click **Add Provider**, select **OpenAI Compatible**, and set:
- - **Base URL**: `http://localhost:1234/v1/`
- - **Model ID**: The model you loaded
- - **Context Window**: Match your LM Studio config
+ 
+
+
- 
-
-
-
-
+---
-## Switching Models
+## Switching Between Models
-Use the model switcher in the Assistant to change between providers anytime.
-
-- Use local models for sensitive data
-- Switch to Claude for agent tasks
+Use the model switcher in the Assistant panel to change providers anytime. The default provider is highlighted.

+
+
+Use local models for sensitive work data. Switch to Claude for agent tasks that need complex reasoning.
+
diff --git a/docs/onboarding.mdx b/docs/onboarding.mdx
index 91303b4ff..b696a7621 100644
--- a/docs/onboarding.mdx
+++ b/docs/onboarding.mdx
@@ -1,30 +1,54 @@
---
-title: "Onboarding Guide"
-description: "Get started with BrowserOS in 2 simple steps"
+title: "Getting Started"
+description: "Set up BrowserOS in 2 minutes"
---
-## 🚀 Getting Started
+Welcome to BrowserOS! Let's get you set up.
-### Step 1: Import your data from Chrome
+
+
+ Bring your bookmarks, history, and passwords from Chrome.
-Bring your bookmarks, history, and settings over from Chrome.
+ 1. Go to `chrome://settings/importData`
+ 2. Select **Google Chrome** and click **Import**
+ 3. Choose **Always allow** when prompted
-1. Navigate to `chrome://settings/importData`
-2. Click **Import**
-3. Follow the prompts and choose **Always allow** when asked to import everything at once
+
+  This imports everything in one go — bookmarks, passwords, history, and extensions.
+
+
-
- This imports your bookmarks, history, passwords, and settings in one go.
-
+
+ BrowserOS includes a default AI model with limited daily usage. For the best experience, add your own.
-### Step 2: BYOK (Bring Your Own Keys)
+ **Quick option:** Get a free Gemini API key from [aistudio.google.com](https://aistudio.google.com) — 20 requests per minute at no cost.
-BrowserOS comes with a default LLM provider to test things out. But you have full control over your AI models! Navigate to `chrome://browseros/settings` or click the **Settings** icon on the new tab page to configure your own API keys for various providers.
+
+ Set up Gemini, Claude, OpenAI, or run models locally with Ollama
+
+
-
- Set up Gemini, Claude, OpenAI, or run models locally with Ollama
-
+
+ Open any webpage and click the **Assistant** button in the toolbar.
-### All done!
+ - **Chat Mode:** Ask questions about the page
+ - **Agent Mode:** Describe a task and watch it execute
-You're ready to use BrowserOS! This page can always be accessed again at `chrome://browseros-first-run`
+
+ For Agent Mode, use Claude Opus 4.5 or Sonnet 4.5. Local models work great for Chat but aren't powerful enough for agents yet.
+
+
+
+
+## You're all set!
+
+Explore what BrowserOS can do:
+
+
+
+ Chat with AI on any page, compare responses across models
+
+
+ Control BrowserOS from Claude Code or Gemini CLI
+
+