mirror of
https://github.com/browseros-ai/BrowserOS.git
synced 2026-05-13 15:46:22 +00:00
chore: update n8n docs (#288)
11
CLAUDE.md
Normal file
@@ -0,0 +1,11 @@
# Project Instructions

## Docs Image Workflow

When updating documentation that involves new screenshots or images:

1. Prompt the user to copy the image to their clipboard (Cmd+C)
2. Run: `python scripts/save_clipboard.py <target_path>`
3. Example: `python scripts/save_clipboard.py docs/images/agent-step.png`

This saves the clipboard image directly to the docs folder without manual file management.
BIN
docs/images/agent-step.png
(Stored with Git LFS)
Binary file not shown.
BIN
docs/images/client-step.png
(Stored with Git LFS)
Binary file not shown.
BIN
docs/images/mcp-node.png
(Stored with Git LFS)
Binary file not shown.
BIN
docs/images/n8n-step3-workflow.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
docs/images/n8n-step4-agent-prompt.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
docs/images/n8n-step5-mcp-client.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
docs/images/n8n-step6-test.png
(Stored with Git LFS)
Normal file
Binary file not shown.
@@ -38,60 +38,51 @@ Go to **Settings → Community Nodes** and install `n8n-nodes-mcp`:
 
 ### Step 3: Create Workflow
 
-Create a new workflow and add the **AI Agent** node. Then configure it:
+Create a new workflow with:
 
-1. In the AI Agent, add **Chat Model** (e.g., OpenAI Chat Model) and configure your API credentials
-2. In the AI Agent's **Tools** section, search for "mcp" and add **MCP Client Tool**
+1. Add a **Chat Trigger** node (this receives user messages)
+2. Add an **AI Agent** node and connect it to the trigger
+3. Under the AI Agent, click **Chat Model** and add your LLM (e.g., OpenAI Chat Model)
+4. Under the AI Agent, click **Tool** → search "mcp" → select **MCP Client Tool**
 
 <Frame>
-
+
 </Frame>
 
 ### Step 4: Configure AI Agent Prompt
 
-In the AI Agent node, set **Source for Prompt** to **Define below** and paste:
-
-```
-choose the right tool based on the user input:{{ $json.chatInput }}
-```
+In the AI Agent node, set **Source for Prompt (User Message)** to **Connected Chat Trigger Node**.
 
 <Frame>
-
+
 </Frame>
 
-### Step 5: Configure MCP Client Credentials
+### Step 5: Configure MCP Client
 
-Click on the **MCP Client** node to configure it:
+Click on the **MCP Client** node and configure:
 
-1. **Credential to connect with** → **Create New Credential**
-2. Select **Command Line (stdio)**
-3. **Command**: `npx`
-4. **Arguments**: `mcp-remote http://127.0.0.1:9101/mcp`
+1. **Endpoint**: Copy your MCP URL from BrowserOS (e.g., `http://127.0.0.1:9228/mcp`)
+2. **Server Transport**: `HTTP Streamable`
 
 <Note>
-Replace `9101` with your BrowserOS MCP port from **Settings → MCP** in BrowserOS.
+Find your MCP URL in BrowserOS under **Settings → MCP**.
 </Note>
 
-### Step 6: Configure MCP Tool Execution
-
-In the same MCP Client node:
-
-1. **Operation**: `Execute Tool`
-2. **Tool Name**: `{{ $fromAI("tool") }}`
-
 <Frame>
-
+
 </Frame>
 
-### Step 7: Test Workflow
+### Step 6: Test Workflow
 
-Save the workflow and test with a chat input:
+Save the workflow, click **Open chat**, and send a message:
 
 ```
-Open google.com in BrowserOS
+open google.com on browserOS
 ```
 
-Execute and watch BrowserOS respond!
+<Frame>
+
+</Frame>
 
 ## Troubleshooting
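The updated Step 5 points the MCP Client at BrowserOS's streamable-HTTP endpoint instead of a stdio `mcp-remote` bridge. For troubleshooting connectivity, a minimal stdlib sketch of the JSON-RPC `initialize` request such a client POSTs to that endpoint may help; the URL and port are the doc's example values, `build_initialize_request` is a hypothetical helper, and the `protocolVersion` string is an assumption to be checked against the MCP spec:

```python
import json
import urllib.request

def build_initialize_request(url: str) -> urllib.request.Request:
    """Build (but do not send) a JSON-RPC 'initialize' request for an
    MCP streamable-HTTP endpoint. Sketch only; field values are
    assumptions, verify against the MCP specification."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed spec revision
            "capabilities": {},
            "clientInfo": {"name": "probe", "version": "0.1"},
        },
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            # Streamable HTTP servers may reply with JSON or SSE.
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )

req = build_initialize_request("http://127.0.0.1:9228/mcp")
# urllib infers POST whenever `data` is set.
print(req.get_method(), json.loads(req.data)["method"])  # prints: POST initialize
```

Sending it with `urllib.request.urlopen(req)` against the URL shown under **Settings → MCP** is a quick check that the BrowserOS MCP server is reachable before wiring up n8n.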
39
scripts/save_clipboard.py
Normal file
@@ -0,0 +1,39 @@
#!/usr/bin/env python3
"""
Save clipboard image to a specified path.
Usage: python scripts/save_clipboard.py <output_path>
"""
import sys
import os

try:
    from PIL import ImageGrab
except ImportError:
    print("Installing Pillow...")
    import subprocess
    subprocess.check_call([sys.executable, "-m", "pip", "install", "Pillow", "-q"])
    from PIL import ImageGrab


def main():
    if len(sys.argv) != 2:
        print("Usage: python scripts/save_clipboard.py <output_path>")
        print("Example: python scripts/save_clipboard.py docs/images/screenshot.png")
        sys.exit(1)

    output_path = sys.argv[1]

    # Ensure directory exists
    os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)

    # Grab from clipboard
    img = ImageGrab.grabclipboard()

    if img is None:
        print("❌ No image in clipboard. Copy an image first (Cmd+C).")
        sys.exit(1)

    img.save(output_path)
    print(f"✅ Saved to {output_path}")


if __name__ == "__main__":
    main()
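One detail in the script worth noting: `os.path.dirname(output_path)` is empty for a bare filename, so the `or "."` fallback keeps `os.makedirs` from raising on an empty path. A standalone sketch of that idiom (no Pillow or clipboard needed; `ensure_parent_dir` is a name invented here for illustration):

```python
import os
import tempfile

def ensure_parent_dir(path: str) -> str:
    """Create the parent directory for `path`, treating a bare
    filename as the current directory. Returns the ensured dir."""
    parent = os.path.dirname(path) or "."
    os.makedirs(parent, exist_ok=True)
    return parent

# A bare filename has no dirname, so "." is used and nothing is created.
print(ensure_parent_dir("screenshot.png"))  # prints: .

# A nested path gets its intermediate directories created on demand.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "docs", "images", "shot.png")
    ensure_parent_dir(target)
    print(os.path.isdir(os.path.dirname(target)))  # prints: True
```

Without the `or "."` fallback, `os.makedirs("")` would raise `FileNotFoundError`, which is why the script can be pointed at either `shot.png` or `docs/images/shot.png`.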