mirror of https://github.com/browseros-ai/BrowserOS.git
synced 2026-05-14 08:03:58 +00:00

Compare commits: feat/db-fo ... fix/dev-se (2 commits)

| Author | SHA1 | Date |
|---|---|---|
|  | 8cd7c985e3 |  |
|  | 548c9dd996 |  |
@@ -1,152 +0,0 @@
---
name: ask-internal
description: Answer questions about BrowserOS internal stuff (setup, features, architecture, design decisions) by reading the private internal-docs submodule and the codebase. Use for "how do I X", "where is Y", "what is the deal with Z", or any question that mixes ops/setup knowledge with code knowledge. Can execute steps with per-command confirmation.
allowed-tools: Bash, Read, Grep, Glob, Edit, Write
---

# Ask Internal

Answer team-internal questions by reading `.internal-docs/` and the codebase, synthesizing a direct answer with file:line citations, and optionally running surfaced commands with confirmation.

**Announce at start:** "I'm using the ask-internal skill to answer this from internal-docs and the codebase."

## When to use

- "How do I reset my dogfood profile?"
- "What's the deal with the OpenClaw VM startup?"
- "Where do we configure release signing?"
- Any question whose answer lives in setup runbooks, feature notes, architecture docs, or the code that produced them.

## Hard rules — never do these

- NEVER execute a state-mutating command without per-command `y` confirmation from the user.
- NEVER edit BrowserOS code in response to an ask-internal question. The skill answers; it does not modify code. Use `/document-internal` for writes.
- NEVER guess. If grep finds nothing useful in docs or code, say so plainly.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER cite a file or line number you have not actually read.

## Voice rules

Apply the same voice rules as `document-internal` to the synthesized answer:

- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences. Active voice. No em dashes.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- No filler intros.

## Workflow

### Step 0: Pre-flight

```bash
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] && [ -n "$(ls -A .internal-docs 2>/dev/null)" ] || {
  echo ".internal-docs/ missing or empty. Submodule not configured?"
  exit 0
}
```

### Step 1: Parse the question

Pull the keywords from the user's question. Drop stop words. Identify intent:

- **Setup-question** ("how do I", "how to", "where do I configure"): bias the search toward `setup/`.
- **Feature-question** ("what is X", "why does X work this way"): bias toward `features/` and `architecture/`.
- **Free-form** ("anything about Y"): search all categories.
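The intent buckets above can be sketched as a small dispatcher. This is an illustration only, not part of the skill; the function name and trigger phrases are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical helper: classify a question into a search bias.
# The phrase lists are assumptions; the skill does this judgment implicitly.
classify_intent() {
  local q
  q=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$q" in
    *"how do i"*|*"how to"*|*"where do i configure"*) echo "setup" ;;
    *"what is"*|*"why does"*)                         echo "feature" ;;
    *)                                                echo "free-form" ;;
  esac
}

classify_intent "How do I reset my dogfood profile?"   # setup
classify_intent "Why does the VM restart on login?"    # feature
classify_intent "Anything about release signing?"      # free-form
```

The free-form fallback matters: a question that matches no phrase still gets searched, just across all categories.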
### Step 2: Multi-source search

Run grep in parallel across two sources.

**Internal docs:**

```bash
grep -rni --include='*.md' '<keyword>' .internal-docs/
```

Search each keyword separately. Collect top hits by relevance (more keyword matches = higher).

**Codebase (skip vendored Chromium and `node_modules`):**

```bash
grep -rni --include='*.ts' --include='*.tsx' --include='*.js' --include='*.json' --include='*.sh' \
  --exclude-dir=node_modules --exclude-dir=chromium --exclude-dir=.grove \
  '<keyword>' packages/ scripts/ .config/ .github/
```

Read the top 3-5 doc hits and top 3-5 code hits. Do not skim — read the relevant section fully so citations are accurate.
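The "more keyword matches = higher" ranking can be approximated with a standard pipeline. A minimal sketch, assuming keywords were already extracted in Step 1; the helper name is hypothetical and the skill does not literally run this script:

```bash
#!/usr/bin/env bash
# Rank files by how many distinct keywords they contain.
# grep -l prints each matching file once per keyword, so uniq -c
# counts distinct keywords matched per file.
rank_hits() {  # usage: rank_hits <dir> <keyword>...
  local dir=$1; shift
  for kw in "$@"; do
    grep -rli --include='*.md' "$kw" "$dir" 2>/dev/null
  done | sort | uniq -c | sort -rn
}
```

A file matching three keywords sorts above a file matching one, which is exactly the "top hits by relevance" ordering described above.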
### Step 3: Synthesize answer

Structure the response:

1. **Direct answer.** First sentence answers the question. No preamble.
2. **Steps if applicable.** Numbered list with exact commands.
3. **Citations.** Every factual claim references `path/to/file.md:42` or `path/to/code.ts:117`. Run the voice self-check before printing.

If multiple docs cover the topic at different layers (e.g., a setup runbook and a feature note both mention dogfood profiles), reconcile them in the answer rather than dumping both.

### Step 4: Offer execution (only if commands surfaced)

If Step 3 produced executable commands the user could run, ask:

> Run these for you? (y / n / dry-run)

- **y:** Execute one at a time. For any command that mutates state (writes a file, modifies config, kills a process, deletes anything), ask "run this? <command>" before each. Read-only commands (`ls`, `cat`, `git status`) run without per-command confirmation but still print before running.
- **n:** Skip. Done.
- **dry-run:** Print the full sequence as a `bash` block. Do not execute.
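The **y** path above can be sketched as a loop. This is a hedged illustration, not the skill's mechanism: the `ro`/`rw` tagging of commands and the reply file are assumptions made so the sketch is self-contained and testable:

```bash
#!/usr/bin/env bash
# Run a plan of commands, confirming each mutating ("rw") command
# individually. Read-only ("ro") commands print and run unprompted.
# $1: plan file, lines of "<ro|rw> <command>"; $2: file of y/n replies.
run_with_confirmations() {
  local plan=$1 replies=$2
  exec 3<"$replies"
  while read -r kind cmd; do
    printf '%s\n' "$cmd"            # always print before running
    if [ "$kind" = "rw" ]; then
      printf 'run this? %s (y/n) ' "$cmd"
      read -r reply <&3 || reply=n
      if [ "$reply" != "y" ]; then echo "skipped: $cmd"; continue; fi
    fi
    eval "$cmd"
  done <"$plan"
  exec 3<&-
}
```

The point the sketch makes concrete: a single "y" answers the sequence, but every mutation still gets its own prompt.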
### Step 5: Doc-not-found path

If Step 2 returned nothing useful (no doc hits AND no clear code answer):

1. Tell the user: "No doc covers this. Tangentially relevant files: <list>."
2. Ask: "Draft a new doc and open a PR to internal-docs?"
3. On yes: invoke the full `/document-internal` flow (four sharp questions, draft, voice check, PR), forced to `setup/` doc type, with the code-grep findings handed in as initial context.

### Step 6: Completion status

Report one of:

- **DONE** — answer delivered, citations verified.
- **DONE_WITH_CONCERNS** — answered, but flag uncertainty (e.g., docs and code disagreed; user should reconcile).
- **BLOCKED** — submodule missing or other pre-flight failure.
- **NEEDS_CONTEXT** — question too vague to search effectively. Ask one clarifying question.

## Citation discipline

Every "X is at Y" claim in the answer must point to a file:line that the skill actually read. Do not approximate. If you didn't read it, don't cite it.

If a doc says one thing and the code says another, surface the conflict explicitly:

> The setup runbook (`setup/dogfood-profile.md:23`) says to delete `~/.cache/browseros/dogfood`, but the actual code path in `packages/cli/src/cleanup.ts:47` removes `~/.local/share/browseros/dogfood`. The doc looks stale. Recommend updating it.

## Common Mistakes

**Skimming and then citing**
- **Problem:** Citation points to a line that doesn't actually contain the claim.
- **Fix:** Read the section fully before citing. If you didn't read line 117, don't cite line 117.

**Executing without per-command confirmation for mutations**
- **Problem:** User says "y" to "run all", skill blasts through `rm -rf`-style commands.
- **Fix:** "y" means "run this sequence with per-mutation confirmations". Per-command y is required for writes.

**Searching only docs, not code**
- **Problem:** Doc says X but code does Y; answer is wrong.
- **Fix:** Always grep both sources in Step 2.

## Red Flags

**Never:**
- Cite a file:line you haven't read.
- Run mutations without per-command confirmation.
- Modify BrowserOS code from this skill (use `/document-internal` for writes).

**Always:**
- Pre-flight check before any search.
- Reconcile doc vs code conflicts in the answer, don't hide them.
- Plain "no doc covers this" when grep is empty — never invent.
@@ -1,208 +0,0 @@
---
name: document-internal
description: Draft a 1-page internal doc (feature, architecture, or design) for the private browseros-ai/internal-docs repo. Use when wrapping up a feature on a branch, after the PR is open or about to be opened. Skill drafts from the diff, asks four sharp questions, enforces voice rules, and opens a PR to internal-docs.
allowed-tools: Bash, Read, Write, Edit, Grep, Glob
---

# Document Internal

Draft a 1-page internal doc (feature note, architecture note, or design spec) from the current branch's diff and open a PR to `browseros-ai/internal-docs`.

**Announce at start:** "I'm using the document-internal skill to draft a doc for internal-docs."

## When to use

After finishing implementation on a feature branch, when the work is doc-worthy (a major feature, a new subsystem, a setup runbook for something internal, or a design decision that future engineers need to know).

## Hard rules — never do these

- NEVER `git add -A` or `git add .` inside the tmp clone of internal-docs. Always specific paths.
- NEVER write outside the tmp clone (no spillover into the OSS repo's working tree).
- NEVER fabricate filler content for empty template sections. Empty stays empty.
- NEVER touch the OSS repo's `.gitmodules` or submodule pointer — the sync workflow handles that.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER push to `internal-docs/main` directly. Always a feature branch + PR.

## Voice rules — enforced by Step 4

The skill MUST follow these and refuse to draft otherwise. After generation, scan for violations and regenerate offending sentences (max 3 attempts).

- Lead with the point. First sentence answers "what is this?"
- Concrete nouns. Name files, functions, commands. Not "the system" or "the component".
- Short sentences. Average <20 words. No deeply nested clauses.
- Active voice. "X does Y" not "Y is done by X".
- No em dashes. Use commas, periods, or rephrase.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- "110 IQ" target. Write for a smart engineer who has not seen this code yet.
- No filler intros ("This document describes..."). Start with the substance.
- Empty sections stay empty. Do not write "N/A" or fabricate content.

## Workflow

### Step 0: Pre-flight

Bail with a clear message on any failure.

```bash
# Submodule must be initialized
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] || { echo ".internal-docs/ missing. Submodule not configured?"; exit 0; }

# Must be on a feature branch
BRANCH=$(git branch --show-current)
if [ "$BRANCH" = "main" ] || [ "$BRANCH" = "dev" ]; then
  echo "On $BRANCH. Run from a feature branch."
  exit 0
fi

# Determine base branch (default: dev for this repo, fall back to main).
# Suppress rev-parse's SHA output on stdout so it doesn't get captured into BASE.
BASE=$(git rev-parse --verify origin/dev >/dev/null 2>&1 && echo dev || echo main)

# Gather context
git log "$BASE..HEAD" --oneline
git diff "$BASE...HEAD" --stat
gh pr view --json body -q .body 2>/dev/null  # may be empty if no PR yet
```

### Step 1: Identify the doc

Ask the user for three things in one prompt:

1. **Doc type:** `feature` (default for `feat/*` branches), `architecture`, or `design`
2. **Slug:** kebab-case, short (e.g., `cowork-mcp`, `auto-skill-suggest`)
3. **Owner:** GitHub handle (default = `git config user.name` or current `gh api user --jq .login`)

### Step 2: Decision brief — four sharp questions

Ask one question at a time. Each answer constrains the next. These force compression before drafting.

1. "In one sentence: what can someone now DO that they could not before?"
2. "What is the one design decision a future engineer needs to know?"
3. "Which 3-5 files are the heart of this change?" (suggest candidates from the diff)
4. "Any sharp edges or gotchas? (or 'none')"

Skip any question that is N/A for the doc type. Architecture notes don't need question 1; design specs don't need question 4.

### Step 3: Draft from the template

Read the matching template from `.internal-docs/_templates/`:

- `feature` → `feature-note.md`
- `architecture` → `architecture-note.md`
- `design` → `design-spec.md`

If `.internal-docs/_templates/` does not exist (first run, before seeding), fall back to the seeds bundled with this skill at `.claude/skills/document-internal/seeds/_templates/`.

Generate the 1-pager from the template, the four answers, and the diff context.

### Step 4: Voice self-check

Scan the draft for violations:

- Em dash present (`—`).
- Any banned word from the list.
- Average sentence length > 20 words.
- Body line count > 60 (feature notes only — architecture/design have no cap).

If any violation found, regenerate the offending sentences in place. Max 3 attempts. If still failing after 3 attempts, stop and report which rules are violated.

If the body is over 60 lines for a feature note, ask: "This is N lines, target is 60. Trim, or promote to `architecture/` (no length cap)?"
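The scan above can be approximated with standard tools. A minimal sketch, not the skill's actual implementation; the banned-word list here is deliberately truncated to a few entries and the word-per-sentence estimate is rough:

```bash
#!/usr/bin/env bash
# Flag voice violations in a draft. Prints each violation; exit 1 if any.
voice_check() {  # $1: draft file
  local f=$1 bad=0
  grep -q -- '—' "$f" && { echo "em dash present"; bad=1; }
  for w in delve crucial robust leverage utilize; do   # truncated list
    grep -qiw "$w" "$f" && { echo "banned word: $w"; bad=1; }
  done
  # Rough average sentence length: total words / sentence enders.
  local words sents
  words=$(wc -w < "$f")
  sents=$(grep -o '[.!?]' "$f" | wc -l)
  if [ "$sents" -gt 0 ] && [ $((words / sents)) -gt 20 ]; then
    echo "average sentence length > 20 words"; bad=1
  fi
  return $bad
}
```

The line-count check for feature notes is a plain `wc -l` on the body and is omitted here.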
### Step 5: Show + iterate

Print the full draft. Ask:

> Edit needed? Paste any changes, or say "looks good".

Apply user edits with the Edit tool. Re-run Step 4. Loop until the user approves.

### Step 6: Open PR to internal-docs

Use a tmp clone. Never the user's `.internal-docs` checkout — keeps the user's submodule clean.

```bash
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT  # cleans up even if any step below fails
git clone -b main git@github.com:browseros-ai/internal-docs.git "$TMP"
cd "$TMP"
git checkout -b "docs/<slug>"

# Write the doc
mkdir -p "<type>"  # features, architecture, designs, or setup
cat > "<type>/$(date -u +%Y-%m)-<slug>.md" <<'DOC'
<draft content>
DOC

# Update the root README index — insert one line under the matching section
# Use Edit tool to add: "- [<title>](<type>/YYYY-MM-<slug>.md) — <one-line description>"

git add "<type>/$(date -u +%Y-%m)-<slug>.md" README.md
git commit -m "docs(<type>): <slug>"
git push -u origin "docs/<slug>"

PR_URL=$(gh pr create -R browseros-ai/internal-docs --base main \
  --head "docs/<slug>" \
  --title "docs(<type>): <slug>" \
  --body "$(cat <<'BODY'
## Summary
<one-line of what this doc covers>

## Source
- BrowserOS branch: <branch>
- Related PR: <#NNN if any>
BODY
)")

cd -
echo "PR opened: $PR_URL"
# trap above cleans up $TMP on EXIT
```

If the slug contains characters that won't shell-escape cleanly, sanitize before substitution.
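One way to sanitize, assuming kebab-case is the target shape; this helper is an illustration, not part of the skill:

```bash
#!/usr/bin/env bash
# Reduce an arbitrary string to a safe kebab-case slug:
# lowercase, collapse non-alphanumeric runs to one hyphen, trim edges.
sanitize_slug() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-*//;s/-*$//'
}
```

`sanitize_slug 'Cowork MCP!'` yields `cowork-mcp`, which is safe inside the unquoted substitutions in the Step 6 script.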
### Step 7: Completion status

Report one of:

- **DONE** — file written, branch pushed, PR opened. Print PR URL.
- **DONE_WITH_CONCERNS** — same as DONE but list concerns (e.g., voice check needed multiple regens, user skipped a question).
- **BLOCKED** — submodule missing, auth fail, or template missing. State exactly what's needed.

## Doc type defaults

| Branch pattern | Default doc type | Default location |
|----------------|------------------|------------------|
| `feat/*` | feature | `features/` |
| `arch/*` or refactor branches with >10 files in `packages/` | architecture | `architecture/` |
| `rfc/*` or `design/*` | design | `designs/` |
| Otherwise | ask | ask |
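The table maps directly onto a small dispatch. A sketch under the assumption that only the branch name is consulted; the ">10 files in `packages/`" refactor heuristic is omitted, and the function name is hypothetical:

```bash
#!/usr/bin/env bash
# Map a branch name to a default doc type, per the table above.
# Sketch only: the ">10 files in packages/" refactor case is not handled.
default_doc_type() {
  case "$1" in
    feat/*)          echo "feature" ;;
    arch/*)          echo "architecture" ;;
    rfc/*|design/*)  echo "design" ;;
    *)               echo "ask" ;;
  esac
}
```

Anything that falls through to `ask` means the skill should prompt rather than guess.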
## Common Mistakes

**Drafting before asking the four questions**
- **Problem:** Output is generic filler that says nothing concrete.
- **Fix:** Always ask Step 2 first, even if the diff "looks obvious".

**Touching `.internal-docs/` directly**
- **Problem:** User's submodule HEAD moves, parent repo shows dirty state.
- **Fix:** Always use the tmp clone in Step 6.

**Skipping voice check on user edits**
- **Problem:** User pastes prose with em dashes or filler; ships as-is.
- **Fix:** Re-run Step 4 after every user edit.

## Red Flags

**Never:**
- Push to `internal-docs/main`. Always branch + PR.
- Modify the OSS repo's `.gitmodules` or submodule pointer.
- Fabricate content for empty template sections.

**Always:**
- Pre-flight check before doing any work.
- One-pager rule for feature notes (60-line body cap).
- File:line citations when referencing code.
@@ -1,51 +0,0 @@
# BrowserOS Internal Docs

Private team docs for `browseros-ai`. Mounted as a submodule into the public OSS repo at `.internal-docs/`.

If you are reading this from a public clone of BrowserOS without team access — this submodule is for the BrowserOS internal team. Nothing here is required to build or use BrowserOS.

## How to find what you need

- Setup task ("how do I X locally") → look in [`setup/`](setup/)
- Recently shipped feature → look in [`features/`](features/)
- Cross-cutting subsystem → look in [`architecture/`](architecture/)
- A design decision or RFC → look in [`designs/`](designs/)

Or run `/ask-internal "<your question>"` from any BrowserOS checkout. The skill greps these docs and the codebase, then synthesizes an answer with citations.

## How to add a doc

Run `/document-internal` from a feature branch. The skill drafts a 1-pager from your branch's diff, asks four sharp questions, enforces voice rules, and opens a PR back to this repo.

## Index

### Setup
<!-- one line per setup runbook: -->
<!-- - [Dev environment](setup/dev-environment.md): first-time machine setup -->

### Features
<!-- one line per shipped feature, newest first: -->
<!-- - [Cowork MCP](features/2026-04-cowork-mcp.md): bring outside MCPs into the BrowserOS agent -->

### Architecture
<!-- one line per cross-cutting subsystem: -->
<!-- - [Chrome fork overview](architecture/chrome-fork-overview.md): what we patched and why -->

### Designs
<!-- one line per design spec, newest first: -->
<!-- - [Internal docs submodule](designs/2026-04-30-internal-docs-submodule.md): this system -->

## Templates

When `/document-internal` runs, it reads from [`_templates/`](_templates/). Edit the templates here when the team's preferred shape changes.

## Voice

Docs in this repo follow these rules. The `/document-internal` skill enforces them; humans editing by hand should match.

- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences, active voice, no em dashes.
- No filler words: delve, crucial, robust, comprehensive, nuanced, multifaceted, leverage, utilize, etc.
- Empty sections stay empty. Do not write "N/A" or fake content.
- Feature notes target one screen, body 60 lines max.
@@ -1,31 +0,0 @@
---
title: <subsystem name>
owner: <github handle>
status: current | deprecated
date: YYYY-MM-DD
related-features: [feature-slug-1, feature-slug-2]
---

# <subsystem name>

## What this subsystem does
<1-2 paragraphs. The top-level responsibility. Boundaries.>

## Architecture
<Diagram (ASCII or mermaid) plus prose. Components and how they talk.>

## Constraints
<Hard rules the design enforces. "X must never call Y" type statements.>

## Decisions made
<Numbered list of non-obvious decisions and the reason for each.>

## Key files
- `path/to/file.ts` — role
- `path/to/dir/` — what lives here

## How to evolve this
<Where to add things. Which tests to expect to update. What NOT to touch.>

## Open questions
<What is still being figured out. Empty if none.>
@@ -1,34 +0,0 @@
---
title: <design name>
owner: <github handle>
status: proposed | accepted | rejected | superseded
date: YYYY-MM-DD
supersedes: <design-slug or none>
---

# <design name>

## Goal
<2-4 sentences. What this design is trying to accomplish.>

## Context
<1-2 paragraphs. The current state, what is failing, why this needs to change.>

## Selected Approach
<The chosen design at a high level. Architecture, components, data flow.>

## Alternatives Considered
### 1. <name>
<2-3 sentences on what this would look like, then pro/con and why rejected (or deferred).>

### 2. <name>
<Same shape.>

## Out of Scope
<What this design does NOT cover. Defer references.>

## Rollout
<Numbered steps from "nothing exists" to "fully shipped".>

## Open Questions
<Resolved during design? Empty. Unresolved? List with owner.>
@@ -1,29 +0,0 @@
---
title: <feature name>
owner: <github handle>
status: shipped | wip | deprecated
date: YYYY-MM-DD
prs: ["#NNN"]
tags: [agent, browser, mcp]
---

# <feature name>

## What it does
<2-3 sentences. What can someone now do that they could not before. Lead with user-facing impact, not implementation.>

## Why we built it
<1-2 sentences. Motivation. What pain it removed or what unlocked.>

## How it works
<3-6 sentences. The flow at a high level. Name the key files.>

## Key files
- `path/to/file.ts` — what it does
- `path/to/other.ts` — what it does

## How to run / test it locally
<bullet list of commands. Empty section if N/A — do not fake.>

## Gotchas
<known sharp edges. "If you see X, that's why." Empty if N/A.>
62 .github/workflows/sync-internal-docs.yml vendored
@@ -1,62 +0,0 @@
name: Sync internal-docs submodule

on:
  schedule:
    - cron: '0 */4 * * *'
  workflow_dispatch:

jobs:
  sync:
    name: Bump internal-docs submodule pointer on dev
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - name: Rewrite SSH submodule URL to HTTPS-with-token
        env:
          TOKEN: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
        run: |
          git config --global "url.https://x-access-token:${TOKEN}@github.com/.insteadOf" "git@github.com:"

      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
          submodules: true
          ref: dev
          fetch-depth: 50

      - name: Open auto-merge PR if internal-docs has new commits
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -e

          # Skip if submodule not yet configured (handoff window before someone adds it)
          if ! git config --file .gitmodules --get-regexp '^submodule\..internal-docs\.path$' >/dev/null 2>&1; then
            echo "internal-docs submodule not yet configured in .gitmodules. Skipping."
            exit 0
          fi

          git submodule update --remote --merge .internal-docs

          if git diff --quiet .internal-docs; then
            echo "No internal-docs changes to sync."
            exit 0
          fi

          BRANCH="bot/sync-internal-docs-$(date -u +%Y%m%d-%H%M%S)"
          git config user.name "browseros-bot"
          git config user.email "bot@browseros.ai"
          git checkout -b "$BRANCH"
          git add .internal-docs
          git commit -m "chore: sync internal-docs submodule"
          git push -u origin "$BRANCH"

          PR_URL=$(gh pr create \
            --base dev \
            --head "$BRANCH" \
            --title "chore: sync internal-docs submodule" \
            --body "Automated bump of the \`.internal-docs\` submodule pointer. Auto-merging.")

          gh pr merge "$PR_URL" --auto --squash --delete-branch
4 .gitmodules vendored
@@ -1,4 +0,0 @@
[submodule ".internal-docs"]
	path = .internal-docs
	url = git@github.com:browseros-ai/internal-docs.git
	branch = main

Submodule .internal-docs deleted from 590799ae1c
@@ -1,25 +1,20 @@
import { ArrowLeft } from 'lucide-react'
import { ArrowLeft, Bot, Home } from 'lucide-react'
import { type FC, useEffect, useMemo, useRef } from 'react'
import { Navigate, useNavigate, useParams, useSearchParams } from 'react-router'
import { Button } from '@/components/ui/button'
import type {
  HarnessAgent,
  HarnessAgentAdapter,
} from '@/entrypoints/app/agents/agent-harness-types'
import type { AgentAdapterHealth } from '@/entrypoints/app/agents/agent-row/agent-row.types'
import {
  cancelHarnessTurn,
  useAgentAdapters,
  useEnqueueHarnessMessage,
  useHarnessAgents,
  useRemoveHarnessQueuedMessage,
  useUpdateHarnessAgent,
} from '@/entrypoints/app/agents/useAgents'
import type { AgentEntry } from '@/entrypoints/app/agents/useOpenClaw'
import { AgentRail } from './AgentRail'
import {
  type AgentEntry,
  getModelDisplayName,
} from '@/entrypoints/app/agents/useOpenClaw'
import { cn } from '@/lib/utils'
import { useAgentCommandData } from './agent-command-layout'
import { ClawChat } from './ClawChat'
import { ConversationHeader } from './ConversationHeader'
import { ConversationInput } from './ConversationInput'
import {
  buildChatHistoryFromClawMessages,
@@ -30,6 +25,162 @@ import { QueuePanel } from './QueuePanel'
import { useAgentConversation } from './useAgentConversation'
import { useHarnessChatHistory } from './useHarnessChatHistory'

function StatusBadge({ status }: { status: string }) {
  return (
    <div className="inline-flex items-center gap-2 rounded-full border border-border/60 bg-card px-3 py-1 text-[11px] text-muted-foreground uppercase tracking-[0.18em]">
      <span
        className={cn(
          'size-1.5 rounded-full',
          status === 'Working on your request'
            ? 'bg-amber-500'
            : status === 'Ready'
              ? 'bg-emerald-500'
              : status === 'Offline'
                ? 'bg-muted-foreground/50'
                : 'bg-[var(--accent-orange)]',
        )}
      />
      <span>{status}</span>
    </div>
  )
}

function AgentIdentity({
  name,
  meta,
  className,
}: {
  name: string
  meta: string
  className?: string
}) {
  return (
    <div className={cn('min-w-0', className)}>
      <div className="truncate font-semibold text-[15px] leading-5">{name}</div>
      <div className="truncate text-muted-foreground text-xs leading-5">
        {meta}
      </div>
    </div>
  )
}

function ConversationHeader({
  agentName,
  agentMeta,
  status,
  backLabel,
  backTarget,
  onGoHome,
}: {
  agentName: string
  agentMeta: string
  status: string
  backLabel: string
  backTarget: 'home' | 'page'
  onGoHome: () => void
}) {
  const BackIcon = backTarget === 'home' ? Home : ArrowLeft

  return (
    <div className="flex h-14 items-center justify-between gap-4 border-border/50 border-b px-5">
      <div className="flex min-w-0 items-center gap-3">
        <Button
          variant="ghost"
          size="icon"
          onClick={onGoHome}
          className="size-8 rounded-xl lg:hidden"
          title={backLabel}
        >
          <BackIcon className="size-4" />
        </Button>
        <div className="flex size-8 shrink-0 items-center justify-center rounded-xl bg-muted text-muted-foreground">
          <Bot className="size-4" />
        </div>
        <AgentIdentity name={agentName} meta={agentMeta} />
      </div>

      <StatusBadge status={status} />
    </div>
  )
}

function AgentRailHeader({ onGoHome }: { onGoHome: () => void }) {
  return (
    <div className="hidden h-14 items-center border-border/50 border-r border-b bg-background/70 px-4 lg:flex">
      <div className="flex min-w-0 items-center gap-3">
        <Button
          variant="ghost"
          size="icon"
          onClick={onGoHome}
          className="size-8 rounded-xl"
          title="Back to home"
        >
          <ArrowLeft className="size-4" />
        </Button>
        <div className="truncate font-semibold text-[15px] leading-5">
          Agents
        </div>
      </div>
    </div>
  )
}

function AgentRailList({
  activeAgentId,
  agents,
  onSelectAgent,
}: {
  activeAgentId: string
  agents: AgentEntry[]
  onSelectAgent: (entry: AgentEntry) => void
}) {
  return (
    <aside className="hidden min-h-0 flex-col border-border/50 border-r bg-background/70 lg:flex">
      <div className="styled-scrollbar min-h-0 flex-1 space-y-2 overflow-y-auto px-3 py-3">
        {agents.map((entry) => {
          const active = entry.agentId === activeAgentId
          const modelName = getAgentEntryMeta(entry)

          return (
            <button
              key={entry.agentId}
              type="button"
              onClick={() => onSelectAgent(entry)}
              className={cn(
                'w-full rounded-2xl border px-3 py-3 text-left transition-all',
                active
                  ? 'border-[var(--accent-orange)]/30 bg-[var(--accent-orange)]/8 shadow-sm'
                  : 'border-transparent bg-transparent hover:border-border/60 hover:bg-card',
              )}
            >
              <div className="flex items-center gap-3">
                <div
                  className={cn(
                    'flex size-9 items-center justify-center rounded-xl',
                    active
                      ? 'bg-[var(--accent-orange)]/12 text-[var(--accent-orange)]'
                      : 'bg-muted text-muted-foreground',
                  )}
                >
                  <Bot className="size-4" />
                </div>
                <AgentIdentity name={entry.name} meta={modelName} />
              </div>
            </button>
          )
        })}
      </div>
    </aside>
  )
}

function getAgentEntryMeta(agent: AgentEntry | undefined): string {
  if (agent?.source === 'agent-harness') {
    return getModelDisplayName(agent.model) ?? 'ACP agent'
  }
  return getModelDisplayName(agent?.model) ?? 'OpenClaw agent'
}

function AgentConversationController({
  agentId,
  initialMessage,
@@ -138,7 +289,7 @@ function AgentConversationController({
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="flex min-h-0 flex-1 flex-col overflow-hidden">
|
||||
<div className="flex min-h-0 flex-col overflow-hidden">
|
||||
<ClawChat
|
||||
agentName={agentName}
|
||||
historyMessages={historyMessages}
|
||||
@@ -217,22 +368,6 @@ interface AgentCommandConversationProps {
|
||||
createAgentPath?: string
|
||||
}
|
||||
|
||||
function inferAdapterFromEntry(
|
||||
entry: AgentEntry | undefined,
|
||||
): HarnessAgentAdapter | 'unknown' {
|
||||
if (!entry) return 'unknown'
|
||||
if (entry.source === 'agent-harness') {
|
||||
// Harness entries don't carry the adapter on AgentEntry; the rail
|
||||
// / header read the harness record directly. This branch only runs
|
||||
// before the harness query resolves, so 'unknown' is correct — the
|
||||
// tile's bot fallback renders until data arrives.
|
||||
return 'unknown'
|
||||
}
|
||||
// OpenClaw-only entries (no harness shadow) are deprecated in
|
||||
// practice but the rail still tolerates them.
|
||||
return 'openclaw'
|
||||
}
|
||||
|
||||
export const AgentCommandConversation: FC<AgentCommandConversationProps> = ({
|
||||
variant = 'command',
|
||||
backPath = '/home',
|
||||
@@ -243,110 +378,60 @@ export const AgentCommandConversation: FC<AgentCommandConversationProps> = ({
|
||||
const [searchParams, setSearchParams] = useSearchParams()
|
||||
const navigate = useNavigate()
|
||||
const { agents } = useAgentCommandData()
|
||||
const { harnessAgents } = useHarnessAgents()
|
||||
const { adapters } = useAgentAdapters()
|
||||
const updateAgent = useUpdateHarnessAgent()
|
||||
|
||||
const shouldRedirectHome = !agentId
|
||||
const resolvedAgentId = agentId ?? ''
|
||||
const harnessAgent = harnessAgents.find(
|
||||
(entry) => entry.id === resolvedAgentId,
|
||||
)
|
||||
const entry = agents.find((item) => item.agentId === resolvedAgentId)
|
||||
const fallbackName = entry?.name || resolvedAgentId || 'Agent'
|
||||
const fallbackAdapter = inferAdapterFromEntry(entry)
|
||||
const agent = agents.find((entry) => entry.agentId === resolvedAgentId)
|
||||
const agentName = agent?.name || resolvedAgentId || 'Agent'
|
||||
const agentMeta = getAgentEntryMeta(agent)
|
||||
const initialMessage = searchParams.get('q')
|
||||
const isPageVariant = variant === 'page'
|
||||
const backLabel = isPageVariant ? 'Back to agents' : 'Back to home'
|
||||
|
||||
const adapterHealth = useMemo<AgentAdapterHealth | null>(() => {
|
||||
const adapterId = harnessAgent?.adapter
|
||||
if (!adapterId) return null
|
||||
const descriptor = adapters.find((item) => item.id === adapterId)
|
||||
if (!descriptor?.health) return null
|
||||
return {
|
||||
healthy: descriptor.health.healthy,
|
||||
reason: descriptor.health.reason,
|
||||
}
|
||||
}, [adapters, harnessAgent?.adapter])
|
||||
|
||||
if (shouldRedirectHome) {
|
||||
return <Navigate to="/home" replace />
|
||||
}
|
||||
|
||||
const handleSelectHarnessAgent = (target: HarnessAgent) => {
|
||||
navigate(`${agentPathPrefix}/${target.id}`)
|
||||
const handleSelectAgent = (entry: AgentEntry) => {
|
||||
navigate(`${agentPathPrefix}/${entry.agentId}`)
|
||||
}
|
||||
|
||||
const handlePinToggle = (target: HarnessAgent | null, next: boolean) => {
|
||||
if (!target) return
|
||||
updateAgent.mutate({
|
||||
agentId: target.id,
|
||||
patch: { pinned: next },
|
||||
})
|
||||
}
|
||||
// Every visible agent runs through the harness now, so per-agent
|
||||
// runtime status doesn't gate chat the way OpenClaw's legacy
|
||||
// gateway lifecycle did. Show "Ready" once the agent record is
|
||||
// resolved from the rail, "Setup" otherwise.
|
||||
const statusCopy = agent ? 'Ready' : 'Setup'
|
||||
|
||||
return (
|
||||
<div className="absolute inset-0 overflow-hidden bg-background md:pl-[theme(spacing.14)]">
|
||||
<div className="mx-auto flex h-full w-full max-w-[1480px] flex-col">
|
||||
{/* Shared top band — the rail's "Agents" header and the chat
|
||||
header live on one row so they're aligned by construction. */}
|
||||
<div className="flex shrink-0 items-stretch border-border/50 border-b">
|
||||
<div className="hidden min-h-[60px] w-[288px] shrink-0 items-center gap-3 border-border/50 border-r px-4 lg:flex">
|
||||
<Button
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
onClick={() => navigate(backPath)}
|
||||
className="size-8 rounded-xl"
|
||||
title="Back to home"
|
||||
>
|
||||
<ArrowLeft className="size-4" />
|
||||
</Button>
|
||||
<div className="truncate font-semibold text-[15px] leading-5">
|
||||
Agents
|
||||
</div>
|
||||
</div>
|
||||
<div className="min-w-0 flex-1">
|
||||
<ConversationHeader
|
||||
agent={harnessAgent ?? null}
|
||||
fallbackName={fallbackName}
|
||||
fallbackAdapter={fallbackAdapter}
|
||||
adapterHealth={adapterHealth}
|
||||
backLabel={backLabel}
|
||||
backTarget={isPageVariant ? 'page' : 'home'}
|
||||
onGoHome={() => navigate(backPath)}
|
||||
onPinToggle={(next) =>
|
||||
handlePinToggle(harnessAgent ?? null, next)
|
||||
}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
<div className="mx-auto grid h-full w-full max-w-[1480px] lg:grid-cols-[288px_minmax(0,1fr)] lg:grid-rows-[3.5rem_minmax(0,1fr)]">
|
||||
<AgentRailHeader onGoHome={() => navigate(backPath)} />
|
||||
|
||||
{/* Body grid: rail list + chat. Both columns share the same
|
||||
top edge (the band above) so headers can never drift. */}
|
||||
<div className="grid min-h-0 flex-1 grid-rows-[minmax(0,1fr)] lg:grid-cols-[288px_minmax(0,1fr)]">
|
||||
<AgentRail
|
||||
agents={harnessAgents}
|
||||
adapters={adapters}
|
||||
activeAgentId={resolvedAgentId}
|
||||
onSelectAgent={handleSelectHarnessAgent}
|
||||
onPinToggle={(target, next) => handlePinToggle(target, next)}
|
||||
/>
|
||||
<ConversationHeader
|
||||
agentName={agentName}
|
||||
agentMeta={agentMeta}
|
||||
status={statusCopy}
|
||||
backLabel={backLabel}
|
||||
backTarget={isPageVariant ? 'page' : 'home'}
|
||||
onGoHome={() => navigate(backPath)}
|
||||
/>
|
||||
|
||||
<div className="flex h-full min-h-0 flex-col overflow-hidden">
|
||||
<AgentConversationController
|
||||
key={resolvedAgentId}
|
||||
agentId={resolvedAgentId}
|
||||
agents={agents}
|
||||
initialMessage={initialMessage}
|
||||
onInitialMessageConsumed={() =>
|
||||
setSearchParams({}, { replace: true })
|
||||
}
|
||||
agentPathPrefix={agentPathPrefix}
|
||||
createAgentPath={createAgentPath}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
<AgentRailList
|
||||
activeAgentId={resolvedAgentId}
|
||||
agents={agents}
|
||||
onSelectAgent={handleSelectAgent}
|
||||
/>
|
||||
|
||||
<AgentConversationController
|
||||
key={resolvedAgentId}
|
||||
agentId={resolvedAgentId}
|
||||
agents={agents}
|
||||
initialMessage={initialMessage}
|
||||
onInitialMessageConsumed={() =>
|
||||
setSearchParams({}, { replace: true })
|
||||
}
|
||||
agentPathPrefix={agentPathPrefix}
|
||||
createAgentPath={createAgentPath}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
|
||||
@@ -1,65 +0,0 @@
import { type FC, useMemo } from 'react'
import type {
  HarnessAdapterDescriptor,
  HarnessAgent,
  HarnessAgentAdapter,
} from '@/entrypoints/app/agents/agent-harness-types'
import type { AgentAdapterHealth } from '@/entrypoints/app/agents/agent-row/agent-row.types'
import { orderAgentsByPinThenRecency } from '@/entrypoints/app/agents/agents-list-order'
import { AgentRailRow } from './AgentRailRow'

interface AgentRailProps {
  agents: HarnessAgent[]
  adapters: HarnessAdapterDescriptor[]
  activeAgentId: string
  onSelectAgent: (agent: HarnessAgent) => void
  onPinToggle: (agent: HarnessAgent, next: boolean) => void
}

/**
 * Left-column scrollable list of agents. The "Agents" label + back
 * button live in the shared top band above (so the rail header and
 * the chat header sit on a single aligned strip rather than as two
 * separately-sized headers per column). Sort matches `/agents`:
 * pinned-first → recency, so the rail doesn't reshuffle as turns
 * transition every 5 s.
 */
export const AgentRail: FC<AgentRailProps> = ({
  agents,
  adapters,
  activeAgentId,
  onSelectAgent,
  onPinToggle,
}) => {
  const adapterHealth = useMemo(() => {
    const map = new Map<HarnessAgentAdapter, AgentAdapterHealth>()
    for (const adapter of adapters) {
      if (adapter.health) {
        map.set(adapter.id, {
          healthy: adapter.health.healthy,
          reason: adapter.health.reason,
        })
      }
    }
    return map
  }, [adapters])

  const ordered = useMemo(() => orderAgentsByPinThenRecency(agents), [agents])

  return (
    <aside className="hidden min-h-0 flex-col border-border/50 border-r bg-background/70 lg:flex">
      <div className="styled-scrollbar min-h-0 flex-1 space-y-1.5 overflow-y-auto px-3 py-3">
        {ordered.map((agent) => (
          <AgentRailRow
            key={agent.id}
            agent={agent}
            active={agent.id === activeAgentId}
            adapterHealth={adapterHealth.get(agent.adapter) ?? null}
            onSelect={() => onSelectAgent(agent)}
            onPinToggle={(next) => onPinToggle(agent, next)}
          />
        ))}
      </div>
    </aside>
  )
}
@@ -1,102 +0,0 @@
import type { FC } from 'react'
import { Badge } from '@/components/ui/badge'
import { adapterLabel } from '@/entrypoints/app/agents/AdapterIcon'
import type { HarnessAgent } from '@/entrypoints/app/agents/agent-harness-types'
import { AgentSummaryChips } from '@/entrypoints/app/agents/agent-row/AgentSummaryChips'
import { AgentTile } from '@/entrypoints/app/agents/agent-row/AgentTile'
import type { AgentAdapterHealth } from '@/entrypoints/app/agents/agent-row/agent-row.types'
import { PinToggle } from '@/entrypoints/app/agents/agent-row/PinToggle'
import { cn } from '@/lib/utils'

interface AgentRailRowProps {
  agent: HarnessAgent
  active: boolean
  adapterHealth: AgentAdapterHealth | null
  onSelect: () => void
  onPinToggle: (next: boolean) => void
}

/**
 * Compact rail row for the chat-screen sidebar. Slims `<AgentRowCard>`
 * down to the essentials that fit a ~280 px rail: tile + name + status
 * badge + pin star, with the adapter / model / reasoning chips on a
 * second line. Token totals, sparkline, last-message preview all stay
 * on the `/agents` page where rows are full-width.
 */
export const AgentRailRow: FC<AgentRailRowProps> = ({
  agent,
  active,
  adapterHealth,
  onSelect,
  onPinToggle,
}) => {
  const status = agent.status ?? 'unknown'
  const lastUsedAt = agent.lastUsedAt ?? null
  const pinned = agent.pinned ?? false
  return (
    <button
      type="button"
      onClick={onSelect}
      className={cn(
        'group w-full rounded-2xl border px-3 py-3 text-left transition-colors',
        active
          ? 'border-[var(--accent-orange)]/30 bg-[var(--accent-orange)]/8'
          : 'border-transparent bg-transparent hover:border-border/60 hover:bg-card',
      )}
    >
      <div className="flex min-w-0 items-start gap-3">
        <AgentTile
          adapter={agent.adapter}
          status={status}
          lastUsedAt={lastUsedAt}
        />
        <div className="min-w-0 flex-1">
          <div className="flex items-center gap-1.5">
            <span className="truncate font-semibold text-[14px] leading-5">
              {agent.name}
            </span>
            {status === 'working' && (
              <Badge
                variant="secondary"
                className="h-5 bg-amber-50 px-1.5 text-[10px] text-amber-900 hover:bg-amber-50"
              >
                Working
              </Badge>
            )}
            {status === 'asleep' && (
              <Badge
                variant="outline"
                className="h-5 px-1.5 text-[10px] text-muted-foreground"
              >
                Asleep
              </Badge>
            )}
            {status === 'error' && (
              <Badge variant="destructive" className="h-5 px-1.5 text-[10px]">
                Attention
              </Badge>
            )}
            <div className="ml-auto">
              <PinToggle pinned={pinned} onToggle={onPinToggle} />
            </div>
          </div>
          <AgentSummaryChips
            adapter={agent.adapter}
            modelLabel={agent.modelId ?? null}
            reasoningEffort={agent.reasoningEffort ?? null}
            adapterHealth={adapterHealth}
          />
        </div>
      </div>
    </button>
  )
}

/**
 * Tooltip-only label helper kept exported in case the tile row needs to
 * show "Codex agent" or similar in a future state. Inlined fallback for
 * the rare `unknown` adapter rendering path.
 */
export function railRowAdapterLabel(agent: HarnessAgent): string {
  return adapterLabel(agent.adapter)
}
@@ -1,179 +0,0 @@
import { ArrowLeft, Home } from 'lucide-react'
import type { FC } from 'react'
import { Badge } from '@/components/ui/badge'
import { Button } from '@/components/ui/button'
import { formatRelativeTime } from '@/entrypoints/app/agents/agent-display.helpers'
import type { HarnessAgent } from '@/entrypoints/app/agents/agent-harness-types'
import { AgentSummaryChips } from '@/entrypoints/app/agents/agent-row/AgentSummaryChips'
import { formatTokens } from '@/entrypoints/app/agents/agent-row/agent-row.helpers'
import type { AgentAdapterHealth } from '@/entrypoints/app/agents/agent-row/agent-row.types'
import { PinToggle } from '@/entrypoints/app/agents/agent-row/PinToggle'
import type { AgentLiveness } from '@/entrypoints/app/agents/LivenessDot'
import { cn } from '@/lib/utils'

interface ConversationHeaderProps {
  agent: HarnessAgent | null
  fallbackName: string
  fallbackAdapter: 'claude' | 'codex' | 'openclaw' | 'unknown'
  adapterHealth: AgentAdapterHealth | null
  backLabel: string
  backTarget: 'home' | 'page'
  onGoHome: () => void
  onPinToggle: (next: boolean) => void
}

/**
 * Strip above the chat. Mirrors the `/agents` row card's title row +
 * summary chips so the user gets adapter health, pin state, and status
 * at a glance — but adds the meta line (last used · lifetime tokens ·
 * queued) that's specific to this surface.
 *
 * The mobile `lg:hidden` Back button is preserved so the small-screen
 * collapse keeps a navigable header without a sidebar.
 */
export const ConversationHeader: FC<ConversationHeaderProps> = ({
  agent,
  fallbackName,
  fallbackAdapter,
  adapterHealth,
  backLabel,
  backTarget,
  onGoHome,
  onPinToggle,
}) => {
  const BackIcon = backTarget === 'home' ? Home : ArrowLeft
  const adapter = agent?.adapter ?? fallbackAdapter
  const status: AgentLiveness = agent?.status ?? 'unknown'
  const lastUsedAt = agent?.lastUsedAt ?? null
  const pinned = agent?.pinned ?? false
  const queueCount = agent?.queue?.length ?? 0
  const tokens = agent?.tokens ?? null
  const lifetimeTotal = tokens
    ? tokens.cumulative.input + tokens.cumulative.output
    : 0

  const metaParts: string[] = []
  if (lastUsedAt !== null) metaParts.push(formatRelativeTime(lastUsedAt))
  if (lifetimeTotal > 0) metaParts.push(`${formatTokens(lifetimeTotal)} tokens`)
  if (queueCount > 0) {
    metaParts.push(queueCount === 1 ? '1 queued' : `${queueCount} queued`)
  }

  return (
    <div className="flex min-h-[60px] shrink-0 items-center justify-between gap-4 px-5 py-2.5">
      <div className="flex min-w-0 items-center gap-3">
        <Button
          variant="ghost"
          size="icon"
          onClick={onGoHome}
          className="size-8 shrink-0 rounded-xl lg:hidden"
          title={backLabel}
        >
          <BackIcon className="size-4" />
        </Button>
        <div className="group min-w-0 flex-1">
          <div className="flex items-center gap-2">
            <span className="truncate font-semibold text-[15px] leading-6">
              {agent?.name || fallbackName}
            </span>
            {agent ? (
              <PinToggle pinned={pinned} onToggle={onPinToggle} />
            ) : null}
          </div>
          <div className="mt-0.5 flex items-center gap-2">
            <AgentSummaryChips
              adapter={adapter}
              modelLabel={agent?.modelId ?? null}
              reasoningEffort={agent?.reasoningEffort ?? null}
              adapterHealth={adapterHealth}
            />
          </div>
        </div>
      </div>
      <div className="flex shrink-0 flex-col items-end gap-1">
        <StatusPill
          status={status}
          hasActiveTurn={Boolean(agent?.activeTurnId)}
        />
        <div className="flex h-4 items-center text-[11px] text-muted-foreground">
          <span className="truncate">
            {metaParts.length > 0 ? metaParts.join(' · ') : '\u00A0'}
          </span>
        </div>
      </div>
    </div>
  )
}

interface StatusPillProps {
  status: AgentLiveness
  hasActiveTurn: boolean
}

/**
 * Working / Asleep / Attention all get distinctive styling; idle keeps
 * the legacy emerald `Ready` pill so the default state is visually
 * calm. Defensive working: `idle + activeTurnId` falls through to the
 * working pill since the server says a turn is in flight.
 */
const StatusPill: FC<StatusPillProps> = ({ status, hasActiveTurn }) => {
  const effective: AgentLiveness =
    status === 'idle' && hasActiveTurn ? 'working' : status

  const base =
    'inline-flex items-center gap-2 rounded-full border px-3 py-0.5 text-[11px] uppercase tracking-[0.18em]'

  if (effective === 'working') {
    return (
      <Badge
        variant="secondary"
        className={cn(
          base,
          'border-amber-200 bg-amber-50 text-amber-900 hover:bg-amber-50',
        )}
      >
        <span className="size-1.5 animate-pulse rounded-full bg-amber-500" />
        Working
      </Badge>
    )
  }
  if (effective === 'asleep') {
    return (
      <Badge variant="outline" className={cn(base, 'text-muted-foreground')}>
        <span className="size-1.5 rounded-full bg-muted-foreground/50" />
        Asleep
      </Badge>
    )
  }
  if (effective === 'error') {
    return (
      <Badge
        variant="destructive"
        className={cn(base, 'border-destructive/30')}
      >
        <span className="size-1.5 rounded-full bg-destructive-foreground" />
        Attention
      </Badge>
    )
  }
  if (effective === 'idle') {
    return (
      <Badge
        variant="outline"
        className={cn(
          base,
          'border-emerald-200 bg-emerald-50 text-emerald-900 hover:bg-emerald-50',
        )}
      >
        <span className="size-1.5 rounded-full bg-emerald-500" />
        Ready
      </Badge>
    )
  }
  return (
    <Badge variant="outline" className={cn(base, 'text-muted-foreground')}>
      <span className="size-1.5 rounded-full bg-muted-foreground/30" />
      Setup
    </Badge>
  )
}
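The `idle + activeTurnId → working` fallthrough that `StatusPill` applies can be sketched as a standalone pure function. A minimal sketch — the local `AgentLiveness` union is a stand-in assumed to match the imported type:

```typescript
// Sketch of StatusPill's effective-status rule: trust the server's
// in-flight turn over a possibly stale 'idle' liveness reading.
// 'AgentLiveness' here is a local stand-in, not the imported type.
type AgentLiveness = 'idle' | 'working' | 'asleep' | 'error' | 'unknown'

function effectiveStatus(
  status: AgentLiveness,
  hasActiveTurn: boolean,
): AgentLiveness {
  // Defensive working: idle + an active turn means a turn is in
  // flight, so render the working pill rather than the Ready pill.
  return status === 'idle' && hasActiveTurn ? 'working' : status
}
```

Only the `idle` state is rewritten; every other state passes through unchanged, active turn or not.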
@@ -11,7 +11,6 @@ import type {
  AgentAdapterHealth,
  AgentRowData,
} from './agent-row/agent-row.types'
import { compareAgentsByPinThenRecency } from './agents-list-order'
import type { AgentListItem } from './agents-page-types'
import type { AgentLiveness } from './LivenessDot'

@@ -57,18 +56,31 @@ export const AgentList: FC<AgentListProps> = ({
    return map
  }, [adapters])

  // Sort: pinned rows first, then most recently used, then never-used
  // agents in id-stable order. The gateway's `main` agent stays
  // pinned-to-top when never touched so a fresh install has an
  // obvious starting point.
  const ordered = useMemo(() => {
    const withMeta = agents.map((agent) => {
      const harness = harnessAgentLookup?.get(agent.agentId)
      return {
        agent,
        id: agent.agentId,
        pinned: harness?.pinned ?? false,
        lastUsedAt: activity?.[agent.agentId]?.lastUsedAt ?? null,
      }
    })
    return withMeta
      .sort(compareAgentsByPinThenRecency)
      .sort((a, b) => {
        if (a.pinned !== b.pinned) return a.pinned ? -1 : 1
        const aSeed = a.agent.agentId === 'main' && a.lastUsedAt === null
        const bSeed = b.agent.agentId === 'main' && b.lastUsedAt === null
        if (aSeed && !bSeed) return -1
        if (!aSeed && bSeed) return 1
        const aValue = a.lastUsedAt ?? -Infinity
        const bValue = b.lastUsedAt ?? -Infinity
        if (aValue !== bValue) return bValue - aValue
        return a.agent.agentId.localeCompare(b.agent.agentId)
      })
      .map((entry) => entry.agent)
  }, [activity, agents, harnessAgentLookup])

@@ -1,104 +0,0 @@
import { describe, expect, it } from 'bun:test'
import type { HarnessAgent } from './agent-harness-types'
import {
  compareAgentsByPinThenRecency,
  orderAgentsByPinThenRecency,
} from './agents-list-order'

function makeAgent(input: {
  id: string
  pinned?: boolean
  lastUsedAt?: number | null
}): HarnessAgent {
  return {
    id: input.id,
    name: input.id,
    adapter: 'codex',
    permissionMode: 'approve-all',
    sessionKey: 'session',
    createdAt: 0,
    updatedAt: 0,
    pinned: input.pinned,
    lastUsedAt: input.lastUsedAt,
  }
}

describe('orderAgentsByPinThenRecency', () => {
  it('floats pinned agents to the top regardless of recency', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'a', pinned: false, lastUsedAt: 1_000 }),
      makeAgent({ id: 'b', pinned: true, lastUsedAt: 100 }),
      makeAgent({ id: 'c', pinned: false, lastUsedAt: 500 }),
    ])
    expect(result.map((entry) => entry.id)).toEqual(['b', 'a', 'c'])
  })

  it('sorts by lastUsedAt desc within each pin group', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'older-pin', pinned: true, lastUsedAt: 100 }),
      makeAgent({ id: 'newer-pin', pinned: true, lastUsedAt: 200 }),
      makeAgent({ id: 'older', pinned: false, lastUsedAt: 50 }),
      makeAgent({ id: 'newer', pinned: false, lastUsedAt: 80 }),
    ])
    expect(result.map((entry) => entry.id)).toEqual([
      'newer-pin',
      'older-pin',
      'newer',
      'older',
    ])
  })

  it('seed-pins the gateway main agent above other never-used agents', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'aaa', pinned: false, lastUsedAt: null }),
      makeAgent({ id: 'main', pinned: false, lastUsedAt: null }),
      makeAgent({ id: 'zzz', pinned: false, lastUsedAt: null }),
    ])
    expect(result.map((entry) => entry.id)).toEqual(['main', 'aaa', 'zzz'])
  })

  it('drops the main seed-pin once the agent has been used', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'aaa', pinned: false, lastUsedAt: 999 }),
      makeAgent({ id: 'main', pinned: false, lastUsedAt: 1 }),
    ])
    expect(result.map((entry) => entry.id)).toEqual(['aaa', 'main'])
  })

  it('puts never-used agents below recently-used ones', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'fresh', pinned: false, lastUsedAt: null }),
      makeAgent({ id: 'used', pinned: false, lastUsedAt: 100 }),
    ])
    expect(result.map((entry) => entry.id)).toEqual(['used', 'fresh'])
  })

  it('id-stable tiebreaks two agents with identical lastUsedAt', () => {
    const result = orderAgentsByPinThenRecency([
      makeAgent({ id: 'b', pinned: false, lastUsedAt: 100 }),
      makeAgent({ id: 'a', pinned: false, lastUsedAt: 100 }),
    ])
    expect(result.map((entry) => entry.id)).toEqual(['a', 'b'])
  })
})

describe('compareAgentsByPinThenRecency', () => {
  it('produces the same order as the harness-shape helper', () => {
    const items = [
      { id: 'older', pinned: false, lastUsedAt: 50 },
      { id: 'newer', pinned: false, lastUsedAt: 80 },
      { id: 'pinned', pinned: true, lastUsedAt: 1 },
    ]
    const sorted = [...items].sort(compareAgentsByPinThenRecency)
    expect(sorted.map((item) => item.id)).toEqual(['pinned', 'newer', 'older'])
  })

  it('seeds the main agent above other never-used rows', () => {
    const items = [
      { id: 'zzz', pinned: false, lastUsedAt: null },
      { id: 'main', pinned: false, lastUsedAt: null },
    ]
    const sorted = [...items].sort(compareAgentsByPinThenRecency)
    expect(sorted.map((item) => item.id)).toEqual(['main', 'zzz'])
  })
})
@@ -1,59 +0,0 @@
import type { HarnessAgent } from './agent-harness-types'

/**
 * Stable ordering for index-shaped agent surfaces (the `/agents` rail
 * and the chat-screen rail at `/agents/:agentId`). Pinned rows float
 * to the top, then recency desc, with never-used agents falling to
 * the bottom in id-stable order. The gateway's `main` agent gets
 * seed-pinned to the top of the never-used group so a fresh install
 * has an obvious starting point even before the user has used it.
 *
 * NOT the same rule as the home grid (`orderHomeAgents`): home is
 * action-shaped — active-turn floats to the top — so users can
 * resume what's running. The chat rail keeps recency stable so it
 * doesn't reshuffle as turns transition every 5s.
 */
export function orderAgentsByPinThenRecency(
  agents: HarnessAgent[],
): HarnessAgent[] {
  return [...agents].sort((a, b) => {
    const aPinned = a.pinned ?? false
    const bPinned = b.pinned ?? false
    if (aPinned !== bPinned) return aPinned ? -1 : 1

    const aSeed = a.id === 'main' && (a.lastUsedAt ?? null) === null
    const bSeed = b.id === 'main' && (b.lastUsedAt ?? null) === null
    if (aSeed && !bSeed) return -1
    if (!aSeed && bSeed) return 1

    const aValue = a.lastUsedAt ?? Number.NEGATIVE_INFINITY
    const bValue = b.lastUsedAt ?? Number.NEGATIVE_INFINITY
    if (aValue !== bValue) return bValue - aValue

    return a.id.localeCompare(b.id)
  })
}

/**
 * Same comparator, but operates over arbitrary records that carry
 * `pinned`, `lastUsedAt`, and an `id`-equivalent key. Used by the
 * `/agents` `AgentList` which pivots `AgentListItem` + harness
 * lookup into a sortable shape; both surfaces stay on identical
 * sort semantics through this adapter.
 */
export function compareAgentsByPinThenRecency<
  T extends { pinned: boolean; lastUsedAt: number | null; id: string },
>(a: T, b: T): number {
  if (a.pinned !== b.pinned) return a.pinned ? -1 : 1

  const aSeed = a.id === 'main' && a.lastUsedAt === null
  const bSeed = b.id === 'main' && b.lastUsedAt === null
  if (aSeed && !bSeed) return -1
  if (!aSeed && bSeed) return 1

  const aValue = a.lastUsedAt ?? Number.NEGATIVE_INFINITY
  const bValue = b.lastUsedAt ?? Number.NEGATIVE_INFINITY
  if (aValue !== bValue) return bValue - aValue

  return a.id.localeCompare(b.id)
}
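The ordering the comparator encodes (pinned first, then the `main` seed-pin among never-used rows, then recency desc, then id tiebreak) can be exercised directly. A minimal standalone sketch — the inline `compareRows` copy and the sample rows are illustrative stand-ins, not part of the codebase:

```typescript
// Standalone copy of the pin-then-recency comparator semantics,
// over plain records (the generic only requires pinned/lastUsedAt/id).
type Row = { id: string; pinned: boolean; lastUsedAt: number | null }

function compareRows(a: Row, b: Row): number {
  // Pinned rows always float above unpinned ones.
  if (a.pinned !== b.pinned) return a.pinned ? -1 : 1
  // Never-used 'main' is seed-pinned to the top of the unpinned group.
  const aSeed = a.id === 'main' && a.lastUsedAt === null
  const bSeed = b.id === 'main' && b.lastUsedAt === null
  if (aSeed && !bSeed) return -1
  if (!aSeed && bSeed) return 1
  // Recency desc; never-used rows (null) sink to the bottom.
  const aValue = a.lastUsedAt ?? Number.NEGATIVE_INFINITY
  const bValue = b.lastUsedAt ?? Number.NEGATIVE_INFINITY
  if (aValue !== bValue) return bValue - aValue
  // Deterministic id tiebreak keeps the order stable across renders.
  return a.id.localeCompare(b.id)
}

const rows: Row[] = [
  { id: 'zzz', pinned: false, lastUsedAt: null },
  { id: 'recent', pinned: false, lastUsedAt: 200 },
  { id: 'main', pinned: false, lastUsedAt: null },
  { id: 'starred', pinned: true, lastUsedAt: 1 },
]
const ordered = [...rows].sort(compareRows).map((row) => row.id)
// → ['starred', 'main', 'recent', 'zzz']
```

Note the seed check runs before the recency comparison, so a never-used `main` sorts above even recently used unpinned rows, matching the "pinned-to-top when never touched" comment in `AgentList`.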
@@ -38,8 +38,8 @@ browseros-cli install # downloads BrowserOS for your platform
# If BrowserOS is installed but not running
browseros-cli launch # opens BrowserOS, waits for server

# Configure the CLI with the Server URL from BrowserOS settings
browseros-cli init http://127.0.0.1:9000/mcp
# Configure the CLI (auto-discovers running BrowserOS)
browseros-cli init --auto # detects server URL and saves config

# Verify connection
browseros-cli health
@@ -52,7 +52,7 @@ browseros-cli init <url> # non-interactive — pass URL directly
browseros-cli init # interactive — prompts for URL
```

Config is saved to `~/.config/browseros-cli/config.yaml`. If `browseros-cli health` cannot connect, copy the current Server URL from BrowserOS Settings > BrowserOS MCP and run `browseros-cli init <Server URL>` again.
Config is saved to `~/.config/browseros-cli/config.yaml`. The CLI also auto-discovers the server from `~/.browseros/server.json` (written by BrowserOS on startup).

### CLI updates

@@ -126,9 +126,9 @@ To connect Claude Code, Gemini CLI, or any MCP client, see the [MCP setup guide]
| `--debug` | `BOS_DEBUG=1` | Debug output |
| `--timeout, -t` | | Request timeout (default: 2m) |

Priority for server URL: `--server` flag > `BROWSEROS_URL` env > config file
Priority for server URL: `--server` flag > `BROWSEROS_URL` env > `~/.browseros/server.json` > config file

If no server URL is configured, the CLI exits with setup instructions pointing to `install`, `launch`, and `init <Server URL>`.
If no server URL is configured, the CLI exits with setup instructions pointing to `install`, `launch`, and `init`.

## Testing

@@ -179,7 +179,7 @@ apps/cli/
│   └── config.go # Config file (~/.config/browseros-cli/config.yaml)
├── cmd/
│   ├── root.go # Root command, global flags
│   ├── init.go # Server URL configuration (URL arg or interactive)
│   ├── init.go # Server URL configuration (URL arg, --auto, interactive)
│   ├── install.go # install (download BrowserOS for current platform)
│   ├── launch.go # launch (find and start BrowserOS, wait for server)
│   ├── open.go # open (new_page / new_hidden_page)

@@ -17,6 +17,8 @@ import (
)

func init() {
    var autoDiscover bool

    cmd := &cobra.Command{
        Use:   "init [url]",
        Short: "Configure the BrowserOS server connection",
@@ -32,8 +34,9 @@ You can provide the full URL or just the port number:
  browseros-cli init http://127.0.0.1:9000/mcp
  browseros-cli init 9000

Modes:
Three modes:
  browseros-cli init <url> Non-interactive (full URL or port number)
  browseros-cli init --auto Auto-discover from ~/.browseros/server.json
  browseros-cli init Interactive prompt`,
        Annotations: map[string]string{"group": "Setup:"},
        Args:        cobra.MaximumNArgs(1),
@@ -46,9 +49,22 @@ Modes:

            switch {
            case len(args) == 1:
                // Non-interactive: URL provided as argument
                input = args[0]

            case autoDiscover:
                // Auto-discover: server.json → config → probe common ports
                discovered := probeRunningServer()
                if discovered == "" {
                    output.Error("auto-discovery failed: no running BrowserOS found.\n\n"+
                        " If not running: browseros-cli launch\n"+
                        " If not installed: browseros-cli install", 1)
                }
                input = discovered
                fmt.Printf("Auto-discovered server at %s\n", input)

            default:
                // Interactive prompt (original behavior)
                fmt.Println()
                bold.Println("BrowserOS CLI Setup")
                fmt.Println()
@@ -79,14 +95,12 @@ Modes:
                output.Errorf(1, "invalid URL: %s", input)
            }

            // Verify connectivity
            fmt.Printf("Checking connection to %s ...\n", baseURL)
            client := &http.Client{Timeout: 5 * time.Second}
            resp, err := client.Get(baseURL + "/health")
            if err != nil {
                output.Errorf(1, "cannot connect to %s: %v\n\n"+
                    "Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n"+
                    "Then run: browseros-cli init <Server URL>\n"+
                    "Example: browseros-cli init http://127.0.0.1:9000/mcp", baseURL, err)
                output.Errorf(1, "cannot connect to %s: %v\nIs BrowserOS running?", baseURL, err)
            }
            resp.Body.Close()

@@ -107,5 +121,6 @@ Modes:
        },
    }

    cmd.Flags().BoolVar(&autoDiscover, "auto", false, "Auto-discover server URL from ~/.browseros/server.json")
    rootCmd.AddCommand(cmd)
}
@@ -28,7 +28,7 @@ Linux: Downloads AppImage (or .deb with --deb flag)

After installation:
  browseros-cli launch        # start BrowserOS
  browseros-cli init <url>    # configure the CLI with the Server URL`,
  browseros-cli init --auto   # configure the CLI`,
		Annotations: map[string]string{"group": "Setup:"},
		Args:        cobra.NoArgs,
		Run: func(cmd *cobra.Command, args []string) {
@@ -81,7 +81,7 @@ After installation:
			fmt.Println()
			bold.Println("Next steps:")
			dim.Println("  browseros-cli launch        # start BrowserOS")
			dim.Println("  browseros-cli init <url>    # use the Server URL from BrowserOS settings")
			dim.Println("  browseros-cli init --auto   # configure the CLI")
		},
	}

@@ -1,7 +1,6 @@
package cmd

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
@@ -39,7 +38,6 @@ If BrowserOS is already running, reports the server URL.`,

		if url := probeRunningServer(); url != "" {
			green.Printf("BrowserOS is already running at %s\n", url)
			dim.Printf("Next: browseros-cli init %s\n", mcpEndpointURL(url))
			return
		}

@@ -65,7 +63,7 @@ If BrowserOS is already running, reports the server URL.`,

		green.Printf("BrowserOS is ready at %s\n", url)
		fmt.Println()
		dim.Printf("Next: browseros-cli init %s\n", mcpEndpointURL(url))
		dim.Println("Next: browseros-cli init --auto")
	},
}

@@ -77,77 +75,39 @@ If BrowserOS is already running, reports the server URL.`,
// Server probing
// ---------------------------------------------------------------------------

var commonBrowserOSPorts = []int{9100, 9200, 9300}

// probeRunningServer checks launch discovery, explicit config, and common ports for a running server.
// probeRunningServer checks server.json, config, and common ports for a running server.
func probeRunningServer() string {
	client := &http.Client{Timeout: 2 * time.Second}
	check := func(baseURL string) bool {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get(baseURL + "/health")
		if err != nil {
			return false
		}
		resp.Body.Close()
		return resp.StatusCode == 200
	}

	if url := loadBrowserosServerURL(); url != "" && checkServerHealth(client, url) {
	// 1. server.json — written by BrowserOS on startup with the actual port
	if url := loadBrowserosServerURL(); url != "" && check(url) {
		return url
	}

	if url := defaultServerURL(); url != "" && checkServerHealth(client, url) {
	// 2. Saved config / env var
	if url := defaultServerURL(); url != "" && check(url) {
		return url
	}

	return probeCommonServerPorts(client)
}

func checkServerHealth(client *http.Client, baseURL string) bool {
	resp, err := client.Get(baseURL + "/health")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == 200
}

func probeCommonServerPorts(client *http.Client) string {
	for _, port := range commonBrowserOSPorts {
	// 3. Probe common BrowserOS ports as last resort
	for _, port := range []int{9100, 9200, 9300} {
		url := fmt.Sprintf("http://127.0.0.1:%d", port)
		if checkServerHealth(client, url) {
		if check(url) {
			return url
		}
	}

	return ""
}

type serverDiscoveryConfig struct {
	ServerPort       int    `json:"server_port"`
	URL              string `json:"url"`
	ServerVersion    string `json:"server_version"`
	BrowserOSVersion string `json:"browseros_version,omitempty"`
	ChromiumVersion  string `json:"chromium_version,omitempty"`
}

// loadBrowserosServerURL reads BrowserOS's runtime discovery file for launch readiness only.
//
// Normal command resolution must not call this because it can override a URL the
// user explicitly saved with `browseros-cli init <Server URL>`.
func loadBrowserosServerURL() string {
	home, err := os.UserHomeDir()
	if err != nil {
		return ""
	}

	data, err := os.ReadFile(filepath.Join(home, ".browseros", "server.json"))
	if err != nil {
		return ""
	}

	var sc serverDiscoveryConfig
	if err := json.Unmarshal(data, &sc); err != nil {
		return ""
	}

	return normalizeServerURL(sc.URL)
}

func mcpEndpointURL(baseURL string) string {
	return strings.TrimSuffix(baseURL, "/") + "/mcp"
}

// ---------------------------------------------------------------------------
// Platform-native installation detection
// ---------------------------------------------------------------------------
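The probing order above (server.json discovery file, then saved config or env var, then common ports) is a first-non-empty fallback chain. As a minimal runnable sketch of that pattern — the stand-in source functions here are hypothetical, not the real CLI helpers:

```go
package main

import "fmt"

// firstServerURL returns the first non-empty URL produced by the given
// sources, mirroring probeRunningServer's order: discovery file, saved
// config, then common-port probing.
func firstServerURL(sources ...func() string) string {
	for _, source := range sources {
		if url := source(); url != "" {
			return url
		}
	}
	return ""
}

func main() {
	url := firstServerURL(
		func() string { return "" },                      // server.json absent
		func() string { return "http://127.0.0.1:9115" }, // saved config responds
		func() string { return "http://127.0.0.1:9100" }, // common port, never reached
	)
	fmt.Println(url)
}
```

Each source only runs if the earlier ones came up empty, which is why the diff moves the health check into a shared closure.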
@@ -157,8 +117,7 @@ func mcpEndpointURL(baseURL string) string {
// macOS: `open -Ra "BrowserOS"` — queries Launch Services (finds apps anywhere)
// Linux: checks /usr/bin/browseros (.deb), browseros.desktop, or AppImage files
// Windows: checks executable at %LOCALAPPDATA%\BrowserOS\Application\BrowserOS.exe
//
//	and registry uninstall key (per-user Chromium install pattern)
//	and registry uninstall key (per-user Chromium install pattern)
func isBrowserOSInstalled() bool {
	switch runtime.GOOS {
	case "darwin":
@@ -312,11 +271,14 @@ func waitForServer(maxWait time.Duration) (string, bool) {

	for time.Now().Before(deadline) {
		// server.json is written by BrowserOS on startup with the actual port
		if url := loadBrowserosServerURL(); url != "" && checkServerHealth(client, url) {
			return url, true
		}
		if url := probeCommonServerPorts(client); url != "" {
			return url, true
		if url := loadBrowserosServerURL(); url != "" {
			resp, err := client.Get(url + "/health")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == 200 {
					return url, true
				}
			}
		}
		fmt.Print(".")
		time.Sleep(1 * time.Second)
@@ -1,99 +0,0 @@
package cmd

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"net/url"
	"os"
	"path/filepath"
	"strconv"
	"testing"
	"time"

	"browseros-cli/config"
)

func TestProbeRunningServerUsesDiscoveryBeforeConfig(t *testing.T) {
	home := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("USERPROFILE", home)
	t.Setenv("XDG_CONFIG_HOME", t.TempDir())
	t.Setenv("BROWSEROS_URL", "")

	discoveredServer := newHealthyServer(t)
	configServer := newHealthyServer(t)

	serverDir := filepath.Join(home, ".browseros")
	if err := os.MkdirAll(serverDir, 0755); err != nil {
		t.Fatalf("os.MkdirAll() error = %v", err)
	}
	data := []byte(fmt.Sprintf(`{"url":%q}`, discoveredServer.URL))
	if err := os.WriteFile(filepath.Join(serverDir, "server.json"), data, 0644); err != nil {
		t.Fatalf("os.WriteFile() error = %v", err)
	}
	if err := config.Save(&config.Config{ServerURL: configServer.URL}); err != nil {
		t.Fatalf("config.Save() error = %v", err)
	}

	got := probeRunningServer()
	if got != normalizeServerURL(discoveredServer.URL) {
		t.Fatalf("probeRunningServer() = %q, want %q", got, normalizeServerURL(discoveredServer.URL))
	}
}

func TestWaitForServerUsesCommonPortFallback(t *testing.T) {
	home := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("USERPROFILE", home)

	server := newHealthyServer(t)
	port := serverPort(t, server.URL)

	originalPorts := commonBrowserOSPorts
	commonBrowserOSPorts = []int{port}
	t.Cleanup(func() {
		commonBrowserOSPorts = originalPorts
	})

	got, ok := waitForServer(100 * time.Millisecond)
	if !ok {
		t.Fatal("waitForServer() ok = false, want true")
	}
	if got != normalizeServerURL(server.URL) {
		t.Fatalf("waitForServer() = %q, want %q", got, normalizeServerURL(server.URL))
	}
}

func newHealthyServer(t *testing.T) *httptest.Server {
	t.Helper()

	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/health" {
			http.NotFound(w, r)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	t.Cleanup(server.Close)
	return server
}

func serverPort(t *testing.T, rawURL string) int {
	t.Helper()

	parsed, err := url.Parse(rawURL)
	if err != nil {
		t.Fatalf("url.Parse() error = %v", err)
	}
	_, portText, err := net.SplitHostPort(parsed.Host)
	if err != nil {
		t.Fatalf("net.SplitHostPort() error = %v", err)
	}
	port, err := strconv.Atoi(portText)
	if err != nil {
		t.Fatalf("strconv.Atoi() error = %v", err)
	}
	return port
}
@@ -2,8 +2,10 @@ package cmd

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"
@@ -287,15 +289,18 @@ func drainAutomaticUpdateCheckWithTimeout(done <-chan struct{}, timeout time.Dur
	}
}

// defaultServerURL returns the implicit target from user-controlled settings only.
//
// BrowserOS writes a discovery file at runtime, but normal commands intentionally
// ignore it so a saved URL is not silently overridden by another running server.
func defaultServerURL() string {
	// 1. Explicit env var always wins
	if env := normalizeServerURL(os.Getenv("BROWSEROS_URL")); env != "" {
		return env
	}

	// 2. Live discovery file from running BrowserOS (most current)
	if url := loadBrowserosServerURL(); url != "" {
		return url
	}

	// 3. Saved config (may be stale if port changed)
	cfg, err := config.Load()
	if err == nil {
		if url := normalizeServerURL(cfg.ServerURL); url != "" {
@@ -306,6 +311,33 @@ func defaultServerURL() string {
	return ""
}

type serverDiscoveryConfig struct {
	ServerPort       int    `json:"server_port"`
	URL              string `json:"url"`
	ServerVersion    string `json:"server_version"`
	BrowserOSVersion string `json:"browseros_version,omitempty"`
	ChromiumVersion  string `json:"chromium_version,omitempty"`
}

func loadBrowserosServerURL() string {
	home, err := os.UserHomeDir()
	if err != nil {
		return ""
	}

	data, err := os.ReadFile(filepath.Join(home, ".browseros", "server.json"))
	if err != nil {
		return ""
	}

	var sc serverDiscoveryConfig
	if err := json.Unmarshal(data, &sc); err != nil {
		return ""
	}

	return normalizeServerURL(sc.URL)
}

func normalizeServerURL(raw string) string {
	normalized := strings.TrimSpace(raw)

@@ -337,10 +369,8 @@ func validateServerURL(raw string) (string, error) {

	return "", fmt.Errorf(
		"BrowserOS server URL is not configured.\n\n" +
			"  Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n" +
			"  Save it with: browseros-cli init <Server URL>\n" +
			"  Example: browseros-cli init http://127.0.0.1:9000/mcp\n" +
			"  If BrowserOS is closed: browseros-cli launch\n" +
			"  If not installed: browseros-cli install",
			"  If BrowserOS is running: browseros-cli init --auto\n" +
			"  If BrowserOS is closed: browseros-cli launch\n" +
			"  If not installed: browseros-cli install",
	)
}
@@ -1,13 +1,8 @@
package cmd

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"

	"browseros-cli/config"
)

func TestSetVersionUpdatesRootCommand(t *testing.T) {
@@ -105,76 +100,6 @@ func TestShouldSkipAutomaticUpdates(t *testing.T) {
	}
}

func TestDefaultServerURLUsesEnvBeforeConfig(t *testing.T) {
	t.Setenv("XDG_CONFIG_HOME", t.TempDir())
	t.Setenv("BROWSEROS_URL", "http://127.0.0.1:9115/mcp")

	if err := config.Save(&config.Config{ServerURL: "http://127.0.0.1:9000/mcp"}); err != nil {
		t.Fatalf("config.Save() error = %v", err)
	}

	got := defaultServerURL()
	if got != "http://127.0.0.1:9115" {
		t.Fatalf("defaultServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
	}
}

func TestDefaultServerURLUsesSavedConfig(t *testing.T) {
	t.Setenv("XDG_CONFIG_HOME", t.TempDir())
	t.Setenv("BROWSEROS_URL", "")

	if err := config.Save(&config.Config{ServerURL: "http://127.0.0.1:9115/mcp"}); err != nil {
		t.Fatalf("config.Save() error = %v", err)
	}

	got := defaultServerURL()
	if got != "http://127.0.0.1:9115" {
		t.Fatalf("defaultServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
	}
}

func TestDefaultServerURLIgnoresBrowserOSServerJSON(t *testing.T) {
	home := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("USERPROFILE", home)
	t.Setenv("XDG_CONFIG_HOME", t.TempDir())
	t.Setenv("BROWSEROS_URL", "")

	serverDir := filepath.Join(home, ".browseros")
	if err := os.MkdirAll(serverDir, 0755); err != nil {
		t.Fatalf("os.MkdirAll() error = %v", err)
	}
	data := []byte(`{"url":"http://127.0.0.1:9999"}`)
	if err := os.WriteFile(filepath.Join(serverDir, "server.json"), data, 0644); err != nil {
		t.Fatalf("os.WriteFile() error = %v", err)
	}

	if got := defaultServerURL(); got != "" {
		t.Fatalf("defaultServerURL() = %q, want empty", got)
	}
}

func TestNormalizeServerURLAcceptsMCPEndpoint(t *testing.T) {
	got := normalizeServerURL("  http://127.0.0.1:9115/mcp  ")
	if got != "http://127.0.0.1:9115" {
		t.Fatalf("normalizeServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
	}
}

func TestValidateServerURLExplainsManualInit(t *testing.T) {
	_, err := validateServerURL("")
	if err == nil {
		t.Fatal("validateServerURL() error = nil, want setup instructions")
	}
	msg := err.Error()
	if !strings.Contains(msg, "browseros-cli init <Server URL>") {
		t.Fatalf("validateServerURL() error = %q, want manual init instructions", msg)
	}
	if strings.Contains(msg, "init --auto") {
		t.Fatalf("validateServerURL() error = %q, should not mention init --auto", msg)
	}
}

func TestDrainAutomaticUpdateCheckWithTimeoutWaitsForCompletion(t *testing.T) {
	done := make(chan struct{})
	returned := make(chan struct{})
@@ -44,7 +44,10 @@ func (c *Client) connect(ctx context.Context) (*sdkmcp.ClientSession, error) {

	session, err := sdkClient.Connect(ctx, transport, nil)
	if err != nil {
		return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w%s", c.BaseURL, err, connectionSetupInstructions())
		return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w\n\n"+
			"  If BrowserOS is running on a different port: browseros-cli init --auto\n"+
			"  If BrowserOS is not running: browseros-cli launch\n"+
			"  If not installed: browseros-cli install", c.BaseURL, err)
	}
	return session, nil
}
@@ -184,7 +187,10 @@ func (c *Client) Status() (map[string]any, error) {
func (c *Client) restGET(path string) (map[string]any, error) {
	resp, err := c.HTTPClient.Get(c.BaseURL + path)
	if err != nil {
		return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w%s", c.BaseURL, err, connectionSetupInstructions())
		return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w\n\n"+
			"  If BrowserOS is running on a different port: browseros-cli init --auto\n"+
			"  If BrowserOS is not running: browseros-cli launch\n"+
			"  If not installed: browseros-cli install", c.BaseURL, err)
	}
	defer resp.Body.Close()

@@ -199,14 +205,3 @@ func (c *Client) restGET(path string) (map[string]any, error) {
	}
	return data, nil
}

// connectionSetupInstructions explains how to recover from a stale or missing server URL.
func connectionSetupInstructions() string {
	return "\n\n" +
		"  Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n" +
		"  Save it with: browseros-cli init <Server URL>\n" +
		"  Example: browseros-cli init http://127.0.0.1:9000/mcp\n" +
		"  Run once with: browseros-cli --server <Server URL> health\n" +
		"  If BrowserOS is closed: browseros-cli launch\n" +
		"  If not installed: browseros-cli install"
}
@@ -31,8 +31,8 @@ browseros-cli install
# Start BrowserOS
browseros-cli launch

# Configure MCP settings with the Server URL from BrowserOS settings
browseros-cli init http://127.0.0.1:9000/mcp
# Auto-configure MCP settings for your AI tools
browseros-cli init --auto

# Verify everything is working
browseros-cli health
packages/browseros-agent/apps/eval/README.md
@@ -9,7 +9,6 @@ Evaluation framework for BrowserOS browser automation agents. Runs tasks from st
- **BrowserOS binary** at `/Applications/BrowserOS.app` (macOS) or `BROWSEROS_BINARY` pointing at it
- **Bun** runtime
- **API keys** for your LLM provider (and `CLAUDE_CODE_OAUTH_TOKEN` if you use `performance_grader`)
- **Python 3.10+ with `agisdk`** for AGI SDK / REAL Bench grading. Set `BROWSEROS_EVAL_PYTHON` if your default `python3` is older.

## Quick Start

@@ -68,7 +67,7 @@ This lets us run the same suite against multiple model setups without copying th

```txt
agisdk-daily-10 + kimi-fireworks
agisdk-daily-10 + claude-opus
agisdk-daily-10 + claude-sonnet
agisdk-daily-10 + clado-action-000159
```

@@ -80,7 +79,6 @@ For `orchestrator-executor` suites, there can also be an executor model/backend.
|------|-------------|
| `single` | Single LLM agent driven by the BrowserOS tool loop (CDP) |
| `orchestrator-executor` | High-level orchestrator + per-step executor (LLM or Clado visual model) |
| `claude-code` | External Claude Code CLI driven through BrowserOS MCP |

### Single agent

@@ -121,24 +119,6 @@ The orchestrator works with any LLM provider. The executor can be another LLM, o
}
```

### Claude Code

Claude Code runs as an external `claude -p` subprocess. The eval runner passes a task-scoped MCP config that points Claude Code at the active worker's BrowserOS MCP endpoint, while the eval capture layer still saves messages, screenshots, trajectory metadata, and grader outputs.

```json
{
  "agent": {
    "type": "claude-code",
    "model": "opus"
  }
}
```

```bash
BROWSEROS_EVAL_PYTHON=/path/to/python3 bun run eval run --config configs/legacy/claude-code-agisdk-real.json
bun run eval suite --config configs/legacy/claude-code-agisdk-real.json --publish r2
```
## Graders

| Name | Description |
@@ -171,7 +151,6 @@ The `apiKey` field supports two formats:
| `CLADO_ACTION_MODEL`, `CLADO_ACTION_API_KEY`, `CLADO_ACTION_BASE_URL` | Clado executor defaults |
| `BROWSEROS_BINARY` | BrowserOS binary path in CI/local smoke runs |
| `BROWSEROS_SERVER_URL` | Optional grader MCP URL override |
| `BROWSEROS_EVAL_PYTHON` | Optional Python interpreter for JSON graders such as `agisdk_state_diff` |
| `WEBARENA_INFINITY_DIR` | Local WebArena-Infinity checkout for Infinity tasks |
| `NOPECHA_API_KEY` | CAPTCHA solver extension |
| `EVAL_R2_ACCOUNT_ID`, `EVAL_R2_ACCESS_KEY_ID`, `EVAL_R2_SECRET_ACCESS_KEY`, `EVAL_R2_BUCKET`, `EVAL_R2_CDN_BASE_URL` | R2 upload and viewer URL |
@@ -215,7 +194,7 @@ Published runs are available at `EVAL_R2_CDN_BASE_URL/viewer.html?run=<run-id>`.
    "base_server_port": 9110,
    "base_extension_port": 9310,
    "load_extensions": false,
    "headless": false
    "headless": true
  }
```
@@ -7,7 +7,7 @@
    "baseUrl": "https://openrouter.ai/api/v1",
    "supportsImages": true
  },
  "dataset": "../../data/agisdk-real.jsonl",
  "dataset": "../../data/webbench-2of4-50.jsonl",
  "num_workers": 10,
  "restart_server_per_task": true,
  "browseros": {
@@ -21,6 +21,6 @@
  "captcha": {
    "api_key_env": "NOPECHA_API_KEY"
  },
  "graders": ["agisdk_state_diff"],
  "graders": ["performance_grader"],
  "timeout_ms": 1800000
}

@@ -23,7 +23,7 @@
    "base_server_port": 9110,
    "base_extension_port": 9310,
    "load_extensions": false,
    "headless": false
    "headless": true
  },
  "captcha": {
    "api_key_env": "NOPECHA_API_KEY"
@@ -1,22 +0,0 @@
{
  "agent": {
    "type": "claude-code",
    "model": "opus"
  },
  "dataset": "../../data/agisdk-real.jsonl",
  "num_workers": 1,
  "restart_server_per_task": true,
  "browseros": {
    "server_url": "http://127.0.0.1:9110",
    "base_cdp_port": 9010,
    "base_server_port": 9110,
    "base_extension_port": 9310,
    "load_extensions": false,
    "headless": false
  },
  "captcha": {
    "api_key_env": "NOPECHA_API_KEY"
  },
  "graders": ["agisdk_state_diff"],
  "timeout_ms": 1800000
}
@@ -14,7 +14,7 @@
    "base_server_port": 9110,
    "base_extension_port": 9310,
    "load_extensions": false,
    "headless": false
    "headless": true
  },
  "captcha": {
    "api_key_env": "NOPECHA_API_KEY"
@@ -1,238 +0,0 @@
import { writeFile } from 'node:fs/promises'
import { join } from 'node:path'
import { DEFAULT_TIMEOUT_MS } from '../../constants'
import type { ClaudeCodeAgentConfig, UIMessageStreamEvent } from '../../types'
import { withEvalTimeout } from '../../utils/with-eval-timeout'
import type { AgentContext, AgentEvaluator, AgentResult } from '../types'
import {
  type ClaudeCodeProcessRunner,
  createClaudeCodeProcessRunner,
} from './process-runner'
import {
  ClaudeCodeStreamParser,
  shouldCaptureScreenshotForTool,
} from './stream-parser'

export interface ClaudeCodeEvaluatorDeps {
  processRunner?: ClaudeCodeProcessRunner
}

export class ClaudeCodeEvaluator implements AgentEvaluator {
  private processRunner: ClaudeCodeProcessRunner

  constructor(
    private ctx: AgentContext,
    deps: ClaudeCodeEvaluatorDeps = {},
  ) {
    this.processRunner = deps.processRunner ?? createClaudeCodeProcessRunner()
  }

  async execute(): Promise<AgentResult> {
    const { config, task, capture, taskOutputDir } = this.ctx
    const startTime = Date.now()
    const timeoutMs = config.timeout_ms ?? DEFAULT_TIMEOUT_MS

    await capture.messageLogger.logUser(task.query)

    if (config.agent.type !== 'claude-code') {
      throw new Error('ClaudeCodeEvaluator only supports claude-code config')
    }
    const agentConfig = config.agent

    const mcpConfigPath = join(taskOutputDir, 'claude-code-mcp.json')
    await writeFile(
      mcpConfigPath,
      JSON.stringify(
        buildClaudeCodeMcpConfig(config.browseros.server_url),
        null,
        2,
      ),
    )

    const parser = new ClaudeCodeStreamParser()
    const toolNamesById = new Map<string, string>()
    const prompt = buildClaudeCodePrompt(task.query)
    const args = buildClaudeCodeArgs({
      prompt,
      mcpConfigPath,
      config: agentConfig,
    })

    const { terminationReason } = await withEvalTimeout(
      timeoutMs,
      capture,
      async (signal) => {
        const runResult = await this.processRunner.run({
          executable: agentConfig.claudePath,
          args,
          cwd: taskOutputDir,
          signal,
          onStdoutLine: async (line) => {
            const events = parser.pushLine(line)
            for (const event of events) {
              await this.handleStreamEvent(event, toolNamesById)
            }
          },
        })

        if (runResult.exitCode !== 0) {
          const message =
            runResult.stderr.trim() ||
            `Claude Code exited with status ${runResult.exitCode}`
          capture.addError('agent_execution', message, {
            exitCode: runResult.exitCode,
          })
          if (!parser.getLastText()) {
            throw new Error(message)
          }
        }

        for (const error of runResult.streamErrors ?? []) {
          capture.addWarning(
            'message_logging',
            `Claude Code stream event processing failed: ${error}`,
          )
        }

        return runResult
      },
    )

    const endTime = Date.now()
    const finalAnswer = parser.getLastText() ?? capture.getLastAssistantText()
    const metadata = {
      query_id: task.query_id,
      dataset: task.dataset,
      query: task.query,
      started_at: new Date(startTime).toISOString(),
      completed_at: new Date(endTime).toISOString(),
      total_duration_ms: endTime - startTime,
      total_steps: parser.getToolCallCount() || capture.getScreenshotCount(),
      termination_reason: terminationReason,
      final_answer: finalAnswer,
      errors: capture.getErrors(),
      warnings: capture.getWarnings(),
      device_pixel_ratio: capture.screenshot.getDevicePixelRatio(),
      agent_config: {
        type: 'claude-code' as const,
        model: agentConfig.model,
      },
      grader_results: {},
    }

    await capture.trajectorySaver.saveMetadata(metadata)

    return {
      metadata,
      messages: capture.getMessages(),
      finalAnswer,
    }
  }

  private async handleStreamEvent(
    event: UIMessageStreamEvent,
    toolNamesById: Map<string, string>,
  ): Promise<void> {
    const { capture, task } = this.ctx
    let screenshot: number | undefined

    if (event.type === 'tool-input-available') {
      toolNamesById.set(event.toolCallId, event.toolName)
      if (isPageInput(event.input)) {
        capture.setActivePageId(event.input.page)
      }
    }

    if (
      event.type === 'tool-output-available' ||
      event.type === 'tool-output-error'
    ) {
      const toolName = toolNamesById.get(event.toolCallId)
      if (toolName && shouldCaptureScreenshotForTool(toolName)) {
        screenshot = await this.captureScreenshot()
      }
    }

    await capture.messageLogger.logStreamEvent(event, screenshot)
    capture.emitEvent(task.query_id, {
      ...event,
      ...(screenshot !== undefined && { screenshot }),
    })
  }

  private async captureScreenshot(): Promise<number | undefined> {
    const { capture, task } = this.ctx
    try {
      const screenshot = await capture.screenshot.capture(
        capture.getActivePageId(),
      )
      capture.emitEvent(task.query_id, {
        type: 'screenshot-captured',
        screenshot,
      })
      return screenshot
    } catch {
      return undefined
    }
  }
}

function isPageInput(input: unknown): input is { page: number } {
  return (
    typeof input === 'object' &&
    input !== null &&
    'page' in input &&
    typeof input.page === 'number'
  )
}

function buildClaudeCodePrompt(taskQuery: string): string {
  return [
    'You are running inside BrowserOS eval.',
    'Use the BrowserOS MCP tools to interact with the already-open browser and complete the user task.',
    'When the task is complete, respond with the final answer only.',
    'If blocked, explain the blocker clearly.',
    '',
    `Task: ${taskQuery}`,
  ].join('\n')
}

function buildClaudeCodeArgs({
  prompt,
  mcpConfigPath,
  config,
}: {
  prompt: string
  mcpConfigPath: string
  config: ClaudeCodeAgentConfig
}): string[] {
  const args = [
    '-p',
    prompt,
    '--mcp-config',
    mcpConfigPath,
    '--strict-mcp-config',
    '--output-format',
    'stream-json',
    '--verbose',
  ]

  if (config.model) args.push('--model', config.model)
  args.push(...config.extraArgs)

  return args
}

function buildClaudeCodeMcpConfig(serverUrl: string) {
  const trimmed = serverUrl.replace(/\/$/, '')
  const url = trimmed.endsWith('/mcp') ? trimmed : `${trimmed}/mcp`
  return {
    mcpServers: {
      browseros: {
        type: 'http',
        url,
        headers: { 'X-BrowserOS-Source': 'sdk-internal' },
      },
    },
  }
}
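Both `mcpEndpointURL` on the Go side and `buildClaudeCodeMcpConfig` above solve the same small problem: turn a base server URL into its `/mcp` endpoint without doubling slashes or suffixes. A hedged Go sketch of that normalization (the function name here is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// mcpURL appends the /mcp suffix to a base URL, tolerating a trailing
// slash and an already-suffixed input, as the CLI and eval runner both do.
func mcpURL(base string) string {
	trimmed := strings.TrimSuffix(base, "/")
	if strings.HasSuffix(trimmed, "/mcp") {
		return trimmed
	}
	return trimmed + "/mcp"
}

func main() {
	fmt.Println(mcpURL("http://127.0.0.1:9000/"))
	fmt.Println(mcpURL("http://127.0.0.1:9000/mcp"))
}
```

Both calls print `http://127.0.0.1:9000/mcp`, so the function is idempotent over its own output.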
@@ -1,114 +0,0 @@
export interface ClaudeCodeRunOptions {
  executable: string
  args: string[]
  cwd: string
  signal?: AbortSignal
  onStdoutLine: (line: string) => Promise<void>
}

export interface ClaudeCodeRunResult {
  exitCode: number
  stderr: string
  streamErrors?: string[]
}

export interface ClaudeCodeProcessRunner {
  run(options: ClaudeCodeRunOptions): Promise<ClaudeCodeRunResult>
}

export interface SpawnOptions {
  cwd: string
  signal?: AbortSignal
  onStdoutLine: (line: string) => Promise<void>
}

export interface CreateClaudeCodeProcessRunnerDeps {
  spawn?: (cmd: string[], options: SpawnOptions) => Promise<ClaudeCodeRunResult>
}

export function createClaudeCodeProcessRunner(
  deps: CreateClaudeCodeProcessRunnerDeps = {},
): ClaudeCodeProcessRunner {
  const spawn = deps.spawn ?? spawnClaudeCode
  return {
    run: async ({ executable, args, cwd, signal, onStdoutLine }) =>
      spawn([executable, ...args], { cwd, signal, onStdoutLine }),
  }
}

async function spawnClaudeCode(
  cmd: string[],
  options: SpawnOptions,
): Promise<ClaudeCodeRunResult> {
  const proc = Bun.spawn({
    cmd,
    cwd: options.cwd,
    stdin: 'ignore',
    stdout: 'pipe',
    stderr: 'pipe',
  })

  const abort = () => {
    try {
      proc.kill('SIGTERM')
    } catch {
      // Process may already have exited.
    }
  }
  options.signal?.addEventListener('abort', abort, { once: true })

  try {
    const streamErrors: string[] = []
    const stdoutPromise = readLines(
      proc.stdout,
      options.onStdoutLine,
      streamErrors,
    )
    const stderrPromise = new Response(proc.stderr).text()
    const exitCode = await proc.exited
    await stdoutPromise
    const stderr = await stderrPromise
    return { exitCode, stderr, streamErrors }
  } finally {
    options.signal?.removeEventListener('abort', abort)
  }
}

async function readLines(
  stream: ReadableStream<Uint8Array>,
  onLine: (line: string) => Promise<void>,
  streamErrors: string[],
): Promise<void> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  while (true) {
    const { done, value } = await reader.read()
    if (done) break

    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop() ?? ''
    for (const line of lines) {
      await emitLine(line, onLine, streamErrors)
    }
  }

  buffer += decoder.decode()
  if (buffer.length > 0) {
    await emitLine(buffer, onLine, streamErrors)
  }
}
async function emitLine(
|
||||
line: string,
|
||||
onLine: (line: string) => Promise<void>,
|
||||
streamErrors: string[],
|
||||
): Promise<void> {
|
||||
try {
|
||||
await onLine(line)
|
||||
} catch (error) {
|
||||
streamErrors.push(error instanceof Error ? error.message : String(error))
|
||||
}
|
||||
}
|
||||
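The `readLines` helper removed above reassembles stdout chunks into whole lines with a carry buffer: each chunk is appended, complete lines are emitted, and the trailing fragment waits for the next chunk. A minimal synchronous sketch of that splitting logic (`splitLines` is a hypothetical name, not from the codebase):

```typescript
// Carry-buffer line splitting, as in the deleted readLines: append each chunk
// to the buffer, emit every complete line, and keep the final fragment until
// either the next chunk completes it or the stream ends.
function splitLines(chunks: string[]): string[] {
  const out: string[] = []
  let buffer = ''
  for (const chunk of chunks) {
    buffer += chunk
    const lines = buffer.split('\n')
    // The last split element is either '' (chunk ended on a newline) or an
    // incomplete line; either way it becomes the new carry buffer.
    buffer = lines.pop() ?? ''
    out.push(...lines)
  }
  if (buffer.length > 0) out.push(buffer)
  return out
}
```

This is why a JSON object split across two stdout chunks is still parsed as one stream-json line by the evaluator.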
@@ -1,142 +0,0 @@
import { randomUUID } from 'node:crypto'
import type { UIMessageStreamEvent } from '../../types'

type JsonObject = Record<string, unknown>

export class ClaudeCodeStreamParser {
  private lastText: string | null = null
  private toolCallCount = 0

  pushLine(line: string): UIMessageStreamEvent[] {
    const trimmed = line.trim()
    if (!trimmed) return []

    let parsed: unknown
    try {
      parsed = JSON.parse(trimmed)
    } catch {
      return []
    }

    if (!isObject(parsed)) return []

    if (parsed.type === 'assistant') {
      return this.parseAssistantMessage(parsed)
    }
    if (parsed.type === 'user') {
      return this.parseUserMessage(parsed)
    }
    if (parsed.type === 'result' && typeof parsed.result === 'string') {
      this.lastText = parsed.result
    }

    return []
  }

  getLastText(): string | null {
    return this.lastText
  }

  getToolCallCount(): number {
    return this.toolCallCount
  }

  private parseAssistantMessage(message: JsonObject): UIMessageStreamEvent[] {
    const content = contentBlocks(message)
    const events: UIMessageStreamEvent[] = []

    for (const block of content) {
      if (block.type === 'text' && typeof block.text === 'string') {
        const id = randomUUID()
        this.lastText = block.text
        events.push(
          { type: 'text-start', id },
          { type: 'text-delta', id, delta: block.text },
          { type: 'text-end', id },
        )
      } else if (
        block.type === 'tool_use' &&
        typeof block.id === 'string' &&
        typeof block.name === 'string'
      ) {
        this.toolCallCount++
        events.push({
          type: 'tool-input-available',
          toolCallId: block.id,
          toolName: block.name,
          input: block.input,
        })
      }
    }

    return events
  }

  private parseUserMessage(message: JsonObject): UIMessageStreamEvent[] {
    const content = contentBlocks(message)
    const events: UIMessageStreamEvent[] = []

    for (const block of content) {
      if (
        block.type !== 'tool_result' ||
        typeof block.tool_use_id !== 'string'
      ) {
        continue
      }

      if (block.is_error === true) {
        events.push({
          type: 'tool-output-error',
          toolCallId: block.tool_use_id,
          errorText: stringifyToolContent(block.content),
        })
      } else {
        events.push({
          type: 'tool-output-available',
          toolCallId: block.tool_use_id,
          output: normalizeToolContent(block.content),
        })
      }
    }

    return events
  }
}

export function shouldCaptureScreenshotForTool(toolName: string): boolean {
  if (!toolName.startsWith('mcp__browseros__')) return false
  return !toolName.endsWith('__take_screenshot')
}

function contentBlocks(message: JsonObject): JsonObject[] {
  const inner = isObject(message.message) ? message.message : message
  return Array.isArray(inner.content) ? inner.content.filter(isObject) : []
}

function isObject(value: unknown): value is JsonObject {
  return typeof value === 'object' && value !== null
}

function normalizeToolContent(content: unknown): unknown {
  if (!Array.isArray(content)) return content
  return content.map((item) => {
    if (
      isObject(item) &&
      item.type === 'text' &&
      typeof item.text === 'string'
    ) {
      return item.text
    }
    return item
  })
}

function stringifyToolContent(content: unknown): string {
  const normalized = normalizeToolContent(content)
  if (typeof normalized === 'string') return normalized
  try {
    return JSON.stringify(normalized)
  } catch {
    return String(normalized)
  }
}

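The `normalizeToolContent` helper deleted above collapses Anthropic-style `tool_result` content arrays so that text blocks become plain strings. A hypothetical standalone version (`normalizeContent` is an illustrative name) showing the pass-through rules:

```typescript
// Standalone sketch of the tool_result normalization: in an array of content
// blocks, { type: 'text' } entries collapse to their raw string; non-array
// content and other block shapes pass through unchanged.
function normalizeContent(content: unknown): unknown {
  if (!Array.isArray(content)) return content
  return content.map((item) => {
    if (
      typeof item === 'object' &&
      item !== null &&
      (item as { type?: unknown }).type === 'text' &&
      typeof (item as { text?: unknown }).text === 'string'
    ) {
      return (item as { text: string }).text
    }
    return item
  })
}
```

Keeping non-text blocks intact means image or other structured results still reach the grader in their original shape.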
@@ -1,4 +1,3 @@
import { ClaudeCodeEvaluator } from './claude-code'
import { OrchestratorExecutorEvaluator } from './orchestrator-executor'
import { SingleAgentEvaluator } from './single-agent'
import type { AgentContext, AgentEvaluator } from './types'
@@ -9,8 +8,6 @@ export function createAgent(context: AgentContext): AgentEvaluator {
      return new SingleAgentEvaluator(context)
    case 'orchestrator-executor':
      return new OrchestratorExecutorEvaluator(context)
    case 'claude-code':
      return new ClaudeCodeEvaluator(context)
  }
}

@@ -105,10 +105,7 @@ export class TrajectorySaver {
      errors: [],
      warnings: [],
      agent_config: {
        type: agentConfig.type as
          | 'single'
          | 'orchestrator-executor'
          | 'claude-code',
        type: agentConfig.type as 'single' | 'orchestrator-executor',
        model: agentConfig.model,
      },
      grader_results: {},

@@ -82,16 +82,6 @@ function suiteToEvalConfig(
    })
  }

  if (suite.agent.type === 'claude-code') {
    return EvalConfigSchema.parse({
      ...base,
      agent: {
        type: 'claude-code',
        ...(variant.agent.model && { model: variant.agent.model }),
      },
    })
  }

  const executorBackend = suite.agent.executorBackend ?? 'tool-loop'
  const executor =
    executorBackend === 'clado'
@@ -145,10 +135,7 @@ export async function resolveSuiteCommand(
  const loaded = await loadSuite(options.suitePath)
  const variant = resolveVariant({
    variantId: options.variantId,
    provider:
      loaded.suite.agent.type === 'claude-code'
        ? 'claude-code'
        : options.provider,
    provider: options.provider,
    model: options.model,
    apiKey: options.apiKey,
    baseUrl: options.baseUrl,

@@ -2,7 +2,6 @@ export interface PythonEvaluatorOptions {
  scriptPath: string
  input: unknown
  timeoutMs: number
  pythonPath?: string
}

export interface PythonEvaluatorResult<T> {
@@ -16,9 +15,7 @@ export interface PythonEvaluatorResult<T> {
export async function runPythonJsonEvaluator<T>(
  options: PythonEvaluatorOptions,
): Promise<PythonEvaluatorResult<T>> {
  const pythonPath =
    options.pythonPath || process.env.BROWSEROS_EVAL_PYTHON || 'python3'
  const proc = Bun.spawn([pythonPath, options.scriptPath], {
  const proc = Bun.spawn(['python3', options.scriptPath], {
    stdin: 'pipe',
    stdout: 'pipe',
    stderr: 'pipe',

@@ -33,13 +33,6 @@ function variantSource(config: EvalConfig): {
  baseUrl?: string
  supportsImages?: boolean
} {
  if (config.agent.type === 'claude-code') {
    return {
      provider: 'claude-code',
      model: config.agent.model ?? 'default',
    }
  }

  const agent =
    config.agent.type === 'single' ? config.agent : config.agent.orchestrator
  if (!agent.model) {
@@ -83,7 +76,10 @@ export async function adaptEvalConfigFile(
    suite: {
      id,
      dataset: evalConfig.dataset,
      agent: suiteAgent(evalConfig, backend),
      agent:
        evalConfig.agent.type === 'single'
          ? { type: 'tool-loop' }
          : { type: 'orchestrated', executorBackend: backend ?? 'tool-loop' },
      graders: evalConfig.graders ?? [],
      workers: evalConfig.num_workers,
      restartBrowserPerTask: evalConfig.restart_server_per_task,
@@ -103,17 +99,3 @@
    }),
  }
}

function suiteAgent(
  config: EvalConfig,
  backend: ReturnType<typeof executorBackend>,
): EvalSuite['agent'] {
  switch (config.agent.type) {
    case 'single':
      return { type: 'tool-loop' }
    case 'orchestrator-executor':
      return { type: 'orchestrated', executorBackend: backend ?? 'tool-loop' }
    case 'claude-code':
      return { type: 'claude-code' }
  }
}

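The removed `suiteAgent` helper maps legacy config types onto suite agent shapes; after this change the surviving inline ternary handles only the two remaining types. A standalone sketch of that mapping (`toSuiteAgent` and the local `SuiteAgent` type are illustrative, not the codebase's definitions):

```typescript
// Sketch of the legacy-config-to-suite-agent mapping kept by the inline
// ternary in adaptEvalConfigFile: 'single' becomes a tool-loop agent and
// 'orchestrator-executor' an orchestrated one with a backend fallback.
type SuiteAgent =
  | { type: 'tool-loop' }
  | { type: 'orchestrated'; executorBackend: string }

function toSuiteAgent(
  agentType: 'single' | 'orchestrator-executor',
  backend?: string,
): SuiteAgent {
  return agentType === 'single'
    ? { type: 'tool-loop' }
    : { type: 'orchestrated', executorBackend: backend ?? 'tool-loop' }
}
```

With only two variants left, the ternary replaces the exhaustive switch without losing any case.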
@@ -57,30 +57,10 @@ export function resolveVariant(
  options: ResolveVariantOptions = {},
): EvalVariant {
  const env = options.env ?? process.env
  const id = options.variantId ?? env.EVAL_VARIANT ?? 'default'
  const provider =
    options.provider ?? env.EVAL_AGENT_PROVIDER ?? 'openai-compatible'
  const model = options.model ?? env.EVAL_AGENT_MODEL

  if (provider === 'claude-code') {
    const id = options.variantId ?? env.EVAL_VARIANT ?? 'claude-code'
    return {
      id,
      agent: {
        provider,
        model: model ?? '',
      },
      publicMetadata: {
        id,
        agent: {
          provider,
          model: model || 'default',
          apiKeyConfigured: false,
        },
      },
    }
  }

  const id = options.variantId ?? env.EVAL_VARIANT ?? 'default'
  const apiKey = options.apiKey ?? env.EVAL_AGENT_API_KEY
  const apiKeyEnv =
    options.apiKeyEnv ?? (options.apiKey ? undefined : 'EVAL_AGENT_API_KEY')

@@ -8,7 +8,6 @@ export const SuiteAgentSchema = z
    'single',
    'orchestrated',
    'orchestrator-executor',
    'claude-code',
  ]),
  executorBackend: z.enum(['tool-loop', 'clado']).optional(),
})

@@ -19,19 +19,9 @@ export const OrchestratorExecutorConfigSchema = z.object({
  }),
})

export const ClaudeCodeAgentConfigSchema = z
  .object({
    type: z.literal('claude-code'),
    model: z.string().min(1).optional(),
    claudePath: z.string().min(1).default('claude'),
    extraArgs: z.array(z.string()).default([]),
  })
  .strict()

export const AgentConfigSchema = z.discriminatedUnion('type', [
  SingleAgentConfigSchema,
  OrchestratorExecutorConfigSchema,
  ClaudeCodeAgentConfigSchema,
])

export const EvalConfigSchema = z.object({
@@ -63,6 +53,5 @@ export type SingleAgentConfig = z.infer<typeof SingleAgentConfigSchema>
export type OrchestratorExecutorConfig = z.infer<
  typeof OrchestratorExecutorConfigSchema
>
export type ClaudeCodeAgentConfig = z.infer<typeof ClaudeCodeAgentConfigSchema>
export type AgentConfig = z.infer<typeof AgentConfigSchema>
export type EvalConfig = z.infer<typeof EvalConfigSchema>

@@ -2,8 +2,6 @@
export {
  type AgentConfig,
  AgentConfigSchema,
  type ClaudeCodeAgentConfig,
  ClaudeCodeAgentConfigSchema,
  type EvalConfig,
  EvalConfigSchema,
  type OrchestratorExecutorConfig,

@@ -13,7 +13,7 @@ export const GraderResultSchema = z.object({
// Agent config in metadata
const AgentConfigMetaSchema = z
  .object({
    type: z.enum(['single', 'orchestrator-executor', 'claude-code']),
    type: z.enum(['single', 'orchestrator-executor']),
    model: z.string().optional(),
  })
  .passthrough()

@@ -59,7 +59,7 @@ export async function validateConfig(
    ) {
      envVarsToCheck.push(config.agent.apiKey)
    }
  } else if (config.agent.type === 'orchestrator-executor') {
  } else {
    const { orchestrator, executor } = config.agent
    if (orchestrator.apiKey && isEnvVarName(orchestrator.apiKey)) {
      envVarsToCheck.push(orchestrator.apiKey)

@@ -1,268 +0,0 @@
import { describe, expect, it } from 'bun:test'
import { mkdtemp, readFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { createAgent } from '../../src/agents'
import { ClaudeCodeEvaluator } from '../../src/agents/claude-code'
import { CaptureContext } from '../../src/capture/context'
import {
  AgentConfigSchema,
  type EvalConfig,
  EvalConfigSchema,
  type Task,
  TaskMetadataSchema,
} from '../../src/types'

function config(): EvalConfig {
  return {
    agent: {
      type: 'claude-code',
      model: 'opus',
      claudePath: 'claude',
      extraArgs: [],
    },
    dataset: 'data/test.jsonl',
    num_workers: 1,
    restart_server_per_task: false,
    browseros: {
      server_url: 'http://127.0.0.1:9110',
      base_cdp_port: 9010,
      base_server_port: 9110,
      base_extension_port: 9310,
      load_extensions: false,
      headless: false,
    },
    graders: [],
  }
}

const task: Task = {
  query_id: 'task-1',
  dataset: 'test',
  query: 'Find the title',
  graders: [],
  metadata: {
    original_task_id: 'task-1',
  },
}

describe('ClaudeCodeEvaluator', () => {
  it('accepts claude-code config defaults without permission mode', () => {
    const agent = AgentConfigSchema.parse({ type: 'claude-code' })

    expect(agent).toEqual({
      type: 'claude-code',
      claudePath: 'claude',
      extraArgs: [],
    })
  })

  it('accepts claude-code as a runnable eval agent', () => {
    const parsed = EvalConfigSchema.parse({
      agent: {
        type: 'claude-code',
        model: 'opus',
      },
      dataset: 'data/test-set.jsonl',
      browseros: {
        server_url: 'http://127.0.0.1:9110',
      },
    })

    expect(parsed.agent.type).toBe('claude-code')
    expect(parsed.agent.model).toBe('opus')
  })

  it('rejects unsupported claude-code settings instead of silently ignoring them', () => {
    expect(
      AgentConfigSchema.safeParse({
        type: 'claude-code',
        permissionMode: 'bypassPermissions',
      }).success,
    ).toBe(false)
    expect(
      AgentConfigSchema.safeParse({
        type: 'claude-code',
        maxTurns: 3,
      }).success,
    ).toBe(false)
  })

  it('allows claude-code in task metadata', () => {
    const metadata = TaskMetadataSchema.parse({
      query_id: 'task-1',
      dataset: 'test',
      query: 'Do the thing',
      started_at: new Date().toISOString(),
      completed_at: new Date().toISOString(),
      total_duration_ms: 100,
      total_steps: 1,
      termination_reason: 'completed',
      final_answer: 'done',
      errors: [],
      warnings: [],
      agent_config: {
        type: 'claude-code',
        model: 'opus',
      },
      grader_results: {},
    })

    expect(metadata.agent_config.type).toBe('claude-code')
  })

  it('is created by the agent factory', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })

    const agent = createAgent({
      config: config(),
      task,
      workerIndex: 0,
      initialPageId: 1,
      outputDir,
      taskOutputDir,
      capture,
    })

    expect(agent).toBeInstanceOf(ClaudeCodeEvaluator)
  })

  it('runs claude code, logs messages, writes MCP config, and saves metadata', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })
    const calls: Array<{ executable: string; args: string[]; cwd: string }> = []
    const evaluator = new ClaudeCodeEvaluator(
      {
        config: config(),
        task,
        workerIndex: 0,
        initialPageId: 1,
        outputDir,
        taskOutputDir,
        capture,
      },
      {
        processRunner: {
          async run(options) {
            calls.push(options)
            await options.onStdoutLine(
              JSON.stringify({
                type: 'assistant',
                message: {
                  content: [{ type: 'text', text: 'The title is Example' }],
                },
              }),
            )
            await options.onStdoutLine(
              JSON.stringify({
                type: 'result',
                subtype: 'success',
                result: 'The title is Example',
              }),
            )
            return { exitCode: 0, stderr: '' }
          },
        },
      },
    )

    const result = await evaluator.execute()

    expect(result.finalAnswer).toBe('The title is Example')
    expect(result.metadata.agent_config).toMatchObject({
      type: 'claude-code',
      model: 'opus',
    })
    expect(result.messages.some((msg) => msg.type === 'user')).toBe(true)
    expect(result.messages.some((msg) => msg.type === 'text-delta')).toBe(true)
    const mcpConfig = JSON.parse(
      await readFile(join(taskOutputDir, 'claude-code-mcp.json'), 'utf-8'),
    )
    expect(mcpConfig.mcpServers.browseros).toMatchObject({
      type: 'http',
      url: 'http://127.0.0.1:9110/mcp',
      headers: {
        'X-BrowserOS-Source': 'sdk-internal',
      },
    })
    expect(calls).toEqual([
      expect.objectContaining({
        executable: 'claude',
        cwd: taskOutputDir,
        args: [
          '-p',
          expect.stringContaining('Task: Find the title'),
          '--mcp-config',
          join(taskOutputDir, 'claude-code-mcp.json'),
          '--strict-mcp-config',
          '--output-format',
          'stream-json',
          '--verbose',
          '--model',
          'opus',
        ],
      }),
    ])
    expect(calls[0].args).not.toContain('--permission-mode')
  })

  it('records non-fatal stream processing errors as warnings', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })
    const evaluator = new ClaudeCodeEvaluator(
      {
        config: config(),
        task,
        workerIndex: 0,
        initialPageId: 1,
        outputDir,
        taskOutputDir,
        capture,
      },
      {
        processRunner: {
          async run(options) {
            await options.onStdoutLine(
              JSON.stringify({
                type: 'result',
                subtype: 'success',
                result: 'done',
              }),
            )
            return {
              exitCode: 0,
              stderr: '',
              streamErrors: ['bad stream line'],
            }
          },
        },
      },
    )

    const result = await evaluator.execute()

    expect(result.finalAnswer).toBe('done')
    expect(result.metadata.warnings).toEqual([
      expect.objectContaining({
        source: 'message_logging',
        message: 'Claude Code stream event processing failed: bad stream line',
      }),
    ])
  })
})

@@ -1,78 +0,0 @@
import { describe, expect, it } from 'bun:test'
import { chmod, mkdtemp, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { createClaudeCodeProcessRunner } from '../../src/agents/claude-code/process-runner'

async function writeStdoutScript(): Promise<string> {
  const dir = await mkdtemp(join(tmpdir(), 'claude-code-runner-'))
  const script = join(dir, 'stdout-lines')
  await writeFile(script, '#!/bin/sh\nprintf "first\\nbad\\nlast\\n"\n')
  await chmod(script, 0o755)
  return script
}

describe('createClaudeCodeProcessRunner', () => {
  it('passes executable and args to the spawn dependency', async () => {
    const calls: unknown[] = []
    const runner = createClaudeCodeProcessRunner({
      spawn: async (cmd, options) => {
        calls.push({ cmd, options })
        await options.onStdoutLine('{"type":"result","result":"done"}')
        return { exitCode: 0, stderr: '' }
      },
    })

    const result = await runner.run({
      executable: 'claude',
      args: ['-p', 'hello'],
      cwd: '/tmp',
      signal: new AbortController().signal,
      onStdoutLine: async () => {},
    })

    expect(result.exitCode).toBe(0)
    expect(calls).toEqual([
      {
        cmd: ['claude', '-p', 'hello'],
        options: expect.objectContaining({ cwd: '/tmp' }),
      },
    ])
  })

  it('returns stderr and non-zero exit codes', async () => {
    const runner = createClaudeCodeProcessRunner({
      spawn: async () => ({ exitCode: 2, stderr: 'bad auth' }),
    })

    const result = await runner.run({
      executable: 'claude',
      args: [],
      cwd: '/tmp',
      signal: new AbortController().signal,
      onStdoutLine: async () => {},
    })

    expect(result).toEqual({ exitCode: 2, stderr: 'bad auth' })
  })

  it('continues reading stdout after a line handler error', async () => {
    const script = await writeStdoutScript()
    const lines: string[] = []
    const runner = createClaudeCodeProcessRunner()

    const result = await runner.run({
      executable: script,
      args: [],
      cwd: '/tmp',
      onStdoutLine: async (line) => {
        lines.push(line)
        if (line === 'bad') throw new Error('bad line')
      },
    })

    expect(result.exitCode).toBe(0)
    expect(result.streamErrors).toEqual(['bad line'])
    expect(lines).toEqual(['first', 'bad', 'last'])
  })
})

@@ -1,102 +0,0 @@
import { describe, expect, it } from 'bun:test'
import {
  ClaudeCodeStreamParser,
  shouldCaptureScreenshotForTool,
} from '../../src/agents/claude-code/stream-parser'

describe('ClaudeCodeStreamParser', () => {
  it('maps assistant text and MCP tool use into eval stream events', () => {
    const parser = new ClaudeCodeStreamParser()
    const events = parser.pushLine(
      JSON.stringify({
        type: 'assistant',
        message: {
          content: [
            { type: 'text', text: 'I will navigate.' },
            {
              type: 'tool_use',
              id: 'toolu_1',
              name: 'mcp__browseros__navigate_page',
              input: { page: 2, url: 'https://example.com' },
            },
          ],
        },
      }),
    )

    expect(events).toEqual([
      { type: 'text-start', id: expect.any(String) },
      {
        type: 'text-delta',
        id: expect.any(String),
        delta: 'I will navigate.',
      },
      { type: 'text-end', id: expect.any(String) },
      {
        type: 'tool-input-available',
        toolCallId: 'toolu_1',
        toolName: 'mcp__browseros__navigate_page',
        input: { page: 2, url: 'https://example.com' },
      },
    ])
    expect(parser.getLastText()).toBe('I will navigate.')
    expect(parser.getToolCallCount()).toBe(1)
  })

  it('maps Claude Code tool results into eval output events', () => {
    const parser = new ClaudeCodeStreamParser()
    const events = parser.pushLine(
      JSON.stringify({
        type: 'user',
        message: {
          content: [
            {
              type: 'tool_result',
              tool_use_id: 'toolu_1',
              content: 'Navigated successfully',
            },
          ],
        },
      }),
    )

    expect(events).toEqual([
      {
        type: 'tool-output-available',
        toolCallId: 'toolu_1',
        output: 'Navigated successfully',
      },
    ])
  })

  it('uses result messages as the authoritative final text', () => {
    const parser = new ClaudeCodeStreamParser()
    parser.pushLine(
      JSON.stringify({
        type: 'assistant',
        message: {
          content: [{ type: 'text', text: 'I will complete the task.' }],
        },
      }),
    )
    parser.pushLine(
      JSON.stringify({
        type: 'result',
        subtype: 'success',
        result: 'Final answer',
      }),
    )

    expect(parser.getLastText()).toBe('Final answer')
  })

  it('identifies BrowserOS MCP tools that should trigger screenshots', () => {
    expect(
      shouldCaptureScreenshotForTool('mcp__browseros__navigate_page'),
    ).toBe(true)
    expect(
      shouldCaptureScreenshotForTool('mcp__browseros__take_screenshot'),
    ).toBe(false)
    expect(shouldCaptureScreenshotForTool('Read')).toBe(false)
  })
})

@@ -7,11 +7,8 @@ import {
  runSuiteCommand,
} from '../../src/cli/commands/suite'
import type { RunEvalOptions } from '../../src/runner/types'
import type { EvalSuite } from '../../src/suites/schema'

async function writeTempSuite(
  overrides: Partial<EvalSuite> = {},
): Promise<{ dir: string; suitePath: string }> {
async function writeTempSuite(): Promise<{ dir: string; suitePath: string }> {
  const dir = await mkdtemp(join(tmpdir(), 'eval-suite-cli-'))
  const suitePath = join(dir, 'agisdk-daily-10.json')
  await writeFile(
@@ -26,9 +23,8 @@ async function writeTempSuite(
        restartBrowserPerTask: true,
        browseros: {
          server_url: 'http://127.0.0.1:9110',
          headless: false,
          headless: true,
        },
        ...overrides,
      },
      null,
      2,
@@ -47,7 +43,9 @@ describe('suite command', () => {

    expect(resolved.kind).toBe('config')
    expect(resolved.suite.id).toBe('browseros-agent-weekly')
    expect(resolved.evalConfig.dataset).toBe('../../data/agisdk-real.jsonl')
    expect(resolved.evalConfig.dataset).toBe(
      '../../data/webbench-2of4-50.jsonl',
    )
    expect(resolved.variant.publicMetadata.agent.apiKeyConfigured).toBe(true)
  })

@@ -77,25 +75,6 @@ describe('suite command', () => {
    expect(resolved.evalConfig.num_workers).toBe(2)
  })

  it('resolves claude-code suites without provider API credentials', async () => {
    const { dir, suitePath } = await writeTempSuite({
      agent: { type: 'claude-code' },
    })

    const resolved = await resolveSuiteCommand({
      suitePath,
      model: 'opus',
      env: {},
    })

    expect(resolved.kind).toBe('suite')
    expect(resolved.evalConfig.agent).toMatchObject({
      type: 'claude-code',
      model: 'opus',
    })
    expect(resolved.datasetPath).toBe(join(dir, 'tasks.jsonl'))
  })

  it('runs config and suite commands through the runner dependency', async () => {
    const calls: RunEvalOptions[] = []
    await runSuiteCommand(

@@ -1,5 +1,5 @@
import { describe, expect, it } from 'bun:test'
import { chmod, mkdtemp, writeFile } from 'node:fs/promises'
import { mkdtemp, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { runPythonJsonEvaluator } from '../../src/grading/python-evaluator'
@@ -11,17 +11,6 @@ async function writeScript(source: string): Promise<string> {
  return script
}

async function writePythonWrapper(): Promise<string> {
  const dir = await mkdtemp(join(tmpdir(), 'eval-python-wrapper-'))
  const wrapper = join(dir, 'python-wrapper')
  await writeFile(
    wrapper,
    '#!/bin/sh\necho custom-python >&2\nexec python3 "$@"\n',
  )
  await chmod(wrapper, 0o755)
  return wrapper
}

describe('runPythonJsonEvaluator', () => {
  it('sends JSON on stdin, captures stderr, and parses stdout JSON', async () => {
    const script = await writeScript(`
@@ -60,34 +49,6 @@ sys.exit(3)
    ).rejects.toThrow('bad verifier')
  })

  it('uses BROWSEROS_EVAL_PYTHON when provided', async () => {
    const script = await writeScript(`
import json, sys
data = json.loads(sys.stdin.read())
print(json.dumps({"ok": data["ok"]}))
`)
    const wrapper = await writePythonWrapper()
    const previousPythonPath = process.env.BROWSEROS_EVAL_PYTHON
    process.env.BROWSEROS_EVAL_PYTHON = wrapper

    try {
      const result = await runPythonJsonEvaluator<{ ok: boolean }>({
        scriptPath: script,
        input: { ok: true },
        timeoutMs: 5_000,
      })

      expect(result.output).toEqual({ ok: true })
      expect(result.stderr).toContain('custom-python')
    } finally {
      if (previousPythonPath === undefined) {
        delete process.env.BROWSEROS_EVAL_PYTHON
      } else {
        process.env.BROWSEROS_EVAL_PYTHON = previousPythonPath
      }
    }
  })

  it('enforces timeouts', async () => {
    const script = await writeScript(`
import time

@@ -1,18 +1,15 @@
import { describe, expect, it } from 'bun:test'
import { mkdtemp, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { adaptEvalConfigFile } from '../../src/suites/config-adapter'

describe('adaptEvalConfigFile', () => {
  it('preserves browseros-agent-weekly AGI SDK config semantics', async () => {
  it('preserves browseros-agent-weekly config semantics', async () => {
    const adapted = await adaptEvalConfigFile(
      'apps/eval/configs/legacy/browseros-agent-weekly.json',
    )

    expect(adapted.suite.id).toBe('browseros-agent-weekly')
    expect(adapted.suite.dataset).toBe('../../data/agisdk-real.jsonl')
    expect(adapted.suite.graders).toEqual(['agisdk_state_diff'])
    expect(adapted.suite.dataset).toBe('../../data/webbench-2of4-50.jsonl')
    expect(adapted.suite.graders).toEqual(['performance_grader'])
    expect(adapted.suite.workers).toBe(10)
    expect(adapted.suite.restartBrowserPerTask).toBe(true)
    expect(adapted.suite.timeoutMs).toBe(1_800_000)
@@ -37,33 +34,4 @@ describe('adaptEvalConfigFile', () => {
      'secret-openrouter-value',
    )
  })

  it('adapts claude-code configs without provider credentials', async () => {
    const dir = await mkdtemp(join(tmpdir(), 'claude-code-config-'))
    const configPath = join(dir, 'claude-code-agisdk.json')
    await writeFile(
      configPath,
      JSON.stringify({
        agent: {
          type: 'claude-code',
          model: 'opus',
        },
        dataset: 'tasks.jsonl',
        num_workers: 1,
        restart_server_per_task: false,
        browseros: {
          server_url: 'http://127.0.0.1:9110',
          headless: false,
        },
      }),
    )

    const adapted = await adaptEvalConfigFile(configPath, { env: {} })

    expect(adapted.suite.agent).toEqual({ type: 'claude-code' })
    expect(adapted.variant.agent).toMatchObject({
      provider: 'claude-code',
      model: 'opus',
    })
  })
})

@@ -35,16 +35,6 @@ describe('EvalSuiteSchema', () => {
    expect(parsed.success).toBe(false)
  })

  it('validates claude-code suites', () => {
    const suite = EvalSuiteSchema.parse({
      id: 'claude-code-agisdk',
      dataset: 'data/agisdk-real.jsonl',
      agent: { type: 'claude-code' },
    })

    expect(suite.agent.type).toBe('claude-code')
  })

  it('validates the daily AGISDK 10-task suite', async () => {
    const loaded = await loadSuite(
      'apps/eval/configs/suites/agisdk-daily-10.json',
@@ -99,40 +89,4 @@ describe('resolveVariant', () => {
      }),
    ).toThrow('EVAL_AGENT_API_KEY')
  })

  it('resolves claude-code variants without model or API key requirements', () => {
    const variant = resolveVariant({
      variantId: 'claude-opus',
      provider: 'claude-code',
      model: 'opus',
      env: {},
    })

    expect(variant.id).toBe('claude-opus')
    expect(variant.agent).toEqual({
      provider: 'claude-code',
      model: 'opus',
    })
    expect(variant.publicMetadata.agent).toEqual({
      provider: 'claude-code',
      model: 'opus',
      apiKeyConfigured: false,
    })

    const defaultVariant = resolveVariant({
      provider: 'claude-code',
      env: {},
    })

    expect(defaultVariant.id).toBe('claude-code')
    expect(defaultVariant.agent).toEqual({
      provider: 'claude-code',
      model: '',
    })
    expect(defaultVariant.publicMetadata.agent).toEqual({
      provider: 'claude-code',
      model: 'default',
      apiKeyConfigured: false,
    })
  })
})

@@ -1,5 +1,3 @@
tmp-shot-*/
tmp-upload-*/
.devtools
db/
identity/

@@ -1,7 +0,0 @@
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: 'sqlite',
  schema: './src/lib/db/schema/index.ts',
  out: './src/lib/db/migrations',
})
@@ -11,7 +11,6 @@
    "start": "bun --watch --env-file=.env.development src/index.ts",
    "start:ci": "bun --env-file=.env.development src/index.ts",
    "build": "bun ../../scripts/build/server.ts --target=all",
    "db:generate": "drizzle-kit generate --config drizzle.config.ts",
    "test": "bun run test:all",
    "test:all": "bun run ./tests/__helpers__/run-test-group.ts all",
    "test:agent": "bun run ./tests/__helpers__/run-test-group.ts agent",
@@ -101,7 +100,6 @@
    "commander": "^14.0.1",
    "core-js": "3.45.1",
    "debug": "4.4.3",
    "drizzle-orm": "^0.45.2",
    "eventsource-parser": "^3.0.0",
    "fuse.js": "^7.1.0",
    "gray-matter": "^4.0.3",
@@ -124,7 +122,6 @@
    "@types/sinon": "^21.0.0",
    "@types/ws": "^8.5.13",
    "async-mutex": "^0.5.0",
    "drizzle-kit": "^0.31.10",
    "pino-pretty": "^13.0.0",
    "puppeteer": "24.23.0",
    "sinon": "^21.0.1",

@@ -18,7 +18,7 @@ import type { ContentfulStatusCode } from 'hono/utils/http-status'
import { HttpAgentError } from '../agent/errors'
import { INLINED_ENV } from '../env'
import { KlavisClient } from '../lib/clients/klavis/klavis-client'
import { initializeOAuth, shutdownOAuth } from '../lib/clients/oauth'
import { initializeOAuth } from '../lib/clients/oauth'
import { getDb } from '../lib/db'
import { logger } from '../lib/logger'
import { Sentry } from '../lib/sentry'
@@ -88,10 +88,11 @@ export async function createHttpServer(config: HttpServerConfig) {
  } = config

  const { onShutdown } = config

  // Initialize OAuth token manager (callback server binds lazily on first PKCE login)
  const tokenManager = browserosId
    ? initializeOAuth(getDb(), browserosId)
    : null
  if (!browserosId) shutdownOAuth()

  const aclPolicyService = new GlobalAclPolicyService()
  await aclPolicyService.load()
@@ -170,7 +171,7 @@ export async function createHttpServer(config: HttpServerConfig) {
    '/shutdown',
    createShutdownRoute({
      onShutdown: () => {
        shutdownOAuth()
        tokenManager?.stopCallbackServer()
        stopKlavisBackground()
        klavisRef.handle?.close().catch((err) =>
          logger.warn('Failed to close Klavis proxy transport', {

@@ -13,12 +13,11 @@ import {
  type TurnFrame,
  TurnRegistry,
} from '../../../lib/agents/active-turn-registry'
import type {
  AgentStore,
  CreateAgentInput,
} from '../../../lib/agents/agent-store'
import type { AgentDefinition } from '../../../lib/agents/agent-types'
import { DbAgentStore } from '../../../lib/agents/db-agent-store'
import {
  type CreateAgentInput,
  FileAgentStore,
} from '../../../lib/agents/file-agent-store'
import {
  FileMessageQueue,
  type QueuedMessage,
@@ -153,7 +152,7 @@ export interface GatewayStatusSnapshot {
}

export class AgentHarnessService {
  private readonly agentStore: AgentStore
  private readonly agentStore: FileAgentStore
  private readonly runtime: AgentRuntime
  private readonly openclawProvisioner: OpenClawProvisioner | null
  private readonly turnRegistry: TurnRegistry
@@ -170,7 +169,7 @@ export class AgentHarnessService {

  constructor(
    deps: {
      agentStore?: AgentStore
      agentStore?: FileAgentStore
      runtime?: AgentRuntime
      browserosServerPort?: number
      openclawGateway?: OpenclawGatewayAccessor
@@ -180,7 +179,7 @@ export class AgentHarnessService {
      messageQueue?: FileMessageQueue
    } = {},
  ) {
    this.agentStore = deps.agentStore ?? new DbAgentStore()
    this.agentStore = deps.agentStore ?? new FileAgentStore()
    this.runtime =
      deps.runtime ??
      new AcpxRuntime({

@@ -311,49 +311,17 @@ export class ChatService {
      contextChanges.length > 0
        ? `${contextChanges.map((c) => `[Context: ${c}]`).join('\n')}\n\n`
        : ''

    // Persist the *raw* user text in session.agent.messages so it
    // round-trips clean to the client's useChat state and to any
    // future history reload. The wrapped form (browser context +
    // <selected_text> + <USER_QUERY>) is built as a transient prompt
    // copy below — the LLM sees it, the user-visible state never
    // does.
    session.agent.appendUserMessage(request.message)
    const promptUserText = contextPrefix + userContent
    const wrappedUserMessageId =
      session.agent.messages[session.agent.messages.length - 1]?.id

    const promptUiMessages = filterValidMessages(session.agent.messages).map(
      (msg) =>
        msg.id === wrappedUserMessageId && msg.role === 'user'
          ? {
              ...msg,
              parts: [{ type: 'text' as const, text: promptUserText }],
            }
          : msg,
    )
    session.agent.appendUserMessage(contextPrefix + userContent)

    return createAgentUIStreamResponse({
      agent: session.agent.toolLoopAgent,
      uiMessages: promptUiMessages,
      uiMessages: filterValidMessages(session.agent.messages),
      abortSignal,
      onFinish: async ({ messages }: { messages: UIMessage[] }) => {
        // The agent loop returns `messages` containing the prompt-
        // wrapped user text. Restore the raw form before persisting
        // so subsequent turns see the clean text and the client's
        // local UIMessage matches what was originally typed.
        const restored = messages.map((msg) =>
          msg.id === wrappedUserMessageId && msg.role === 'user'
            ? {
                ...msg,
                parts: [{ type: 'text' as const, text: request.message }],
              }
            : msg,
        )
        session.agent.messages = filterValidMessages(restored)
        session.agent.messages = filterValidMessages(messages)
        logger.info('Agent execution complete', {
          conversationId: request.conversationId,
          totalMessages: restored.length,
          totalMessages: messages.length,
        })

        if (session?.hiddenPageId) {

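Stripped of the streaming machinery, the hunk above boils down to an id-keyed text swap: replace one message's text before the model call, then swap the original back before persisting. A standalone sketch of that swap (the `Msg` shape is simplified and hypothetical, not the real `UIMessage` type):

```typescript
interface Msg {
  id: string
  role: 'user' | 'assistant'
  text: string
}

// Return a copy of `messages` in which the message matching `id` has
// its text replaced; every other message is passed through untouched.
function withSwappedText(messages: Msg[], id: string, text: string): Msg[] {
  return messages.map((msg) => (msg.id === id ? { ...msg, text } : msg))
}

const history: Msg[] = [{ id: 'u1', role: 'user', text: 'raw question' }]

// Before the model call: wrap the latest user message with context.
const prompt = withSwappedText(history, 'u1', '[context] raw question')
// After the call: restore the raw text before persisting.
const restored = withSwappedText(prompt, 'u1', 'raw question')

console.log(prompt[0].text)   // [context] raw question
console.log(restored[0].text) // raw question
```

Because both directions go through the same pure function, the persisted history and the client's local state stay byte-identical to what the user typed, while the model still sees the wrapped form.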
@@ -558,53 +558,13 @@ function mapToolUseToHistoryToolCall(
}

function userContentToText(content: AcpxUserContent): string {
  if ('Text' in content) return unwrapBrowserosAcpUserMessage(content.Text)
  if ('Text' in content) return unwrapBrowserosAcpPrompt(content.Text)
  if ('Mention' in content) return content.Mention.content
  if ('Image' in content) return content.Image.source ? '[image]' : ''
  return ''
}

/**
 * Strip the BrowserOS ACP envelopes from a user-message text so HTTP
 * consumers (history endpoint, listing's `lastUserMessage`) see only
 * the user's actual question. Two layers are added on the wire today:
 *
 * 1. <role>…</role>\n\n<user_request>…</user_request> from
 *    `buildBrowserosAcpPrompt` (outer).
 * 2. ## Browser Context + <selected_text> + <USER_QUERY> from
 *    `apps/server/src/agent/format-message.ts` (inner).
 *
 * Each step is independently defensive — anchors that don't match are
 * skipped — so partially-wrapped text (older persisted records,
 * messages without a selection, future schema drift) gets best-
 * effort cleaning without throwing. The function is idempotent;
 * applying it to already-clean text is a no-op.
 *
 * TODO: drop this once acpx/runtime exposes a real system-prompt
 * surface so we can stop persisting the role block on every user
 * message. Tracked in the server architecture audit.
 */
export function unwrapBrowserosAcpUserMessage(raw: string): string {
  if (!raw) return raw
  let text = raw

  // Order matters: the outer envelope is added AFTER
  // `escapePromptTagText` runs over the inner formatUserMessage
  // payload (see buildBrowserosAcpPrompt). So once the outer
  // <role>…</role>+<user_request>…</user_request> tags are stripped,
  // the inner content is still entity-escaped (`&lt;USER_QUERY&gt;`,
  // not `<USER_QUERY>`). We decode entities BEFORE the inner-envelope
  // strips so their anchors actually match.
  text = stripOuterRoleEnvelope(text)
  text = decodeBasicEntities(text)
  text = stripBrowserContextHeader(text)
  text = stripSelectedTextBlock(text)
  text = unwrapUserQuery(text)

  return text.trim()
}

function stripOuterRoleEnvelope(value: string): string {
function unwrapBrowserosAcpPrompt(value: string): string {
  const prefix = `${BROWSEROS_ACP_AGENT_INSTRUCTIONS}

<user_request>
@@ -612,41 +572,12 @@ function stripOuterRoleEnvelope(value: string): string {
  const suffix = `
</user_request>`
  if (!value.startsWith(prefix) || !value.endsWith(suffix)) return value
  return value.slice(prefix.length, -suffix.length)

  // TODO: nikhil: remove this once acpx/runtime exposes system prompt support.
  return unescapePromptTagText(value.slice(prefix.length, -suffix.length))
}

function stripBrowserContextHeader(value: string): string {
  // The `## Browser Context` block (when present) ends with the
  // `\n\n---\n\n` separator emitted by `formatBrowserContext`.
  // Anchored at the start of the string; non-greedy match through
  // the body; one removal.
  const match = value.match(/^## Browser Context\n[\s\S]*?\n\n---\n\n/)
  return match ? value.slice(match[0].length) : value
}

function stripSelectedTextBlock(value: string): string {
  // Optional `<selected_text [attrs]>…</selected_text>\n\n` block
  // emitted by `formatUserMessage` when the user has a selection.
  return value.replace(
    /<selected_text(?:[^>]*)>\n[\s\S]*?\n<\/selected_text>\n\n/,
    '',
  )
}

function unwrapUserQuery(value: string): string {
  // `formatUserMessage` always wraps the user's typed text in
  // `<USER_QUERY>\n…\n</USER_QUERY>` — even when no browser context
  // or selection is present.
  const match = value.match(/^<USER_QUERY>\n([\s\S]*?)\n<\/USER_QUERY>$/)
  return match ? match[1] : value
}

function decodeBasicEntities(value: string): string {
  // Reverse the three escapes the server applied via
  // `escapePromptTagText` so user-typed XML-like content (e.g.
  // `<USER_QUERY>` typed literally) renders as the user typed it.
  // Decode `&amp;` last to avoid double-decoding sequences like
  // `&amp;lt;` → `&lt;` → `<`.
function unescapePromptTagText(value: string): string {
  return value
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')

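The decode-`&amp;`-last rule these comments describe can be checked in isolation. This sketch assumes only the three escapes named above (`&lt;`, `&gt;`, `&amp;`) are in play:

```typescript
// Decode the three escapes in an order that cannot double-decode.
// `&amp;` must come last: if it ran first, `&amp;lt;` would become
// `&lt;` and then be wrongly decoded a second time to `<`.
function decodeBasicEntities(value: string): string {
  return value
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&amp;/g, '&')
}

console.log(decodeBasicEntities('&lt;USER_QUERY&gt;')) // <USER_QUERY>
console.log(decodeBasicEntities('&amp;lt;'))           // &lt;
```

The second call shows the invariant: a literally-typed `&lt;` that was escaped on the wire as `&amp;lt;` comes back as `&lt;`, not as a bare `<`.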
@@ -1,37 +0,0 @@
/**
 * @license
 * Copyright 2025 BrowserOS
 * SPDX-License-Identifier: AGPL-3.0-or-later
 */

import type { AgentAdapter, AgentDefinition } from './agent-types'

export interface CreateAgentInput {
  name: string
  adapter: AgentAdapter
  modelId?: string
  reasoningEffort?: string
  providerType?: string
  providerName?: string
  baseUrl?: string
  apiKey?: string
  supportsImages?: boolean
}

export interface AgentStore {
  list(): Promise<AgentDefinition[]>
  get(id: string): Promise<AgentDefinition | null>
  create(input: CreateAgentInput): Promise<AgentDefinition>
  upsertExisting(input: {
    id: string
    name: string
    adapter: AgentAdapter
    modelId?: string
    reasoningEffort?: string
  }): Promise<AgentDefinition>
  update(
    id: string,
    patch: Partial<Pick<AgentDefinition, 'name' | 'pinned'>>,
  ): Promise<AgentDefinition | null>
  delete(id: string): Promise<boolean>
}
@@ -1,201 +0,0 @@
/**
 * @license
 * Copyright 2025 BrowserOS
 * SPDX-License-Identifier: AGPL-3.0-or-later
 */

import { randomUUID } from 'node:crypto'
import { desc, eq } from 'drizzle-orm'
import { type BrowserOsDatabase, getDb } from '../db'
import { type AgentDefinitionRow, agentDefinitions } from '../db/schema'
import { logger } from '../logger'
import {
  resolveDefaultModelId,
  resolveDefaultReasoningEffort,
} from './agent-catalog'
import type { AgentStore, CreateAgentInput } from './agent-store'
import type { AgentDefinition } from './agent-types'

/** Persists BrowserOS-owned harness agent definitions in the process SQLite database. */
export class DbAgentStore implements AgentStore {
  private readonly db: BrowserOsDatabase
  private writeQueue: Promise<unknown> = Promise.resolve()

  constructor(options: { db?: BrowserOsDatabase } = {}) {
    this.db = options.db ?? getDb()
  }

  async list(): Promise<AgentDefinition[]> {
    const rows = this.db
      .select()
      .from(agentDefinitions)
      .orderBy(desc(agentDefinitions.updatedAt))
      .all()
    const agents = rows.map(toAgentDefinition)
    logger.debug('Agent harness store listed agents', {
      count: agents.length,
      store: 'sqlite',
    })
    return agents
  }

  async get(id: string): Promise<AgentDefinition | null> {
    const row =
      this.db
        .select()
        .from(agentDefinitions)
        .where(eq(agentDefinitions.id, id))
        .get() ?? null
    return row ? toAgentDefinition(row) : null
  }

  async create(input: CreateAgentInput): Promise<AgentDefinition> {
    return this.withWriteLock(async () => {
      const now = Date.now()
      const id =
        input.adapter === 'openclaw' ? `oc-${randomUUID()}` : randomUUID()
      const row: AgentDefinitionRow = {
        id,
        name: input.name.trim(),
        adapter: input.adapter,
        modelId: input.modelId ?? resolveDefaultModelId(input.adapter),
        reasoningEffort:
          input.reasoningEffort ?? resolveDefaultReasoningEffort(input.adapter),
        permissionMode: 'approve-all',
        sessionKey: `agent:${id}:main`,
        pinned: false,
        adapterConfigJson: serializeAdapterConfig(input),
        createdAt: now,
        updatedAt: now,
      }
      this.db.insert(agentDefinitions).values(row).run()
      const agent = toAgentDefinition(row)
      logger.info('Agent harness store created agent', {
        agentId: agent.id,
        name: agent.name,
        adapter: agent.adapter,
        modelId: agent.modelId,
        reasoningEffort: agent.reasoningEffort,
        sessionKey: agent.sessionKey,
        store: 'sqlite',
      })
      return agent
    })
  }

  /** Backfills a harness record for gateway-side OpenClaw agents discovered during reconciliation. */
  async upsertExisting(input: {
    id: string
    name: string
    adapter: AgentDefinition['adapter']
    modelId?: string
    reasoningEffort?: string
  }): Promise<AgentDefinition> {
    return this.withWriteLock(async () => {
      const existing = await this.get(input.id)
      if (existing) return existing

      const now = Date.now()
      const row: AgentDefinitionRow = {
        id: input.id,
        name: input.name.trim(),
        adapter: input.adapter,
        modelId: input.modelId ?? resolveDefaultModelId(input.adapter),
        reasoningEffort:
          input.reasoningEffort ?? resolveDefaultReasoningEffort(input.adapter),
        permissionMode: 'approve-all',
        sessionKey: `agent:${input.id}:main`,
        pinned: false,
        adapterConfigJson: null,
        createdAt: now,
        updatedAt: now,
      }
      this.db.insert(agentDefinitions).values(row).run()
      const agent = toAgentDefinition(row)
      logger.info('Agent harness store backfilled agent', {
        agentId: agent.id,
        name: agent.name,
        adapter: agent.adapter,
        sessionKey: agent.sessionKey,
        store: 'sqlite',
      })
      return agent
    })
  }

  async update(
    id: string,
    patch: Partial<Pick<AgentDefinition, 'name' | 'pinned'>>,
  ): Promise<AgentDefinition | null> {
    return this.withWriteLock(async () => {
      const current = await this.get(id)
      if (!current) return null

      const values = {
        ...(patch.name !== undefined ? { name: patch.name.trim() } : {}),
        ...(patch.pinned !== undefined ? { pinned: patch.pinned } : {}),
        updatedAt: Date.now(),
      }
      this.db
        .update(agentDefinitions)
        .set(values)
        .where(eq(agentDefinitions.id, id))
        .run()
      return this.get(id)
    })
  }

  async delete(id: string): Promise<boolean> {
    return this.withWriteLock(async () => {
      const existing = await this.get(id)
      if (!existing) return false
      this.db.delete(agentDefinitions).where(eq(agentDefinitions.id, id)).run()
      logger.info('Agent harness store deleted agent', {
        agentId: id,
        store: 'sqlite',
      })
      return true
    })
  }

  private withWriteLock<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.writeQueue.then(fn, fn)
    this.writeQueue = result.then(
      () => undefined,
      () => undefined,
    )
    return result
  }
}

function toAgentDefinition(row: AgentDefinitionRow): AgentDefinition {
  return {
    id: row.id,
    name: row.name,
    adapter: row.adapter,
    modelId: row.modelId,
    reasoningEffort: row.reasoningEffort,
    permissionMode: row.permissionMode,
    sessionKey: row.sessionKey,
    pinned: row.pinned,
    createdAt: row.createdAt,
    updatedAt: row.updatedAt,
  }
}

function serializeAdapterConfig(input: CreateAgentInput): string | null {
  const config = {
    ...(input.providerType !== undefined
      ? { providerType: input.providerType }
      : {}),
    ...(input.providerName !== undefined
      ? { providerName: input.providerName }
      : {}),
    ...(input.baseUrl !== undefined ? { baseUrl: input.baseUrl } : {}),
    ...(input.apiKey !== undefined ? { apiKey: input.apiKey } : {}),
    ...(input.supportsImages !== undefined
      ? { supportsImages: input.supportsImages }
      : {}),
  }
  return Object.keys(config).length > 0 ? JSON.stringify(config) : null
}
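Both the deleted `DbAgentStore` and the `FileAgentStore` that replaces it serialize mutations through the same `writeQueue` chaining trick: each operation is scheduled behind the previous one with `then(fn, fn)`, so a rejected write cannot wedge the queue. A standalone sketch of that pattern (the `WriteQueue` class name is illustrative):

```typescript
class WriteQueue {
  private queue: Promise<unknown> = Promise.resolve()

  // Run `fn` only after every previously-enqueued operation has
  // settled. Passing `fn` as both the fulfillment and rejection
  // handler means the chain keeps advancing even after a failure;
  // the trailing `then(() => undefined, () => undefined)` swallows
  // the result so the stored queue promise never stays rejected.
  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.queue.then(fn, fn)
    this.queue = result.then(
      () => undefined,
      () => undefined,
    )
    return result
  }
}

const q = new WriteQueue()
const order: number[] = []

await Promise.all([
  q.run(async () => { order.push(1) }),
  q.run(async () => { order.push(2) }),
  q.run(async () => { order.push(3) }),
])
console.log(order.join(',')) // 1,2,3
```

Callers still see each operation's own resolution or rejection via the returned `result`; only the internal chain discards outcomes.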
@@ -0,0 +1,243 @@
/**
 * @license
 * Copyright 2025 BrowserOS
 * SPDX-License-Identifier: AGPL-3.0-or-later
 */

import { randomUUID } from 'node:crypto'
import { mkdir, readFile, rename, writeFile } from 'node:fs/promises'
import { dirname, join } from 'node:path'
import { getBrowserosDir } from '../browseros-dir'
import { logger } from '../logger'
import {
  resolveDefaultModelId,
  resolveDefaultReasoningEffort,
} from './agent-catalog'
import type { AgentAdapter, AgentDefinition } from './agent-types'

interface AgentStoreFile {
  version: 1
  agents: AgentDefinition[]
}

export interface CreateAgentInput {
  name: string
  adapter: AgentAdapter
  modelId?: string
  reasoningEffort?: string
  /**
   * Provider fields used only when `adapter === 'openclaw'`. They are
   * forwarded to the gateway-side createAgent call by the harness
   * service. Other adapters ignore them.
   */
  providerType?: string
  providerName?: string
  baseUrl?: string
  apiKey?: string
  supportsImages?: boolean
}

export class FileAgentStore {
  private readonly filePath: string
  private writeQueue: Promise<unknown> = Promise.resolve()

  constructor(options: { filePath?: string } = {}) {
    this.filePath =
      options.filePath ??
      join(getBrowserosDir(), 'agents', 'harness', 'agents.json')
  }

  async list(): Promise<AgentDefinition[]> {
    const file = await this.read()
    const agents = [...file.agents].sort((a, b) => b.updatedAt - a.updatedAt)
    logger.debug('Agent harness store listed agents', {
      count: agents.length,
      filePath: this.filePath,
    })
    return agents
  }

  async get(id: string): Promise<AgentDefinition | null> {
    const file = await this.read()
    const agent = file.agents.find((entry) => entry.id === id) ?? null
    logger.debug('Agent harness store loaded agent', {
      agentId: id,
      found: Boolean(agent),
      adapter: agent?.adapter,
      filePath: this.filePath,
    })
    return agent
  }

  async create(input: CreateAgentInput): Promise<AgentDefinition> {
    return this.withWriteLock(async () => {
      const now = Date.now()
      // OpenClaw agent names must match ^[a-z][a-z0-9-]*$, so prefix with
      // a fixed letter to guarantee a valid name when the harness id is
      // also used as the gateway-side agent name. Other adapters keep
      // raw UUIDs to preserve compatibility with existing records.
      const id =
        input.adapter === 'openclaw' ? `oc-${randomUUID()}` : randomUUID()
      const agent: AgentDefinition = {
        id,
        name: input.name.trim(),
        adapter: input.adapter,
        modelId: input.modelId ?? resolveDefaultModelId(input.adapter),
        reasoningEffort:
          input.reasoningEffort ?? resolveDefaultReasoningEffort(input.adapter),
        permissionMode: 'approve-all',
        sessionKey: `agent:${id}:main`,
        createdAt: now,
        updatedAt: now,
      }
      const file = await this.read()
      await this.write({ ...file, agents: [...file.agents, agent] })
      logger.info('Agent harness store created agent', {
        agentId: agent.id,
        name: agent.name,
        adapter: agent.adapter,
        modelId: agent.modelId,
        reasoningEffort: agent.reasoningEffort,
        sessionKey: agent.sessionKey,
        filePath: this.filePath,
      })
      return agent
    })
  }

  /**
   * Inserts a harness record using a caller-provided id. Used to backfill
   * harness records for gateway-side OpenClaw agents that pre-date the
   * dual-creation flow (or were created directly via the legacy
   * `/claw/agents` API). No-ops when an entry with this id already
   * exists, so the call is safe to run on every server start.
   */
  async upsertExisting(input: {
    id: string
    name: string
    adapter: AgentAdapter
    modelId?: string
    reasoningEffort?: string
  }): Promise<AgentDefinition> {
    return this.withWriteLock(async () => {
      const file = await this.read()
      const existing = file.agents.find((entry) => entry.id === input.id)
      if (existing) return existing
      const now = Date.now()
      const agent: AgentDefinition = {
        id: input.id,
        name: input.name.trim(),
        adapter: input.adapter,
        modelId: input.modelId ?? resolveDefaultModelId(input.adapter),
        reasoningEffort:
          input.reasoningEffort ?? resolveDefaultReasoningEffort(input.adapter),
        permissionMode: 'approve-all',
        sessionKey: `agent:${input.id}:main`,
        createdAt: now,
        updatedAt: now,
      }
      await this.write({ ...file, agents: [...file.agents, agent] })
      logger.info('Agent harness store backfilled agent', {
        agentId: agent.id,
        name: agent.name,
        adapter: agent.adapter,
        sessionKey: agent.sessionKey,
        filePath: this.filePath,
      })
      return agent
    })
  }

  /**
   * Apply a partial update to an agent record. Returns the updated
   * record, or `null` if no agent matches `id`. Atomic via the same
   * temp-file + rename + write-queue rules as `create`. Bumps
   * `updatedAt` so the rail's recency sort reflects the change.
   *
   * Currently consumed by the pin-toggle mutation; the rename UI will
   * use the same patch surface.
   */
  async update(
    id: string,
    patch: Partial<Pick<AgentDefinition, 'name' | 'pinned'>>,
  ): Promise<AgentDefinition | null> {
    return this.withWriteLock(async () => {
      const file = await this.read()
      const index = file.agents.findIndex((agent) => agent.id === id)
      if (index < 0) return null
      const current = file.agents[index]
      const next: AgentDefinition = {
        ...current,
        ...(patch.name !== undefined ? { name: patch.name.trim() } : {}),
        ...(patch.pinned !== undefined ? { pinned: patch.pinned } : {}),
        updatedAt: Date.now(),
      }
      const agents = [...file.agents]
      agents[index] = next
      await this.write({ ...file, agents })
      logger.info('Agent harness store updated agent', {
        agentId: id,
        patchedFields: Object.keys(patch),
        filePath: this.filePath,
      })
      return next
    })
  }

  async delete(id: string): Promise<boolean> {
    return this.withWriteLock(async () => {
      const file = await this.read()
      const agents = file.agents.filter((agent) => agent.id !== id)
      if (agents.length === file.agents.length) return false
      await this.write({ ...file, agents })
      logger.info('Agent harness store deleted agent', {
        agentId: id,
        filePath: this.filePath,
      })
      return true
    })
  }

  private async read(): Promise<AgentStoreFile> {
    try {
      const raw = await readFile(this.filePath, 'utf8')
      const parsed = JSON.parse(raw) as AgentStoreFile
      if (parsed.version !== 1 || !Array.isArray(parsed.agents)) {
        return emptyStoreFile()
      }
      return parsed
    } catch (err) {
      if (isNotFoundError(err)) return emptyStoreFile()
      throw err
    }
  }

  private async write(file: AgentStoreFile): Promise<void> {
    await mkdir(dirname(this.filePath), { recursive: true })
    const tmpPath = `${this.filePath}.${process.pid}.${Date.now()}.tmp`
    await writeFile(tmpPath, `${JSON.stringify(file, null, 2)}\n`, 'utf8')
    await rename(tmpPath, this.filePath)
  }

  private withWriteLock<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.writeQueue.then(fn, fn)
    this.writeQueue = result.then(
      () => undefined,
      () => undefined,
    )
    return result
  }
}

function emptyStoreFile(): AgentStoreFile {
  return { version: 1, agents: [] }
}

function isNotFoundError(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    'code' in err &&
    err.code === 'ENOENT'
  )
}
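The `write` path in `FileAgentStore` leans on `rename` being atomic within a single filesystem: the full JSON is written to a uniquely-named temp file first, then renamed over the target, so a concurrent reader observes either the old complete file or the new complete file, never a half-written one. A minimal standalone sketch of the same pattern:

```typescript
import { mkdir, readFile, rename, writeFile } from 'node:fs/promises'
import { dirname, join } from 'node:path'
import { tmpdir } from 'node:os'

// Write `data` to `filePath` atomically. The pid + timestamp suffix
// keeps concurrent writers from clobbering each other's temp files;
// the final rename replaces the target in one step.
async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
  await mkdir(dirname(filePath), { recursive: true })
  const tmpPath = `${filePath}.${process.pid}.${Date.now()}.tmp`
  await writeFile(tmpPath, `${JSON.stringify(data, null, 2)}\n`, 'utf8')
  await rename(tmpPath, filePath) // atomic on the same filesystem
}

const target = join(tmpdir(), 'atomic-demo', 'agents.json')
await atomicWriteJson(target, { version: 1, agents: [] })
console.log(JSON.parse(await readFile(target, 'utf8')).version) // 1
```

Note the caveat in the comment: the guarantee only holds when the temp file and the target live on the same filesystem, which is why the temp path is derived from `filePath` rather than placed in a system temp directory.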
@@ -4,23 +4,20 @@
  * SPDX-License-Identifier: AGPL-3.0-or-later
  */

-import type { BrowserOsDatabase } from '../../db'
+import type { Database } from 'bun:sqlite'
 import { OAuthCallbackServer } from './callback-server'
-import type { OAuthTokenManager } from './token-manager'
-import { OAuthTokenManager as OAuthTokenManagerImpl } from './token-manager'
+import { OAuthTokenManager } from './token-manager'
 import { OAuthTokenStore } from './token-store'

 let tokenManager: OAuthTokenManager | null = null

-/** Initializes the process OAuth manager using the BrowserOS Drizzle database. */
 export function initializeOAuth(
-  db: BrowserOsDatabase,
+  db: Database,
   browserosId: string,
 ): OAuthTokenManager {
   shutdownOAuth()
   const store = new OAuthTokenStore(db)
   const callbackServer = new OAuthCallbackServer()
-  tokenManager = new OAuthTokenManagerImpl(store, browserosId, callbackServer)
+  tokenManager = new OAuthTokenManager(store, browserosId, callbackServer)
   callbackServer.setTokenManager(tokenManager)
   return tokenManager
 }
@@ -28,9 +25,3 @@ export function initializeOAuth(
 export function getOAuthTokenManager(): OAuthTokenManager | null {
   return tokenManager
 }
-
-/** Stops the process OAuth manager and clears global access to provider tokens. */
-export function shutdownOAuth(): void {
-  tokenManager?.stopCallbackServer()
-  tokenManager = null
-}
@@ -9,31 +9,7 @@ import { TIMEOUTS } from '@browseros/shared/constants/timeouts'
 import { logger } from '../../logger'
 import type { OAuthCallbackServer } from './callback-server'
 import { getOAuthProvider, type OAuthProviderConfig } from './providers'

-export interface StoredOAuthTokens {
-  accessToken: string
-  refreshToken: string
-  expiresAt: number
-  email?: string
-  accountId?: string
-}
-
-export interface OAuthStatus {
-  authenticated: boolean
-  email?: string
-  provider: string
-}
-
-export interface OAuthTokenStore {
-  upsertTokens(
-    browserosId: string,
-    provider: string,
-    tokens: StoredOAuthTokens,
-  ): void
-  getTokens(browserosId: string, provider: string): StoredOAuthTokens | null
-  deleteTokens(browserosId: string, provider: string): void
-  getStatus(browserosId: string, provider: string): OAuthStatus
-}
+import type { OAuthTokenStore, StoredOAuthTokens } from './token-store'

 interface PendingOAuthFlow {
   provider: string
@@ -479,7 +455,7 @@ export class OAuthTokenManager {
   }

   private stopCallbackIfIdle(): void {
-    const hasPkceFlows = this.pendingFlows.size > 0
+    const hasPkceFlows = [...this.pendingFlows.values()].some(() => true)
     if (!hasPkceFlows) {
       this.callbackServer.stop()
     }
@@ -2,85 +2,98 @@
  * @license
  * Copyright 2025 BrowserOS
  * SPDX-License-Identifier: AGPL-3.0-or-later
+ *
+ * SQLite storage for OAuth tokens.
  */

-import { and, eq } from 'drizzle-orm'
-import type { BrowserOsDatabase } from '../../db'
-import { type OAuthTokenRow, oauthTokens } from '../../db/schema'
-import type {
-  OAuthStatus,
-  OAuthTokenStore as OAuthTokenStoreContract,
-  StoredOAuthTokens,
-} from './token-manager'
+import type { Database } from 'bun:sqlite'

-/** Persists OAuth tokens in the BrowserOS Drizzle database for server-managed LLM providers. */
-export class OAuthTokenStore implements OAuthTokenStoreContract {
-  constructor(private readonly db: BrowserOsDatabase) {}
+export interface StoredOAuthTokens {
+  accessToken: string
+  refreshToken: string
+  expiresAt: number
+  email?: string
+  accountId?: string
+}
+
+export interface OAuthStatus {
+  authenticated: boolean
+  email?: string
+  provider: string
+}
+
+export class OAuthTokenStore {
+  constructor(private readonly db: Database) {}

   upsertTokens(
     browserosId: string,
     provider: string,
     tokens: StoredOAuthTokens,
   ): void {
-    const row: OAuthTokenRow = {
-      accessToken: tokens.accessToken,
-      refreshToken: tokens.refreshToken,
-      expiresAt: tokens.expiresAt,
-      email: tokens.email ?? null,
-      accountId: tokens.accountId ?? null,
-      updatedAt: Date.now(),
-    }
-    this.db
-      .insert(oauthTokens)
-      .values(row)
-      .onConflictDoUpdate({
-        target: [oauthTokens.browserosId, oauthTokens.provider],
-        set: row,
-      })
-      .run()
+    const stmt = this.db.prepare(`
+      INSERT INTO oauth_tokens (browseros_id, provider, access_token, refresh_token, expires_at, email, account_id, updated_at)
+      VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))
+      ON CONFLICT (browseros_id, provider) DO UPDATE SET
+        access_token = excluded.access_token,
+        refresh_token = excluded.refresh_token,
+        expires_at = excluded.expires_at,
+        email = excluded.email,
+        account_id = excluded.account_id,
+        updated_at = datetime('now')
+    `)
+    stmt.run(
+      browserosId,
+      provider,
+      tokens.accessToken,
+      tokens.refreshToken,
+      tokens.expiresAt,
+      tokens.email ?? null,
+      tokens.accountId ?? null,
+    )
   }

   getTokens(browserosId: string, provider: string): StoredOAuthTokens | null {
-    const row = this.findRow(browserosId, provider)
+    const row = this.db
+      .prepare(
+        'SELECT access_token, refresh_token, expires_at, email, account_id FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
+      )
+      .get(browserosId, provider) as {
+      access_token: string
+      refresh_token: string
+      expires_at: number
+      email: string | null
+      account_id: string | null
+    } | null

     if (!row) return null
     return {
-      accessToken: row.accessToken,
-      refreshToken: row.refreshToken,
-      expiresAt: row.expiresAt,
+      accessToken: row.access_token,
+      refreshToken: row.refresh_token,
+      expiresAt: row.expires_at,
       email: row.email ?? undefined,
-      accountId: row.accountId ?? undefined,
+      accountId: row.account_id ?? undefined,
     }
   }

   deleteTokens(browserosId: string, provider: string): void {
-    this.db.delete(oauthTokens).where(tokenKey(browserosId, provider)).run()
+    this.db
+      .prepare(
+        'DELETE FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
+      )
+      .run(browserosId, provider)
   }

   getStatus(browserosId: string, provider: string): OAuthStatus {
-    const row = this.findRow(browserosId, provider)
+    const row = this.db
+      .prepare(
+        'SELECT email FROM oauth_tokens WHERE browseros_id = ? AND provider = ?',
+      )
+      .get(browserosId, provider) as { email: string | null } | null

     return {
       authenticated: row !== null,
       email: row?.email ?? undefined,
       provider,
     }
   }
-
-  private findRow(browserosId: string, provider: string): OAuthTokenRow | null {
-    return (
-      this.db
-        .select()
-        .from(oauthTokens)
-        .where(tokenKey(browserosId, provider))
-        .get() ?? null
-    )
-  }
 }
-
-function tokenKey(browserosId: string, provider: string) {
-  return and(
-    eq(oauthTokens.browserosId, browserosId),
-    eq(oauthTokens.provider, provider),
-  )
-}
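A note on the raw-SQL rewrite above: `ON CONFLICT (browseros_id, provider) DO UPDATE` keys rows on the composite pair, so saving tokens twice for the same identity/provider overwrites rather than duplicates. A minimal in-memory sketch of that "insert or overwrite" semantics — the `Map`, `upsert`, and token values here are illustrative, not code from this repo:

```typescript
// Sketch: composite-key upsert semantics, last write wins.
type TokenRow = { accessToken: string; refreshToken: string; expiresAt: number }

const rows = new Map<string, TokenRow>()

function upsert(browserosId: string, provider: string, row: TokenRow): void {
  // Map.set on an existing key replaces the value, like ON CONFLICT DO UPDATE.
  rows.set(`${browserosId}:${provider}`, row)
}

upsert('id-1', 'anthropic', { accessToken: 'a1', refreshToken: 'r1', expiresAt: 100 })
upsert('id-1', 'anthropic', { accessToken: 'a2', refreshToken: 'r2', expiresAt: 200 })
console.log(rows.size) // 1
console.log(rows.get('id-1:anthropic')?.accessToken) // a2
```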
@@ -1,82 +0,0 @@
-/**
- * @license
- * Copyright 2025 BrowserOS
- * SPDX-License-Identifier: AGPL-3.0-or-later
- */
-
-import { Database as BunDatabase } from 'bun:sqlite'
-import { existsSync, mkdirSync } from 'node:fs'
-import { dirname, join } from 'node:path'
-import { fileURLToPath } from 'node:url'
-import { type BunSQLiteDatabase, drizzle } from 'drizzle-orm/bun-sqlite'
-import { migrate } from 'drizzle-orm/bun-sqlite/migrator'
-import * as schema from './schema'
-
-export type BrowserOsDatabase = BunSQLiteDatabase<typeof schema>
-
-export interface DbHandle {
-  path: string
-  migrationsDir: string
-  sqlite: BunDatabase
-  db: BrowserOsDatabase
-}
-
-export interface OpenDbOptions {
-  dbPath: string
-  resourcesDir?: string
-  migrationsDir?: string
-  runMigrations?: boolean
-}
-
-const sourceMigrationsDir = fileURLToPath(
-  new URL('./migrations', import.meta.url),
-)
-
-/** Opens BrowserOS SQLite and applies checked-in Drizzle migrations before callers use the DB. */
-export function openBrowserOsDatabase(options: OpenDbOptions): DbHandle {
-  const migrationsDir = resolveMigrationsDir(options)
-  mkdirSync(dirname(options.dbPath), { recursive: true })
-
-  const sqlite = new BunDatabase(options.dbPath)
-  sqlite.exec('PRAGMA journal_mode = WAL')
-  sqlite.exec('PRAGMA foreign_keys = ON')
-
-  const db = drizzle(sqlite, { schema })
-  if (options.runMigrations !== false) {
-    migrate(db, { migrationsFolder: migrationsDir })
-  }
-
-  return {
-    path: options.dbPath,
-    migrationsDir,
-    sqlite,
-    db,
-  }
-}
-
-/** Resolves migrations from explicit test paths, packaged resources, or the source tree. */
-export function resolveMigrationsDir(
-  options: Pick<OpenDbOptions, 'migrationsDir' | 'resourcesDir'> = {},
-): string {
-  if (options.migrationsDir) {
-    if (existsSync(options.migrationsDir)) return options.migrationsDir
-    throw new Error(
-      `Drizzle migrations directory not found. Checked: ${options.migrationsDir}`,
-    )
-  }
-
-  const candidates = [
-    options.resourcesDir
-      ? join(options.resourcesDir, 'db', 'migrations')
-      : null,
-    sourceMigrationsDir,
-  ].filter((candidate): candidate is string => Boolean(candidate))
-
-  for (const candidate of candidates) {
-    if (existsSync(candidate)) return candidate
-  }
-
-  throw new Error(
-    `Drizzle migrations directory not found. Checked: ${candidates.join(', ')}`,
-  )
-}
@@ -3,39 +3,31 @@
  * Copyright 2025 BrowserOS
  * SPDX-License-Identifier: AGPL-3.0-or-later
  */
-import {
-  type BrowserOsDatabase,
-  type DbHandle,
-  type OpenDbOptions,
-  openBrowserOsDatabase,
-} from './client'
+import { Database } from 'bun:sqlite'
+import { initSchema } from './schema'

-let handle: DbHandle | null = null
+let db: Database | null = null

-/** Initializes the process-wide BrowserOS database handle used by server services. */
-export function initializeDb(options: OpenDbOptions): DbHandle {
-  if (!handle) {
-    handle = openBrowserOsDatabase(options)
+export function initializeDb(dbPath: string): Database {
+  if (!db) {
+    db = new Database(dbPath)
+    db.exec('PRAGMA journal_mode = WAL')
+    initSchema(db)
   }
-  return handle
+  return db
 }

-export function getDbHandle(): DbHandle {
-  if (!handle) {
+export function getDb(): Database {
+  if (!db) {
     throw new Error('Database not initialized. Call initializeDb() first.')
   }
-  return handle
-}
-
-export function getDb(): BrowserOsDatabase {
-  return getDbHandle().db
+  return db
 }

 export function closeDb(): void {
-  if (handle) {
-    handle.sqlite.close()
-    handle = null
+  if (db) {
+    db.close()
+    db = null
   }
 }
-
-export type { BrowserOsDatabase, DbHandle, OpenDbOptions }
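Both versions of `db/index.ts` share the same lazy-singleton shape: the first `initializeDb()` call creates the resource, later calls return the cached one, and `getDb()` before initialization throws. A standalone sketch of that pattern with a stand-in resource (the `{ path }` object is illustrative, not the real handle type):

```typescript
// Sketch: module-level lazy singleton with a guarded getter.
let instance: { path: string } | null = null

function initializeDb(path: string): { path: string } {
  if (!instance) instance = { path } // only the first call creates it
  return instance
}

function getDb(): { path: string } {
  if (!instance) {
    throw new Error('Database not initialized. Call initializeDb() first.')
  }
  return instance
}

function closeDb(): void {
  instance = null // allow re-initialization, mirroring closeDb() above
}

console.log(initializeDb('/tmp/a.db') === initializeDb('/tmp/b.db')) // true — second path is ignored
console.log(getDb().path) // /tmp/a.db
```

One consequence worth noting: because the second call's argument is silently ignored, callers must agree on a single path, which is why the lifecycle code below computes `dbPath` in exactly one place.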
@@ -1,17 +0,0 @@
-CREATE TABLE `agent_definitions` (
-  `id` text PRIMARY KEY NOT NULL,
-  `name` text NOT NULL,
-  `adapter` text NOT NULL,
-  `model_id` text NOT NULL,
-  `reasoning_effort` text NOT NULL,
-  `permission_mode` text DEFAULT 'approve-all' NOT NULL,
-  `session_key` text NOT NULL,
-  `pinned` integer DEFAULT false NOT NULL,
-  `adapter_config_json` text,
-  `created_at` integer NOT NULL,
-  `updated_at` integer NOT NULL
-);
---> statement-breakpoint
-CREATE UNIQUE INDEX `agent_definitions_session_key_unique` ON `agent_definitions` (`session_key`);--> statement-breakpoint
-CREATE INDEX `agent_definitions_updated_at_idx` ON `agent_definitions` (`updated_at`);--> statement-breakpoint
-CREATE INDEX `agent_definitions_adapter_updated_at_idx` ON `agent_definitions` (`adapter`,`updated_at`);
@@ -1,13 +0,0 @@
-CREATE TABLE `oauth_tokens` (
-  `browseros_id` text NOT NULL,
-  `provider` text NOT NULL,
-  `access_token` text NOT NULL,
-  `refresh_token` text NOT NULL,
-  `expires_at` integer NOT NULL,
-  `email` text,
-  `account_id` text,
-  `updated_at` integer NOT NULL,
-  PRIMARY KEY(`browseros_id`, `provider`)
-);
---> statement-breakpoint
-CREATE INDEX `oauth_tokens_browseros_id_idx` ON `oauth_tokens` (`browseros_id`);
@@ -1,123 +0,0 @@ (generated Drizzle snapshot, shown compacted)
-{
-  "version": "6",
-  "dialect": "sqlite",
-  "id": "faeb2b91-efc6-497a-9867-258fbcebd8b2",
-  "prevId": "00000000-0000-0000-0000-000000000000",
-  "tables": {
-    "agent_definitions": {
-      "name": "agent_definitions",
-      "columns": {
-        "id": { "name": "id", "type": "text", "primaryKey": true, "notNull": true, "autoincrement": false },
-        "name": { "name": "name", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "adapter": { "name": "adapter", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "model_id": { "name": "model_id", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "reasoning_effort": { "name": "reasoning_effort", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "permission_mode": { "name": "permission_mode", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false, "default": "'approve-all'" },
-        "session_key": { "name": "session_key", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "pinned": { "name": "pinned", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false, "default": false },
-        "adapter_config_json": { "name": "adapter_config_json", "type": "text", "primaryKey": false, "notNull": false, "autoincrement": false },
-        "created_at": { "name": "created_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "updated_at": { "name": "updated_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false }
-      },
-      "indexes": {
-        "agent_definitions_session_key_unique": { "name": "agent_definitions_session_key_unique", "columns": ["session_key"], "isUnique": true },
-        "agent_definitions_updated_at_idx": { "name": "agent_definitions_updated_at_idx", "columns": ["updated_at"], "isUnique": false },
-        "agent_definitions_adapter_updated_at_idx": { "name": "agent_definitions_adapter_updated_at_idx", "columns": ["adapter", "updated_at"], "isUnique": false }
-      },
-      "foreignKeys": {},
-      "compositePrimaryKeys": {},
-      "uniqueConstraints": {},
-      "checkConstraints": {}
-    }
-  },
-  "views": {},
-  "enums": {},
-  "_meta": { "schemas": {}, "tables": {}, "columns": {} },
-  "internal": { "indexes": {} }
-}
@@ -1,200 +0,0 @@ (generated Drizzle snapshot, shown compacted)
-{
-  "version": "6",
-  "dialect": "sqlite",
-  "id": "6be24444-91aa-492e-96e5-d84c0f020468",
-  "prevId": "faeb2b91-efc6-497a-9867-258fbcebd8b2",
-  "tables": {
-    "agent_definitions": {
-      "name": "agent_definitions",
-      "columns": {
-        "id": { "name": "id", "type": "text", "primaryKey": true, "notNull": true, "autoincrement": false },
-        "name": { "name": "name", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "adapter": { "name": "adapter", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "model_id": { "name": "model_id", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "reasoning_effort": { "name": "reasoning_effort", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "permission_mode": { "name": "permission_mode", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false, "default": "'approve-all'" },
-        "session_key": { "name": "session_key", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "pinned": { "name": "pinned", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false, "default": false },
-        "adapter_config_json": { "name": "adapter_config_json", "type": "text", "primaryKey": false, "notNull": false, "autoincrement": false },
-        "created_at": { "name": "created_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "updated_at": { "name": "updated_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false }
-      },
-      "indexes": {
-        "agent_definitions_session_key_unique": { "name": "agent_definitions_session_key_unique", "columns": ["session_key"], "isUnique": true },
-        "agent_definitions_updated_at_idx": { "name": "agent_definitions_updated_at_idx", "columns": ["updated_at"], "isUnique": false },
-        "agent_definitions_adapter_updated_at_idx": { "name": "agent_definitions_adapter_updated_at_idx", "columns": ["adapter", "updated_at"], "isUnique": false }
-      },
-      "foreignKeys": {},
-      "compositePrimaryKeys": {},
-      "uniqueConstraints": {},
-      "checkConstraints": {}
-    },
-    "oauth_tokens": {
-      "name": "oauth_tokens",
-      "columns": {
-        "browseros_id": { "name": "browseros_id", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "provider": { "name": "provider", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "access_token": { "name": "access_token", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "refresh_token": { "name": "refresh_token", "type": "text", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "expires_at": { "name": "expires_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false },
-        "email": { "name": "email", "type": "text", "primaryKey": false, "notNull": false, "autoincrement": false },
-        "account_id": { "name": "account_id", "type": "text", "primaryKey": false, "notNull": false, "autoincrement": false },
-        "updated_at": { "name": "updated_at", "type": "integer", "primaryKey": false, "notNull": true, "autoincrement": false }
-      },
-      "indexes": {
-        "oauth_tokens_browseros_id_idx": { "name": "oauth_tokens_browseros_id_idx", "columns": ["browseros_id"], "isUnique": false }
-      },
-      "foreignKeys": {},
-      "compositePrimaryKeys": {
-        "oauth_tokens_browseros_id_provider_pk": { "columns": ["browseros_id", "provider"], "name": "oauth_tokens_browseros_id_provider_pk" }
-      },
-      "uniqueConstraints": {},
-      "checkConstraints": {}
-    }
-  },
-  "views": {},
-  "enums": {},
-  "_meta": { "schemas": {}, "tables": {}, "columns": {} },
-  "internal": { "indexes": {} }
-}
@@ -1,20 +0,0 @@ (migration journal, shown compacted)
-{
-  "version": "7",
-  "dialect": "sqlite",
-  "entries": [
-    { "idx": 0, "version": "6", "when": 1777750582590, "tag": "0000_zippy_psylocke", "breakpoints": true },
-    { "idx": 1, "version": "6", "when": 1777752799806, "tag": "0001_lazy_orphan", "breakpoints": true }
-  ]
-}
32 packages/browseros-agent/apps/server/src/lib/db/schema.ts Normal file
@@ -0,0 +1,32 @@
+/**
+ * @license
+ * Copyright 2025 BrowserOS
+ * SPDX-License-Identifier: AGPL-3.0-or-later
+ */
+import type { Database } from 'bun:sqlite'
+
+const IDENTITY_TABLE = `
+  CREATE TABLE IF NOT EXISTS identity (
+    id INTEGER PRIMARY KEY CHECK (id = 1),
+    browseros_id TEXT NOT NULL,
+    created_at TEXT NOT NULL DEFAULT (datetime('now'))
+  )`
+
+const OAUTH_TOKENS_TABLE = `
+  CREATE TABLE IF NOT EXISTS oauth_tokens (
+    browseros_id TEXT NOT NULL,
+    provider TEXT NOT NULL,
+    access_token TEXT NOT NULL,
+    refresh_token TEXT NOT NULL,
+    expires_at INTEGER NOT NULL,
+    email TEXT,
+    account_id TEXT,
+    created_at TEXT NOT NULL DEFAULT (datetime('now')),
+    updated_at TEXT NOT NULL DEFAULT (datetime('now')),
+    PRIMARY KEY (browseros_id, provider)
+  )`
+
+export function initSchema(db: Database): void {
+  db.exec(IDENTITY_TABLE)
+  db.exec(OAUTH_TOKENS_TABLE)
+}
@@ -1,48 +0,0 @@
-/**
- * @license
- * Copyright 2025 BrowserOS
- * SPDX-License-Identifier: AGPL-3.0-or-later
- */
-
-import type { InferInsertModel, InferSelectModel } from 'drizzle-orm'
-import {
-  index,
-  integer,
-  sqliteTable,
-  text,
-  uniqueIndex,
-} from 'drizzle-orm/sqlite-core'
-
-export const agentDefinitions = sqliteTable(
-  'agent_definitions',
-  {
-    id: text('id').primaryKey(),
-    name: text('name').notNull(),
-    adapter: text('adapter', {
-      enum: ['claude', 'codex', 'openclaw'],
-    }).notNull(),
-    modelId: text('model_id').notNull(),
-    reasoningEffort: text('reasoning_effort').notNull(),
-    permissionMode: text('permission_mode', {
-      enum: ['approve-all'],
-    })
-      .notNull()
-      .default('approve-all'),
-    sessionKey: text('session_key').notNull(),
-    pinned: integer('pinned', { mode: 'boolean' }).notNull().default(false),
-    adapterConfigJson: text('adapter_config_json'),
-    createdAt: integer('created_at').notNull(),
-    updatedAt: integer('updated_at').notNull(),
-  },
-  (table) => [
-    uniqueIndex('agent_definitions_session_key_unique').on(table.sessionKey),
-    index('agent_definitions_updated_at_idx').on(table.updatedAt),
-    index('agent_definitions_adapter_updated_at_idx').on(
-      table.adapter,
-      table.updatedAt,
-    ),
-  ],
-)
-
-export type AgentDefinitionRow = InferSelectModel<typeof agentDefinitions>
-export type NewAgentDefinitionRow = InferInsertModel<typeof agentDefinitions>
@@ -1,8 +0,0 @@
-/**
- * @license
- * Copyright 2025 BrowserOS
- * SPDX-License-Identifier: AGPL-3.0-or-later
- */
-
-export * from './agents'
-export * from './oauth'
@@ -1,35 +0,0 @@
-/**
- * @license
- * Copyright 2025 BrowserOS
- * SPDX-License-Identifier: AGPL-3.0-or-later
- */
-
-import type { InferInsertModel, InferSelectModel } from 'drizzle-orm'
-import {
-  index,
-  integer,
-  primaryKey,
-  sqliteTable,
-  text,
-} from 'drizzle-orm/sqlite-core'
-
-export const oauthTokens = sqliteTable(
-  'oauth_tokens',
-  {
-    browserosId: text('browseros_id').notNull(),
-    provider: text('provider').notNull(),
-    accessToken: text('access_token').notNull(),
-    refreshToken: text('refresh_token').notNull(),
-    expiresAt: integer('expires_at').notNull(),
-    email: text('email'),
-    accountId: text('account_id'),
-    updatedAt: integer('updated_at').notNull(),
-  },
-  (table) => [
-    primaryKey({ columns: [table.browserosId, table.provider] }),
-    index('oauth_tokens_browseros_id_idx').on(table.browserosId),
-  ],
-)
-
-export type OAuthTokenRow = InferSelectModel<typeof oauthTokens>
-export type NewOAuthTokenRow = InferInsertModel<typeof oauthTokens>
@@ -3,27 +3,22 @@
  * Copyright 2025 BrowserOS
  * SPDX-License-Identifier: AGPL-3.0-or-later
  */
-import { mkdirSync, readFileSync, writeFileSync } from 'node:fs'
-import { dirname } from 'node:path'
+import type { Database } from 'bun:sqlite'

 export interface IdentityConfig {
   installId?: string
-  statePath?: string
+  db: Database
 }

-interface IdentityStateFile {
-  browserosId: string
-}
-
-export class IdentityService {
-  private browserOSId: string | null = null
+class IdentityService {
+  private browserOSId: string | null = null // Unique identifier for the BrowserOS instance

-  /** Chooses the stable BrowserOS id without coupling it to the product SQLite schema. */
   initialize(config: IdentityConfig): void {
+    const { installId, db } = config
+
+    // Priority: DB > config > generate new
     this.browserOSId =
-      normalizeInstallId(config.installId) ??
-      this.loadFromState(config.statePath) ??
-      this.generateAndSave(config.statePath)
+      this.loadFromDb(db) || installId || this.generateAndSave(db)
   }

   getBrowserOSId(): string {
@@ -39,43 +34,20 @@ export class IdentityService {
     return this.browserOSId !== null
   }

-  private loadFromState(statePath: string | undefined): string | null {
-    if (!statePath) return null
-    try {
-      const parsed = JSON.parse(
-        readFileSync(statePath, 'utf8'),
-      ) as Partial<IdentityStateFile>
-      return typeof parsed.browserosId === 'string' &&
-        parsed.browserosId.length > 0
-        ? parsed.browserosId
-        : null
-    } catch (err) {
-      if (isNotFoundError(err)) return null
-      throw err
-    }
+  private loadFromDb(db: Database): string | null {
+    const stmt = db.prepare('SELECT browseros_id FROM identity WHERE id = 1')
+    const row = stmt.get() as { browseros_id: string } | null
+    return row?.browseros_id ?? null
   }

-  private generateAndSave(statePath: string | undefined): string {
+  private generateAndSave(db: Database): string {
     const browserosId = crypto.randomUUID()
-    if (statePath) {
-      mkdirSync(dirname(statePath), { recursive: true })
-      writeFileSync(statePath, `${JSON.stringify({ browserosId })}\n`, 'utf8')
-    }
+    const stmt = db.prepare(
+      'INSERT OR REPLACE INTO identity (id, browseros_id) VALUES (1, ?)',
+    )
+    stmt.run(browserosId)
     return browserosId
   }
 }
-
-function normalizeInstallId(installId: string | undefined): string | null {
-  return installId && installId.length > 0 ? installId : null
-}
-
-function isNotFoundError(err: unknown): boolean {
-  return (
-    typeof err === 'object' &&
-    err !== null &&
-    'code' in err &&
-    err.code === 'ENOENT'
-  )
-}

 export const identity = new IdentityService()
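The rewritten `initialize()` encodes a three-level resolution order (`// Priority: DB > config > generate new`): a persisted row wins, then the configured install id, and only then is a fresh UUID generated and saved. A standalone sketch of just that resolution logic, with the DB lookup and generator abstracted into parameters (names are illustrative, not from this repo):

```typescript
// Sketch: stable-id resolution, highest-priority non-empty source wins.
function resolveBrowserOsId(
  fromDb: string | null,          // what loadFromDb() returned, if anything
  installId: string | undefined,  // optional id handed in via config
  generate: () => string,         // fallback generator, e.g. crypto.randomUUID
): string {
  return fromDb || installId || generate()
}

console.log(resolveBrowserOsId('db-id', 'cfg-id', () => 'new-id')) // db-id
console.log(resolveBrowserOsId(null, 'cfg-id', () => 'new-id')) // cfg-id
console.log(resolveBrowserOsId(null, undefined, () => 'new-id')) // new-id
```

Note the diff swaps `??` for `||`, so an empty-string install id also falls through to the generator, which makes the separate `normalizeInstallId` helper from the old version unnecessary.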
@@ -8,6 +8,7 @@
  * Manages server lifecycle: initialization, startup, and shutdown.
  */

+import type { Database } from 'bun:sqlite'
 import fs from 'node:fs'
 import path from 'node:path'
 import { EXIT_CODES } from '@browseros/shared/constants/exit-codes'
@@ -45,6 +46,7 @@ import { VERSION } from './version'

 export class Application {
   private config: ServerConfig
+  private db: Database | null = null

   constructor(config: ServerConfig) {
     this.config = config
@@ -179,19 +181,15 @@ export class Application {
     await migrateBuiltinSkills()
     await syncBuiltinSkills()

-    const dbPath = path.join(this.config.executionDir, 'db', 'browseros.sqlite')
-    initializeDb({
-      dbPath,
-      resourcesDir: this.config.resourcesDir,
-    })
+    const dbPath = path.join(
+      this.config.executionDir || this.config.resourcesDir,
+      'browseros.db',
+    )
+    this.db = initializeDb(dbPath)

     identity.initialize({
       installId: this.config.instanceInstallId,
-      statePath: path.join(
-        this.config.executionDir,
-        'identity',
-        'browseros-id.json',
-      ),
+      db: this.db,
     })

     const browserosId = identity.getBrowserOSId()
@@ -5,8 +5,8 @@

 import { describe, expect, it } from 'bun:test'
 import { AgentHarnessService } from '../../../../src/api/services/agents/agent-harness-service'
-import type { AgentStore } from '../../../../src/lib/agents/agent-store'
 import type { AgentDefinition } from '../../../../src/lib/agents/agent-types'
+import type { FileAgentStore } from '../../../../src/lib/agents/file-agent-store'
 import type {
   AgentRuntime,
   AgentStreamEvent,
@@ -44,7 +44,7 @@ describe('AgentHarnessService', () => {
   }

   const service = new AgentHarnessService({
-    agentStore: agentStore as AgentStore,
+    agentStore: agentStore as FileAgentStore,
     runtime,
   })
@@ -128,7 +128,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore([agent]) as AgentStore,
+    agentStore: createAgentStore([agent]) as FileAgentStore,
     runtime,
   })
@@ -158,7 +158,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore(agents) as AgentStore,
+    agentStore: createAgentStore(agents) as FileAgentStore,
     runtime: stubRuntime(),
     openclawProvisioner: provisioner,
   })
@@ -206,7 +206,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore(agents) as AgentStore,
+    agentStore: createAgentStore(agents) as FileAgentStore,
     runtime: stubRuntime(),
     openclawProvisioner: provisioner,
   })
@@ -220,7 +220,7 @@ describe('AgentHarnessService', () => {
   it('refuses to create an OpenClaw agent when no provisioner is wired', async () => {
     const agents: AgentDefinition[] = []
     const service = new AgentHarnessService({
-      agentStore: createAgentStore(agents) as AgentStore,
+      agentStore: createAgentStore(agents) as FileAgentStore,
       runtime: stubRuntime(),
     })
@@ -247,7 +247,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore(agents) as AgentStore,
+    agentStore: createAgentStore(agents) as FileAgentStore,
     runtime: stubRuntime(),
     openclawProvisioner: provisioner,
   })
@@ -289,7 +289,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore(agents) as AgentStore,
+    agentStore: createAgentStore(agents) as FileAgentStore,
     runtime: stubRuntime(),
     openclawProvisioner: provisioner,
   })
@@ -329,7 +329,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore(agents) as AgentStore,
+    agentStore: createAgentStore(agents) as FileAgentStore,
     runtime: stubRuntime(),
     openclawProvisioner: provisioner,
   })
@@ -383,7 +383,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore([agent]) as AgentStore,
+    agentStore: createAgentStore([agent]) as FileAgentStore,
     runtime,
   })
@@ -432,7 +432,7 @@ describe('AgentHarnessService', () => {
     },
   }
   const service = new AgentHarnessService({
-    agentStore: createAgentStore([agent]) as AgentStore,
+    agentStore: createAgentStore([agent]) as FileAgentStore,
     runtime,
   })
@@ -511,7 +511,7 @@ function createAgentStore(agents: AgentDefinition[]) {
       agents.push(agent)
       return agent
     },
-  } satisfies Partial<AgentStore>
+  } satisfies Partial<FileAgentStore>
 }

 async function collectStream(
@@ -298,9 +298,7 @@ describe('ChatService Klavis session rebuilds', () => {
const firstAgent = createFakeAgent()
const secondAgent = createFakeAgent()
agentToReturn = firstAgent
let lastPromptUiMessages: MockMessage[] | undefined
streamResponseHandler = async ({ onFinish, uiMessages }) => {
lastPromptUiMessages = uiMessages
await onFinish({ messages: uiMessages ?? [] })
return new Response('ok')
}
@@ -350,24 +348,13 @@ describe('ChatService Klavis session rebuilds', () => {

expect(createAgentSpy.mock.calls.length - createCallsBefore).toBe(2)
expect(firstAgent.dispose).toHaveBeenCalledTimes(1)

// Persisted form stays the raw user text — TKT-774. The Klavis
// context-change notice and the formatted user envelope go only
// into the transient prompt copy fed to the LLM.
expect(secondAgent.messages).toHaveLength(2)
const persistedRebuiltMessage =
secondAgent.messages[1]?.parts[0]?.text ?? ''
expect(persistedRebuiltMessage).toBe('check integrations again')

// Prompt copy (what the agent loop actually saw) carries the
// context-change prefix so the model knows about the new tools.
const promptRebuiltMessage =
lastPromptUiMessages?.at(-1)?.parts[0]?.text ?? ''
expect(promptRebuiltMessage).toContain(
const rebuiltMessage = secondAgent.messages[1]?.parts[0]?.text ?? ''
expect(rebuiltMessage).toContain(
'Klavis app integration tools are now available for the following connected apps: slack.',
)
expect(promptRebuiltMessage).not.toContain('klavis:pending')
expect(promptRebuiltMessage).not.toContain('klavis:connected')
expect(rebuiltMessage).not.toContain('klavis:pending')
expect(rebuiltMessage).not.toContain('klavis:connected')
})

it('does not rebuild a session with no enabled managed apps when Klavis connects', async () => {

@@ -15,11 +15,7 @@ import type {
AcpRuntime as AcpxCoreRuntime,
} from 'acpx/runtime'
import { createRuntimeStore } from 'acpx/runtime'
import { formatUserMessage } from '../../../src/agent/format-message'
import {
AcpxRuntime,
unwrapBrowserosAcpUserMessage,
} from '../../../src/lib/agents/acpx-runtime'
import { AcpxRuntime } from '../../../src/lib/agents/acpx-runtime'
import type { AgentDefinition } from '../../../src/lib/agents/agent-types'
import type { AgentStreamEvent } from '../../../src/lib/agents/types'

@@ -309,242 +305,6 @@ open <example.com>
])
})

it('strips the inner formatUserMessage envelope from history payloads', async () => {
const cwd = await mkdtemp(join(tmpdir(), 'browseros-acpx-runtime-'))
const stateDir = await mkdtemp(join(tmpdir(), 'browseros-acpx-state-'))
tempDirs.push(cwd, stateDir)
const timestamp = '2026-04-29T20:00:00.000Z'
const agent: AgentDefinition = {
id: 'agent-1',
name: 'Browser bot',
adapter: 'codex',
permissionMode: 'approve-all',
sessionKey: 'agent:agent-1:main',
createdAt: 1000,
updatedAt: 1000,
}
// Wrapped form persisted to the session record. Note that the
// inner formatUserMessage envelope's tags (`<selected_text>`,
// `<USER_QUERY>`) are escaped to `&lt;…&gt;` because
// `buildBrowserosAcpPrompt` runs `escapePromptTagText` over the
// entire payload before adding the outer envelope.
const wrapped = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
## Browser Context
**Active Tab:** Tab 1 (Page ID: 101) - "Example" (https://example.com)

---

&lt;selected_text (from "Example" — https://example.com)&gt;
quoted selection
&lt;/selected_text&gt;

&lt;USER_QUERY&gt;
summarise this
&lt;/USER_QUERY&gt;
</user_request>`
const record: AcpSessionRecord = {
schema: 'acpx.session.v1',
acpxRecordId: agent.sessionKey,
acpSessionId: 'sid-1',
agentSessionId: 'inner-1',
agentCommand: 'codex --acp',
cwd,
name: agent.sessionKey,
createdAt: timestamp,
lastUsedAt: timestamp,
lastSeq: 0,
eventLog: {
active_path: '',
segment_count: 0,
max_segment_bytes: 0,
max_segments: 0,
},
closed: false,
messages: [
{
User: {
id: 'user-1',
content: [{ Text: wrapped }],
},
},
],
updated_at: timestamp,
cumulative_token_usage: {},
request_token_usage: {},
acpx: {},
}
await createRuntimeStore({ stateDir }).save(record)

const history = await new AcpxRuntime({ cwd, stateDir }).getHistory({
agent,
sessionId: 'main',
})

expect(history.items[0]?.text).toBe('summarise this')
})

describe('unwrapBrowserosAcpUserMessage', () => {
it('returns clean text for input that has no envelope', () => {
expect(unwrapBrowserosAcpUserMessage('hello')).toBe('hello')
})

it('handles empty input', () => {
expect(unwrapBrowserosAcpUserMessage('')).toBe('')
})

it('strips a fully wrapped message and decodes escapes', () => {
// On-wire form: `escapePromptTagText` escapes the inner tags
// before the outer envelope is added.
const wrapped = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
## Browser Context
**Active Tab:** Tab 1 (Page ID: 101) - "Example" (https://example.com)

---

&lt;USER_QUERY&gt;
look at example
&lt;/USER_QUERY&gt;
</user_request>`
expect(unwrapBrowserosAcpUserMessage(wrapped)).toBe('look at example')
})

it('strips the inner envelope when only the inner wrapper is present', () => {
// Plain (un-escaped) inner-envelope-only input — covers the
// hypothetical case where some future code path stores the
// unwrapped-outer form directly.
const innerOnly = `## Browser Context
**Active Tab:** Tab 1

---

<USER_QUERY>
just inner
</USER_QUERY>`
expect(unwrapBrowserosAcpUserMessage(innerOnly)).toBe('just inner')
})

it('strips the outer envelope when only the outer wrapper is present', () => {
const outerOnly = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
just outer
</user_request>`
expect(unwrapBrowserosAcpUserMessage(outerOnly)).toBe('just outer')
})

it('removes a selected_text block with attribute string', () => {
const wrapped = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
<selected_text (from "Title" — https://example.com)>
selection body
</selected_text>

<USER_QUERY>
question with selection
</USER_QUERY>
</user_request>`
expect(unwrapBrowserosAcpUserMessage(wrapped)).toBe(
'question with selection',
)
})

it('is idempotent — applying twice equals applying once', () => {
const wrapped = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
## Browser Context
ctx

---

<USER_QUERY>
hello
</USER_QUERY>
</user_request>`
const once = unwrapBrowserosAcpUserMessage(wrapped)
const twice = unwrapBrowserosAcpUserMessage(once)
expect(twice).toBe(once)
expect(twice).toBe('hello')
})

it('round-trips formatUserMessage output back to the user typed text', () => {
const userText = 'fix the OAuth redirect after login'
const formatted = formatUserMessage(userText, {
activeTab: {
id: 1,
url: 'https://example.com',
title: 'Example',
},
})
// Mirror what acpx-runtime.ts's buildBrowserosAcpPrompt does
// on the wire: escape the inner payload (so its tags survive
// round-trip serialisation) and then wrap with <role>…</role>
// + <user_request>…</user_request>. Constants/escape rules
// are duplicated here so the test pins the exact serialised
// shape rather than the helpers that produce it.
const escapeForPrompt = (value: string) =>
value.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
const ROLE = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>`
const wrapped = `${ROLE}

<user_request>
${escapeForPrompt(formatted)}
</user_request>`
expect(unwrapBrowserosAcpUserMessage(wrapped)).toBe(userText)
})

it('preserves user-typed angle-brackets via the entity decode', () => {
// `escapePromptTagText` escapes every `<` and `>` in the
// payload — including the inner envelope's own tags AND any
// user-typed tag-like content. The on-wire form below is what
// a user typing `<USER_QUERY>foo</USER_QUERY>` literally
// produces after formatUserMessage + buildBrowserosAcpPrompt.
const wrapped = `<role>
You are BrowserOS - a browser agent with full control of a Chromium browser through the BrowserOS MCP server.

Use the BrowserOS MCP server for all browser tasks, including browsing the web, interacting with pages, inspecting browser state, and managing tabs, windows, bookmarks, and history.
</role>

<user_request>
&lt;USER_QUERY&gt;
&lt;USER_QUERY&gt;foo&lt;/USER_QUERY&gt;
&lt;/USER_QUERY&gt;
</user_request>`
expect(unwrapBrowserosAcpUserMessage(wrapped)).toBe(
'<USER_QUERY>foo</USER_QUERY>',
)
})
})

it('continues the turn when runtime config control is unavailable', async () => {
const calls: Array<{ method: string; input: unknown }> = []
const runtime = new AcpxRuntime({

@@ -1,140 +0,0 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it } from 'bun:test'
import { mkdtempSync } from 'node:fs'
import { rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { eq } from 'drizzle-orm'
import { DbAgentStore } from '../../../src/lib/agents/db-agent-store'
import { closeDb, initializeDb } from '../../../src/lib/db'
import { agentDefinitions } from '../../../src/lib/db/schema'

describe('DbAgentStore', () => {
const tempDirs: string[] = []

afterEach(async () => {
closeDb()
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('creates, lists, loads, updates, and deletes named agents', async () => {
const store = createStore()

const agent = await store.create({
name: ' Review bot ',
adapter: 'codex',
modelId: 'gpt-5.5',
reasoningEffort: 'medium',
})

expect(agent).toMatchObject({
name: 'Review bot',
adapter: 'codex',
modelId: 'gpt-5.5',
reasoningEffort: 'medium',
permissionMode: 'approve-all',
sessionKey: `agent:${agent.id}:main`,
pinned: false,
})

const updated = await store.update(agent.id, {
name: 'Renamed bot',
pinned: true,
})

expect(updated).toMatchObject({
id: agent.id,
name: 'Renamed bot',
pinned: true,
})
expect(await store.get(agent.id)).toEqual(updated)
expect(await store.list()).toEqual([updated])
expect(await store.delete(agent.id)).toBe(true)
expect(await store.delete(agent.id)).toBe(false)
expect(await store.list()).toEqual([])
})

it('serializes concurrent creates without dropping agents', async () => {
const store = createStore()

const created = await Promise.all(
Array.from({ length: 10 }, (_, index) =>
store.create({
name: `Agent ${index}`,
adapter: index % 2 === 0 ? 'codex' : 'claude',
}),
),
)

const listed = await store.list()
expect(listed).toHaveLength(created.length)
expect(new Set(listed.map((agent) => agent.id)).size).toBe(created.length)
})

it('persists OpenClaw adapter config with the agent record', async () => {
const { db, store } = createStoreWithDb()

const agent = await store.create({
name: 'OpenClaw bot',
adapter: 'openclaw',
providerType: 'openai-compatible',
providerName: 'Kimi',
baseUrl: 'https://api.fireworks.ai/inference/v1',
apiKey: 'test-key',
supportsImages: true,
})

const row = db
.select()
.from(agentDefinitions)
.where(eq(agentDefinitions.id, agent.id))
.get()

expect(JSON.parse(row?.adapterConfigJson ?? '{}')).toEqual({
providerType: 'openai-compatible',
providerName: 'Kimi',
baseUrl: 'https://api.fireworks.ai/inference/v1',
apiKey: 'test-key',
supportsImages: true,
})
})

it('upserts gateway-owned OpenClaw records idempotently', async () => {
const store = createStore()

const first = await store.upsertExisting({
id: 'oc-existing',
name: 'Gateway agent',
adapter: 'openclaw',
modelId: 'openrouter/anthropic/claude-sonnet-4.5',
})
const second = await store.upsertExisting({
id: 'oc-existing',
name: 'Changed gateway name',
adapter: 'openclaw',
})

expect(second).toEqual(first)
expect(await store.list()).toEqual([first])
})

function createStore(): DbAgentStore {
return createStoreWithDb().store
}

function createStoreWithDb() {
const dir = mkdtempSync(join(tmpdir(), 'browseros-db-agents-test-'))
tempDirs.push(dir)
const handle = initializeDb({
dbPath: join(dir, 'db', 'browseros.sqlite'),
})
return { db: handle.db, store: new DbAgentStore({ db: handle.db }) }
}
})
@@ -0,0 +1,67 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it } from 'bun:test'
import { mkdtemp, rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { FileAgentStore } from '../../../src/lib/agents/file-agent-store'

describe('FileAgentStore', () => {
const tempDirs: string[] = []

afterEach(async () => {
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('creates, lists, loads, and deletes named agents', async () => {
const dir = await mkdtemp(join(tmpdir(), 'browseros-agents-'))
tempDirs.push(dir)
const store = new FileAgentStore({ filePath: join(dir, 'agents.json') })

const agent = await store.create({
name: 'Review bot',
adapter: 'codex',
modelId: 'gpt-5.5',
reasoningEffort: 'medium',
})

expect(agent).toMatchObject({
name: 'Review bot',
adapter: 'codex',
modelId: 'gpt-5.5',
reasoningEffort: 'medium',
permissionMode: 'approve-all',
sessionKey: `agent:${agent.id}:main`,
})
expect(await store.list()).toEqual([agent])
expect(await store.get(agent.id)).toEqual(agent)

await store.delete(agent.id)
expect(await store.list()).toEqual([])
})

it('serializes concurrent creates without dropping agents', async () => {
const dir = await mkdtemp(join(tmpdir(), 'browseros-agents-'))
tempDirs.push(dir)
const store = new FileAgentStore({ filePath: join(dir, 'agents.json') })

const created = await Promise.all(
Array.from({ length: 10 }, (_, index) =>
store.create({
name: `Agent ${index}`,
adapter: index % 2 === 0 ? 'codex' : 'claude',
}),
),
)

const listed = await store.list()
expect(listed).toHaveLength(created.length)
expect(new Set(listed.map((agent) => agent.id)).size).toBe(created.length)
})
})
@@ -1,84 +0,0 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it, spyOn } from 'bun:test'
import { mkdtempSync } from 'node:fs'
import { rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import {
getOAuthTokenManager,
initializeOAuth,
shutdownOAuth,
} from '../../../../src/lib/clients/oauth'
import { closeDb, initializeDb } from '../../../../src/lib/db'

describe('OAuth client setup', () => {
const tempDirs: string[] = []

afterEach(async () => {
shutdownOAuth()
closeDb()
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('initializes a process token manager backed by the BrowserOS database', () => {
const dir = mkdtempSync(join(tmpdir(), 'browseros-oauth-index-test-'))
tempDirs.push(dir)
const handle = initializeDb({
dbPath: join(dir, 'db', 'browseros.sqlite'),
})

const manager = initializeOAuth(handle.db, 'browseros-1')

expect(getOAuthTokenManager()).toBe(manager)
expect(manager.getStatus('qwen-code')).toEqual({
authenticated: false,
email: undefined,
provider: 'qwen-code',
})

manager.storeTokens('qwen-code', {
accessToken: 'access-token',
refreshToken: 'refresh-token',
expiresIn: 3600,
})

expect(manager.getStatus('qwen-code')).toEqual({
authenticated: true,
email: undefined,
provider: 'qwen-code',
})
})

it('stops and clears the current process token manager', () => {
const handle = initializeTestDb()
const firstManager = initializeOAuth(handle.db, 'browseros-1')
const stopFirst = spyOn(firstManager, 'stopCallbackServer')

const secondManager = initializeOAuth(handle.db, 'browseros-2')

expect(stopFirst).toHaveBeenCalledTimes(1)
expect(getOAuthTokenManager()).toBe(secondManager)

const stopSecond = spyOn(secondManager, 'stopCallbackServer')

shutdownOAuth()

expect(stopSecond).toHaveBeenCalledTimes(1)
expect(getOAuthTokenManager()).toBeNull()
})

function initializeTestDb() {
const dir = mkdtempSync(join(tmpdir(), 'browseros-oauth-index-test-'))
tempDirs.push(dir)
return initializeDb({
dbPath: join(dir, 'db', 'browseros.sqlite'),
})
}
})
@@ -1,81 +0,0 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it } from 'bun:test'
import { mkdtempSync } from 'node:fs'
import { rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { OAuthTokenStore } from '../../../../src/lib/clients/oauth/token-store'
import { closeDb, initializeDb } from '../../../../src/lib/db'

describe('OAuthTokenStore', () => {
const tempDirs: string[] = []

afterEach(async () => {
closeDb()
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('stores, updates, reads, reports status, and deletes provider tokens', () => {
const store = createStore()

store.upsertTokens('browseros-1', 'github-copilot', {
accessToken: 'access-1',
refreshToken: 'refresh-1',
expiresAt: 1234,
email: 'user@example.com',
accountId: 'account-1',
})

expect(store.getTokens('browseros-1', 'github-copilot')).toEqual({
accessToken: 'access-1',
refreshToken: 'refresh-1',
expiresAt: 1234,
email: 'user@example.com',
accountId: 'account-1',
})
expect(store.getStatus('browseros-1', 'github-copilot')).toEqual({
authenticated: true,
email: 'user@example.com',
provider: 'github-copilot',
})

store.upsertTokens('browseros-1', 'github-copilot', {
accessToken: 'access-2',
refreshToken: '',
expiresAt: 0,
})

expect(store.getTokens('browseros-1', 'github-copilot')).toEqual({
accessToken: 'access-2',
refreshToken: '',
expiresAt: 0,
email: undefined,
accountId: undefined,
})

store.deleteTokens('browseros-1', 'github-copilot')

expect(store.getTokens('browseros-1', 'github-copilot')).toBeNull()
expect(store.getStatus('browseros-1', 'github-copilot')).toEqual({
authenticated: false,
email: undefined,
provider: 'github-copilot',
})
})

function createStore(): OAuthTokenStore {
const dir = mkdtempSync(join(tmpdir(), 'browseros-oauth-store-test-'))
tempDirs.push(dir)
const handle = initializeDb({
dbPath: join(dir, 'db', 'browseros.sqlite'),
})
return new OAuthTokenStore(handle.db)
}
})
@@ -1,62 +0,0 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it } from 'bun:test'
import { existsSync, mkdtempSync } from 'node:fs'
import { rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { closeDb, initializeDb } from '../../../src/lib/db'
import { agentDefinitions } from '../../../src/lib/db/schema'

describe('database initialization', () => {
const tempDirs: string[] = []

afterEach(async () => {
closeDb()
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('creates the parent directory, opens sqlite, and runs migrations', () => {
const dir = mkTempDir()
const dbPath = join(dir, 'nested', 'browseros.sqlite')

const handle = initializeDb({ dbPath })
const rows = handle.db.select().from(agentDefinitions).all()

expect(existsSync(dbPath)).toBe(true)
expect(rows).toEqual([])
})

it('is idempotent when initialized twice for the same path', () => {
const dir = mkTempDir()
const dbPath = join(dir, 'browseros.sqlite')

const first = initializeDb({ dbPath })
const second = initializeDb({ dbPath })

expect(second).toBe(first)
})

it('fails clearly when an explicit migration directory is missing', () => {
const dir = mkTempDir()

expect(() =>
initializeDb({
dbPath: join(dir, 'browseros.sqlite'),
migrationsDir: join(dir, 'missing-migrations'),
}),
).toThrow(/Drizzle migrations directory not found/)
})

function mkTempDir(): string {
const dir = mkdtempSync(join(tmpdir(), 'browseros-db-test-'))
tempDirs.push(dir)
return dir
}
})
@@ -1,63 +0,0 @@
/**
* @license
* Copyright 2025 BrowserOS
*/

import { afterEach, describe, expect, it } from 'bun:test'
import { mkdtempSync } from 'node:fs'
import { readFile, rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { IdentityService } from '../../src/lib/identity'

describe('IdentityService', () => {
const tempDirs: string[] = []

afterEach(async () => {
await Promise.all(
tempDirs.map((dir) => rm(dir, { recursive: true, force: true })),
)
tempDirs.length = 0
})

it('uses the install id when config provides one', () => {
const service = new IdentityService()

service.initialize({ installId: 'install-123' })

expect(service.getBrowserOSId()).toBe('install-123')
})

it('ignores an empty install id and generates a fallback id', () => {
const dir = mkTempDir()
const statePath = join(dir, 'identity', 'browseros-id.json')
const service = new IdentityService()

service.initialize({ installId: '', statePath })

expect(service.getBrowserOSId()).not.toBe('')
})

it('persists a generated fallback id without using the database', async () => {
const dir = mkTempDir()
const statePath = join(dir, 'identity', 'browseros-id.json')

const first = new IdentityService()
first.initialize({ statePath })
const id = first.getBrowserOSId()

const second = new IdentityService()
second.initialize({ statePath })

expect(second.getBrowserOSId()).toBe(id)
expect(JSON.parse(await readFile(statePath, 'utf8'))).toEqual({
browserosId: id,
})
})

function mkTempDir(): string {
const dir = mkdtempSync(join(tmpdir(), 'browseros-identity-test-'))
tempDirs.push(dir)
return dir
}
})
@@ -121,15 +121,7 @@ async function setupApplicationTest() {
spyOn(browserosDir, 'writeServerConfig').mockImplementation(async () => {})
spyOn(browserosDir, 'removeServerConfigSync').mockImplementation(() => {})

spyOn(dbModule, 'initializeDb').mockImplementation(
() =>
({
path: '/tmp/browseros-execution/db/browseros.sqlite',
migrationsDir: '/tmp/browseros-resources/db/migrations',
sqlite: { close: () => {} },
db: {},
}) as never,
)
spyOn(dbModule, 'initializeDb').mockImplementation(() => ({}) as never)
spyOn(identityModule.identity, 'initialize').mockImplementation(() => {})
spyOn(identityModule.identity, 'getBrowserOSId').mockImplementation(
() => 'browseros-id',

@@ -187,7 +187,6 @@
|
||||
"commander": "^14.0.1",
|
||||
"core-js": "3.45.1",
|
||||
"debug": "4.4.3",
|
||||
"drizzle-orm": "^0.45.2",
|
||||
"eventsource-parser": "^3.0.0",
|
||||
"fuse.js": "^7.1.0",
|
||||
"gray-matter": "^4.0.3",
|
||||
@@ -210,7 +209,6 @@
|
||||
"@types/sinon": "^21.0.0",
|
||||
"@types/ws": "^8.5.13",
|
||||
"async-mutex": "^0.5.0",
|
||||
"drizzle-kit": "^0.31.10",
|
||||
"pino-pretty": "^13.0.0",
|
||||
"puppeteer": "24.23.0",
|
||||
"sinon": "^21.0.1",
|
||||
@@ -570,8 +568,6 @@
|
||||
|
||||
"@dnd-kit/utilities": ["@dnd-kit/utilities@3.2.2", "", { "dependencies": { "tslib": "^2.0.0" }, "peerDependencies": { "react": ">=16.8.0" } }, "sha512-+MKAJEOfaBe5SmV6t34p80MMKhjvUz0vRrvVJbPT0WElzaOJ/1xs+D+KDv+tD/NE5ujfrChEcshd4fLn0wpiqg=="],
|
||||
|
||||
"@drizzle-team/brocli": ["@drizzle-team/brocli@0.10.2", "", {}, "sha512-z33Il7l5dKjUgGULTqBsQBQwckHh5AbIuxhdsIxDDiZAzBOrZO6q9ogcWC65kU382AfynTfgNumVcNIjuIua6w=="],
|
||||
|
||||
"@emnapi/runtime": ["@emnapi/runtime@1.8.1", "", { "dependencies": { "tslib": "^2.4.0" } }, "sha512-mehfKSMWjjNol8659Z8KxEMrdSJDDot5SXMq00dM8BN4o+CLNXQ0xH2V7EchNHV4RmbZLmmPdEaXZc5H2FXmDg=="],
|
||||
|
||||
"@emoji-mart/data": ["@emoji-mart/data@1.2.1", "", {}, "sha512-no2pQMWiBy6gpBEiqGeU77/bFejDqUTRY7KX+0+iur13op3bqUsXdnwoZs6Xb1zbv0gAj5VvS1PWoUUckSr5Dw=="],
|
||||
@@ -608,10 +604,6 @@
|
||||
|
||||
"@envelop/types": ["@envelop/types@5.2.1", "", { "dependencies": { "@whatwg-node/promise-helpers": "^1.0.0", "tslib": "^2.5.0" } }, "sha512-CsFmA3u3c2QoLDTfEpGr4t25fjMU31nyvse7IzWTvb0ZycuPjMjb0fjlheh+PbhBYb9YLugnT2uY6Mwcg1o+Zg=="],
"@esbuild-kit/core-utils": ["@esbuild-kit/core-utils@3.3.2", "", { "dependencies": { "esbuild": "~0.18.20", "source-map-support": "^0.5.21" } }, "sha512-sPRAnw9CdSsRmEtnsl2WXWdyquogVpB3yZ3dgwJfe8zrOzTsV7cJvmwrKVa+0ma5BoiGJ+BoqkMvawbayKUsqQ=="],
"@esbuild-kit/esm-loader": ["@esbuild-kit/esm-loader@2.6.5", "", { "dependencies": { "@esbuild-kit/core-utils": "^3.3.2", "get-tsconfig": "^4.7.0" } }, "sha512-FxEMIkJKnodyA1OaCUoEvbYRkoZlLZ4d/eXFu9Fh8CbBBgP5EmZxrfTRyN0qpXZ4vOvqnE5YdRdcrmUUXuU+dA=="],
"@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.27.2", "", { "os": "aix", "cpu": "ppc64" }, "sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw=="],
"@esbuild/android-arm": ["@esbuild/android-arm@0.27.2", "", { "os": "android", "cpu": "arm" }, "sha512-DVNI8jlPa7Ujbr1yjU2PfUSRtAUZPG9I1RwW4F4xFB1Imiu2on0ADiI/c3td+KmDtVKNbi+nffGDQMfcIMkwIA=="],
@@ -2412,10 +2404,6 @@
"downshift": ["downshift@9.0.13", "", { "dependencies": { "@babel/runtime": "^7.24.5", "compute-scroll-into-view": "^3.1.0", "prop-types": "^15.8.1", "react-is": "18.2.0", "tslib": "^2.6.2" }, "peerDependencies": { "react": ">=16.12.0" } }, "sha512-fPV+K5jwEzfEAhNhprgCmpWQ23MKwKNzdbtK0QQFiw4hbFcKhMeGB+ccorfWJzmsLR5Dty+CmLDduWlIs74G/w=="],
"drizzle-kit": ["drizzle-kit@0.31.10", "", { "dependencies": { "@drizzle-team/brocli": "^0.10.2", "@esbuild-kit/esm-loader": "^2.5.5", "esbuild": "^0.25.4", "tsx": "^4.21.0" }, "bin": { "drizzle-kit": "bin.cjs" } }, "sha512-7OZcmQUrdGI+DUNNsKBn1aW8qSoKuTH7d0mYgSP8bAzdFzKoovxEFnoGQp2dVs82EOJeYycqRtciopszwUf8bw=="],
"drizzle-orm": ["drizzle-orm@0.45.2", "", { "peerDependencies": { "@aws-sdk/client-rds-data": ">=3", "@cloudflare/workers-types": ">=4", "@electric-sql/pglite": ">=0.2.0", "@libsql/client": ">=0.10.0", "@libsql/client-wasm": ">=0.10.0", "@neondatabase/serverless": ">=0.10.0", "@op-engineering/op-sqlite": ">=2", "@opentelemetry/api": "^1.4.1", "@planetscale/database": ">=1.13", "@prisma/client": "*", "@tidbcloud/serverless": "*", "@types/better-sqlite3": "*", "@types/pg": "*", "@types/sql.js": "*", "@upstash/redis": ">=1.34.7", "@vercel/postgres": ">=0.8.0", "@xata.io/client": "*", "better-sqlite3": ">=7", "bun-types": "*", "expo-sqlite": ">=14.0.0", "gel": ">=2", "knex": "*", "kysely": "*", "mysql2": ">=2", "pg": ">=8", "postgres": ">=3", "sql.js": ">=1", "sqlite3": ">=5" }, "optionalPeers": ["@aws-sdk/client-rds-data", "@cloudflare/workers-types", "@electric-sql/pglite", "@libsql/client", "@libsql/client-wasm", "@neondatabase/serverless", "@op-engineering/op-sqlite", "@opentelemetry/api", "@planetscale/database", "@prisma/client", "@tidbcloud/serverless", "@types/better-sqlite3", "@types/pg", "@types/sql.js", "@upstash/redis", "@vercel/postgres", "@xata.io/client", "better-sqlite3", "bun-types", "expo-sqlite", "gel", "knex", "kysely", "mysql2", "pg", "postgres", "sql.js", "sqlite3"] }, "sha512-kY0BSaTNYWnoDMVoyY8uxmyHjpJW1geOmBMdSSicKo9CIIWkSxMIj2rkeSR51b8KAPB7m+qysjuHme5nKP+E5Q=="],
"dset": ["dset@3.1.4", "", {}, "sha512-2QF/g9/zTaPDc3BjNcVTGoBbXBgYfMTTceLaYcFJ/W9kggFUkhxD/hMEeuLKbugyef9SqAx8cpgwlIP/jinUTA=="],
"dunder-proto": ["dunder-proto@1.0.1", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="],
@@ -4430,8 +4418,6 @@
"@emotion/serialize/@emotion/unitless": ["@emotion/unitless@0.10.0", "", {}, "sha512-dFoMUuQA20zvtVTuxZww6OHoJYgrzfKM1t52mVySDJnMSEa08ruEvdYQbhvyu6soU+NeLVd3yKfTfT0NeV6qGg=="],
"@esbuild-kit/core-utils/esbuild": ["esbuild@0.18.20", "", { "optionalDependencies": { "@esbuild/android-arm": "0.18.20", "@esbuild/android-arm64": "0.18.20", "@esbuild/android-x64": "0.18.20", "@esbuild/darwin-arm64": "0.18.20", "@esbuild/darwin-x64": "0.18.20", "@esbuild/freebsd-arm64": "0.18.20", "@esbuild/freebsd-x64": "0.18.20", "@esbuild/linux-arm": "0.18.20", "@esbuild/linux-arm64": "0.18.20", "@esbuild/linux-ia32": "0.18.20", "@esbuild/linux-loong64": "0.18.20", "@esbuild/linux-mips64el": "0.18.20", "@esbuild/linux-ppc64": "0.18.20", "@esbuild/linux-riscv64": "0.18.20", "@esbuild/linux-s390x": "0.18.20", "@esbuild/linux-x64": "0.18.20", "@esbuild/netbsd-x64": "0.18.20", "@esbuild/openbsd-x64": "0.18.20", "@esbuild/sunos-x64": "0.18.20", "@esbuild/win32-arm64": "0.18.20", "@esbuild/win32-ia32": "0.18.20", "@esbuild/win32-x64": "0.18.20" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-ceqxoedUrcayh7Y7ZX6NdbbDzGROiyVBgC4PriJThBKSVPWnnFHZAkfI1lJT8QFkOwH4qOS2SJkS4wvpGl8BpA=="],
"@google/gemini-cli-core/@google/genai": ["@google/genai@1.16.0", "", { "dependencies": { "google-auth-library": "^9.14.2", "ws": "^8.18.0" }, "peerDependencies": { "@modelcontextprotocol/sdk": "^1.11.4" }, "optionalPeers": ["@modelcontextprotocol/sdk"] }, "sha512-hdTYu39QgDFxv+FB6BK2zi4UIJGWhx2iPc0pHQ0C5Q/RCi+m+4gsryIzTGO+riqWcUA8/WGYp6hpqckdOBNysw=="],
"@google/gemini-cli-core/@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.26.0", "", { "dependencies": { "@hono/node-server": "^1.19.9", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.2.1", "express-rate-limit": "^8.2.1", "hono": "^4.11.4", "jose": "^6.1.3", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.1" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-Y5RmPncpiDtTXDbLKswIJzTqu2hyBKxTNsgKqKclDbhIgg1wgtf1fRuvxgTnRfcnxtvvgbIEcqUOzZrJ6iSReg=="],
@@ -4898,8 +4884,6 @@
"dotenv-expand/dotenv": ["dotenv@16.6.1", "", {}, "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow=="],
"drizzle-kit/esbuild": ["esbuild@0.25.12", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.25.12", "@esbuild/android-arm": "0.25.12", "@esbuild/android-arm64": "0.25.12", "@esbuild/android-x64": "0.25.12", "@esbuild/darwin-arm64": "0.25.12", "@esbuild/darwin-x64": "0.25.12", "@esbuild/freebsd-arm64": "0.25.12", "@esbuild/freebsd-x64": "0.25.12", "@esbuild/linux-arm": "0.25.12", "@esbuild/linux-arm64": "0.25.12", "@esbuild/linux-ia32": "0.25.12", "@esbuild/linux-loong64": "0.25.12", "@esbuild/linux-mips64el": "0.25.12", "@esbuild/linux-ppc64": "0.25.12", "@esbuild/linux-riscv64": "0.25.12", "@esbuild/linux-s390x": "0.25.12", "@esbuild/linux-x64": "0.25.12", "@esbuild/netbsd-arm64": "0.25.12", "@esbuild/netbsd-x64": "0.25.12", "@esbuild/openbsd-arm64": "0.25.12", "@esbuild/openbsd-x64": "0.25.12", "@esbuild/openharmony-arm64": "0.25.12", "@esbuild/sunos-x64": "0.25.12", "@esbuild/win32-arm64": "0.25.12", "@esbuild/win32-ia32": "0.25.12", "@esbuild/win32-x64": "0.25.12" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg=="],
"duplexify/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="],
"ecdsa-sig-formatter/safe-buffer": ["safe-buffer@5.2.1", "", {}, "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="],
@@ -5364,50 +5348,6 @@
"@browseros/server/@types/bun/bun-types": ["bun-types@1.3.5", "", { "dependencies": { "@types/node": "*" } }, "sha512-inmAYe2PFLs0SUbFOWSVD24sg1jFlMPxOjOSSCYqUgn4Hsc3rDc7dFvfVYjFPNHtov6kgUeulV4SxbuIV/stPw=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-arm": ["@esbuild/android-arm@0.18.20", "", { "os": "android", "cpu": "arm" }, "sha512-fyi7TDI/ijKKNZTUJAQqiG5T7YjJXgnzkURqmGj13C6dCqckZBLdl4h7bkhHt/t0WP+zO9/zwroDvANaOqO5Sw=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-arm64": ["@esbuild/android-arm64@0.18.20", "", { "os": "android", "cpu": "arm64" }, "sha512-Nz4rJcchGDtENV0eMKUNa6L12zz2zBDXuhj/Vjh18zGqB44Bi7MBMSXjgunJgjRhCmKOjnPuZp4Mb6OKqtMHLQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/android-x64": ["@esbuild/android-x64@0.18.20", "", { "os": "android", "cpu": "x64" }, "sha512-8GDdlePJA8D6zlZYJV/jnrRAi6rOiNaCC/JclcXpB+KIuvfBN4owLtgzY2bsxnx666XjJx2kDPUmnTtR8qKQUg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.18.20", "", { "os": "darwin", "cpu": "arm64" }, "sha512-bxRHW5kHU38zS2lPTPOyuyTm+S+eobPUnTNkdJEfAddYgEcll4xkT8DB9d2008DtTbl7uJag2HuE5NZAZgnNEA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.18.20", "", { "os": "darwin", "cpu": "x64" }, "sha512-pc5gxlMDxzm513qPGbCbDukOdsGtKhfxD1zJKXjCCcU7ju50O7MeAZ8c4krSJcOIJGFR+qx21yMMVYwiQvyTyQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.18.20", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-yqDQHy4QHevpMAaxhhIwYPMv1NECwOvIpGCZkECn8w2WFHXjEwrBn3CeNIYsibZ/iZEUemj++M26W3cNR5h+Tw=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.18.20", "", { "os": "freebsd", "cpu": "x64" }, "sha512-tgWRPPuQsd3RmBZwarGVHZQvtzfEBOreNuxEMKFcd5DaDn2PbBxfwLcj4+aenoh7ctXcbXmOQIn8HI6mCSw5MQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-arm": ["@esbuild/linux-arm@0.18.20", "", { "os": "linux", "cpu": "arm" }, "sha512-/5bHkMWnq1EgKr1V+Ybz3s1hWXok7mDFUMQ4cG10AfW3wL02PSZi5kFpYKrptDsgb2WAJIvRcDm+qIvXf/apvg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.18.20", "", { "os": "linux", "cpu": "arm64" }, "sha512-2YbscF+UL7SQAVIpnWvYwM+3LskyDmPhe31pE7/aoTMFKKzIc9lLbyGUpmmb8a8AixOL61sQ/mFh3jEjHYFvdA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.18.20", "", { "os": "linux", "cpu": "ia32" }, "sha512-P4etWwq6IsReT0E1KHU40bOnzMHoH73aXp96Fs8TIT6z9Hu8G6+0SHSw9i2isWrD2nbx2qo5yUqACgdfVGx7TA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-nXW8nqBTrOpDLPgPY9uV+/1DjxoQ7DoB2N8eocyq8I9XuqJ7BiAMDMf9n1xZM9TgW0J8zrquIb/A7s3BJv7rjg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-d5NeaXZcHp8PzYy5VnXV3VSd2D328Zb+9dEq5HE6bw6+N86JVPExrA6O68OPwobntbNJ0pzCpUFZTo3w0GyetQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.18.20", "", { "os": "linux", "cpu": "ppc64" }, "sha512-WHPyeScRNcmANnLQkq6AfyXRFr5D6N2sKgkFo2FqguP44Nw2eyDlbTdZwd9GYk98DZG9QItIiTlFLHJHjxP3FA=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.18.20", "", { "os": "linux", "cpu": "none" }, "sha512-WSxo6h5ecI5XH34KC7w5veNnKkju3zBRLEQNY7mv5mtBmrP/MjNBCAlsM2u5hDBlS3NGcTQpoBvRzqBcRtpq1A=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.18.20", "", { "os": "linux", "cpu": "s390x" }, "sha512-+8231GMs3mAEth6Ja1iK0a1sQ3ohfcpzpRLH8uuc5/KVDFneH6jtAJLFGafpzpMRO6DzJ6AvXKze9LfFMrIHVQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/linux-x64": ["@esbuild/linux-x64@0.18.20", "", { "os": "linux", "cpu": "x64" }, "sha512-UYqiqemphJcNsFEskc73jQ7B9jgwjWrSayxawS6UVFZGWrAAtkzjxSqnoclCXxWtfwLdzU+vTpcNYhpn43uP1w=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.18.20", "", { "os": "none", "cpu": "x64" }, "sha512-iO1c++VP6xUBUmltHZoMtCUdPlnPGdBom6IrO4gyKPFFVBKioIImVooR5I83nTew5UOYrk3gIJhbZh8X44y06A=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.18.20", "", { "os": "openbsd", "cpu": "x64" }, "sha512-e5e4YSsuQfX4cxcygw/UCPIEP6wbIL+se3sxPdCiMbFLBWu0eiZOJ7WoD+ptCLrmjZBK1Wk7I6D/I3NglUGOxg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.18.20", "", { "os": "sunos", "cpu": "x64" }, "sha512-kDbFRFp0YpTQVVrqUd5FTYmWo45zGaXe0X8E1G/LKFC0v8x0vWrhOWSLITcCn63lmZIxfOMXtCfti/RxN/0wnQ=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.18.20", "", { "os": "win32", "cpu": "arm64" }, "sha512-ddYFR6ItYgoaq4v4JmQQaAI5s7npztfV4Ag6NrhiaW0RrnOXqBkgwZLofVTlq1daVTQNhtI5oieTvkRPfZrePg=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.18.20", "", { "os": "win32", "cpu": "ia32" }, "sha512-Wv7QBi3ID/rROT08SABTS7eV4hX26sVduqDOTe1MvGMjNd3EjOz4b7zeexIR62GTIEKrfJXKL9LFxTYgkyeu7g=="],
"@esbuild-kit/core-utils/esbuild/@esbuild/win32-x64": ["@esbuild/win32-x64@0.18.20", "", { "os": "win32", "cpu": "x64" }, "sha512-kTdfRcSiDfQca/y9QIkng02avJ+NCaQvrMejlsB3RRv5sE9rRoeBPISaZpKxHELzRxZyLvNts1P27W3wV+8geQ=="],
"@google/gemini-cli-core/@modelcontextprotocol/sdk/zod": ["zod@4.3.5", "", {}, "sha512-k7Nwx6vuWx1IJ9Bjuf4Zt1PEllcwe7cls3VNzm4CQ1/hgtFUK2bRNG3rvnpPUhFjmqJKAKtjV576KnUkHocg/g=="],
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.203.0", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-9B9RU0H7Ya1Dx/Rkyc4stuBZSGVQF27WigitInx2QQoj6KUpEFYPKoWjdFTunJYxmXmh17HeBvbMa1EhGyPmqQ=="],
@@ -5620,58 +5560,6 @@
"d3-sankey/d3-shape/d3-path": ["d3-path@1.0.9", "", {}, "sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg=="],
"drizzle-kit/esbuild/@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.25.12", "", { "os": "aix", "cpu": "ppc64" }, "sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA=="],
"drizzle-kit/esbuild/@esbuild/android-arm": ["@esbuild/android-arm@0.25.12", "", { "os": "android", "cpu": "arm" }, "sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg=="],
"drizzle-kit/esbuild/@esbuild/android-arm64": ["@esbuild/android-arm64@0.25.12", "", { "os": "android", "cpu": "arm64" }, "sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg=="],
"drizzle-kit/esbuild/@esbuild/android-x64": ["@esbuild/android-x64@0.25.12", "", { "os": "android", "cpu": "x64" }, "sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg=="],
"drizzle-kit/esbuild/@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.25.12", "", { "os": "darwin", "cpu": "arm64" }, "sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959RxxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg=="],
"drizzle-kit/esbuild/@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.25.12", "", { "os": "darwin", "cpu": "x64" }, "sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA=="],
"drizzle-kit/esbuild/@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.25.12", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg=="],
"drizzle-kit/esbuild/@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.25.12", "", { "os": "freebsd", "cpu": "x64" }, "sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ=="],
"drizzle-kit/esbuild/@esbuild/linux-arm": ["@esbuild/linux-arm@0.25.12", "", { "os": "linux", "cpu": "arm" }, "sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw=="],
"drizzle-kit/esbuild/@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.25.12", "", { "os": "linux", "cpu": "arm64" }, "sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ=="],
"drizzle-kit/esbuild/@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.25.12", "", { "os": "linux", "cpu": "ia32" }, "sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA=="],
"drizzle-kit/esbuild/@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng=="],
"drizzle-kit/esbuild/@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw=="],
"drizzle-kit/esbuild/@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.25.12", "", { "os": "linux", "cpu": "ppc64" }, "sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA=="],
"drizzle-kit/esbuild/@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w=="],
"drizzle-kit/esbuild/@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.25.12", "", { "os": "linux", "cpu": "s390x" }, "sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg=="],
"drizzle-kit/esbuild/@esbuild/linux-x64": ["@esbuild/linux-x64@0.25.12", "", { "os": "linux", "cpu": "x64" }, "sha512-uqZMTLr/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw=="],
"drizzle-kit/esbuild/@esbuild/netbsd-arm64": ["@esbuild/netbsd-arm64@0.25.12", "", { "os": "none", "cpu": "arm64" }, "sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg=="],
"drizzle-kit/esbuild/@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.25.12", "", { "os": "none", "cpu": "x64" }, "sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ=="],
"drizzle-kit/esbuild/@esbuild/openbsd-arm64": ["@esbuild/openbsd-arm64@0.25.12", "", { "os": "openbsd", "cpu": "arm64" }, "sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A=="],
"drizzle-kit/esbuild/@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.25.12", "", { "os": "openbsd", "cpu": "x64" }, "sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw=="],
"drizzle-kit/esbuild/@esbuild/openharmony-arm64": ["@esbuild/openharmony-arm64@0.25.12", "", { "os": "none", "cpu": "arm64" }, "sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg=="],
"drizzle-kit/esbuild/@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.25.12", "", { "os": "sunos", "cpu": "x64" }, "sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w=="],
"drizzle-kit/esbuild/@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.25.12", "", { "os": "win32", "cpu": "arm64" }, "sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg=="],
"drizzle-kit/esbuild/@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.25.12", "", { "os": "win32", "cpu": "ia32" }, "sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ=="],
"drizzle-kit/esbuild/@esbuild/win32-x64": ["@esbuild/win32-x64@0.25.12", "", { "os": "win32", "cpu": "x64" }, "sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpVjlPuMbREqnZCXaTnA=="],
"form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
"fx-runner/which/is-absolute": ["is-absolute@0.1.7", "", { "dependencies": { "is-relative": "^0.1.0" } }, "sha512-Xi9/ZSn4NFapG8RP98iNPMOeaV3mXPisxKxzKtHVqr3g56j/fBn+yZmnxSVAA8lmZbl2J9b/a4kJvfU3hqQYgA=="],
@@ -13,12 +13,8 @@
"dev:watch:new": "./tools/dev/run.sh watch --new",
"dev:manual": "./tools/dev/run.sh watch --manual",
"dev:setup": "./tools/dev/run.sh setup",
"dev:cleanup": "./tools/dev/run.sh cleanup --target dev",
"dev:reset": "./tools/dev/run.sh reset --target dev",
"dev:cleanup:dogfood": "./tools/dev/run.sh cleanup --target dogfood",
"dev:reset:dogfood": "./tools/dev/run.sh reset --target dogfood",
"dev:cleanup:prod": "./tools/dev/run.sh cleanup --target prod",
"dev:reset:prod": "./tools/dev/run.sh reset --target prod",
"dev:cleanup": "./tools/dev/run.sh cleanup",
"dev:reset": "./tools/dev/run.sh reset",
"install:browseros-dogfood": "make -C tools/dogfood install",
"test:env": "./tools/dev/run.sh test",
"test:cleanup": "./tools/dev/run.sh cleanup --quick --yes",

@@ -51,17 +51,6 @@
"destination": "resources/vm/browseros-vm.yaml",
"os": ["macos"],
"arch": ["arm64", "x64"]
},
{
"name": "Drizzle migrations",
"source": {
"type": "local",
"path": "apps/server/src/lib/db/migrations"
},
"destination": "resources/db/migrations",
"recursive": true,
"os": ["macos"],
"arch": ["arm64", "x64"]
}
]
}

@@ -20,11 +20,6 @@ function validateRule(rule: ResourceRule): void {
`Manifest rule ${rule.name} is missing source path or destination`,
)
}
if (rule.recursive && rule.source.type !== 'local') {
throw new Error(
`Manifest rule ${rule.name} uses recursive with non-local source`,
)
}
}

function parseSource(raw: unknown): ResourceRule['source'] {
@@ -59,7 +54,6 @@ function parseRule(raw: unknown): ResourceRule {
source: parseSource(item.source),
destination: String(item.destination ?? ''),
executable: item.executable === true,
recursive: item.recursive === true,
}
if (isStringArray(item.os)) {
rule.os = item.os as ResourceRule['os']

@@ -1,10 +1,8 @@
import { afterEach, describe, expect, it } from 'bun:test'
import { mkdir, mkdtemp, readFile, rm, writeFile } from 'node:fs/promises'
import { mkdtemp, rm, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { loadManifest } from './manifest'
import { stageCompiledArtifact } from './stage'
import type { BuildTarget, ResourceRule } from './types'

describe('server artifact staging', () => {
let tempDir: string | null = null
@@ -25,90 +23,4 @@ describe('server artifact staging', () => {
resources: [],
})
})

it('parses recursive local-resource rules from the manifest', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'browseros-stage-test-'))
const manifestPath = join(tempDir, 'manifest.json')
await writeFile(
manifestPath,
JSON.stringify({
resources: [
{
name: 'Drizzle migrations',
source: {
type: 'local',
path: 'apps/server/src/lib/db/migrations',
},
destination: 'resources/db/migrations',
recursive: true,
os: ['macos'],
arch: ['arm64', 'x64'],
},
],
}),
)

expect(loadManifest(manifestPath).resources[0]).toMatchObject({
name: 'Drizzle migrations',
recursive: true,
})
})

it('copies recursive local resource directories', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'browseros-stage-test-'))
const sourceRoot = join(tempDir, 'source')
const distRoot = join(tempDir, 'dist')
const binaryPath = join(tempDir, 'browseros-server')
const migrationsDir = join(sourceRoot, 'apps/server/src/lib/db/migrations')
await mkdir(join(migrationsDir, 'meta'), { recursive: true })
await writeFile(binaryPath, 'server')
await writeFile(join(migrationsDir, '0000_init.sql'), 'CREATE TABLE x;')
await writeFile(
join(migrationsDir, 'meta', '_journal.json'),
'{"entries":[]}',
)

const artifact = await stageCompiledArtifact(
distRoot,
binaryPath,
testTarget,
'0.0.0-test',
[migrationRule],
sourceRoot,
)

expect(
await readFile(
join(artifact.resourcesDir, 'db/migrations/0000_init.sql'),
'utf8',
),
).toBe('CREATE TABLE x;')
expect(
await readFile(
join(artifact.resourcesDir, 'db/migrations/meta/_journal.json'),
'utf8',
),
).toBe('{"entries":[]}')
})
})

const testTarget: BuildTarget = {
id: 'darwin-arm64',
name: 'macOS ARM64',
os: 'macos',
arch: 'arm64',
bunTarget: 'bun-darwin-arm64',
serverBinaryName: 'browseros-server',
}

const migrationRule: ResourceRule = {
name: 'Drizzle migrations',
source: {
type: 'local',
path: 'apps/server/src/lib/db/migrations',
},
destination: 'resources/db/migrations',
recursive: true,
os: ['macos'],
arch: ['arm64', 'x64'],
}

@@ -108,7 +108,7 @@ async function stageLocalRule(
const sourcePath = isAbsolute(rule.source.path)
? rule.source.path
: resolve(sourceRoot, rule.source.path)
await cp(sourcePath, destinationPath, { recursive: rule.recursive === true })
await cp(sourcePath, destinationPath)

if (rule.executable && target.os !== 'windows') {
await chmod(destinationPath, 0o755)

@@ -57,7 +57,6 @@ export interface ResourceRule {
source: ResourceSource
destination: string
executable?: boolean
recursive?: boolean
os?: TargetOs[]
arch?: TargetArch[]
}

@@ -14,20 +14,16 @@ import (

var cleanupCmd = &cobra.Command{
Use: "cleanup",
Short: "Kill target processes and remove orphaned temp directories",
Long: "Stops target BrowserOS processes, clears target ports, and removes target temp directories.",
Short: "Kill port processes and remove orphaned temp directories",
Long: "Stops old dev watch processes, clears dev/test ports, and removes orphaned browseros-* temp directories.",
RunE: runCleanup,
}

var (
cleanupOnlyPorts bool
cleanupOnlyTemps bool
cleanupQuick bool
cleanupYes bool
cleanupTarget string
cleanupBrowserOSDir string
cleanupPortsValue string
cleanupBrowserUserDataDir string
cleanupPorts bool
cleanupTemps bool
cleanupQuick bool
cleanupYes bool
)

type safeCleanupOptions struct {
@@ -36,12 +32,8 @@ type safeCleanupOptions struct {
}

func init() {
cleanupCmd.Flags().StringVar(&cleanupTarget, "target", targetDev, "Cleanup target: dev, dogfood, or prod")
cleanupCmd.Flags().StringVar(&cleanupBrowserOSDir, "browseros-dir", "", "Override target BrowserOS state directory")
cleanupCmd.Flags().StringVar(&cleanupPortsValue, "ports", "", "Override ports as cdp,server,extension")
cleanupCmd.Flags().StringVar(&cleanupBrowserUserDataDir, "browser-user-data-dir", "", "Override BrowserOS user-data dir to stop")
cleanupCmd.Flags().BoolVar(&cleanupOnlyPorts, "only-ports", false, "Only kill port processes")
cleanupCmd.Flags().BoolVar(&cleanupOnlyTemps, "only-temps", false, "Only remove temp directories")
cleanupCmd.Flags().BoolVar(&cleanupPorts, "ports", false, "Only kill port processes")
cleanupCmd.Flags().BoolVar(&cleanupTemps, "temps", false, "Only remove temp directories")
cleanupCmd.Flags().BoolVar(&cleanupQuick, "quick", false, "Run safe cleanup only")
cleanupCmd.Flags().BoolVar(&cleanupYes, "yes", false, "Answer yes to the safe cleanup prompt")
rootCmd.AddCommand(cleanupCmd)
@@ -50,24 +42,11 @@ func init() {
// runCleanup performs the non-destructive daily cleanup path for local dev.
func runCleanup(cmd *cobra.Command, args []string) error {
out := cmd.OutOrStdout()
root, err := proc.FindMonorepoRoot()
if err != nil {
return err
}
target, err := resolveResetTarget(root, resetTargetOptions{
Target: cleanupTarget,
BrowserOSDir: cleanupBrowserOSDir,
Ports: cleanupPortsValue,
BrowserUserDataDir: cleanupBrowserUserDataDir,
})
if err != nil {
return err
}
if !cleanupYes && !cleanupQuick {
ok, err := confirmYesNo(out, bufio.NewReader(os.Stdin), resetPrompt{
Title: "Run safe cleanup?",
Body: fmt.Sprintf("Stops %s processes, clears target ports, and removes target temp profiles. This does not touch saved BrowserOS data, Lima, containers, or images.", target.Name),
Action: "Run safe cleanup for " + target.Name,
Body: "Stops old dev watch processes, clears dev ports, and removes temporary /tmp browser profiles. This does not touch ~/.browseros-dev, Lima, containers, images, or saved dev data.",
Action: "Run safe cleanup",
})
if err != nil {
return err
@@ -77,51 +56,42 @@ func runCleanup(cmd *cobra.Command, args []string) error {
return nil
}
}
if err := ensureTargetStopped(out, target); err != nil {
return err
}
return runSafeCleanup(out, target, safeCleanupOptions{
ports: !cleanupOnlyTemps || cleanupOnlyPorts,
temps: !cleanupOnlyPorts || cleanupOnlyTemps,
return runSafeCleanup(out, safeCleanupOptions{
ports: !cleanupTemps || cleanupPorts,
temps: !cleanupPorts || cleanupTemps,
})
}

// runSafeCleanup is shared by cleanup and reset before any destructive repair steps.
func runSafeCleanup(out io.Writer, target resetTarget, opts safeCleanupOptions) error {
func runSafeCleanup(out io.Writer, opts safeCleanupOptions) error {
if opts.ports {
if target.WatchRunStateDir != "" {
stopped, err := proc.StopAllWatchProcessesInDir(target.WatchRunStateDir, 3*time.Second)
if err != nil {
return err
}
if stopped > 0 {
fmt.Fprintf(out, "%s stopped %d old %s watch process group(s)\n", successStyle.Sprint("Stopped:"), stopped, target.Name)
}
ports := proc.DefaultLocalPorts()
stopped, err := proc.StopAllWatchProcesses(3 * time.Second)
if err != nil {
return err
}
if len(target.BrowserUserDataDirs) > 0 {
killedBrowsers, err := proc.KillBrowserProcessesForUserDataDirs(target.BrowserUserDataDirs, 3*time.Second)
if err != nil {
return err
}
if killedBrowsers > 0 {
fmt.Fprintf(out, "%s stopped %d BrowserOS %s profile process(es)\n", successStyle.Sprint("Stopped:"), killedBrowsers, target.Name)
}
if stopped > 0 {
fmt.Fprintf(out, "%s stopped %d old dev watch process group(s)\n", successStyle.Sprint("Stopped:"), stopped)
}
if target.Ports != nil {
ports := *target.Ports
fmt.Fprintf(out, "%s ports %d, %d, %d\n", labelStyle.Sprint("Clearing:"), ports.CDP, ports.Server, ports.Extension)
if err := proc.KillPortsAndWait(ports, 3*time.Second); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("Ports cleared."))
killedBrowsers, err := proc.KillBrowserProcessesForDevProfiles(3 * time.Second)
if err != nil {
return err
}
if killedBrowsers > 0 {
fmt.Fprintf(out, "%s stopped %d BrowserOS dev/test profile process(es)\n", successStyle.Sprint("Stopped:"), killedBrowsers)
}
fmt.Fprintf(out, "%s ports %d, %d, %d\n", labelStyle.Sprint("Clearing:"), ports.CDP, ports.Server, ports.Extension)
if err := proc.KillPortsAndWait(ports, 3*time.Second); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("Ports cleared."))
}

if opts.temps {
|
||||
n := proc.CleanupTempDirs(target.TempPrefixes...)
|
||||
n := proc.CleanupTempDirs("browseros-test-", "browseros-dev-")
|
||||
if n > 0 {
|
||||
fmt.Fprintf(out, "%s removed %d temp directories\n", successStyle.Sprint("Removed:"), n)
|
||||
} else if len(target.TempPrefixes) > 0 {
|
||||
} else {
|
||||
fmt.Fprintln(out, dimStyle.Sprint("No orphaned temp directories found."))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -64,11 +64,7 @@ func TestConfirmTypedRequiresExactToken(t *testing.T) {

func TestResetOverviewTellsUserToUseSmallestReset(t *testing.T) {
	var out bytes.Buffer
	printResetOverview(&out, resetTarget{
		Title:           "BrowserOS dev reset",
		BrowserOSDir:    "/Users/me/.browseros-dev",
		DeleteRootLabel: "Delete dev profile:",
	})
	printResetOverview(&out, devPaths{Root: "/Users/me/.browseros-dev"})

	text := out.String()
	for _, want := range []string{

@@ -1,197 +0,0 @@
package cmd

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net"
	"os"
	"syscall"
	"time"
)

const dogfoodStopTimeout = 10 * time.Second

type dogfoodRunState struct {
	PID        int    `json:"pid"`
	Mode       string `json:"mode"`
	SocketPath string `json:"socket_path"`
	LogPath    string `json:"log_path"`
}

type dogfoodIPCRequest struct {
	Command string `json:"command"`
}

type dogfoodIPCResponse struct {
	OK    bool   `json:"ok"`
	Error string `json:"error,omitempty"`
}

func ensureTargetStopped(out io.Writer, target resetTarget) error {
	if target.Dogfood == nil {
		return nil
	}
	return stopDogfoodRun(out, *target.Dogfood, dogfoodStopTimeout)
}

func stopDogfoodRun(out io.Writer, target dogfoodRuntimeTarget, timeout time.Duration) error {
	active, err := dogfoodRunActive(target.LockPath)
	if err != nil {
		return err
	}
	if !active {
		cleanupDogfoodRunFilesWithWarning(out, target)
		return nil
	}

	fmt.Fprintln(out, labelStyle.Sprint("Stopping dogfood run first."))
	if err := stopDogfoodDaemon(target); err == nil {
		if stopped, err := waitForDogfoodStopped(out, target, timeout); err != nil {
			return err
		} else if stopped {
			fmt.Fprintln(out, successStyle.Sprint("Dogfood stopped."))
			return nil
		}
	}

	state, err := readDogfoodRunState(target.StatePath)
	if err != nil {
		return fmt.Errorf("dogfood is running but state is unreadable at %s: %w", target.StatePath, err)
	}
	if state.PID <= 0 {
		return fmt.Errorf("dogfood is running but state has no pid at %s", target.StatePath)
	}
	if err := signalDogfoodPID(state.PID, syscall.SIGTERM); err != nil {
		return err
	}
	if stopped, err := waitForDogfoodStopped(out, target, timeout); err != nil {
		return err
	} else if stopped {
		fmt.Fprintln(out, successStyle.Sprint("Dogfood stopped."))
		return nil
	}
	if err := signalDogfoodPID(state.PID, syscall.SIGKILL); err != nil {
		return err
	}
	if stopped, err := waitForDogfoodStopped(out, target, time.Second); err != nil {
		return err
	} else if stopped {
		fmt.Fprintln(out, successStyle.Sprint("Dogfood force-stopped."))
		return nil
	}
	return fmt.Errorf("dogfood is still running; stop it manually before cleanup/reset")
}

func stopDogfoodDaemon(target dogfoodRuntimeTarget) error {
	socketPath := target.SocketPath
	if state, err := readDogfoodRunState(target.StatePath); err == nil && state.SocketPath != "" {
		socketPath = state.SocketPath
	}
	conn, err := net.DialTimeout("unix", socketPath, 700*time.Millisecond)
	if err != nil {
		return err
	}
	defer conn.Close()

	data, err := json.Marshal(dogfoodIPCRequest{Command: "stop"})
	if err != nil {
		return err
	}
	data = append(data, '\n')
	if _, err := conn.Write(data); err != nil {
		return err
	}
	_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	scanner := bufio.NewScanner(conn)
	if !scanner.Scan() {
		if err := scanner.Err(); err != nil {
			return err
		}
		return errors.New("dogfood daemon closed connection without response")
	}
	var response dogfoodIPCResponse
	if err := json.Unmarshal(scanner.Bytes(), &response); err != nil {
		return err
	}
	if response.Error != "" {
		return errors.New(response.Error)
	}
	if !response.OK {
		return errors.New("dogfood daemon did not accept stop request")
	}
	return nil
}

func waitForDogfoodStopped(out io.Writer, target dogfoodRuntimeTarget, timeout time.Duration) (bool, error) {
	deadline := time.Now().Add(timeout)
	for {
		active, err := dogfoodRunActive(target.LockPath)
		if err != nil {
			return false, err
		}
		if !active {
			cleanupDogfoodRunFilesWithWarning(out, target)
			return true, nil
		}
		if time.Now().After(deadline) {
			return false, nil
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func dogfoodRunActive(lockPath string) (bool, error) {
	file, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return false, err
	}
	defer file.Close()
	if err := syscall.Flock(int(file.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		if errors.Is(err, syscall.EWOULDBLOCK) || errors.Is(err, syscall.EAGAIN) {
			return true, nil
		}
		return false, err
	}
	return false, syscall.Flock(int(file.Fd()), syscall.LOCK_UN)
}
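dogfoodRunActive probes liveness with a non-blocking flock: if the exclusive lock cannot be taken, another process holds it, so the run is treated as active. A Unix-only standalone sketch of the same probe (the /tmp lock path is just an example):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// lockHeld reports whether another process currently holds an exclusive
// flock on the file at path, without blocking. Taking the lock succeeding
// means nobody held it, so it is released again immediately.
func lockHeld(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return false, err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		if errors.Is(err, syscall.EWOULDBLOCK) || errors.Is(err, syscall.EAGAIN) {
			return true, nil // someone else holds the lock
		}
		return false, err
	}
	return false, syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}

func main() {
	held, err := lockHeld("/tmp/flock-probe-demo.lock")
	if err != nil {
		panic(err)
	}
	fmt.Println("lock held by another process:", held)
}
```

Because flock is advisory and released when the holding process exits, a crashed daemon never leaves a stale "running" state behind, which is why the code above trusts the lock over the state file.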

func readDogfoodRunState(path string) (dogfoodRunState, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return dogfoodRunState{}, err
	}
	var state dogfoodRunState
	if err := json.Unmarshal(data, &state); err != nil {
		return dogfoodRunState{}, err
	}
	return state, nil
}

func signalDogfoodPID(pid int, sig syscall.Signal) error {
	if pid <= 0 {
		return fmt.Errorf("invalid dogfood pid %d", pid)
	}
	if err := syscall.Kill(pid, sig); err != nil && err != syscall.ESRCH {
		return err
	}
	return nil
}

func cleanupDogfoodRunFilesWithWarning(out io.Writer, target dogfoodRuntimeTarget) {
	if err := cleanupDogfoodRunFiles(target); err != nil {
		fmt.Fprintf(out, "%s could not remove dogfood run files: %v\n", warnStyle.Sprint("Warning:"), err)
	}
}

func cleanupDogfoodRunFiles(target dogfoodRuntimeTarget) error {
	if err := os.Remove(target.SocketPath); err != nil && !os.IsNotExist(err) {
		return err
	}
	if err := os.Remove(target.StatePath); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}
@@ -1,37 +0,0 @@
package cmd

import (
	"bytes"
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"
)

func TestWaitForDogfoodStoppedWarnsWhenRunFileCleanupFails(t *testing.T) {
	root := t.TempDir()
	socketPath := filepath.Join(root, "dogfood.sock")
	if err := os.Mkdir(socketPath, 0o755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(socketPath, "child"), []byte("x"), 0o644); err != nil {
		t.Fatal(err)
	}

	var out bytes.Buffer
	stopped, err := waitForDogfoodStopped(&out, dogfoodRuntimeTarget{
		LockPath:   filepath.Join(root, "run.lock"),
		SocketPath: socketPath,
		StatePath:  filepath.Join(root, "state.json"),
	}, time.Millisecond)
	if err != nil {
		t.Fatal(err)
	}
	if !stopped {
		t.Fatal("expected inactive dogfood run to be treated as stopped")
	}
	if !strings.Contains(out.String(), "Warning:") {
		t.Fatalf("missing cleanup warning:\n%s", out.String())
	}
}
@@ -10,12 +10,11 @@ import (
	"path/filepath"
	"strings"

	"browseros-dev/proc"

	"github.com/spf13/cobra"
)

const (
	devDirName            = ".browseros-dev"
	limaVMName            = "browseros-vm"
	openClawImage         = "ghcr.io/openclaw/openclaw:2026.4.12"
	openClawContainerName = "browseros-openclaw-openclaw-gateway-1"
@@ -24,11 +23,16 @@ const (

var resetCmd = &cobra.Command{
	Use:   "reset",
	Short: "Guide destructive BrowserOS profile and VM resets",
	Long:  "Walks through safe cleanup, VM shutdown/deletion, OpenClaw container/image removal, and target BrowserOS state reset.",
	Short: "Guide destructive BrowserOS dev profile and VM resets",
	Long:  "Walks through safe cleanup, VM shutdown/deletion, OpenClaw container/image removal, and full ~/.browseros-dev reset.",
	RunE:  runReset,
}

type devPaths struct {
	Root     string
	LimaHome string
}

type resetPrompt struct {
	Title string
	Body  string
@@ -45,18 +49,7 @@ type podmanMachineEntry struct {
	Running bool `json:"Running"`
}

var (
	resetTargetName         string
	resetBrowserOSDir       string
	resetPortsValue         string
	resetBrowserUserDataDir string
)

func init() {
	resetCmd.Flags().StringVar(&resetTargetName, "target", targetDev, "Reset target: dev, dogfood, or prod")
	resetCmd.Flags().StringVar(&resetBrowserOSDir, "browseros-dir", "", "Override target BrowserOS state directory")
	resetCmd.Flags().StringVar(&resetPortsValue, "ports", "", "Override ports as cdp,server,extension")
	resetCmd.Flags().StringVar(&resetBrowserUserDataDir, "browser-user-data-dir", "", "Override BrowserOS user-data dir to stop")
	rootCmd.AddCommand(resetCmd)
}

@@ -64,34 +57,21 @@ func init() {
func runReset(cmd *cobra.Command, args []string) error {
	out := cmd.OutOrStdout()
	reader := bufio.NewReader(os.Stdin)
	root, err := proc.FindMonorepoRoot()
	if err != nil {
		return err
	}
	target, err := resolveResetTarget(root, resetTargetOptions{
		Target:             resetTargetName,
		BrowserOSDir:       resetBrowserOSDir,
		Ports:              resetPortsValue,
		BrowserUserDataDir: resetBrowserUserDataDir,
	})
	paths, err := resolveDevPaths()
	if err != nil {
		return err
	}

	printResetOverview(out, target)

	if err := ensureTargetStopped(out, target); err != nil {
		return err
	}
	printResetOverview(out, paths)

	if ok, err := confirmYesNo(out, reader, resetPrompt{
		Title:  "Run safe cleanup first?",
		Body:   fmt.Sprintf("This stops %s processes, clears target ports, and removes target temp profiles. It does not touch saved BrowserOS data.", target.Name),
		Action: "Run safe cleanup for " + target.Name,
		Body:   "This stops old dev watch processes, clears dev ports, and removes temporary /tmp browser profiles. It does not touch saved dev data.",
		Action: "Run safe cleanup",
	}); err != nil {
		return err
	} else if ok {
		if err := runSafeCleanup(out, target, safeCleanupOptions{ports: true, temps: true}); err != nil {
		if err := runSafeCleanup(out, safeCleanupOptions{ports: true, temps: true}); err != nil {
			return err
		}
	}
@@ -102,28 +82,28 @@ func runReset(cmd *cobra.Command, args []string) error {
		if err := maybeResetLegacyPodman(out, reader); err != nil {
			return err
		}
		return maybeDeleteTargetRoot(out, reader, target)
		return maybeDeleteDevProfile(out, reader, paths)
	}

	vm, err := findVM(limactlPath, target.LimaHome)
	vm, err := findVM(limactlPath, paths.LimaHome)
	if err != nil {
		fmt.Fprintf(out, "%s could not inspect Lima VMs: %v\n", warnStyle.Sprint("Warning:"), err)
		if err := maybeResetLegacyPodman(out, reader); err != nil {
			return err
		}
		return maybeDeleteTargetRoot(out, reader, target)
		return maybeDeleteDevProfile(out, reader, paths)
	}
	if vm == nil {
		fmt.Fprintf(out, "%s %s was not found in %s.\n", dimStyle.Sprint("Not found:"), limaVMName, pathStyle.Sprint(target.LimaHome))
		fmt.Fprintf(out, "%s %s was not found in %s.\n", dimStyle.Sprint("Not found:"), limaVMName, pathStyle.Sprint(paths.LimaHome))
		if err := maybeResetLegacyPodman(out, reader); err != nil {
			return err
		}
		return maybeDeleteTargetRoot(out, reader, target)
		return maybeDeleteDevProfile(out, reader, paths)
	}

	fmt.Fprintf(out, "%s %s %s\n", labelStyle.Sprint("Found VM:"), commandStyle.Sprint(vm.Name), dimStyle.Sprintf("(%s)", vm.Status))
	if strings.EqualFold(vm.Status, "Running") {
		if err := maybeResetOpenClaw(out, reader, limactlPath, target.LimaHome); err != nil {
		if err := maybeResetOpenClaw(out, reader, limactlPath, paths.LimaHome); err != nil {
			return err
		}
		if ok, err := confirmYesNo(out, reader, resetPrompt{
@@ -133,7 +113,7 @@ func runReset(cmd *cobra.Command, args []string) error {
		}); err != nil {
			return err
		} else if ok {
			if err := runLimactl(out, limactlPath, target.LimaHome, "stop", limaVMName); err != nil {
			if err := runLimactl(out, limactlPath, paths.LimaHome, "stop", limaVMName); err != nil {
				return err
			}
			fmt.Fprintln(out, successStyle.Sprint("VM stopped."))
@@ -145,12 +125,12 @@ func runReset(cmd *cobra.Command, args []string) error {

	if ok, err := confirmYesNo(out, reader, resetPrompt{
		Title:  "Delete VM?",
		Body:   fmt.Sprintf("This deletes the Lima VM and its container store. %s remains. OpenClaw will be pulled again next time.", target.BrowserOSDir),
		Body:   "This deletes the Lima VM and its container store. ~/.browseros-dev remains. OpenClaw will be pulled again next time.",
		Action: "Delete browseros-vm",
	}); err != nil {
		return err
	} else if ok {
		if err := runLimactl(out, limactlPath, target.LimaHome, "delete", "--force", limaVMName); err != nil {
		if err := runLimactl(out, limactlPath, paths.LimaHome, "delete", "--force", limaVMName); err != nil {
			return err
		}
		fmt.Fprintln(out, successStyle.Sprint("VM deleted."))
@@ -160,19 +140,35 @@ func runReset(cmd *cobra.Command, args []string) error {
		return err
	}

	return maybeDeleteTargetRoot(out, reader, target)
	return maybeDeleteDevProfile(out, reader, paths)
}

func printResetOverview(out io.Writer, target resetTarget) {
	fmt.Fprintln(out, headerStyle.Sprint(target.Title))
func resolveDevPaths() (devPaths, error) {
	if override := strings.TrimSpace(os.Getenv("BROWSEROS_DIR")); override != "" {
		root, err := filepath.Abs(override)
		if err != nil {
			return devPaths{}, err
		}
		return devPaths{Root: root, LimaHome: filepath.Join(root, "lima")}, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return devPaths{}, err
	}
	root := filepath.Join(home, devDirName)
	return devPaths{Root: root, LimaHome: filepath.Join(root, "lima")}, nil
}

func printResetOverview(out io.Writer, paths devPaths) {
	fmt.Fprintln(out, headerStyle.Sprint("BrowserOS dev reset"))
	fmt.Fprintln(out)
	fmt.Fprintf(out, "This can reset parts of %s. Pick the smallest reset that matches the problem.\n", pathStyle.Sprint(target.BrowserOSDir))
	fmt.Fprintf(out, "This can reset parts of %s. Pick the smallest reset that matches the problem.\n", pathStyle.Sprint(paths.Root))
	fmt.Fprintln(out)
	fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Stop VM:"), dimStyle.Sprint("Shuts down browseros-vm. Keeps data."))
	fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Delete VM:"), dimStyle.Sprint("Removes Lima/container state. Keeps the target state root."))
	fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Delete VM:"), dimStyle.Sprint("Removes Lima/container state. Keeps the dev profile."))
	fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Remove OpenClaw container:"), dimStyle.Sprint("Keeps the downloaded OpenClaw image."))
	fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Remove OpenClaw image:"), dimStyle.Sprint("Next startup pulls it again."))
	fmt.Fprintf(out, " %s %s\n", warnStyle.Sprint(target.DeleteRootLabel), dimStyle.Sprint("Deletes the target BrowserOS state root."))
	fmt.Fprintf(out, " %s %s\n", warnStyle.Sprint("Delete dev profile:"), dimStyle.Sprint("Deletes the dev profile root and dev-local BrowserOS data."))
	fmt.Fprintln(out)
}

@@ -248,24 +244,24 @@ func maybeResetOpenClaw(out io.Writer, reader *bufio.Reader, limactlPath string,
	return nil
}

func maybeDeleteTargetRoot(out io.Writer, reader *bufio.Reader, target resetTarget) error {
func maybeDeleteDevProfile(out io.Writer, reader *bufio.Reader, paths devPaths) error {
	ok, err := confirmTyped(
		out,
		reader,
		target.DeleteRootLabel,
		fmt.Sprintf("This deletes %s. %s", pathStyle.Sprint(target.BrowserOSDir), target.DeleteRootBody),
		"Delete dev profile?",
		fmt.Sprintf("This deletes %s. It removes BrowserOS dev data plus VM/OpenClaw state.", pathStyle.Sprint(paths.Root)),
		"DELETE",
	)
	if err != nil || !ok {
		return err
	}
	if err := validateDevProfileRootForDeletion(target.BrowserOSDir); err != nil {
	if err := validateDevProfileRootForDeletion(paths.Root); err != nil {
		return err
	}
	if err := os.RemoveAll(target.BrowserOSDir); err != nil {
	if err := os.RemoveAll(paths.Root); err != nil {
		return err
	}
	fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), pathStyle.Sprint(target.BrowserOSDir))
	fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), pathStyle.Sprint(paths.Root))
	return nil
}

@@ -1,365 +0,0 @@
package cmd

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"

	"browseros-dev/proc"

	"gopkg.in/yaml.v3"
)

const (
	targetDev     = "dev"
	targetDogfood = "dogfood"
	targetProd    = "prod"

	devDirName  = ".browseros-dev"
	prodDirName = ".browseros"
)

type resetTargetOptions struct {
	Target             string
	BrowserOSDir       string
	Ports              string
	BrowserUserDataDir string
}

type resetTarget struct {
	Name                string
	Title               string
	BrowserOSDir        string
	LimaHome            string
	Ports               *proc.Ports
	BrowserUserDataDirs []string
	TempPrefixes        []string
	WatchRunStateDir    string
	DeleteRootLabel     string
	DeleteRootBody      string
	Dogfood             *dogfoodRuntimeTarget
}

type dogfoodRuntimeTarget struct {
	ConfigDir  string
	LockPath   string
	StatePath  string
	SocketPath string
}

type dogfoodConfigFile struct {
	BrowserOSDir   string `yaml:"browseros_dir"`
	DevUserDataDir string `yaml:"dev_user_data_dir"`
	Ports          struct {
		CDP       int `yaml:"cdp"`
		Server    int `yaml:"server"`
		Extension int `yaml:"extension"`
	} `yaml:"ports"`
}

func resolveResetTarget(root string, opts resetTargetOptions) (resetTarget, error) {
	target := strings.TrimSpace(opts.Target)
	if target == "" {
		target = targetDev
	}
	switch target {
	case targetDev:
		return resolveDevTarget(root, opts)
	case targetDogfood:
		return resolveDogfoodTarget(opts)
	case targetProd:
		return resolveProdTarget(opts)
	default:
		return resetTarget{}, fmt.Errorf("unsupported reset target %q", target)
	}
}

func resolveDevTarget(root string, opts resetTargetOptions) (resetTarget, error) {
	browserosDir, err := resolveBrowserOSDir(opts.BrowserOSDir, devDirName)
	if err != nil {
		return resetTarget{}, err
	}
	ports, err := resolveTargetPorts(root, opts.Ports)
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:                targetDev,
		Title:               "BrowserOS dev reset",
		BrowserOSDir:        browserosDir,
		LimaHome:            filepath.Join(browserosDir, "lima"),
		Ports:               &ports,
		BrowserUserDataDirs: []string{"/tmp/browseros-dev"},
		TempPrefixes:        []string{"browseros-test-", "browseros-dev-"},
		WatchRunStateDir:    filepath.Join(browserosDir, "runs"),
		DeleteRootLabel:     "Delete dev profile?",
		DeleteRootBody:      "It removes BrowserOS dev data plus VM/OpenClaw state.",
	}, nil
}

func resolveDogfoodTarget(opts resetTargetOptions) (resetTarget, error) {
	cfgDir, err := dogfoodConfigDir()
	if err != nil {
		return resetTarget{}, err
	}
	cfg, err := loadDogfoodConfig(filepath.Join(cfgDir, "config.yaml"))
	if err != nil {
		return resetTarget{}, err
	}
	applyDogfoodDefaults(&cfg, cfgDir)
	browserosDir := firstNonEmpty(opts.BrowserOSDir, cfg.BrowserOSDir)
	if browserosDir == "" {
		return resetTarget{}, fmt.Errorf("dogfood browseros_dir is empty")
	}
	browserosDir, err = filepath.Abs(expandTilde(browserosDir))
	if err != nil {
		return resetTarget{}, err
	}
	ports, err := parsePorts(firstNonEmpty(opts.Ports, formatPorts(proc.Ports{
		CDP:       cfg.Ports.CDP,
		Server:    cfg.Ports.Server,
		Extension: cfg.Ports.Extension,
	})))
	if err != nil {
		return resetTarget{}, err
	}
	browserUserDataDir := firstNonEmpty(opts.BrowserUserDataDir, cfg.DevUserDataDir)
	if browserUserDataDir == "" {
		return resetTarget{}, fmt.Errorf("dogfood dev_user_data_dir is empty")
	}
	browserUserDataDir, err = filepath.Abs(expandTilde(browserUserDataDir))
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:                targetDogfood,
		Title:               "BrowserOS dogfood reset",
		BrowserOSDir:        browserosDir,
		LimaHome:            filepath.Join(browserosDir, "lima"),
		Ports:               &ports,
		BrowserUserDataDirs: []string{browserUserDataDir},
		DeleteRootLabel:     "Delete dogfood BrowserOS state?",
		DeleteRootBody:      "It removes dogfood-local BrowserOS server data plus VM/OpenClaw state. It does not touch your source BrowserOS browser profile.",
		Dogfood: &dogfoodRuntimeTarget{
			ConfigDir:  cfgDir,
			LockPath:   filepath.Join(cfgDir, "run.lock"),
			StatePath:  filepath.Join(cfgDir, "state.json"),
			SocketPath: filepath.Join(cfgDir, "daemon.sock"),
		},
	}, nil
}

func applyDogfoodDefaults(cfg *dogfoodConfigFile, cfgDir string) {
	if cfg.BrowserOSDir == "" {
		if home, err := os.UserHomeDir(); err == nil {
			cfg.BrowserOSDir = filepath.Join(home, ".browseros-dogfood")
		}
	}
	if cfg.DevUserDataDir == "" {
		cfg.DevUserDataDir = filepath.Join(cfgDir, "profile")
	}
	if cfg.Ports.CDP == 0 {
		cfg.Ports.CDP = 9015
	}
	if cfg.Ports.Server == 0 {
		cfg.Ports.Server = 9115
	}
	if cfg.Ports.Extension == 0 {
		cfg.Ports.Extension = 9315
	}
}

func resolveProdTarget(opts resetTargetOptions) (resetTarget, error) {
	browserosDir, err := resolveBrowserOSDir(opts.BrowserOSDir, prodDirName)
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:            targetProd,
		Title:           "BrowserOS prod reset",
		BrowserOSDir:    browserosDir,
		LimaHome:        filepath.Join(browserosDir, "lima"),
		DeleteRootLabel: "Delete prod BrowserOS state?",
		DeleteRootBody:  "It removes ~/.browseros server data plus VM/OpenClaw state. It does not delete your BrowserOS browser profile.",
	}, nil
}

func resolveBrowserOSDir(override string, dirName string) (string, error) {
	if strings.TrimSpace(override) != "" {
		return filepath.Abs(expandTilde(strings.TrimSpace(override)))
	}
	if dirName == devDirName {
		if env := strings.TrimSpace(os.Getenv("BROWSEROS_DIR")); env != "" {
			return filepath.Abs(expandTilde(env))
		}
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, dirName), nil
}

func resolveTargetPorts(root string, explicit string) (proc.Ports, error) {
	if strings.TrimSpace(explicit) != "" {
		return parsePorts(explicit)
	}
	for _, path := range []string{
		filepath.Join(root, "apps/server/.env.development"),
		filepath.Join(root, "apps/server/.env.example"),
	} {
		ports, ok, err := readPortsFromEnvFile(path)
		if err != nil {
			return proc.Ports{}, err
		}
		if ok {
			return ports, nil
		}
	}
	return proc.DefaultLocalPorts(), nil
}

func readPortsFromEnvFile(path string) (proc.Ports, bool, error) {
	file, err := os.Open(path)
	if os.IsNotExist(err) {
		return proc.Ports{}, false, nil
	}
	if err != nil {
		return proc.Ports{}, false, err
	}
	defer file.Close()

	values := map[string]int{}
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		key, value, ok := parseEnvLine(scanner.Text())
		if !ok {
			continue
		}
		switch key {
		case "BROWSEROS_CDP_PORT", "BROWSEROS_SERVER_PORT", "BROWSEROS_EXTENSION_PORT":
			port, err := strconv.Atoi(value)
			if err != nil {
				return proc.Ports{}, false, fmt.Errorf("parse %s in %s: %w", key, path, err)
			}
			values[key] = port
		}
	}
	if err := scanner.Err(); err != nil {
		return proc.Ports{}, false, err
	}
	if len(values) != 3 {
		return proc.Ports{}, false, nil
	}
	return proc.Ports{
		CDP:       values["BROWSEROS_CDP_PORT"],
		Server:    values["BROWSEROS_SERVER_PORT"],
		Extension: values["BROWSEROS_EXTENSION_PORT"],
	}, true, nil
}

func parseEnvLine(line string) (string, string, bool) {
	line = strings.TrimSpace(line)
	if line == "" || strings.HasPrefix(line, "#") {
		return "", "", false
	}
	key, value, ok := strings.Cut(line, "=")
	if !ok {
		return "", "", false
	}
	key = strings.TrimSpace(key)
	value = strings.TrimSpace(stripInlineComment(value))
	value = strings.Trim(value, `"'`)
	return key, value, key != "" && value != ""
}

func stripInlineComment(value string) string {
	quote := byte(0)
	for index := 0; index < len(value); index++ {
		switch value[index] {
		case '\'', '"':
			if quote == 0 {
				quote = value[index]
			} else if quote == value[index] {
				quote = 0
			}
		case '#':
			if quote == 0 {
				return value[:index]
			}
		}
	}
	return value
}
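The env-file line parser above can be exercised on its own: blanks and full-line comments are skipped, the line splits on the first `=`, an unquoted inline `#` starts a comment, and surrounding quotes are trimmed. Both helpers are reproduced verbatim below so the sketch compiles standalone:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvLine parses one KEY=VALUE line from a dotenv-style file.
func parseEnvLine(line string) (string, string, bool) {
	line = strings.TrimSpace(line)
	if line == "" || strings.HasPrefix(line, "#") {
		return "", "", false
	}
	key, value, ok := strings.Cut(line, "=")
	if !ok {
		return "", "", false
	}
	key = strings.TrimSpace(key)
	value = strings.TrimSpace(stripInlineComment(value))
	value = strings.Trim(value, `"'`)
	return key, value, key != "" && value != ""
}

// stripInlineComment drops everything after an unquoted '#'.
func stripInlineComment(value string) string {
	quote := byte(0)
	for index := 0; index < len(value); index++ {
		switch value[index] {
		case '\'', '"':
			if quote == 0 {
				quote = value[index]
			} else if quote == value[index] {
				quote = 0
			}
		case '#':
			if quote == 0 {
				return value[:index]
			}
		}
	}
	return value
}

func main() {
	// A '#' inside quotes is kept; an unquoted '#' starts a comment.
	k, v, ok := parseEnvLine(`BROWSEROS_CDP_PORT="9015" # default CDP port`)
	fmt.Println(k, v, ok) // BROWSEROS_CDP_PORT 9015 true
}
```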

func parsePorts(value string) (proc.Ports, error) {
	parts := strings.Split(value, ",")
	if len(parts) != 3 {
		return proc.Ports{}, fmt.Errorf("ports must be cdp,server,extension")
	}
	parsed := [3]int{}
	for i, part := range parts {
		port, err := strconv.Atoi(strings.TrimSpace(part))
		if err != nil {
			return proc.Ports{}, fmt.Errorf("parse port %q: %w", part, err)
		}
		if port <= 0 || port > 65535 {
			return proc.Ports{}, fmt.Errorf("port %d out of range", port)
		}
		parsed[i] = port
	}
	return proc.Ports{CDP: parsed[0], Server: parsed[1], Extension: parsed[2]}, nil
}

func formatPorts(ports proc.Ports) string {
	return fmt.Sprintf("%d,%d,%d", ports.CDP, ports.Server, ports.Extension)
}
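parsePorts accepts exactly three comma-separated ports (cdp,server,extension), trims whitespace, and range-checks each one; formatPorts is its inverse. A standalone sketch of the same round trip (a local ports struct stands in for proc.Ports, which lives in the repo's proc package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ports mirrors the shape of the repo's proc.Ports for this sketch.
type ports struct{ CDP, Server, Extension int }

// parsePorts parses "cdp,server,extension", rejecting anything that is
// not exactly three in-range port numbers.
func parsePorts(value string) (ports, error) {
	parts := strings.Split(value, ",")
	if len(parts) != 3 {
		return ports{}, fmt.Errorf("ports must be cdp,server,extension")
	}
	parsed := [3]int{}
	for i, part := range parts {
		port, err := strconv.Atoi(strings.TrimSpace(part))
		if err != nil {
			return ports{}, fmt.Errorf("parse port %q: %w", part, err)
		}
		if port <= 0 || port > 65535 {
			return ports{}, fmt.Errorf("port %d out of range", port)
		}
		parsed[i] = port
	}
	return ports{CDP: parsed[0], Server: parsed[1], Extension: parsed[2]}, nil
}

func main() {
	p, err := parsePorts(" 9015, 9115 ,9315 ") // stray spaces are tolerated
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d,%d,%d\n", p.CDP, p.Server, p.Extension) // 9015,9115,9315
}
```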

func dogfoodConfigDir() (string, error) {
	if xdg := strings.TrimSpace(os.Getenv("XDG_CONFIG_HOME")); xdg != "" {
		return filepath.Join(expandTilde(xdg), "browseros-dogfood"), nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".config", "browseros-dogfood"), nil
}

func loadDogfoodConfig(path string) (dogfoodConfigFile, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return dogfoodConfigFile{}, fmt.Errorf("read dogfood config at %s: %w", path, err)
	}
	var cfg dogfoodConfigFile
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return dogfoodConfigFile{}, fmt.Errorf("parse dogfood config: %w", err)
	}
	return cfg, nil
}

func expandTilde(path string) string {
	if path == "~" {
		if home, err := os.UserHomeDir(); err == nil {
			return home
		}
	}
	if strings.HasPrefix(path, "~/") {
		if home, err := os.UserHomeDir(); err == nil {
			return filepath.Join(home, path[2:])
		}
	}
	return path
}
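expandTilde only rewrites a bare `~` or a `~/` prefix; `~user` forms and everything else pass through unchanged, and a failed home-dir lookup leaves the path as-is. A standalone copy for quick experimentation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// expandTilde replaces "~" or a leading "~/" with the current user's
// home directory; any other path is returned unchanged.
func expandTilde(path string) string {
	if path == "~" {
		if home, err := os.UserHomeDir(); err == nil {
			return home
		}
	}
	if strings.HasPrefix(path, "~/") {
		if home, err := os.UserHomeDir(); err == nil {
			return filepath.Join(home, path[2:])
		}
	}
	return path
}

func main() {
	fmt.Println(expandTilde("~/x") != "~/x") // true whenever a home dir is resolvable
	fmt.Println(expandTilde("~user/x"))      // ~user/x (left untouched)
	fmt.Println(expandTilde("/abs/path"))    // /abs/path (left untouched)
}
```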

func firstNonEmpty(values ...string) string {
	for _, value := range values {
		if strings.TrimSpace(value) != "" {
			return strings.TrimSpace(value)
		}
	}
	return ""
}
Some files were not shown because too many files have changed in this diff.