mirror of https://github.com/browseros-ai/BrowserOS.git
synced 2026-05-14 08:03:58 +00:00

Compare commits: fix/debug-...fix/github (10 commits)
Commits compared:

- c71bc2b8a8
- dfbce48994
- 7c942e91ce
- 1ff92c44b3
- c81906ecbf
- ffc0f09c86
- 7fb53c9921
- d38b01a8c7
- ff36c8412b
- fd5aba249b
152
.claude/skills/ask-internal/SKILL.md
Normal file
@@ -0,0 +1,152 @@
---
name: ask-internal
description: Answer questions about BrowserOS internals (setup, features, architecture, design decisions) by reading the private internal-docs submodule and the codebase. Use for "how do I X", "where is Y", "what is the deal with Z", or any question that mixes ops/setup knowledge with code knowledge. Can execute steps with per-command confirmation.
allowed-tools: Bash, Read, Grep, Glob, Edit, Write
---

# Ask Internal

Answer team-internal questions by reading `.internal-docs/` and the codebase, synthesizing a direct answer with file:line citations, and optionally running surfaced commands with confirmation.

**Announce at start:** "I'm using the ask-internal skill to answer this from internal-docs and the codebase."

## When to use

- "How do I reset my dogfood profile?"
- "What's the deal with the OpenClaw VM startup?"
- "Where do we configure release signing?"
- Any question whose answer lives in setup runbooks, feature notes, architecture docs, or the code that produced them.

## Hard rules — never do these

- NEVER execute a state-mutating command without per-command `y` confirmation from the user.
- NEVER edit BrowserOS code in response to an ask-internal question. The skill answers; it does not modify code. Use `/document-internal` for writes.
- NEVER guess. If grep finds nothing useful in docs or code, say so plainly.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER cite a file or line number you have not actually read.

## Voice rules

Apply the same voice rules as `document-internal` to the synthesized answer:

- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences. Active voice. No em dashes.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- No filler intros.

## Workflow

### Step 0: Pre-flight

```bash
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] && [ -n "$(ls -A .internal-docs 2>/dev/null)" ] || {
  echo ".internal-docs/ missing or empty. Submodule not configured?"
  exit 0
}
```

### Step 1: Parse the question

Pull the keywords from the user's question. Drop stop words. Identify intent:

- **Setup question** ("how do I", "how to", "where do I configure"): bias the search toward `setup/`.
- **Feature question** ("what is X", "why does X work this way"): bias toward `features/` and `architecture/`.
- **Free-form** ("anything about Y"): search all categories.
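The keyword extraction in this step can be sketched in shell. This is an illustration, not code the skill prescribes; the stop-word list is a minimal made-up example:

```shell
# Extract search keywords from a question: lowercase, split on
# non-alphanumerics, drop stop words, dedupe. The stop-word list is
# illustrative only.
extract_keywords() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '\n' \
    | grep -vwE 'how|do|i|the|a|an|is|it|what|where|why|to|of|with|my|we|does' \
    | sort -u
}

extract_keywords "How do I reset my dogfood profile?" # prints: dogfood, profile, reset (one per line)
```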
### Step 2: Multi-source search

Run grep in parallel across two sources.

**Internal docs:**

```bash
grep -rni --include='*.md' '<keyword>' .internal-docs/
```

Search each keyword separately. Collect top hits by relevance (more keyword matches = higher).

**Codebase (skip vendored Chromium and `node_modules`):**

```bash
grep -rni --include='*.ts' --include='*.tsx' --include='*.js' --include='*.json' --include='*.sh' \
  --exclude-dir=node_modules --exclude-dir=chromium --exclude-dir=.grove \
  '<keyword>' packages/ scripts/ .config/ .github/
```

Read the top 3-5 doc hits and top 3-5 code hits. Do not skim — read the relevant section fully so citations are accurate.
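The relevance rule ("more keyword matches = higher") can be sketched as a per-file match count. An illustration only, not mandated by the skill; it checks top-level `*.md` files where the real search is recursive:

```shell
# Count how many of the given keywords each markdown file in a directory
# matches (case-insensitive), then print best-first as "<count> <file>".
rank_hits() {
  dir="$1"; shift
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue
    n=0
    for kw in "$@"; do
      grep -qi "$kw" "$f" && n=$((n + 1))
    done
    [ "$n" -gt 0 ] && printf '%d %s\n' "$n" "$f"
  done | sort -rn
}
```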
### Step 3: Synthesize answer

Structure the response:

1. **Direct answer.** First sentence answers the question. No preamble.
2. **Steps if applicable.** Numbered list with exact commands.
3. **Citations.** Every factual claim references `path/to/file.md:42` or `path/to/code.ts:117`. Run the voice self-check before printing.

If multiple docs cover the topic at different layers (e.g., a setup runbook and a feature note both mention dogfood profiles), reconcile them in the answer rather than dumping both.

### Step 4: Offer execution (only if commands surfaced)

If Step 3 produced executable commands the user could run, ask:

> Run these for you? (y / n / dry-run)

- **y:** Execute one at a time. For any command that mutates state (writes a file, modifies config, kills a process, deletes anything), ask "run this? <command>" before each. Read-only commands (`ls`, `cat`, `git status`) run without per-command confirmation but still print before running.
- **n:** Skip. Done.
- **dry-run:** Print the full sequence as a `bash` block. Do not execute.

### Step 5: Doc-not-found path

If Step 2 returned nothing useful (no doc hits AND no clear code answer):

1. Tell the user: "No doc covers this. Tangentially relevant files: <list>."
2. Ask: "Draft a new doc and open a PR to internal-docs?"
3. On yes: invoke the full `/document-internal` flow (four sharp questions, draft, voice check, PR), forced to the `setup/` doc type, with the code-grep findings handed in as initial context.

### Step 6: Completion status

Report one of:

- **DONE** — answer delivered, citations verified.
- **DONE_WITH_CONCERNS** — answered, but flag uncertainty (e.g., docs and code disagreed; user should reconcile).
- **BLOCKED** — submodule missing or other pre-flight failure.
- **NEEDS_CONTEXT** — question too vague to search effectively. Ask one clarifying question.

## Citation discipline

Every "X is at Y" claim in the answer must point to a file:line that the skill actually read. Do not approximate. If you didn't read it, don't cite it.

If a doc says one thing and the code says another, surface the conflict explicitly:

> The setup runbook (`setup/dogfood-profile.md:23`) says to delete `~/.cache/browseros/dogfood`, but the actual code path in `packages/cli/src/cleanup.ts:47` removes `~/.local/share/browseros/dogfood`. The doc looks stale. Recommend updating it.

## Common Mistakes

**Skimming and then citing**
- **Problem:** Citation points to a line that doesn't actually contain the claim.
- **Fix:** Read the section fully before citing. If you didn't read line 117, don't cite line 117.

**Executing without per-command confirmation for mutations**
- **Problem:** User says "y" to "run all", skill blasts through `rm -rf`-style commands.
- **Fix:** "y" means "run this sequence with per-mutation confirmations". Per-command y is required for writes.

**Searching only docs, not code**
- **Problem:** Doc says X but code does Y; answer is wrong.
- **Fix:** Always grep both sources in Step 2.

## Red Flags

**Never:**
- Cite a file:line you haven't read.
- Run mutations without per-command confirmation.
- Modify BrowserOS code from this skill (use `/document-internal` for writes).

**Always:**
- Pre-flight check before any search.
- Reconcile doc vs code conflicts in the answer, don't hide them.
- Plain "no doc covers this" when grep is empty — never invent.
208
.claude/skills/document-internal/SKILL.md
Normal file
@@ -0,0 +1,208 @@
---
name: document-internal
description: Draft a 1-page internal doc (feature, architecture, or design) for the private browseros-ai/internal-docs repo. Use when wrapping up a feature on a branch, after the PR is open or about to be opened. Skill drafts from the diff, asks four sharp questions, enforces voice rules, and opens a PR to internal-docs.
allowed-tools: Bash, Read, Write, Edit, Grep, Glob
---

# Document Internal

Draft a 1-page internal doc (feature note, architecture note, or design spec) from the current branch's diff and open a PR to `browseros-ai/internal-docs`.

**Announce at start:** "I'm using the document-internal skill to draft a doc for internal-docs."

## When to use

After finishing implementation on a feature branch, when the work is doc-worthy (a major feature, a new subsystem, a setup runbook for something internal, or a design decision that future engineers need to know).

## Hard rules — never do these

- NEVER `git add -A` or `git add .` inside the tmp clone of internal-docs. Always specific paths.
- NEVER write outside the tmp clone (no spillover into the OSS repo's working tree).
- NEVER fabricate filler content for empty template sections. Empty stays empty.
- NEVER touch the OSS repo's `.gitmodules` or submodule pointer — the sync workflow handles that.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER push to `internal-docs/main` directly. Always a feature branch + PR.

## Voice rules — enforced by Step 4

The skill MUST follow these and refuse to draft otherwise. After generation, scan for violations and regenerate offending sentences (max 3 attempts).

- Lead with the point. First sentence answers "what is this?"
- Concrete nouns. Name files, functions, commands. Not "the system" or "the component".
- Short sentences. Average <20 words. No deeply nested clauses.
- Active voice. "X does Y" not "Y is done by X".
- No em dashes. Use commas, periods, or rephrase.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- "110 IQ" target. Write for a smart engineer who has not seen this code yet.
- No filler intros ("This document describes..."). Start with the substance.
- Empty sections stay empty. Do not write "N/A" or fabricate content.
## Workflow

### Step 0: Pre-flight

Bail with a clear message on any failure.

```bash
# Submodule must be initialized
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] || { echo ".internal-docs/ missing. Submodule not configured?"; exit 0; }

# Must be on a feature branch
BRANCH=$(git branch --show-current)
if [ "$BRANCH" = "main" ] || [ "$BRANCH" = "dev" ]; then
  echo "On $BRANCH. Run from a feature branch."
  exit 0
fi

# Determine base branch (default: dev for this repo, fall back to main).
# Suppress rev-parse's SHA output on stdout so it doesn't get captured into BASE.
BASE=$(git rev-parse --verify origin/dev >/dev/null 2>&1 && echo dev || echo main)

# Gather context
git log "$BASE..HEAD" --oneline
git diff "$BASE...HEAD" --stat
gh pr view --json body -q .body 2>/dev/null # may be empty if no PR yet
```

### Step 1: Identify the doc

Ask the user for three things in one prompt:

1. **Doc type:** `feature` (default for `feat/*` branches), `architecture`, or `design`
2. **Slug:** kebab-case, short (e.g., `cowork-mcp`, `auto-skill-suggest`)
3. **Owner:** GitHub handle (default = `git config user.name` or current `gh api user --jq .login`)

### Step 2: Decision brief — four sharp questions

Ask one question at a time. Each answer constrains the next. These force compression before drafting.

1. "In one sentence: what can someone now DO that they could not before?"
2. "What is the one design decision a future engineer needs to know?"
3. "Which 3-5 files are the heart of this change?" (suggest candidates from the diff)
4. "Any sharp edges or gotchas? (or 'none')"

Skip any question that is N/A for the doc type. Architecture notes don't need question 1; design specs don't need question 4.

### Step 3: Draft from the template

Read the matching template from `.internal-docs/_templates/`:

- `feature` → `feature-note.md`
- `architecture` → `architecture-note.md`
- `design` → `design-spec.md`

If `.internal-docs/_templates/` does not exist (first run, before seeding), fall back to the seeds bundled with this skill at `.claude/skills/document-internal/seeds/_templates/`.

Generate the 1-pager from the template, the four answers, and the diff context.

### Step 4: Voice self-check

Scan the draft for violations:

- Em dash present (`—`).
- Any banned word from the list.
- Average sentence length > 20 words.
- Body line count > 60 (feature notes only — architecture/design have no cap).

If any violation is found, regenerate the offending sentences in place. Max 3 attempts. If still failing after 3 attempts, stop and report which rules are violated.

If the body is over 60 lines for a feature note, ask: "This is N lines, target is 60. Trim, or promote to `architecture/` (no length cap)?"
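A rough shell sketch of this scan. The thresholds follow the rules above, but the banned-word list is truncated to a few entries for brevity and the sentence-splitting heuristic is an assumption, not part of the skill:

```shell
# Report voice violations in a draft: em dashes, a sample of the banned
# words, and average sentence length over 20 words.
voice_check() {
  draft="$1"
  grep -q -- '—' "$draft" && echo "violation: em dash"
  grep -qiwE 'delve|crucial|robust|comprehensive|leverage|utilize' "$draft" \
    && echo "violation: banned word"
  avg=$(tr '\n' ' ' < "$draft" | tr '.!?' '\n' \
    | awk 'NF { w += NF; n++ } END { print (n ? int(w / n) : 0) }')
  [ "$avg" -gt 20 ] && echo "violation: average sentence length $avg > 20"
  return 0
}
```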
### Step 5: Show + iterate

Print the full draft. Ask:

> Edit needed? Paste any changes, or say "looks good".

Apply user edits with the Edit tool. Re-run Step 4. Loop until the user approves.

### Step 6: Open PR to internal-docs

Use a tmp clone, never the user's `.internal-docs` checkout. This keeps the user's submodule clean.

```bash
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT # cleans up even if any step below fails
git clone -b main git@github.com:browseros-ai/internal-docs.git "$TMP"
cd "$TMP"
git checkout -b "docs/<slug>"

# Write the doc
mkdir -p "<type>" # features, architecture, designs, or setup
cat > "<type>/$(date -u +%Y-%m)-<slug>.md" <<'DOC'
<draft content>
DOC

# Update the root README index — insert one line under the matching section
# Use Edit tool to add: "- [<title>](<type>/YYYY-MM-<slug>.md) — <one-line description>"

git add "<type>/$(date -u +%Y-%m)-<slug>.md" README.md
git commit -m "docs(<type>): <slug>"
git push -u origin "docs/<slug>"

PR_URL=$(gh pr create -R browseros-ai/internal-docs --base main \
  --head "docs/<slug>" \
  --title "docs(<type>): <slug>" \
  --body "$(cat <<'BODY'
## Summary
<one-line of what this doc covers>

## Source
- BrowserOS branch: <branch>
- Related PR: <#NNN if any>
BODY
)")

cd -
echo "PR opened: $PR_URL"
# trap above cleans up $TMP on EXIT
```

If the slug contains characters that won't shell-escape cleanly, sanitize before substitution.
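The skill does not pin down the sanitization rules; one minimal sketch that keeps branch and file names shell-safe:

```shell
# Sanitize a slug: lowercase, collapse every run of non-alphanumerics to a
# single hyphen, trim leading and trailing hyphens.
sanitize_slug() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//'
}

sanitize_slug "Cowork MCP (v2)!" # prints: cowork-mcp-v2
```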
### Step 7: Completion status

Report one of:

- **DONE** — file written, branch pushed, PR opened. Print PR URL.
- **DONE_WITH_CONCERNS** — same as DONE but list concerns (e.g., voice check needed multiple regens, user skipped a question).
- **BLOCKED** — submodule missing, auth fail, or template missing. State exactly what's needed.

## Doc type defaults

| Branch pattern | Default doc type | Default location |
|----------------|------------------|------------------|
| `feat/*` | feature | `features/` |
| `arch/*` or refactor branches with >10 files in `packages/` | architecture | `architecture/` |
| `rfc/*` or `design/*` | design | `designs/` |
| Otherwise | ask | ask |
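The defaults above reduce to a case statement. A sketch; it skips the ">10 files in `packages/`" refactor heuristic, which needs a diff stat to evaluate:

```shell
# Map a branch name to the default doc type from the table above.
default_doc_type() {
  case "$1" in
    feat/*)         echo feature ;;
    arch/*)         echo architecture ;;
    rfc/*|design/*) echo design ;;
    *)              echo ask ;;
  esac
}

default_doc_type "feat/cowork-mcp" # prints: feature
```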
## Common Mistakes

**Drafting before asking the four questions**
- **Problem:** Output is generic filler that says nothing concrete.
- **Fix:** Always ask Step 2 first, even if the diff "looks obvious".

**Touching `.internal-docs/` directly**
- **Problem:** User's submodule HEAD moves, parent repo shows dirty state.
- **Fix:** Always use the tmp clone in Step 6.

**Skipping voice check on user edits**
- **Problem:** User pastes prose with em dashes or filler; ships as-is.
- **Fix:** Re-run Step 4 after every user edit.

## Red Flags

**Never:**
- Push to `internal-docs/main`. Always branch + PR.
- Modify the OSS repo's `.gitmodules` or submodule pointer.
- Fabricate content for empty template sections.

**Always:**
- Pre-flight check before doing any work.
- One-pager rule for feature notes (60-line body cap).
- File:line citations when referencing code.
51
.claude/skills/document-internal/seeds/README.md
Normal file
@@ -0,0 +1,51 @@
# BrowserOS Internal Docs

Private team docs for `browseros-ai`. Mounted as a submodule into the public OSS repo at `.internal-docs/`.

If you are reading this from a public clone of BrowserOS without team access: this submodule is for the BrowserOS internal team. Nothing here is required to build or use BrowserOS.

## How to find what you need

- Setup task ("how do I X locally") → look in [`setup/`](setup/)
- Recently shipped feature → look in [`features/`](features/)
- Cross-cutting subsystem → look in [`architecture/`](architecture/)
- A design decision or RFC → look in [`designs/`](designs/)

Or run `/ask-internal "<your question>"` from any BrowserOS checkout. The skill greps these docs and the codebase, then synthesizes an answer with citations.

## How to add a doc

Run `/document-internal` from a feature branch. The skill drafts a 1-pager from your branch's diff, asks four sharp questions, enforces voice rules, and opens a PR back to this repo.

## Index

### Setup
<!-- one line per setup runbook: -->
<!-- - [Dev environment](setup/dev-environment.md): first-time machine setup -->

### Features
<!-- one line per shipped feature, newest first: -->
<!-- - [Cowork MCP](features/2026-04-cowork-mcp.md): bring outside MCPs into the BrowserOS agent -->

### Architecture
<!-- one line per cross-cutting subsystem: -->
<!-- - [Chrome fork overview](architecture/chrome-fork-overview.md): what we patched and why -->

### Designs
<!-- one line per design spec, newest first: -->
<!-- - [Internal docs submodule](designs/2026-04-30-internal-docs-submodule.md): this system -->

## Templates

When `/document-internal` runs, it reads from [`_templates/`](_templates/). Edit the templates here when the team's preferred shape changes.

## Voice

Docs in this repo follow these rules. The `/document-internal` skill enforces them; humans editing by hand should match.

- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences, active voice, no em dashes.
- No filler words: delve, crucial, robust, comprehensive, nuanced, multifaceted, leverage, utilize, etc.
- Empty sections stay empty. Do not write "N/A" or fake content.
- Feature notes target one screen, body 60 lines max.
@@ -0,0 +1,31 @@
---
title: <subsystem name>
owner: <github handle>
status: current | deprecated
date: YYYY-MM-DD
related-features: [feature-slug-1, feature-slug-2]
---

# <subsystem name>

## What this subsystem does
<1-2 paragraphs. The top-level responsibility. Boundaries.>

## Architecture
<Diagram (ASCII or mermaid) plus prose. Components and how they talk.>

## Constraints
<Hard rules the design enforces. "X must never call Y" type statements.>

## Decisions made
<Numbered list of non-obvious decisions and the reason for each.>

## Key files
- `path/to/file.ts` — role
- `path/to/dir/` — what lives here

## How to evolve this
<Where to add things. Which tests to expect to update. What NOT to touch.>

## Open questions
<What is still being figured out. Empty if none.>

@@ -0,0 +1,34 @@
---
title: <design name>
owner: <github handle>
status: proposed | accepted | rejected | superseded
date: YYYY-MM-DD
supersedes: <design-slug or none>
---

# <design name>

## Goal
<2-4 sentences. What this design is trying to accomplish.>

## Context
<1-2 paragraphs. The current state, what is failing, why this needs to change.>

## Selected Approach
<The chosen design at a high level. Architecture, components, data flow.>

## Alternatives Considered

### 1. <name>
<2-3 sentences on what this would look like, then pro/con and why rejected (or deferred).>

### 2. <name>
<Same shape.>

## Out of Scope
<What this design does NOT cover. Defer references.>

## Rollout
<Numbered steps from "nothing exists" to "fully shipped".>

## Open Questions
<Resolved during design? Empty. Unresolved? List with owner.>

@@ -0,0 +1,29 @@
---
title: <feature name>
owner: <github handle>
status: shipped | wip | deprecated
date: YYYY-MM-DD
prs: ["#NNN"]
tags: [agent, browser, mcp]
---

# <feature name>

## What it does
<2-3 sentences. What can someone now do that they could not before. Lead with user-facing impact, not implementation.>

## Why we built it
<1-2 sentences. Motivation. What pain it removed or what it unlocked.>

## How it works
<3-6 sentences. The flow at a high level. Name the key files.>

## Key files
- `path/to/file.ts` — what it does
- `path/to/other.ts` — what it does

## How to run / test it locally
<bullet list of commands. Empty section if N/A — do not fake.>

## Gotchas
<known sharp edges. "If you see X, that's why." Empty if N/A.>
61
.github/workflows/sync-internal-docs.yml
vendored
Normal file
@@ -0,0 +1,61 @@
name: Sync internal-docs submodule

on:
  schedule:
    - cron: '0 */4 * * *'
  workflow_dispatch:

jobs:
  sync:
    name: Bump internal-docs submodule pointer on dev
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - name: Rewrite SSH submodule URL to HTTPS-with-token
        env:
          TOKEN: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
        run: |
          git config --global "url.https://x-access-token:${TOKEN}@github.com/.insteadOf" "git@github.com:"

      - uses: actions/checkout@v4
        with:
          submodules: true
          ref: dev
          fetch-depth: 50

      - name: Open auto-merge PR if internal-docs has new commits
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -e

          # Skip if submodule not yet configured (handoff window before someone adds it)
          if ! git config --file .gitmodules --get-regexp '^submodule\.\.internal-docs\.path$' >/dev/null 2>&1; then
            echo "internal-docs submodule not yet configured in .gitmodules. Skipping."
            exit 0
          fi

          git submodule update --remote --merge .internal-docs

          if git diff --quiet .internal-docs; then
            echo "No internal-docs changes to sync."
            exit 0
          fi

          BRANCH="bot/sync-internal-docs-$(date -u +%Y%m%d-%H%M%S)"
          git config user.name "browseros-bot"
          git config user.email "bot@browseros.ai"
          git checkout -b "$BRANCH"
          git add .internal-docs
          git commit -m "chore: sync internal-docs submodule"
          git push -u origin "$BRANCH"

          PR_URL=$(gh pr create \
            --base dev \
            --head "$BRANCH" \
            --title "chore: sync internal-docs submodule" \
            --body "Automated bump of the \`.internal-docs\` submodule pointer. Auto-merging.")

          gh pr merge "$PR_URL" --auto --squash --delete-branch
4
.gitmodules
vendored
@@ -0,0 +1,4 @@
[submodule ".internal-docs"]
	path = .internal-docs
	url = git@github.com:browseros-ai/internal-docs.git
	branch = main
1
.internal-docs
Submodule
Submodule .internal-docs added at 01085a4ef5
@@ -38,8 +38,8 @@ browseros-cli install # downloads BrowserOS for your platform
 
 # If BrowserOS is installed but not running
 browseros-cli launch # opens BrowserOS, waits for server
 
-# Configure the CLI (auto-discovers running BrowserOS)
-browseros-cli init --auto # detects server URL and saves config
+# Configure the CLI with the Server URL from BrowserOS settings
+browseros-cli init http://127.0.0.1:9000/mcp
 
 # Verify connection
 browseros-cli health
@@ -52,7 +52,7 @@ browseros-cli init <url> # non-interactive — pass URL directly
 browseros-cli init # interactive — prompts for URL
 ```
 
-Config is saved to `~/.config/browseros-cli/config.yaml`. The CLI also auto-discovers the server from `~/.browseros/server.json` (written by BrowserOS on startup).
+Config is saved to `~/.config/browseros-cli/config.yaml`. If `browseros-cli health` cannot connect, copy the current Server URL from BrowserOS Settings > BrowserOS MCP and run `browseros-cli init <Server URL>` again.
 
 ### CLI updates
 
@@ -126,9 +126,9 @@ To connect Claude Code, Gemini CLI, or any MCP client, see the [MCP setup guide]
 | `--debug` | `BOS_DEBUG=1` | Debug output |
 | `--timeout, -t` | | Request timeout (default: 2m) |
 
-Priority for server URL: `--server` flag > `BROWSEROS_URL` env > `~/.browseros/server.json` > config file
+Priority for server URL: `--server` flag > `BROWSEROS_URL` env > config file
 
-If no server URL is configured, the CLI exits with setup instructions pointing to `install`, `launch`, and `init`.
+If no server URL is configured, the CLI exits with setup instructions pointing to `install`, `launch`, and `init <Server URL>`.
 
 ## Testing
 
@@ -179,7 +179,7 @@ apps/cli/
 │ └── config.go # Config file (~/.config/browseros-cli/config.yaml)
 ├── cmd/
 │ ├── root.go # Root command, global flags
-│ ├── init.go # Server URL configuration (URL arg, --auto, interactive)
+│ ├── init.go # Server URL configuration (URL arg or interactive)
 │ ├── install.go # install (download BrowserOS for current platform)
 │ ├── launch.go # launch (find and start BrowserOS, wait for server)
 │ ├── open.go # open (new_page / new_hidden_page)
@@ -17,8 +17,6 @@ import (
 )
 
 func init() {
-	var autoDiscover bool
-
 	cmd := &cobra.Command{
 		Use: "init [url]",
 		Short: "Configure the BrowserOS server connection",
@@ -34,9 +32,8 @@ You can provide the full URL or just the port number:
   browseros-cli init http://127.0.0.1:9000/mcp
   browseros-cli init 9000
 
-Three modes:
+Modes:
   browseros-cli init <url> Non-interactive (full URL or port number)
-  browseros-cli init --auto Auto-discover from ~/.browseros/server.json
   browseros-cli init Interactive prompt`,
 		Annotations: map[string]string{"group": "Setup:"},
 		Args: cobra.MaximumNArgs(1),
@@ -49,22 +46,9 @@ Three modes:
 
 		switch {
 		case len(args) == 1:
 			// Non-interactive: URL provided as argument
 			input = args[0]
 
-		case autoDiscover:
-			// Auto-discover: server.json → config → probe common ports
-			discovered := probeRunningServer()
-			if discovered == "" {
-				output.Error("auto-discovery failed: no running BrowserOS found.\n\n"+
-					" If not running: browseros-cli launch\n"+
-					" If not installed: browseros-cli install", 1)
-			}
-			input = discovered
-			fmt.Printf("Auto-discovered server at %s\n", input)
-
 		default:
 			// Interactive prompt (original behavior)
 			fmt.Println()
 			bold.Println("BrowserOS CLI Setup")
 			fmt.Println()
@@ -95,12 +79,14 @@ Three modes:
 			output.Errorf(1, "invalid URL: %s", input)
 		}
 
 		// Verify connectivity
 		fmt.Printf("Checking connection to %s ...\n", baseURL)
 		client := &http.Client{Timeout: 5 * time.Second}
 		resp, err := client.Get(baseURL + "/health")
 		if err != nil {
-			output.Errorf(1, "cannot connect to %s: %v\nIs BrowserOS running?", baseURL, err)
+			output.Errorf(1, "cannot connect to %s: %v\n\n"+
+				"Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n"+
+				"Then run: browseros-cli init <Server URL>\n"+
+				"Example: browseros-cli init http://127.0.0.1:9000/mcp", baseURL, err)
 		}
 		resp.Body.Close()
 
@@ -121,6 +107,5 @@ Three modes:
 		},
 	}
 
-	cmd.Flags().BoolVar(&autoDiscover, "auto", false, "Auto-discover server URL from ~/.browseros/server.json")
 	rootCmd.AddCommand(cmd)
 }
@@ -28,7 +28,7 @@ Linux: Downloads AppImage (or .deb with --deb flag)
 
 After installation:
   browseros-cli launch # start BrowserOS
-  browseros-cli init --auto # configure the CLI`,
+  browseros-cli init <url> # configure the CLI with the Server URL`,
 	Annotations: map[string]string{"group": "Setup:"},
 	Args: cobra.NoArgs,
 	Run: func(cmd *cobra.Command, args []string) {
@@ -81,7 +81,7 @@ After installation:
 		fmt.Println()
 		bold.Println("Next steps:")
 		dim.Println(" browseros-cli launch # start BrowserOS")
-		dim.Println(" browseros-cli init --auto # configure the CLI")
+		dim.Println(" browseros-cli init <url> # use the Server URL from BrowserOS settings")
 	},
 }
 
@@ -1,6 +1,7 @@
 package cmd
 
 import (
+	"encoding/json"
 	"fmt"
 	"net/http"
 	"os"
@@ -38,6 +39,7 @@ If BrowserOS is already running, reports the server URL.`,
 
 		if url := probeRunningServer(); url != "" {
 			green.Printf("BrowserOS is already running at %s\n", url)
+			dim.Printf("Next: browseros-cli init %s\n", mcpEndpointURL(url))
 			return
 		}
@@ -63,7 +65,7 @@ If BrowserOS is already running, reports the server URL.`,
 
 		green.Printf("BrowserOS is ready at %s\n", url)
 		fmt.Println()
-		dim.Println("Next: browseros-cli init --auto")
+		dim.Printf("Next: browseros-cli init %s\n", mcpEndpointURL(url))
 	},
 }
 
@@ -75,39 +77,77 @@ If BrowserOS is already running, reports the server URL.`,
 
 // Server probing
 // ---------------------------------------------------------------------------
 
-// probeRunningServer checks server.json, config, and common ports for a running server.
+var commonBrowserOSPorts = []int{9100, 9200, 9300}
+
+// probeRunningServer checks launch discovery, explicit config, and common ports for a running server.
 func probeRunningServer() string {
-	check := func(baseURL string) bool {
-		client := &http.Client{Timeout: 2 * time.Second}
-		resp, err := client.Get(baseURL + "/health")
-		if err != nil {
-			return false
-		}
-		resp.Body.Close()
-		return resp.StatusCode == 200
-	}
+	client := &http.Client{Timeout: 2 * time.Second}
 
 	// 1. server.json — written by BrowserOS on startup with the actual port
-	if url := loadBrowserosServerURL(); url != "" && check(url) {
+	if url := loadBrowserosServerURL(); url != "" && checkServerHealth(client, url) {
 		return url
 	}
 
 	// 2. Saved config / env var
-	if url := defaultServerURL(); url != "" && check(url) {
+	if url := defaultServerURL(); url != "" && checkServerHealth(client, url) {
|
||||
return url
|
||||
}
|
||||
|
||||
// 3. Probe common BrowserOS ports as last resort
|
||||
for _, port := range []int{9100, 9200, 9300} {
|
||||
return probeCommonServerPorts(client)
|
||||
}
|
||||
|
||||
func checkServerHealth(client *http.Client, baseURL string) bool {
|
||||
resp, err := client.Get(baseURL + "/health")
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
resp.Body.Close()
|
||||
return resp.StatusCode == 200
|
||||
}
|
||||
|
||||
func probeCommonServerPorts(client *http.Client) string {
|
||||
for _, port := range commonBrowserOSPorts {
|
||||
url := fmt.Sprintf("http://127.0.0.1:%d", port)
|
||||
if check(url) {
|
||||
if checkServerHealth(client, url) {
|
||||
return url
|
||||
}
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
type serverDiscoveryConfig struct {
|
||||
ServerPort int `json:"server_port"`
|
||||
URL string `json:"url"`
|
||||
ServerVersion string `json:"server_version"`
|
||||
BrowserOSVersion string `json:"browseros_version,omitempty"`
|
||||
ChromiumVersion string `json:"chromium_version,omitempty"`
|
||||
}
|
||||
|
||||
// loadBrowserosServerURL reads BrowserOS's runtime discovery file for launch readiness only.
|
||||
//
|
||||
// Normal command resolution must not call this because it can override a URL the
|
||||
// user explicitly saved with `browseros-cli init <Server URL>`.
|
||||
func loadBrowserosServerURL() string {
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
data, err := os.ReadFile(filepath.Join(home, ".browseros", "server.json"))
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
var sc serverDiscoveryConfig
|
||||
if err := json.Unmarshal(data, &sc); err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
return normalizeServerURL(sc.URL)
|
||||
}
|
||||
|
||||
func mcpEndpointURL(baseURL string) string {
|
||||
return strings.TrimSuffix(baseURL, "/") + "/mcp"
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Platform-native installation detection
|
||||
// ---------------------------------------------------------------------------
|
||||
@@ -117,7 +157,8 @@ func probeRunningServer() string {
|
||||
// macOS: `open -Ra "BrowserOS"` — queries Launch Services (finds apps anywhere)
|
||||
// Linux: checks /usr/bin/browseros (.deb), browseros.desktop, or AppImage files
|
||||
// Windows: checks executable at %LOCALAPPDATA%\BrowserOS\Application\BrowserOS.exe
|
||||
// and registry uninstall key (per-user Chromium install pattern)
|
||||
//
|
||||
// and registry uninstall key (per-user Chromium install pattern)
|
||||
func isBrowserOSInstalled() bool {
|
||||
switch runtime.GOOS {
|
||||
case "darwin":
|
||||
@@ -271,14 +312,11 @@ func waitForServer(maxWait time.Duration) (string, bool) {
|
||||
|
||||
for time.Now().Before(deadline) {
|
||||
// server.json is written by BrowserOS on startup with the actual port
|
||||
if url := loadBrowserosServerURL(); url != "" {
|
||||
resp, err := client.Get(url + "/health")
|
||||
if err == nil {
|
||||
resp.Body.Close()
|
||||
if resp.StatusCode == 200 {
|
||||
return url, true
|
||||
}
|
||||
}
|
||||
if url := loadBrowserosServerURL(); url != "" && checkServerHealth(client, url) {
|
||||
return url, true
|
||||
}
|
||||
if url := probeCommonServerPorts(client); url != "" {
|
||||
return url, true
|
||||
}
|
||||
fmt.Print(".")
|
||||
time.Sleep(1 * time.Second)
|
||||
|
||||
99
packages/browseros-agent/apps/cli/cmd/launch_test.go
Normal file
@@ -0,0 +1,99 @@
package cmd

import (
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/url"
"os"
"path/filepath"
"strconv"
"testing"
"time"

"browseros-cli/config"
)

func TestProbeRunningServerUsesDiscoveryBeforeConfig(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Setenv("XDG_CONFIG_HOME", t.TempDir())
t.Setenv("BROWSEROS_URL", "")

discoveredServer := newHealthyServer(t)
configServer := newHealthyServer(t)

serverDir := filepath.Join(home, ".browseros")
if err := os.MkdirAll(serverDir, 0755); err != nil {
t.Fatalf("os.MkdirAll() error = %v", err)
}
data := []byte(fmt.Sprintf(`{"url":%q}`, discoveredServer.URL))
if err := os.WriteFile(filepath.Join(serverDir, "server.json"), data, 0644); err != nil {
t.Fatalf("os.WriteFile() error = %v", err)
}
if err := config.Save(&config.Config{ServerURL: configServer.URL}); err != nil {
t.Fatalf("config.Save() error = %v", err)
}

got := probeRunningServer()
if got != normalizeServerURL(discoveredServer.URL) {
t.Fatalf("probeRunningServer() = %q, want %q", got, normalizeServerURL(discoveredServer.URL))
}
}

func TestWaitForServerUsesCommonPortFallback(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)

server := newHealthyServer(t)
port := serverPort(t, server.URL)

originalPorts := commonBrowserOSPorts
commonBrowserOSPorts = []int{port}
t.Cleanup(func() {
commonBrowserOSPorts = originalPorts
})

got, ok := waitForServer(100 * time.Millisecond)
if !ok {
t.Fatal("waitForServer() ok = false, want true")
}
if got != normalizeServerURL(server.URL) {
t.Fatalf("waitForServer() = %q, want %q", got, normalizeServerURL(server.URL))
}
}

func newHealthyServer(t *testing.T) *httptest.Server {
t.Helper()

server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/health" {
http.NotFound(w, r)
return
}
w.WriteHeader(http.StatusOK)
}))
t.Cleanup(server.Close)
return server
}

func serverPort(t *testing.T, rawURL string) int {
t.Helper()

parsed, err := url.Parse(rawURL)
if err != nil {
t.Fatalf("url.Parse() error = %v", err)
}
_, portText, err := net.SplitHostPort(parsed.Host)
if err != nil {
t.Fatalf("net.SplitHostPort() error = %v", err)
}
port, err := strconv.Atoi(portText)
if err != nil {
t.Fatalf("strconv.Atoi() error = %v", err)
}
return port
}
@@ -2,10 +2,8 @@ package cmd

import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
@@ -289,18 +287,15 @@ func drainAutomaticUpdateCheckWithTimeout(done <-chan struct{}, timeout time.Dur
}
}

// defaultServerURL returns the implicit target from user-controlled settings only.
//
// BrowserOS writes a discovery file at runtime, but normal commands intentionally
// ignore it so a saved URL is not silently overridden by another running server.
func defaultServerURL() string {
// 1. Explicit env var always wins
if env := normalizeServerURL(os.Getenv("BROWSEROS_URL")); env != "" {
return env
}

// 2. Live discovery file from running BrowserOS (most current)
if url := loadBrowserosServerURL(); url != "" {
return url
}

// 3. Saved config (may be stale if port changed)
cfg, err := config.Load()
if err == nil {
if url := normalizeServerURL(cfg.ServerURL); url != "" {
@@ -311,33 +306,6 @@ func defaultServerURL() string {
return ""
}

type serverDiscoveryConfig struct {
ServerPort int `json:"server_port"`
URL string `json:"url"`
ServerVersion string `json:"server_version"`
BrowserOSVersion string `json:"browseros_version,omitempty"`
ChromiumVersion string `json:"chromium_version,omitempty"`
}

func loadBrowserosServerURL() string {
home, err := os.UserHomeDir()
if err != nil {
return ""
}

data, err := os.ReadFile(filepath.Join(home, ".browseros", "server.json"))
if err != nil {
return ""
}

var sc serverDiscoveryConfig
if err := json.Unmarshal(data, &sc); err != nil {
return ""
}

return normalizeServerURL(sc.URL)
}

func normalizeServerURL(raw string) string {
normalized := strings.TrimSpace(raw)

@@ -369,8 +337,10 @@ func validateServerURL(raw string) (string, error) {

return "", fmt.Errorf(
"BrowserOS server URL is not configured.\n\n" +
" If BrowserOS is running: browseros-cli init --auto\n" +
" If BrowserOS is closed: browseros-cli launch\n" +
" If not installed: browseros-cli install",
" Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n" +
" Save it with: browseros-cli init <Server URL>\n" +
" Example: browseros-cli init http://127.0.0.1:9000/mcp\n" +
" If BrowserOS is closed: browseros-cli launch\n" +
" If not installed: browseros-cli install",
)
}

@@ -1,8 +1,13 @@
package cmd

import (
"os"
"path/filepath"
"strings"
"testing"
"time"

"browseros-cli/config"
)

func TestSetVersionUpdatesRootCommand(t *testing.T) {
@@ -100,6 +105,76 @@ func TestShouldSkipAutomaticUpdates(t *testing.T) {
}
}

func TestDefaultServerURLUsesEnvBeforeConfig(t *testing.T) {
t.Setenv("XDG_CONFIG_HOME", t.TempDir())
t.Setenv("BROWSEROS_URL", "http://127.0.0.1:9115/mcp")

if err := config.Save(&config.Config{ServerURL: "http://127.0.0.1:9000/mcp"}); err != nil {
t.Fatalf("config.Save() error = %v", err)
}

got := defaultServerURL()
if got != "http://127.0.0.1:9115" {
t.Fatalf("defaultServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
}
}

func TestDefaultServerURLUsesSavedConfig(t *testing.T) {
t.Setenv("XDG_CONFIG_HOME", t.TempDir())
t.Setenv("BROWSEROS_URL", "")

if err := config.Save(&config.Config{ServerURL: "http://127.0.0.1:9115/mcp"}); err != nil {
t.Fatalf("config.Save() error = %v", err)
}

got := defaultServerURL()
if got != "http://127.0.0.1:9115" {
t.Fatalf("defaultServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
}
}

func TestDefaultServerURLIgnoresBrowserOSServerJSON(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Setenv("XDG_CONFIG_HOME", t.TempDir())
t.Setenv("BROWSEROS_URL", "")

serverDir := filepath.Join(home, ".browseros")
if err := os.MkdirAll(serverDir, 0755); err != nil {
t.Fatalf("os.MkdirAll() error = %v", err)
}
data := []byte(`{"url":"http://127.0.0.1:9999"}`)
if err := os.WriteFile(filepath.Join(serverDir, "server.json"), data, 0644); err != nil {
t.Fatalf("os.WriteFile() error = %v", err)
}

if got := defaultServerURL(); got != "" {
t.Fatalf("defaultServerURL() = %q, want empty", got)
}
}

func TestNormalizeServerURLAcceptsMCPEndpoint(t *testing.T) {
got := normalizeServerURL(" http://127.0.0.1:9115/mcp ")
if got != "http://127.0.0.1:9115" {
t.Fatalf("normalizeServerURL() = %q, want %q", got, "http://127.0.0.1:9115")
}
}

func TestValidateServerURLExplainsManualInit(t *testing.T) {
_, err := validateServerURL("")
if err == nil {
t.Fatal("validateServerURL() error = nil, want setup instructions")
}
msg := err.Error()
if !strings.Contains(msg, "browseros-cli init <Server URL>") {
t.Fatalf("validateServerURL() error = %q, want manual init instructions", msg)
}
if strings.Contains(msg, "init --auto") {
t.Fatalf("validateServerURL() error = %q, should not mention init --auto", msg)
}
}

func TestDrainAutomaticUpdateCheckWithTimeoutWaitsForCompletion(t *testing.T) {
done := make(chan struct{})
returned := make(chan struct{})

|
||||
|
||||
session, err := sdkClient.Connect(ctx, transport, nil)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w\n\n"+
|
||||
" If BrowserOS is running on a different port: browseros-cli init --auto\n"+
|
||||
" If BrowserOS is not running: browseros-cli launch\n"+
|
||||
" If not installed: browseros-cli install", c.BaseURL, err)
|
||||
return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w%s", c.BaseURL, err, connectionSetupInstructions())
|
||||
}
|
||||
return session, nil
|
||||
}
|
||||
@@ -187,10 +184,7 @@ func (c *Client) Status() (map[string]any, error) {
|
||||
func (c *Client) restGET(path string) (map[string]any, error) {
|
||||
resp, err := c.HTTPClient.Get(c.BaseURL + path)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w\n\n"+
|
||||
" If BrowserOS is running on a different port: browseros-cli init --auto\n"+
|
||||
" If BrowserOS is not running: browseros-cli launch\n"+
|
||||
" If not installed: browseros-cli install", c.BaseURL, err)
|
||||
return nil, fmt.Errorf("cannot connect to BrowserOS at %s: %w%s", c.BaseURL, err, connectionSetupInstructions())
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
@@ -205,3 +199,14 @@ func (c *Client) restGET(path string) (map[string]any, error) {
|
||||
}
|
||||
return data, nil
|
||||
}
|
||||
|
||||
// connectionSetupInstructions explains how to recover from a stale or missing server URL.
|
||||
func connectionSetupInstructions() string {
|
||||
return "\n\n" +
|
||||
" Open BrowserOS Settings > BrowserOS MCP and copy the Server URL.\n" +
|
||||
" Save it with: browseros-cli init <Server URL>\n" +
|
||||
" Example: browseros-cli init http://127.0.0.1:9000/mcp\n" +
|
||||
" Run once with: browseros-cli --server <Server URL> health\n" +
|
||||
" If BrowserOS is closed: browseros-cli launch\n" +
|
||||
" If not installed: browseros-cli install"
|
||||
}
|
||||
|
||||
@@ -31,8 +31,8 @@ browseros-cli install
# Start BrowserOS
browseros-cli launch

# Auto-configure MCP settings for your AI tools
browseros-cli init --auto
# Configure MCP settings with the Server URL from BrowserOS settings
browseros-cli init http://127.0.0.1:9000/mcp

# Verify everything is working
browseros-cli health

25
packages/browseros-agent/apps/eval/README.md
vendored
@@ -9,6 +9,7 @@ Evaluation framework for BrowserOS browser automation agents. Runs tasks from st
- **BrowserOS binary** at `/Applications/BrowserOS.app` (macOS) or `BROWSEROS_BINARY` pointing at it
- **Bun** runtime
- **API keys** for your LLM provider (and `CLAUDE_CODE_OAUTH_TOKEN` if you use `performance_grader`)
- **Python 3.10+ with `agisdk`** for AGI SDK / REAL Bench grading. Set `BROWSEROS_EVAL_PYTHON` if your default `python3` is older.

## Quick Start

@@ -67,7 +68,7 @@ This lets us run the same suite against multiple model setups without copying th

```txt
agisdk-daily-10 + kimi-fireworks
agisdk-daily-10 + claude-sonnet
agisdk-daily-10 + claude-opus
agisdk-daily-10 + clado-action-000159
```

@@ -79,6 +80,7 @@ For `orchestrator-executor` suites, there can also be an executor model/backend.
|------|-------------|
| `single` | Single LLM agent driven by the BrowserOS tool loop (CDP) |
| `orchestrator-executor` | High-level orchestrator + per-step executor (LLM or Clado visual model) |
| `claude-code` | External Claude Code CLI driven through BrowserOS MCP |

### Single agent

@@ -119,6 +121,24 @@ The orchestrator works with any LLM provider. The executor can be another LLM, o
}
```

### Claude Code

Claude Code runs as an external `claude -p` subprocess. The eval runner passes a task-scoped MCP config that points Claude Code at the active worker's BrowserOS MCP endpoint, while the eval capture layer still saves messages, screenshots, trajectory metadata, and grader outputs.

```json
{
  "agent": {
    "type": "claude-code",
    "model": "opus"
  }
}
```

```bash
BROWSEROS_EVAL_PYTHON=/path/to/python3 bun run eval run --config configs/legacy/claude-code-agisdk-real.json
bun run eval suite --config configs/legacy/claude-code-agisdk-real.json --publish r2
```

## Graders

| Name | Description |
@@ -151,6 +171,7 @@ The `apiKey` field supports two formats:
| `CLADO_ACTION_MODEL`, `CLADO_ACTION_API_KEY`, `CLADO_ACTION_BASE_URL` | Clado executor defaults |
| `BROWSEROS_BINARY` | BrowserOS binary path in CI/local smoke runs |
| `BROWSEROS_SERVER_URL` | Optional grader MCP URL override |
| `BROWSEROS_EVAL_PYTHON` | Optional Python interpreter for JSON graders such as `agisdk_state_diff` |
| `WEBARENA_INFINITY_DIR` | Local WebArena-Infinity checkout for Infinity tasks |
| `NOPECHA_API_KEY` | CAPTCHA solver extension |
| `EVAL_R2_ACCOUNT_ID`, `EVAL_R2_ACCESS_KEY_ID`, `EVAL_R2_SECRET_ACCESS_KEY`, `EVAL_R2_BUCKET`, `EVAL_R2_CDN_BASE_URL` | R2 upload and viewer URL |
@@ -194,7 +215,7 @@ Published runs are available at `EVAL_R2_CDN_BASE_URL/viewer.html?run=<run-id>`.
"base_server_port": 9110,
"base_extension_port": 9310,
"load_extensions": false,
"headless": true
"headless": false
}
```


@@ -7,7 +7,7 @@
"baseUrl": "https://openrouter.ai/api/v1",
"supportsImages": true
},
"dataset": "../../data/webbench-2of4-50.jsonl",
"dataset": "../../data/agisdk-real.jsonl",
"num_workers": 10,
"restart_server_per_task": true,
"browseros": {
@@ -21,6 +21,6 @@
"captcha": {
"api_key_env": "NOPECHA_API_KEY"
},
"graders": ["performance_grader"],
"graders": ["agisdk_state_diff"],
"timeout_ms": 1800000
}

@@ -23,7 +23,7 @@
"base_server_port": 9110,
"base_extension_port": 9310,
"load_extensions": false,
"headless": true
"headless": false
},
"captcha": {
"api_key_env": "NOPECHA_API_KEY"

22
packages/browseros-agent/apps/eval/configs/legacy/claude-code-agisdk-real.json
vendored
Normal file
@@ -0,0 +1,22 @@
{
"agent": {
"type": "claude-code",
"model": "opus"
},
"dataset": "../../data/agisdk-real.jsonl",
"num_workers": 1,
"restart_server_per_task": true,
"browseros": {
"server_url": "http://127.0.0.1:9110",
"base_cdp_port": 9010,
"base_server_port": 9110,
"base_extension_port": 9310,
"load_extensions": false,
"headless": false
},
"captcha": {
"api_key_env": "NOPECHA_API_KEY"
},
"graders": ["agisdk_state_diff"],
"timeout_ms": 1800000
}
@@ -14,7 +14,7 @@
"base_server_port": 9110,
"base_extension_port": 9310,
"load_extensions": false,
"headless": true
"headless": false
},
"captcha": {
"api_key_env": "NOPECHA_API_KEY"

238
packages/browseros-agent/apps/eval/src/agents/claude-code/index.ts
vendored
Normal file
@@ -0,0 +1,238 @@
import { writeFile } from 'node:fs/promises'
import { join } from 'node:path'
import { DEFAULT_TIMEOUT_MS } from '../../constants'
import type { ClaudeCodeAgentConfig, UIMessageStreamEvent } from '../../types'
import { withEvalTimeout } from '../../utils/with-eval-timeout'
import type { AgentContext, AgentEvaluator, AgentResult } from '../types'
import {
type ClaudeCodeProcessRunner,
createClaudeCodeProcessRunner,
} from './process-runner'
import {
ClaudeCodeStreamParser,
shouldCaptureScreenshotForTool,
} from './stream-parser'

export interface ClaudeCodeEvaluatorDeps {
processRunner?: ClaudeCodeProcessRunner
}

export class ClaudeCodeEvaluator implements AgentEvaluator {
private processRunner: ClaudeCodeProcessRunner

constructor(
private ctx: AgentContext,
deps: ClaudeCodeEvaluatorDeps = {},
) {
this.processRunner = deps.processRunner ?? createClaudeCodeProcessRunner()
}

async execute(): Promise<AgentResult> {
const { config, task, capture, taskOutputDir } = this.ctx
const startTime = Date.now()
const timeoutMs = config.timeout_ms ?? DEFAULT_TIMEOUT_MS

await capture.messageLogger.logUser(task.query)

if (config.agent.type !== 'claude-code') {
throw new Error('ClaudeCodeEvaluator only supports claude-code config')
}
const agentConfig = config.agent

const mcpConfigPath = join(taskOutputDir, 'claude-code-mcp.json')
await writeFile(
mcpConfigPath,
JSON.stringify(
buildClaudeCodeMcpConfig(config.browseros.server_url),
null,
2,
),
)

const parser = new ClaudeCodeStreamParser()
const toolNamesById = new Map<string, string>()
const prompt = buildClaudeCodePrompt(task.query)
const args = buildClaudeCodeArgs({
prompt,
mcpConfigPath,
config: agentConfig,
})

const { terminationReason } = await withEvalTimeout(
timeoutMs,
capture,
async (signal) => {
const runResult = await this.processRunner.run({
executable: agentConfig.claudePath,
args,
cwd: taskOutputDir,
signal,
onStdoutLine: async (line) => {
const events = parser.pushLine(line)
for (const event of events) {
await this.handleStreamEvent(event, toolNamesById)
}
},
})

if (runResult.exitCode !== 0) {
const message =
runResult.stderr.trim() ||
`Claude Code exited with status ${runResult.exitCode}`
capture.addError('agent_execution', message, {
exitCode: runResult.exitCode,
})
if (!parser.getLastText()) {
throw new Error(message)
}
}

for (const error of runResult.streamErrors ?? []) {
capture.addWarning(
'message_logging',
`Claude Code stream event processing failed: ${error}`,
)
}

return runResult
},
)

const endTime = Date.now()
const finalAnswer = parser.getLastText() ?? capture.getLastAssistantText()
const metadata = {
query_id: task.query_id,
dataset: task.dataset,
query: task.query,
started_at: new Date(startTime).toISOString(),
completed_at: new Date(endTime).toISOString(),
total_duration_ms: endTime - startTime,
total_steps: parser.getToolCallCount() || capture.getScreenshotCount(),
termination_reason: terminationReason,
final_answer: finalAnswer,
errors: capture.getErrors(),
warnings: capture.getWarnings(),
device_pixel_ratio: capture.screenshot.getDevicePixelRatio(),
agent_config: {
type: 'claude-code' as const,
model: agentConfig.model,
},
grader_results: {},
}

await capture.trajectorySaver.saveMetadata(metadata)

return {
metadata,
messages: capture.getMessages(),
finalAnswer,
}
}

private async handleStreamEvent(
event: UIMessageStreamEvent,
toolNamesById: Map<string, string>,
): Promise<void> {
const { capture, task } = this.ctx
let screenshot: number | undefined

if (event.type === 'tool-input-available') {
toolNamesById.set(event.toolCallId, event.toolName)
if (isPageInput(event.input)) {
capture.setActivePageId(event.input.page)
}
}

if (
event.type === 'tool-output-available' ||
event.type === 'tool-output-error'
) {
const toolName = toolNamesById.get(event.toolCallId)
if (toolName && shouldCaptureScreenshotForTool(toolName)) {
screenshot = await this.captureScreenshot()
}
}

await capture.messageLogger.logStreamEvent(event, screenshot)
capture.emitEvent(task.query_id, {
...event,
...(screenshot !== undefined && { screenshot }),
})
}

private async captureScreenshot(): Promise<number | undefined> {
const { capture, task } = this.ctx
try {
const screenshot = await capture.screenshot.capture(
capture.getActivePageId(),
)
capture.emitEvent(task.query_id, {
type: 'screenshot-captured',
screenshot,
})
return screenshot
} catch {
return undefined
}
}
}

function isPageInput(input: unknown): input is { page: number } {
return (
typeof input === 'object' &&
input !== null &&
'page' in input &&
typeof input.page === 'number'
)
}

function buildClaudeCodePrompt(taskQuery: string): string {
return [
'You are running inside BrowserOS eval.',
'Use the BrowserOS MCP tools to interact with the already-open browser and complete the user task.',
'When the task is complete, respond with the final answer only.',
'If blocked, explain the blocker clearly.',
'',
`Task: ${taskQuery}`,
].join('\n')
}

function buildClaudeCodeArgs({
prompt,
mcpConfigPath,
config,
}: {
prompt: string
mcpConfigPath: string
config: ClaudeCodeAgentConfig
}): string[] {
const args = [
'-p',
prompt,
'--mcp-config',
mcpConfigPath,
'--strict-mcp-config',
'--output-format',
'stream-json',
'--verbose',
]

if (config.model) args.push('--model', config.model)
args.push(...config.extraArgs)

return args
}

function buildClaudeCodeMcpConfig(serverUrl: string) {
const trimmed = serverUrl.replace(/\/$/, '')
const url = trimmed.endsWith('/mcp') ? trimmed : `${trimmed}/mcp`
return {
mcpServers: {
browseros: {
type: 'http',
url,
headers: { 'X-BrowserOS-Source': 'sdk-internal' },
},
},
}
}
114
packages/browseros-agent/apps/eval/src/agents/claude-code/process-runner.ts
vendored
Normal file
@@ -0,0 +1,114 @@
|
||||
export interface ClaudeCodeRunOptions {
  executable: string
  args: string[]
  cwd: string
  signal?: AbortSignal
  onStdoutLine: (line: string) => Promise<void>
}

export interface ClaudeCodeRunResult {
  exitCode: number
  stderr: string
  streamErrors?: string[]
}

export interface ClaudeCodeProcessRunner {
  run(options: ClaudeCodeRunOptions): Promise<ClaudeCodeRunResult>
}

export interface SpawnOptions {
  cwd: string
  signal?: AbortSignal
  onStdoutLine: (line: string) => Promise<void>
}

export interface CreateClaudeCodeProcessRunnerDeps {
  spawn?: (cmd: string[], options: SpawnOptions) => Promise<ClaudeCodeRunResult>
}

export function createClaudeCodeProcessRunner(
  deps: CreateClaudeCodeProcessRunnerDeps = {},
): ClaudeCodeProcessRunner {
  const spawn = deps.spawn ?? spawnClaudeCode
  return {
    run: async ({ executable, args, cwd, signal, onStdoutLine }) =>
      spawn([executable, ...args], { cwd, signal, onStdoutLine }),
  }
}

async function spawnClaudeCode(
  cmd: string[],
  options: SpawnOptions,
): Promise<ClaudeCodeRunResult> {
  const proc = Bun.spawn({
    cmd,
    cwd: options.cwd,
    stdin: 'ignore',
    stdout: 'pipe',
    stderr: 'pipe',
  })

  const abort = () => {
    try {
      proc.kill('SIGTERM')
    } catch {
      // Process may already have exited.
    }
  }
  options.signal?.addEventListener('abort', abort, { once: true })

  try {
    const streamErrors: string[] = []
    const stdoutPromise = readLines(
      proc.stdout,
      options.onStdoutLine,
      streamErrors,
    )
    const stderrPromise = new Response(proc.stderr).text()
    const exitCode = await proc.exited
    await stdoutPromise
    const stderr = await stderrPromise
    return { exitCode, stderr, streamErrors }
  } finally {
    options.signal?.removeEventListener('abort', abort)
  }
}

async function readLines(
  stream: ReadableStream<Uint8Array>,
  onLine: (line: string) => Promise<void>,
  streamErrors: string[],
): Promise<void> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  while (true) {
    const { done, value } = await reader.read()
    if (done) break

    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop() ?? ''
    for (const line of lines) {
      await emitLine(line, onLine, streamErrors)
    }
  }

  buffer += decoder.decode()
  if (buffer.length > 0) {
    await emitLine(buffer, onLine, streamErrors)
  }
}

async function emitLine(
  line: string,
  onLine: (line: string) => Promise<void>,
  streamErrors: string[],
): Promise<void> {
  try {
    await onLine(line)
  } catch (error) {
    streamErrors.push(error instanceof Error ? error.message : String(error))
  }
}
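The newline-buffering strategy used by `readLines` above (accumulate decoded chunks, split on `\n`, carry the unterminated tail forward, flush it at the end) can be sketched in isolation. `splitChunks` is a hypothetical standalone helper for illustration, not part of this diff:

```typescript
// Hypothetical illustration of the readLines buffering strategy:
// chunks arrive at arbitrary boundaries; only complete lines are emitted,
// and any trailing partial line is flushed once input ends.
function splitChunks(chunks: string[]): string[] {
  const lines: string[] = []
  let buffer = ''
  for (const chunk of chunks) {
    buffer += chunk
    const parts = buffer.split('\n')
    buffer = parts.pop() ?? '' // keep the unterminated tail for the next chunk
    lines.push(...parts)
  }
  if (buffer.length > 0) lines.push(buffer) // flush the final partial line
  return lines
}
```

The point of the carried buffer is that a chunk boundary falling mid-line never produces a partial emission.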
142 packages/browseros-agent/apps/eval/src/agents/claude-code/stream-parser.ts vendored Normal file
@@ -0,0 +1,142 @@
import { randomUUID } from 'node:crypto'
import type { UIMessageStreamEvent } from '../../types'

type JsonObject = Record<string, unknown>

export class ClaudeCodeStreamParser {
  private lastText: string | null = null
  private toolCallCount = 0

  pushLine(line: string): UIMessageStreamEvent[] {
    const trimmed = line.trim()
    if (!trimmed) return []

    let parsed: unknown
    try {
      parsed = JSON.parse(trimmed)
    } catch {
      return []
    }

    if (!isObject(parsed)) return []

    if (parsed.type === 'assistant') {
      return this.parseAssistantMessage(parsed)
    }
    if (parsed.type === 'user') {
      return this.parseUserMessage(parsed)
    }
    if (parsed.type === 'result' && typeof parsed.result === 'string') {
      this.lastText = parsed.result
    }

    return []
  }

  getLastText(): string | null {
    return this.lastText
  }

  getToolCallCount(): number {
    return this.toolCallCount
  }

  private parseAssistantMessage(message: JsonObject): UIMessageStreamEvent[] {
    const content = contentBlocks(message)
    const events: UIMessageStreamEvent[] = []

    for (const block of content) {
      if (block.type === 'text' && typeof block.text === 'string') {
        const id = randomUUID()
        this.lastText = block.text
        events.push(
          { type: 'text-start', id },
          { type: 'text-delta', id, delta: block.text },
          { type: 'text-end', id },
        )
      } else if (
        block.type === 'tool_use' &&
        typeof block.id === 'string' &&
        typeof block.name === 'string'
      ) {
        this.toolCallCount++
        events.push({
          type: 'tool-input-available',
          toolCallId: block.id,
          toolName: block.name,
          input: block.input,
        })
      }
    }

    return events
  }

  private parseUserMessage(message: JsonObject): UIMessageStreamEvent[] {
    const content = contentBlocks(message)
    const events: UIMessageStreamEvent[] = []

    for (const block of content) {
      if (
        block.type !== 'tool_result' ||
        typeof block.tool_use_id !== 'string'
      ) {
        continue
      }

      if (block.is_error === true) {
        events.push({
          type: 'tool-output-error',
          toolCallId: block.tool_use_id,
          errorText: stringifyToolContent(block.content),
        })
      } else {
        events.push({
          type: 'tool-output-available',
          toolCallId: block.tool_use_id,
          output: normalizeToolContent(block.content),
        })
      }
    }

    return events
  }
}

export function shouldCaptureScreenshotForTool(toolName: string): boolean {
  if (!toolName.startsWith('mcp__browseros__')) return false
  return !toolName.endsWith('__take_screenshot')
}

function contentBlocks(message: JsonObject): JsonObject[] {
  const inner = isObject(message.message) ? message.message : message
  return Array.isArray(inner.content) ? inner.content.filter(isObject) : []
}

function isObject(value: unknown): value is JsonObject {
  return typeof value === 'object' && value !== null
}

function normalizeToolContent(content: unknown): unknown {
  if (!Array.isArray(content)) return content
  return content.map((item) => {
    if (
      isObject(item) &&
      item.type === 'text' &&
      typeof item.text === 'string'
    ) {
      return item.text
    }
    return item
  })
}

function stringifyToolContent(content: unknown): string {
  const normalized = normalizeToolContent(content)
  if (typeof normalized === 'string') return normalized
  try {
    return JSON.stringify(normalized)
  } catch {
    return String(normalized)
  }
}
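The screenshot-gating rule in `shouldCaptureScreenshotForTool` above can be restated as a minimal sketch: capture only after BrowserOS MCP tools, and never after the screenshot tool itself. The `shouldCapture` name below is hypothetical shorthand for the same predicate:

```typescript
// Hypothetical restatement of the gating predicate from stream-parser.ts:
// capture only for BrowserOS MCP tools, but never for take_screenshot itself
// (a screenshot after take_screenshot would be redundant).
function shouldCapture(toolName: string): boolean {
  if (!toolName.startsWith('mcp__browseros__')) return false
  return !toolName.endsWith('__take_screenshot')
}
```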
@@ -1,3 +1,4 @@
+import { ClaudeCodeEvaluator } from './claude-code'
 import { OrchestratorExecutorEvaluator } from './orchestrator-executor'
 import { SingleAgentEvaluator } from './single-agent'
 import type { AgentContext, AgentEvaluator } from './types'
@@ -8,6 +9,8 @@ export function createAgent(context: AgentContext): AgentEvaluator {
       return new SingleAgentEvaluator(context)
     case 'orchestrator-executor':
       return new OrchestratorExecutorEvaluator(context)
+    case 'claude-code':
+      return new ClaudeCodeEvaluator(context)
   }
 }
@@ -105,7 +105,10 @@ export class TrajectorySaver {
       errors: [],
       warnings: [],
       agent_config: {
-        type: agentConfig.type as 'single' | 'orchestrator-executor',
+        type: agentConfig.type as
+          | 'single'
+          | 'orchestrator-executor'
+          | 'claude-code',
         model: agentConfig.model,
       },
       grader_results: {},
@@ -82,6 +82,16 @@ function suiteToEvalConfig(
     })
   }

+  if (suite.agent.type === 'claude-code') {
+    return EvalConfigSchema.parse({
+      ...base,
+      agent: {
+        type: 'claude-code',
+        ...(variant.agent.model && { model: variant.agent.model }),
+      },
+    })
+  }
+
   const executorBackend = suite.agent.executorBackend ?? 'tool-loop'
   const executor =
     executorBackend === 'clado'
@@ -135,7 +145,10 @@ export async function resolveSuiteCommand(
   const loaded = await loadSuite(options.suitePath)
   const variant = resolveVariant({
     variantId: options.variantId,
-    provider: options.provider,
+    provider:
+      loaded.suite.agent.type === 'claude-code'
+        ? 'claude-code'
+        : options.provider,
     model: options.model,
     apiKey: options.apiKey,
     baseUrl: options.baseUrl,
@@ -2,6 +2,7 @@ export interface PythonEvaluatorOptions {
   scriptPath: string
   input: unknown
   timeoutMs: number
+  pythonPath?: string
 }

 export interface PythonEvaluatorResult<T> {
@@ -15,7 +16,9 @@ export interface PythonEvaluatorResult<T> {
 export async function runPythonJsonEvaluator<T>(
   options: PythonEvaluatorOptions,
 ): Promise<PythonEvaluatorResult<T>> {
-  const proc = Bun.spawn(['python3', options.scriptPath], {
+  const pythonPath =
+    options.pythonPath || process.env.BROWSEROS_EVAL_PYTHON || 'python3'
+  const proc = Bun.spawn([pythonPath, options.scriptPath], {
     stdin: 'pipe',
     stdout: 'pipe',
     stderr: 'pipe',
@@ -33,6 +33,13 @@ function variantSource(config: EvalConfig): {
   baseUrl?: string
   supportsImages?: boolean
 } {
+  if (config.agent.type === 'claude-code') {
+    return {
+      provider: 'claude-code',
+      model: config.agent.model ?? 'default',
+    }
+  }
+
   const agent =
     config.agent.type === 'single' ? config.agent : config.agent.orchestrator
   if (!agent.model) {
@@ -76,10 +83,7 @@ export async function adaptEvalConfigFile(
     suite: {
       id,
       dataset: evalConfig.dataset,
-      agent:
-        evalConfig.agent.type === 'single'
-          ? { type: 'tool-loop' }
-          : { type: 'orchestrated', executorBackend: backend ?? 'tool-loop' },
+      agent: suiteAgent(evalConfig, backend),
       graders: evalConfig.graders ?? [],
       workers: evalConfig.num_workers,
       restartBrowserPerTask: evalConfig.restart_server_per_task,
@@ -99,3 +103,17 @@ export async function adaptEvalConfigFile(
     }),
   }
 }
+
+function suiteAgent(
+  config: EvalConfig,
+  backend: ReturnType<typeof executorBackend>,
+): EvalSuite['agent'] {
+  switch (config.agent.type) {
+    case 'single':
+      return { type: 'tool-loop' }
+    case 'orchestrator-executor':
+      return { type: 'orchestrated', executorBackend: backend ?? 'tool-loop' }
+    case 'claude-code':
+      return { type: 'claude-code' }
+  }
+}
@@ -57,10 +57,30 @@ export function resolveVariant(
   options: ResolveVariantOptions = {},
 ): EvalVariant {
   const env = options.env ?? process.env
-  const id = options.variantId ?? env.EVAL_VARIANT ?? 'default'
   const provider =
     options.provider ?? env.EVAL_AGENT_PROVIDER ?? 'openai-compatible'
   const model = options.model ?? env.EVAL_AGENT_MODEL
+
+  if (provider === 'claude-code') {
+    const id = options.variantId ?? env.EVAL_VARIANT ?? 'claude-code'
+    return {
+      id,
+      agent: {
+        provider,
+        model: model ?? '',
+      },
+      publicMetadata: {
+        id,
+        agent: {
+          provider,
+          model: model || 'default',
+          apiKeyConfigured: false,
+        },
+      },
+    }
+  }
+
+  const id = options.variantId ?? env.EVAL_VARIANT ?? 'default'
   const apiKey = options.apiKey ?? env.EVAL_AGENT_API_KEY
   const apiKeyEnv =
     options.apiKeyEnv ?? (options.apiKey ? undefined : 'EVAL_AGENT_API_KEY')
@@ -8,6 +8,7 @@ export const SuiteAgentSchema = z
       'single',
       'orchestrated',
       'orchestrator-executor',
+      'claude-code',
     ]),
     executorBackend: z.enum(['tool-loop', 'clado']).optional(),
   })
@@ -19,9 +19,19 @@ export const OrchestratorExecutorConfigSchema = z.object({
   }),
 })

+export const ClaudeCodeAgentConfigSchema = z
+  .object({
+    type: z.literal('claude-code'),
+    model: z.string().min(1).optional(),
+    claudePath: z.string().min(1).default('claude'),
+    extraArgs: z.array(z.string()).default([]),
+  })
+  .strict()
+
 export const AgentConfigSchema = z.discriminatedUnion('type', [
   SingleAgentConfigSchema,
   OrchestratorExecutorConfigSchema,
+  ClaudeCodeAgentConfigSchema,
 ])

 export const EvalConfigSchema = z.object({
@@ -53,5 +63,6 @@ export type SingleAgentConfig = z.infer<typeof SingleAgentConfigSchema>
 export type OrchestratorExecutorConfig = z.infer<
   typeof OrchestratorExecutorConfigSchema
 >
+export type ClaudeCodeAgentConfig = z.infer<typeof ClaudeCodeAgentConfigSchema>
 export type AgentConfig = z.infer<typeof AgentConfigSchema>
 export type EvalConfig = z.infer<typeof EvalConfigSchema>
@@ -2,6 +2,8 @@
 export {
   type AgentConfig,
   AgentConfigSchema,
+  type ClaudeCodeAgentConfig,
+  ClaudeCodeAgentConfigSchema,
   type EvalConfig,
   EvalConfigSchema,
   type OrchestratorExecutorConfig,
@@ -13,7 +13,7 @@ export const GraderResultSchema = z.object({
 // Agent config in metadata
 const AgentConfigMetaSchema = z
   .object({
-    type: z.enum(['single', 'orchestrator-executor']),
+    type: z.enum(['single', 'orchestrator-executor', 'claude-code']),
     model: z.string().optional(),
   })
   .passthrough()
@@ -59,7 +59,7 @@ export async function validateConfig(
     ) {
       envVarsToCheck.push(config.agent.apiKey)
     }
-  } else {
+  } else if (config.agent.type === 'orchestrator-executor') {
     const { orchestrator, executor } = config.agent
     if (orchestrator.apiKey && isEnvVarName(orchestrator.apiKey)) {
       envVarsToCheck.push(orchestrator.apiKey)
268 packages/browseros-agent/apps/eval/tests/agents/claude-code-evaluator.test.ts vendored Normal file
@@ -0,0 +1,268 @@
import { describe, expect, it } from 'bun:test'
import { mkdtemp, readFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { createAgent } from '../../src/agents'
import { ClaudeCodeEvaluator } from '../../src/agents/claude-code'
import { CaptureContext } from '../../src/capture/context'
import {
  AgentConfigSchema,
  type EvalConfig,
  EvalConfigSchema,
  type Task,
  TaskMetadataSchema,
} from '../../src/types'

function config(): EvalConfig {
  return {
    agent: {
      type: 'claude-code',
      model: 'opus',
      claudePath: 'claude',
      extraArgs: [],
    },
    dataset: 'data/test.jsonl',
    num_workers: 1,
    restart_server_per_task: false,
    browseros: {
      server_url: 'http://127.0.0.1:9110',
      base_cdp_port: 9010,
      base_server_port: 9110,
      base_extension_port: 9310,
      load_extensions: false,
      headless: false,
    },
    graders: [],
  }
}

const task: Task = {
  query_id: 'task-1',
  dataset: 'test',
  query: 'Find the title',
  graders: [],
  metadata: {
    original_task_id: 'task-1',
  },
}

describe('ClaudeCodeEvaluator', () => {
  it('accepts claude-code config defaults without permission mode', () => {
    const agent = AgentConfigSchema.parse({ type: 'claude-code' })

    expect(agent).toEqual({
      type: 'claude-code',
      claudePath: 'claude',
      extraArgs: [],
    })
  })

  it('accepts claude-code as a runnable eval agent', () => {
    const parsed = EvalConfigSchema.parse({
      agent: {
        type: 'claude-code',
        model: 'opus',
      },
      dataset: 'data/test-set.jsonl',
      browseros: {
        server_url: 'http://127.0.0.1:9110',
      },
    })

    expect(parsed.agent.type).toBe('claude-code')
    expect(parsed.agent.model).toBe('opus')
  })

  it('rejects unsupported claude-code settings instead of silently ignoring them', () => {
    expect(
      AgentConfigSchema.safeParse({
        type: 'claude-code',
        permissionMode: 'bypassPermissions',
      }).success,
    ).toBe(false)
    expect(
      AgentConfigSchema.safeParse({
        type: 'claude-code',
        maxTurns: 3,
      }).success,
    ).toBe(false)
  })

  it('allows claude-code in task metadata', () => {
    const metadata = TaskMetadataSchema.parse({
      query_id: 'task-1',
      dataset: 'test',
      query: 'Do the thing',
      started_at: new Date().toISOString(),
      completed_at: new Date().toISOString(),
      total_duration_ms: 100,
      total_steps: 1,
      termination_reason: 'completed',
      final_answer: 'done',
      errors: [],
      warnings: [],
      agent_config: {
        type: 'claude-code',
        model: 'opus',
      },
      grader_results: {},
    })

    expect(metadata.agent_config.type).toBe('claude-code')
  })

  it('is created by the agent factory', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })

    const agent = createAgent({
      config: config(),
      task,
      workerIndex: 0,
      initialPageId: 1,
      outputDir,
      taskOutputDir,
      capture,
    })

    expect(agent).toBeInstanceOf(ClaudeCodeEvaluator)
  })

  it('runs claude code, logs messages, writes MCP config, and saves metadata', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })
    const calls: Array<{ executable: string; args: string[]; cwd: string }> = []
    const evaluator = new ClaudeCodeEvaluator(
      {
        config: config(),
        task,
        workerIndex: 0,
        initialPageId: 1,
        outputDir,
        taskOutputDir,
        capture,
      },
      {
        processRunner: {
          async run(options) {
            calls.push(options)
            await options.onStdoutLine(
              JSON.stringify({
                type: 'assistant',
                message: {
                  content: [{ type: 'text', text: 'The title is Example' }],
                },
              }),
            )
            await options.onStdoutLine(
              JSON.stringify({
                type: 'result',
                subtype: 'success',
                result: 'The title is Example',
              }),
            )
            return { exitCode: 0, stderr: '' }
          },
        },
      },
    )

    const result = await evaluator.execute()

    expect(result.finalAnswer).toBe('The title is Example')
    expect(result.metadata.agent_config).toMatchObject({
      type: 'claude-code',
      model: 'opus',
    })
    expect(result.messages.some((msg) => msg.type === 'user')).toBe(true)
    expect(result.messages.some((msg) => msg.type === 'text-delta')).toBe(true)
    const mcpConfig = JSON.parse(
      await readFile(join(taskOutputDir, 'claude-code-mcp.json'), 'utf-8'),
    )
    expect(mcpConfig.mcpServers.browseros).toMatchObject({
      type: 'http',
      url: 'http://127.0.0.1:9110/mcp',
      headers: {
        'X-BrowserOS-Source': 'sdk-internal',
      },
    })
    expect(calls).toEqual([
      expect.objectContaining({
        executable: 'claude',
        cwd: taskOutputDir,
        args: [
          '-p',
          expect.stringContaining('Task: Find the title'),
          '--mcp-config',
          join(taskOutputDir, 'claude-code-mcp.json'),
          '--strict-mcp-config',
          '--output-format',
          'stream-json',
          '--verbose',
          '--model',
          'opus',
        ],
      }),
    ])
    expect(calls[0].args).not.toContain('--permission-mode')
  })

  it('records non-fatal stream processing errors as warnings', async () => {
    const outputDir = await mkdtemp(join(tmpdir(), 'claude-code-eval-'))
    const { capture, taskOutputDir } = await CaptureContext.create({
      serverUrl: 'http://127.0.0.1:9110',
      outputDir,
      taskId: task.query_id,
      initialPageId: 1,
    })
    const evaluator = new ClaudeCodeEvaluator(
      {
        config: config(),
        task,
        workerIndex: 0,
        initialPageId: 1,
        outputDir,
        taskOutputDir,
        capture,
      },
      {
        processRunner: {
          async run(options) {
            await options.onStdoutLine(
              JSON.stringify({
                type: 'result',
                subtype: 'success',
                result: 'done',
              }),
            )
            return {
              exitCode: 0,
              stderr: '',
              streamErrors: ['bad stream line'],
            }
          },
        },
      },
    )

    const result = await evaluator.execute()

    expect(result.finalAnswer).toBe('done')
    expect(result.metadata.warnings).toEqual([
      expect.objectContaining({
        source: 'message_logging',
        message: 'Claude Code stream event processing failed: bad stream line',
      }),
    ])
  })
})
78 packages/browseros-agent/apps/eval/tests/agents/claude-code-process-runner.test.ts vendored Normal file
@@ -0,0 +1,78 @@
import { describe, expect, it } from 'bun:test'
import { chmod, mkdtemp, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { createClaudeCodeProcessRunner } from '../../src/agents/claude-code/process-runner'

async function writeStdoutScript(): Promise<string> {
  const dir = await mkdtemp(join(tmpdir(), 'claude-code-runner-'))
  const script = join(dir, 'stdout-lines')
  await writeFile(script, '#!/bin/sh\nprintf "first\\nbad\\nlast\\n"\n')
  await chmod(script, 0o755)
  return script
}

describe('createClaudeCodeProcessRunner', () => {
  it('passes executable and args to the spawn dependency', async () => {
    const calls: unknown[] = []
    const runner = createClaudeCodeProcessRunner({
      spawn: async (cmd, options) => {
        calls.push({ cmd, options })
        await options.onStdoutLine('{"type":"result","result":"done"}')
        return { exitCode: 0, stderr: '' }
      },
    })

    const result = await runner.run({
      executable: 'claude',
      args: ['-p', 'hello'],
      cwd: '/tmp',
      signal: new AbortController().signal,
      onStdoutLine: async () => {},
    })

    expect(result.exitCode).toBe(0)
    expect(calls).toEqual([
      {
        cmd: ['claude', '-p', 'hello'],
        options: expect.objectContaining({ cwd: '/tmp' }),
      },
    ])
  })

  it('returns stderr and non-zero exit codes', async () => {
    const runner = createClaudeCodeProcessRunner({
      spawn: async () => ({ exitCode: 2, stderr: 'bad auth' }),
    })

    const result = await runner.run({
      executable: 'claude',
      args: [],
      cwd: '/tmp',
      signal: new AbortController().signal,
      onStdoutLine: async () => {},
    })

    expect(result).toEqual({ exitCode: 2, stderr: 'bad auth' })
  })

  it('continues reading stdout after a line handler error', async () => {
    const script = await writeStdoutScript()
    const lines: string[] = []
    const runner = createClaudeCodeProcessRunner()

    const result = await runner.run({
      executable: script,
      args: [],
      cwd: '/tmp',
      onStdoutLine: async (line) => {
        lines.push(line)
        if (line === 'bad') throw new Error('bad line')
      },
    })

    expect(result.exitCode).toBe(0)
    expect(result.streamErrors).toEqual(['bad line'])
    expect(lines).toEqual(['first', 'bad', 'last'])
  })
})
102 packages/browseros-agent/apps/eval/tests/agents/claude-code-stream-parser.test.ts vendored Normal file
@@ -0,0 +1,102 @@
import { describe, expect, it } from 'bun:test'
import {
  ClaudeCodeStreamParser,
  shouldCaptureScreenshotForTool,
} from '../../src/agents/claude-code/stream-parser'

describe('ClaudeCodeStreamParser', () => {
  it('maps assistant text and MCP tool use into eval stream events', () => {
    const parser = new ClaudeCodeStreamParser()
    const events = parser.pushLine(
      JSON.stringify({
        type: 'assistant',
        message: {
          content: [
            { type: 'text', text: 'I will navigate.' },
            {
              type: 'tool_use',
              id: 'toolu_1',
              name: 'mcp__browseros__navigate_page',
              input: { page: 2, url: 'https://example.com' },
            },
          ],
        },
      }),
    )

    expect(events).toEqual([
      { type: 'text-start', id: expect.any(String) },
      {
        type: 'text-delta',
        id: expect.any(String),
        delta: 'I will navigate.',
      },
      { type: 'text-end', id: expect.any(String) },
      {
        type: 'tool-input-available',
        toolCallId: 'toolu_1',
        toolName: 'mcp__browseros__navigate_page',
        input: { page: 2, url: 'https://example.com' },
      },
    ])
    expect(parser.getLastText()).toBe('I will navigate.')
    expect(parser.getToolCallCount()).toBe(1)
  })

  it('maps Claude Code tool results into eval output events', () => {
    const parser = new ClaudeCodeStreamParser()
    const events = parser.pushLine(
      JSON.stringify({
        type: 'user',
        message: {
          content: [
            {
              type: 'tool_result',
              tool_use_id: 'toolu_1',
              content: 'Navigated successfully',
            },
          ],
        },
      }),
    )

    expect(events).toEqual([
      {
        type: 'tool-output-available',
        toolCallId: 'toolu_1',
        output: 'Navigated successfully',
      },
    ])
  })

  it('uses result messages as the authoritative final text', () => {
    const parser = new ClaudeCodeStreamParser()
    parser.pushLine(
      JSON.stringify({
        type: 'assistant',
        message: {
          content: [{ type: 'text', text: 'I will complete the task.' }],
        },
      }),
    )
    parser.pushLine(
      JSON.stringify({
        type: 'result',
        subtype: 'success',
        result: 'Final answer',
      }),
    )

    expect(parser.getLastText()).toBe('Final answer')
  })

  it('identifies BrowserOS MCP tools that should trigger screenshots', () => {
    expect(
      shouldCaptureScreenshotForTool('mcp__browseros__navigate_page'),
    ).toBe(true)
    expect(
      shouldCaptureScreenshotForTool('mcp__browseros__take_screenshot'),
    ).toBe(false)
    expect(shouldCaptureScreenshotForTool('Read')).toBe(false)
  })
})
@@ -7,8 +7,11 @@ import {
   runSuiteCommand,
 } from '../../src/cli/commands/suite'
 import type { RunEvalOptions } from '../../src/runner/types'
+import type { EvalSuite } from '../../src/suites/schema'

-async function writeTempSuite(): Promise<{ dir: string; suitePath: string }> {
+async function writeTempSuite(
+  overrides: Partial<EvalSuite> = {},
+): Promise<{ dir: string; suitePath: string }> {
   const dir = await mkdtemp(join(tmpdir(), 'eval-suite-cli-'))
   const suitePath = join(dir, 'agisdk-daily-10.json')
   await writeFile(
@@ -23,8 +26,9 @@ async function writeTempSuite(): Promise<{ dir: string; suitePath: string }> {
       restartBrowserPerTask: true,
       browseros: {
         server_url: 'http://127.0.0.1:9110',
-        headless: true,
+        headless: false,
       },
+      ...overrides,
     },
     null,
     2,
@@ -43,9 +47,7 @@ describe('suite command', () => {

     expect(resolved.kind).toBe('config')
     expect(resolved.suite.id).toBe('browseros-agent-weekly')
-    expect(resolved.evalConfig.dataset).toBe(
-      '../../data/webbench-2of4-50.jsonl',
-    )
+    expect(resolved.evalConfig.dataset).toBe('../../data/agisdk-real.jsonl')
     expect(resolved.variant.publicMetadata.agent.apiKeyConfigured).toBe(true)
   })
@@ -75,6 +77,25 @@ describe('suite command', () => {
     expect(resolved.evalConfig.num_workers).toBe(2)
   })

+  it('resolves claude-code suites without provider API credentials', async () => {
+    const { dir, suitePath } = await writeTempSuite({
+      agent: { type: 'claude-code' },
+    })
+
+    const resolved = await resolveSuiteCommand({
+      suitePath,
+      model: 'opus',
+      env: {},
+    })
+
+    expect(resolved.kind).toBe('suite')
+    expect(resolved.evalConfig.agent).toMatchObject({
+      type: 'claude-code',
+      model: 'opus',
+    })
+    expect(resolved.datasetPath).toBe(join(dir, 'tasks.jsonl'))
+  })
+
   it('runs config and suite commands through the runner dependency', async () => {
     const calls: RunEvalOptions[] = []
     await runSuiteCommand(
@@ -1,5 +1,5 @@
 import { describe, expect, it } from 'bun:test'
-import { mkdtemp, writeFile } from 'node:fs/promises'
+import { chmod, mkdtemp, writeFile } from 'node:fs/promises'
 import { tmpdir } from 'node:os'
 import { join } from 'node:path'
 import { runPythonJsonEvaluator } from '../../src/grading/python-evaluator'
@@ -11,6 +11,17 @@ async function writeScript(source: string): Promise<string> {
   return script
 }

+async function writePythonWrapper(): Promise<string> {
+  const dir = await mkdtemp(join(tmpdir(), 'eval-python-wrapper-'))
+  const wrapper = join(dir, 'python-wrapper')
+  await writeFile(
+    wrapper,
+    '#!/bin/sh\necho custom-python >&2\nexec python3 "$@"\n',
+  )
+  await chmod(wrapper, 0o755)
+  return wrapper
+}
+
 describe('runPythonJsonEvaluator', () => {
   it('sends JSON on stdin, captures stderr, and parses stdout JSON', async () => {
     const script = await writeScript(`
@@ -49,6 +60,34 @@ sys.exit(3)
     ).rejects.toThrow('bad verifier')
   })

+  it('uses BROWSEROS_EVAL_PYTHON when provided', async () => {
+    const script = await writeScript(`
+import json, sys
+data = json.loads(sys.stdin.read())
+print(json.dumps({"ok": data["ok"]}))
+`)
+    const wrapper = await writePythonWrapper()
+    const previousPythonPath = process.env.BROWSEROS_EVAL_PYTHON
+    process.env.BROWSEROS_EVAL_PYTHON = wrapper
+
+    try {
+      const result = await runPythonJsonEvaluator<{ ok: boolean }>({
+        scriptPath: script,
+        input: { ok: true },
+        timeoutMs: 5_000,
+      })
+
+      expect(result.output).toEqual({ ok: true })
+      expect(result.stderr).toContain('custom-python')
+    } finally {
+      if (previousPythonPath === undefined) {
+        delete process.env.BROWSEROS_EVAL_PYTHON
+      } else {
+        process.env.BROWSEROS_EVAL_PYTHON = previousPythonPath
+      }
+    }
+  })
+
   it('enforces timeouts', async () => {
     const script = await writeScript(`
import time
@@ -1,15 +1,18 @@
 import { describe, expect, it } from 'bun:test'
+import { mkdtemp, writeFile } from 'node:fs/promises'
+import { tmpdir } from 'node:os'
+import { join } from 'node:path'
 import { adaptEvalConfigFile } from '../../src/suites/config-adapter'

 describe('adaptEvalConfigFile', () => {
-  it('preserves browseros-agent-weekly config semantics', async () => {
+  it('preserves browseros-agent-weekly AGI SDK config semantics', async () => {
     const adapted = await adaptEvalConfigFile(
       'apps/eval/configs/legacy/browseros-agent-weekly.json',
     )

     expect(adapted.suite.id).toBe('browseros-agent-weekly')
-    expect(adapted.suite.dataset).toBe('../../data/webbench-2of4-50.jsonl')
-    expect(adapted.suite.graders).toEqual(['performance_grader'])
+    expect(adapted.suite.dataset).toBe('../../data/agisdk-real.jsonl')
+    expect(adapted.suite.graders).toEqual(['agisdk_state_diff'])
     expect(adapted.suite.workers).toBe(10)
     expect(adapted.suite.restartBrowserPerTask).toBe(true)
     expect(adapted.suite.timeoutMs).toBe(1_800_000)
@@ -34,4 +37,33 @@ describe('adaptEvalConfigFile', () => {
       'secret-openrouter-value',
     )
   })
+
+  it('adapts claude-code configs without provider credentials', async () => {
+    const dir = await mkdtemp(join(tmpdir(), 'claude-code-config-'))
+    const configPath = join(dir, 'claude-code-agisdk.json')
+    await writeFile(
+      configPath,
+      JSON.stringify({
+        agent: {
+          type: 'claude-code',
+          model: 'opus',
+        },
+        dataset: 'tasks.jsonl',
+        num_workers: 1,
+        restart_server_per_task: false,
+        browseros: {
+          server_url: 'http://127.0.0.1:9110',
+          headless: false,
+        },
+      }),
+    )
+
+    const adapted = await adaptEvalConfigFile(configPath, { env: {} })
+
+    expect(adapted.suite.agent).toEqual({ type: 'claude-code' })
+    expect(adapted.variant.agent).toMatchObject({
+      provider: 'claude-code',
+      model: 'opus',
+    })
+  })
 })
@@ -35,6 +35,16 @@ describe('EvalSuiteSchema', () => {
expect(parsed.success).toBe(false)
})

it('validates claude-code suites', () => {
const suite = EvalSuiteSchema.parse({
id: 'claude-code-agisdk',
dataset: 'data/agisdk-real.jsonl',
agent: { type: 'claude-code' },
})

expect(suite.agent.type).toBe('claude-code')
})

it('validates the daily AGISDK 10-task suite', async () => {
const loaded = await loadSuite(
'apps/eval/configs/suites/agisdk-daily-10.json',
@@ -89,4 +99,40 @@ describe('resolveVariant', () => {
}),
).toThrow('EVAL_AGENT_API_KEY')
})

it('resolves claude-code variants without model or API key requirements', () => {
const variant = resolveVariant({
variantId: 'claude-opus',
provider: 'claude-code',
model: 'opus',
env: {},
})

expect(variant.id).toBe('claude-opus')
expect(variant.agent).toEqual({
provider: 'claude-code',
model: 'opus',
})
expect(variant.publicMetadata.agent).toEqual({
provider: 'claude-code',
model: 'opus',
apiKeyConfigured: false,
})

const defaultVariant = resolveVariant({
provider: 'claude-code',
env: {},
})

expect(defaultVariant.id).toBe('claude-code')
expect(defaultVariant.agent).toEqual({
provider: 'claude-code',
model: '',
})
expect(defaultVariant.publicMetadata.agent).toEqual({
provider: 'claude-code',
model: 'default',
apiKeyConfigured: false,
})
})
})
@@ -108,6 +108,7 @@
"klavis": "^2.15.0",
"pino": "^9.6.0",
"posthog-node": "^4.17.0",
"proper-lockfile": "^4.1.2",
"puppeteer-core": "24.23.0",
"ws": "^8.18.0",
"zod": "^3.24.2",
@@ -117,6 +118,7 @@
"@types/bun": "1.3.5",
"@types/debug": "^4.1.12",
"@types/node": "^24.3.3",
"@types/proper-lockfile": "^4.1.4",
"@types/sinon": "^21.0.0",
"@types/ws": "^8.5.13",
"async-mutex": "^0.5.0",
@@ -15,18 +15,26 @@ import type {
ContainerCommandResult,
ContainerSpec,
LogFn,
WaitForContainerNameReleaseOptions,
} from '../../../lib/container'
import { isContainerNameInUse } from '../../../lib/container'
import { logger } from '../../../lib/logger'
import {
GUEST_VM_STATE,
hostPathToGuest,
type VmRuntime,
} from '../../../lib/vm'
import { ContainerNameInUseError } from '../../../lib/vm/errors'

const GATEWAY_CONTAINER_HOME = '/home/node'
const GATEWAY_STATE_DIR = `${GATEWAY_CONTAINER_HOME}/.openclaw`
const GUEST_OPENCLAW_HOME = `${GUEST_VM_STATE}/openclaw`
const GATEWAY_NPM_PREFIX = `${GATEWAY_CONTAINER_HOME}/.npm-global`
const CREATE_CONTAINER_MAX_ATTEMPTS = 3
const OPENCLAW_NAME_RELEASE_WAIT: WaitForContainerNameReleaseOptions = {
timeoutMs: 10_000,
intervalMs: 100,
}
// Prepend user-installed bin so tools like `claude` / `gemini` CLI that
// are installed via npm into the mounted home are discoverable by
// OpenClaw's child-process spawns (no login shell is involved).
@@ -121,10 +129,9 @@ export class ContainerRuntime {
input: GatewayContainerSpec,
onLog?: LogFn,
): Promise<void> {
await this.removeGatewayContainer(onLog)
const image = await this.ensureGatewayImageLoaded(onLog)
const container = await this.buildGatewayContainerSpec(input, image)
await this.shell.createContainer(container, onLog)
await this.createContainerWithNameReconcile(container, onLog)
await this.shell.startContainer(container.name)
}

@@ -208,10 +215,11 @@ export class ContainerRuntime {
onLog?: LogFn,
): Promise<number> {
const setupContainerName = `${OPENCLAW_GATEWAY_CONTAINER_NAME}-setup`
await this.shell.removeContainer(setupContainerName, { force: true }, onLog)
await this.removeContainerAndWait(setupContainerName, onLog)
const image = await this.ensureGatewayImageLoaded(onLog)
const setupArgs = command[0] === 'node' ? command.slice(1) : command
const createResult = await this.shell.runCommand(
const createResult = await this.runSetupCreateWithNameReconcile(
setupContainerName,
[
'create',
'--name',
@@ -252,10 +260,74 @@ export class ContainerRuntime {
}

private async removeGatewayContainer(onLog?: LogFn): Promise<void> {
await this.shell.removeContainer(
OPENCLAW_GATEWAY_CONTAINER_NAME,
{ force: true },
onLog,
await this.removeContainerAndWait(OPENCLAW_GATEWAY_CONTAINER_NAME, onLog)
}

/** Create the fixed-name gateway after reconciling stale nerdctl name ownership. */
private async createContainerWithNameReconcile(
container: ContainerSpec,
onLog?: LogFn,
): Promise<void> {
let attempt = 1
while (true) {
await this.removeContainerAndWait(container.name, onLog)
try {
await this.shell.createContainer(container, onLog)
return
} catch (err) {
if (
!(err instanceof ContainerNameInUseError) ||
attempt >= CREATE_CONTAINER_MAX_ATTEMPTS
) {
throw err
}
logger.warn('OpenClaw container name still in use; retrying create', {
containerName: container.name,
attempt,
maxAttempts: CREATE_CONTAINER_MAX_ATTEMPTS,
})
attempt++
}
}
}

private async runSetupCreateWithNameReconcile(
setupContainerName: string,
createArgs: string[],
onLog?: LogFn,
): Promise<ContainerCommandResult> {
let attempt = 1
while (true) {
const result = await this.shell.runCommand(createArgs, onLog)
if (
result.exitCode === 0 ||
!isContainerNameInUse(result.stderr) ||
attempt >= CREATE_CONTAINER_MAX_ATTEMPTS
) {
return result
}

logger.warn(
'OpenClaw setup container name still in use; retrying create',
{
containerName: setupContainerName,
attempt,
maxAttempts: CREATE_CONTAINER_MAX_ATTEMPTS,
},
)
await this.removeContainerAndWait(setupContainerName, onLog)
attempt++
}
}

private async removeContainerAndWait(
containerName: string,
onLog?: LogFn,
): Promise<void> {
await this.shell.removeContainer(containerName, { force: true }, onLog)
await this.shell.waitForContainerNameRelease(
containerName,
OPENCLAW_NAME_RELEASE_WAIT,
)
}
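The reconcile-then-retry shape above can be isolated from the container plumbing. The following is a hypothetical, synchronous sketch of that control flow (the real methods are async and call `nerdctl` through a shell); `NameInUseError`, `create`, and `reconcile` here are stand-ins for `ContainerNameInUseError`, `shell.createContainer`, and `removeContainerAndWait`:

```typescript
// Minimal sketch: retry creation only for the "name already in use" failure,
// reconciling (remove + wait for name release) before every attempt, bounded
// by a small attempt cap so a genuinely stuck name still surfaces an error.
class NameInUseError extends Error {}

function createWithNameReconcile(
  create: () => void,     // stands in for shell.createContainer(spec)
  reconcile: () => void,  // stands in for removeContainerAndWait(name)
  maxAttempts = 3,
): number {
  let attempt = 1
  while (true) {
    reconcile()
    try {
      create()
      return attempt // which attempt succeeded
    } catch (err) {
      // Any other error, or exhausting the cap, propagates unchanged.
      if (!(err instanceof NameInUseError) || attempt >= maxAttempts) throw err
      attempt++
    }
  }
}
```

The key design choice mirrored from the diff is that only the name-collision error is retryable; every other create failure still fails fast.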
@@ -10,6 +10,7 @@

import { existsSync } from 'node:fs'
import { mkdir, readFile, writeFile } from 'node:fs/promises'
import { join } from 'node:path'
import {
OPENCLAW_CONTAINER_HOME,
OPENCLAW_GATEWAY_CONTAINER_PORT,
@@ -18,6 +19,7 @@ import {
import { DEFAULT_PORTS } from '@browseros/shared/constants/ports'
import { getOpenClawDir } from '../../../lib/browseros-dir'
import { logger } from '../../../lib/logger'
import { withProcessLock } from '../../../lib/process-lock'
import {
type AgentLiveStatus,
type AgentSessionState,
@@ -1012,10 +1014,16 @@ export class OpenClawService {
if (persistedPort !== null) {
this.setPort(persistedPort)
}
if (await this.isGatewayAvailable(this.hostPort)) {
const currentPortReady = await this.isGatewayPortReady(this.hostPort)
if (
currentPortReady &&
(await this.isGatewayAuthenticated(this.hostPort))
) {
return
}
const hostPort = await allocateGatewayPort(this.openclawDir)
const hostPort = await allocateGatewayPort(this.openclawDir, {
excludePort: currentPortReady ? this.hostPort : undefined,
})
if (hostPort !== this.hostPort) {
logProgress?.(`Allocated OpenClaw gateway host port ${hostPort}`)
logger.info('Allocated OpenClaw gateway host port', { hostPort })
@@ -1025,7 +1033,10 @@ export class OpenClawService {

private async isGatewayAvailable(hostPort: number): Promise<boolean> {
if (!(await this.isGatewayPortReady(hostPort))) return false
return this.isGatewayAuthenticated(hostPort)
}

private async isGatewayAuthenticated(hostPort: number): Promise<boolean> {
if (!this.tokenLoaded) {
logger.debug(
'OpenClaw gateway port is ready before auth token is loaded',
@@ -1512,8 +1523,14 @@ export class OpenClawService {
})
await previous.catch(() => undefined)
try {
logger.debug('OpenClaw lifecycle operation started', { operation })
return await fn()
return await withProcessLock(
'openclaw-lifecycle',
{ lockDir: join(this.openclawDir, '.locks') },
async () => {
logger.debug('OpenClaw lifecycle operation started', { operation })
return await fn()
},
)
} finally {
release()
}
@@ -16,6 +16,7 @@ import { OPENCLAW_GATEWAY_CONTAINER_PORT } from '@browseros/shared/constants/ope
import { getOpenClawStateDir } from './openclaw-env'

const RUNTIME_STATE_FILE = 'runtime-state.json'
const MAX_TCP_PORT = 65_535

interface RuntimeState {
gatewayPort: number
@@ -26,7 +27,7 @@ function readForcedGatewayPort(): number | null {
if (!raw) return null

const parsed = Number.parseInt(raw, 10)
if (!Number.isInteger(parsed) || parsed <= 0 || parsed > 65535) {
if (!Number.isInteger(parsed) || parsed <= 0 || parsed > MAX_TCP_PORT) {
return null
}
return parsed
@@ -49,7 +50,7 @@ export async function readPersistedGatewayPort(
typeof parsed.gatewayPort === 'number' &&
Number.isInteger(parsed.gatewayPort) &&
parsed.gatewayPort > 0 &&
parsed.gatewayPort <= 65535
parsed.gatewayPort <= MAX_TCP_PORT
) {
return parsed.gatewayPort
}
@@ -82,14 +83,26 @@ function isPortAvailable(port: number): Promise<boolean> {
})
}

async function findAvailablePort(startPort: number): Promise<number> {
async function findAvailablePort(
startPort: number,
excludePort?: number,
): Promise<number> {
let port = startPort
while (!(await isPortAvailable(port))) {
while (port === excludePort || !(await isPortAvailable(port))) {
port++
if (port > MAX_TCP_PORT) {
throw new Error(
`No available OpenClaw gateway port found from ${startPort}`,
)
}
}
return port
}

export interface AllocateGatewayPortOptions {
excludePort?: number
}

/**
* Pick a host port for the gateway container and persist it. Prefers the
* previously persisted port when it's still bindable; otherwise scans
@@ -97,6 +110,7 @@ async function findAvailablePort(startPort: number): Promise<number> {
*/
export async function allocateGatewayPort(
openclawDir: string,
opts: AllocateGatewayPortOptions = {},
): Promise<number> {
const forcedPort = readForcedGatewayPort()
if (forcedPort !== null) {
@@ -105,10 +119,17 @@ export async function allocateGatewayPort(
}

const persisted = await readPersistedGatewayPort(openclawDir)
if (persisted !== null && (await isPortAvailable(persisted))) {
if (
persisted !== null &&
persisted !== opts.excludePort &&
(await isPortAvailable(persisted))
) {
return persisted
}
const port = await findAvailablePort(OPENCLAW_GATEWAY_CONTAINER_PORT)
const port = await findAvailablePort(
OPENCLAW_GATEWAY_CONTAINER_PORT,
opts.excludePort,
)
await writePersistedGatewayPort(openclawDir, port)
return port
}
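The port-scan change above is easiest to see with the bind probe factored out. A hypothetical sketch, with `isAvailable` standing in for the real async `isPortAvailable` socket probe so the scan logic is pure:

```typescript
const MAX_TCP_PORT = 65_535

// Scan upward from startPort, skipping excludePort (the port the current,
// possibly-wedged gateway still holds) and any port the probe rejects.
function findAvailablePort(
  startPort: number,
  isAvailable: (port: number) => boolean, // stand-in for the bind probe
  excludePort?: number,
): number {
  let port = startPort
  while (port === excludePort || !isAvailable(port)) {
    port++
    if (port > MAX_TCP_PORT) {
      throw new Error(`No available gateway port found from ${startPort}`)
    }
  }
  return port
}
```

The exclusion matters because a ready-but-unauthenticated gateway still binds its old port, so the old port would pass the availability probe's inverse check yet be unusable; excluding it forces allocation of a genuinely fresh port.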
@@ -4,9 +4,20 @@
* SPDX-License-Identifier: AGPL-3.0-or-later
*/

import { ContainerCliError } from '../vm/errors'
import {
ContainerCliError,
ContainerNameInUseError,
ContainerNameReleaseTimeoutError,
} from '../vm/errors'
import { LimaCli } from '../vm/lima-cli'
import type { ContainerSpec, LogFn, MountSpec, PortMapping } from './types'
import type {
ContainerInfo,
ContainerSpec,
LogFn,
MountSpec,
PortMapping,
WaitForContainerNameReleaseOptions,
} from './types'

export function buildNerdctlCommand(args: string[]): string[] {
return ['nerdctl', ...args]
@@ -58,7 +69,18 @@ export class ContainerCli {
}

async createContainer(spec: ContainerSpec, onLog?: LogFn): Promise<void> {
await this.runRequired(buildCreateArgs(spec), onLog)
const args = buildCreateArgs(spec)
const result = await this.runCommand(args, onLog)
if (result.exitCode === 0) return
if (isContainerNameInUse(result.stderr)) {
throw new ContainerNameInUseError(
spec.name,
`nerdctl ${args.join(' ')}`,
result.exitCode,
result.stderr.trim(),
)
}
throw this.commandError(args, result)
}

async startContainer(name: string, onLog?: LogFn): Promise<void> {
@@ -84,6 +106,36 @@ export class ContainerCli {
throw this.commandError(args, result)
}

/** Inspect a named container without treating absence as a command failure. */
async inspectContainer(name: string): Promise<ContainerInfo | null> {
const args = ['container', 'inspect', '--format', '{{json .}}', name]
const result = await this.runCommand(args)
if (result.exitCode === 0) {
return parseContainerInfo(result.stdout, name)
}
if (isNoSuchContainer(result.stderr)) return null
throw this.commandError(args, result)
}

/** Wait for containerd/nerdctl to stop resolving a container name after rm. */
async waitForContainerNameRelease(
name: string,
opts: WaitForContainerNameReleaseOptions = {},
): Promise<void> {
const timeoutMs = opts.timeoutMs ?? 5_000
const intervalMs = opts.intervalMs ?? 100
const startedAt = Date.now()

while (Date.now() - startedAt <= timeoutMs) {
if (!(await this.inspectContainer(name))) return
const remainingMs = timeoutMs - (Date.now() - startedAt)
if (remainingMs <= 0) break
await Bun.sleep(Math.min(intervalMs, remainingMs))
}

throw new ContainerNameReleaseTimeoutError(name, timeoutMs)
}

async exec(name: string, cmd: string[], onLog?: LogFn): Promise<number> {
const result = await this.runCommand(['exec', name, ...cmd], onLog)
return result.exitCode
@@ -198,12 +250,65 @@ function mountArg(mount: MountSpec): string {
return `${mount.source}:${mount.target}${mount.readonly ? ':ro' : ''}`
}

function parseContainerInfo(
stdout: string,
fallbackName: string,
): ContainerInfo {
const line = stdout
.trim()
.split('\n')
.map((entry) => entry.trim())
.find(Boolean)
if (!line) {
throw new Error(`nerdctl container inspect returned empty output`)
}
const parsed = JSON.parse(line) as unknown
const container = Array.isArray(parsed) ? parsed[0] : parsed
const object = isRecord(container) ? container : {}
const config = isRecord(object.Config) ? object.Config : {}
const state = isRecord(object.State) ? object.State : {}
const name = stringValue(object.Name)?.replace(/^\/+/, '') ?? fallbackName
const status = stringValue(state.Status) ?? stringValue(object.Status)
const running =
typeof state.Running === 'boolean'
? state.Running
: status
? status.toLowerCase() === 'running'
: null

return {
id: stringValue(object.ID) ?? stringValue(object.Id),
name,
image: stringValue(config.Image) ?? stringValue(object.Image),
status,
running,
}
}

function isNoSuchContainer(stderr: string): boolean {
const lower = stderr.toLowerCase()
return lower.includes('no such container') || lower.includes('not found')
return (
lower.includes('no such container') || lower.includes('container not found')
)
}

export function isContainerNameInUse(stderr: string): boolean {
const lower = stderr.toLowerCase()
return (
(lower.includes('name-store error') && lower.includes('already used')) ||
lower.includes('name is already in use')
)
}

function linesToOutput(lines: string[]): string {
if (lines.length === 0) return ''
return `${lines.join('\n')}\n`
}

function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === 'object' && value !== null
}

function stringValue(value: unknown): string | null {
return typeof value === 'string' && value ? value : null
}

@@ -38,6 +38,19 @@ export interface ContainerSpec {
command?: string[]
}

export interface ContainerInfo {
id: string | null
name: string
image: string | null
status: string | null
running: boolean | null
}

export interface WaitForContainerNameReleaseOptions {
timeoutMs?: number
intervalMs?: number
}

export interface LogLine {
stream: 'stdout' | 'stderr'
line: string
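The two stderr matchers in this file are the linchpin of the whole fix, and the tightened `isNoSuchContainer` check is worth testing in isolation. This sketch reproduces both functions as they appear in the diff:

```typescript
// Classify nerdctl create failures: containerd's name store reports either
// `name-store error ... already used` or `name is already in use`.
function isContainerNameInUse(stderr: string): boolean {
  const lower = stderr.toLowerCase()
  return (
    (lower.includes('name-store error') && lower.includes('already used')) ||
    lower.includes('name is already in use')
  )
}

// Tightened matcher: the bare `not found` substring previously matched
// unrelated errors (e.g. image pull failures); require the full phrase.
function isNoSuchContainer(stderr: string): boolean {
  const lower = stderr.toLowerCase()
  return (
    lower.includes('no such container') || lower.includes('container not found')
  )
}
```

Requiring `container not found` rather than `not found` keeps image-resolution errors from being silently treated as "the container is already gone".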
packages/browseros-agent/apps/server/src/lib/process-lock.ts (new file, 130 lines)
@@ -0,0 +1,130 @@
/**
* @license
* Copyright 2025 BrowserOS
* SPDX-License-Identifier: AGPL-3.0-or-later
*/

import { mkdir } from 'node:fs/promises'
import { join } from 'node:path'
import lockfile from 'proper-lockfile'

const DEFAULT_STALE_MS = 60_000
const DEFAULT_UPDATE_MS = 15_000
const DEFAULT_TIMEOUT_MS = 120_000
const DEFAULT_RETRY_MIN_TIMEOUT_MS = 100
const DEFAULT_RETRY_MAX_TIMEOUT_MS = 1_000

export interface ProcessLockOptions {
lockDir: string
staleMs?: number
updateMs?: number
timeoutMs?: number
retryMinTimeoutMs?: number
retryMaxTimeoutMs?: number
randomize?: boolean
}

export class ProcessLockTimeoutError extends Error {
constructor(
public readonly lockName: string,
public readonly lockPath: string,
public readonly timeoutMs: number,
public override readonly cause?: unknown,
) {
super(
`Timed out acquiring process lock "${lockName}" at ${lockPath} after ${timeoutMs}ms`,
)
this.name = 'ProcessLockTimeoutError'
}
}

/** Run a critical section while holding a named lock shared across processes. */
export async function withProcessLock<T>(
name: string,
options: ProcessLockOptions,
fn: () => Promise<T>,
): Promise<T> {
const release = await acquireProcessLock(name, options)
try {
return await fn()
} finally {
await release()
}
}

export function resolveProcessLockPath(lockDir: string, name: string): string {
return join(lockDir, `${sanitizeLockName(name)}.lock`)
}

async function acquireProcessLock(
name: string,
options: ProcessLockOptions,
): Promise<() => Promise<void>> {
await mkdir(options.lockDir, { recursive: true })

const lockPath = resolveProcessLockPath(options.lockDir, name)
const timeoutMs = options.timeoutMs ?? DEFAULT_TIMEOUT_MS
const retryMinTimeoutMs =
options.retryMinTimeoutMs ?? DEFAULT_RETRY_MIN_TIMEOUT_MS
const retryMaxTimeoutMs =
options.retryMaxTimeoutMs ?? DEFAULT_RETRY_MAX_TIMEOUT_MS
const startedAt = Date.now()
let lastError: unknown

while (Date.now() - startedAt <= timeoutMs) {
try {
return await lockfile.lock(lockPath, {
lockfilePath: lockPath,
realpath: false,
stale: options.staleMs ?? DEFAULT_STALE_MS,
update: options.updateMs ?? DEFAULT_UPDATE_MS,
// The wrapper owns retry/backoff so acquisition respects timeoutMs.
retries: 0,
})
} catch (err) {
if (!isLockedError(err)) throw err
lastError = err
}

const remainingMs = timeoutMs - (Date.now() - startedAt)
if (remainingMs <= 0) break
await Bun.sleep(
Math.min(
remainingMs,
nextRetryDelay(retryMinTimeoutMs, retryMaxTimeoutMs, options.randomize),
),
)
}

throw new ProcessLockTimeoutError(name, lockPath, timeoutMs, lastError)
}

function sanitizeLockName(name: string): string {
const safeName = name
.trim()
.replace(/[^a-zA-Z0-9._-]+/g, '-')
.replace(/^[.-]+|[.-]+$/g, '')
if (!safeName) throw new Error('Process lock name must not be empty')
return safeName
}

function isLockedError(err: unknown): boolean {
return (
typeof err === 'object' &&
err !== null &&
'code' in err &&
err.code === 'ELOCKED'
)
}

function nextRetryDelay(
minTimeoutMs: number,
maxTimeoutMs: number,
randomize = true,
): number {
if (maxTimeoutMs <= minTimeoutMs) return minTimeoutMs
if (!randomize) return minTimeoutMs
return (
minTimeoutMs + Math.floor(Math.random() * (maxTimeoutMs - minTimeoutMs))
)
}
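Two small pure helpers from this new file carry most of its subtle behavior and can be exercised without the filesystem. This sketch reproduces `sanitizeLockName` and `nextRetryDelay` as they appear in the diff:

```typescript
// Lock names come from callers (e.g. 'openclaw-lifecycle') and become file
// names; collapse anything outside [a-zA-Z0-9._-] and trim edge punctuation.
function sanitizeLockName(name: string): string {
  const safeName = name
    .trim()
    .replace(/[^a-zA-Z0-9._-]+/g, '-')
    .replace(/^[.-]+|[.-]+$/g, '')
  if (!safeName) throw new Error('Process lock name must not be empty')
  return safeName
}

// Jittered backoff delay in [min, max); with randomize=false, or a degenerate
// range, the minimum is used, keeping the retry cadence deterministic in tests.
function nextRetryDelay(
  minTimeoutMs: number,
  maxTimeoutMs: number,
  randomize = true,
): number {
  if (maxTimeoutMs <= minTimeoutMs) return minTimeoutMs
  if (!randomize) return minTimeoutMs
  return (
    minTimeoutMs + Math.floor(Math.random() * (maxTimeoutMs - minTimeoutMs))
  )
}
```

The jitter matters here because several BrowserOS processes can poll the same lock file; randomized delays keep their retries from synchronizing.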
@@ -30,8 +30,36 @@ export class ContainerCliError extends VmError {
command: string,
public readonly exitCode: number,
public readonly stderr: string,
message = `${command} failed with exit code ${exitCode}: ${stderr}`,
) {
super(`${command} failed with exit code ${exitCode}: ${stderr}`)
super(message)
}
}

export class ContainerNameInUseError extends ContainerCliError {
constructor(
public readonly containerName: string,
command: string,
exitCode: number,
stderr: string,
) {
super(
command,
exitCode,
stderr,
`${command} failed because container name "${containerName}" is already in use: ${stderr}`,
)
}
}

export class ContainerNameReleaseTimeoutError extends VmError {
constructor(
public readonly containerName: string,
public readonly timeoutMs: number,
) {
super(
`Timed out waiting ${timeoutMs}ms for container name "${containerName}" to be released`,
)
}
}
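The error-hierarchy change above lets `ContainerNameInUseError` customize the base message while staying an `instanceof ContainerCliError`, which is what the retry loops key on. A self-contained sketch, with `VmError` simplified to a plain `Error` (an assumption; the real base class lives in the vm module):

```typescript
// Base CLI error: the optional message parameter defaults to the old fixed
// format, so existing throw sites are unchanged while subclasses can override.
class ContainerCliError extends Error {
  constructor(
    command: string,
    public readonly exitCode: number,
    public readonly stderr: string,
    message = `${command} failed with exit code ${exitCode}: ${stderr}`,
  ) {
    super(message)
  }
}

// Name-collision failures keep the full CLI context but carry a message that
// names the contested container, and remain catchable as ContainerCliError.
class ContainerNameInUseError extends ContainerCliError {
  constructor(
    public readonly containerName: string,
    command: string,
    exitCode: number,
    stderr: string,
  ) {
    super(
      command,
      exitCode,
      stderr,
      `${command} failed because container name "${containerName}" is already in use: ${stderr}`,
    )
  }
}
```

Typed subclassing instead of re-parsing stderr at each call site means the `instanceof ContainerNameInUseError` check in the runtime's retry loop stays cheap and unambiguous.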
@@ -9,8 +9,10 @@ import {
|
||||
OPENCLAW_IMAGE,
|
||||
} from '@browseros/shared/constants/openclaw'
|
||||
import { ContainerRuntime } from '../../../../src/api/services/openclaw/container-runtime'
|
||||
import { ContainerNameInUseError } from '../../../../src/lib/vm/errors'
|
||||
|
||||
const PROJECT_DIR = '/tmp/openclaw'
|
||||
const OPENCLAW_NAME_RELEASE_WAIT = { timeoutMs: 10_000, intervalMs: 100 }
|
||||
const defaultSpec = {
|
||||
hostPort: 18789,
|
||||
hostHome: '/Users/me/.browseros/vm/openclaw',
|
||||
@@ -36,6 +38,10 @@ describe('ContainerRuntime', () => {
|
||||
{ force: true },
|
||||
undefined,
|
||||
)
|
||||
expect(deps.shell.waitForContainerNameRelease).toHaveBeenCalledWith(
|
||||
OPENCLAW_GATEWAY_CONTAINER_NAME,
|
||||
OPENCLAW_NAME_RELEASE_WAIT,
|
||||
)
|
||||
expect(deps.loader.ensureAgentImageLoaded).toHaveBeenCalledWith(
|
||||
'openclaw',
|
||||
undefined,
|
||||
@@ -68,6 +74,62 @@ describe('ContainerRuntime', () => {
|
||||
)
|
||||
})
|
||||
|
||||
it('reconciles and retries when gateway create reports name-in-use', async () => {
|
||||
const deps = createDeps()
|
||||
deps.shell.createContainer = mock(async () => {
|
||||
if (deps.shell.createContainer.mock.calls.length === 1) {
|
||||
throw new ContainerNameInUseError(
|
||||
OPENCLAW_GATEWAY_CONTAINER_NAME,
|
||||
'nerdctl create',
|
||||
1,
|
||||
`name-store error\nname "${OPENCLAW_GATEWAY_CONTAINER_NAME}" is already used`,
|
||||
)
|
||||
}
|
||||
})
|
||||
const runtime = new ContainerRuntime({
|
||||
vm: deps.vm,
|
||||
shell: deps.shell,
|
||||
loader: deps.loader,
|
||||
projectDir: PROJECT_DIR,
|
||||
})
|
||||
|
||||
await runtime.startGateway(defaultSpec)
|
||||
|
||||
expect(deps.shell.createContainer).toHaveBeenCalledTimes(2)
|
||||
expect(deps.shell.removeContainer).toHaveBeenCalledTimes(2)
|
||||
expect(deps.shell.waitForContainerNameRelease).toHaveBeenCalledTimes(2)
|
||||
expect(deps.shell.startContainer).toHaveBeenCalledWith(
|
||||
OPENCLAW_GATEWAY_CONTAINER_NAME,
|
||||
)
|
||||
})
|
||||
|
||||
it('bounds gateway create retries when the name stays in use', async () => {
|
||||
const deps = createDeps()
|
||||
deps.shell.createContainer = mock(async () => {
|
||||
throw new ContainerNameInUseError(
|
||||
OPENCLAW_GATEWAY_CONTAINER_NAME,
|
||||
'nerdctl create',
|
||||
1,
|
||||
`name-store error\nname "${OPENCLAW_GATEWAY_CONTAINER_NAME}" is already used`,
|
||||
)
|
||||
})
|
||||
const runtime = new ContainerRuntime({
|
||||
vm: deps.vm,
|
||||
shell: deps.shell,
|
||||
loader: deps.loader,
|
||||
projectDir: PROJECT_DIR,
|
||||
})
|
||||
|
||||
await expect(runtime.startGateway(defaultSpec)).rejects.toBeInstanceOf(
|
||||
ContainerNameInUseError,
|
||||
)
|
||||
|
||||
expect(deps.shell.createContainer).toHaveBeenCalledTimes(3)
|
||||
expect(deps.shell.removeContainer).toHaveBeenCalledTimes(3)
|
||||
expect(deps.shell.waitForContainerNameRelease).toHaveBeenCalledTimes(3)
|
||||
expect(deps.shell.startContainer).not.toHaveBeenCalled()
|
||||
})
|
||||
|
||||
it('uses OPENCLAW_IMAGE as a direct image override', async () => {
|
||||
const previous = process.env.OPENCLAW_IMAGE
|
||||
process.env.OPENCLAW_IMAGE = 'localhost/openclaw:test'
|
||||
@@ -152,6 +214,45 @@ describe('ContainerRuntime', () => {
|
||||
{ force: true },
|
||||
undefined,
|
||||
)
|
||||
expect(deps.shell.waitForContainerNameRelease).toHaveBeenCalledWith(
|
||||
`${OPENCLAW_GATEWAY_CONTAINER_NAME}-setup`,
|
||||
OPENCLAW_NAME_RELEASE_WAIT,
|
||||
)
|
||||
})
|
||||
|
||||
it('reconciles and retries when setup create reports name-in-use', async () => {
|
||||
const deps = createDeps()
|
||||
let setupCreateCount = 0
|
||||
deps.shell.runCommand = mock(async (args: string[]) => {
|
||||
if (args[0] === 'create') {
|
||||
setupCreateCount += 1
|
||||
if (setupCreateCount === 1) {
|
||||
return {
|
||||
exitCode: 1,
|
||||
stdout: '',
|
||||
stderr: `name-store error\nname "${OPENCLAW_GATEWAY_CONTAINER_NAME}-setup" is already used`,
|
||||
}
|
||||
}
|
||||
}
|
||||
return { exitCode: 0, stdout: '', stderr: '' }
|
||||
})
|
||||
const runtime = new ContainerRuntime({
|
||||
vm: deps.vm,
|
||||
shell: deps.shell,
|
||||
loader: deps.loader,
|
||||
projectDir: PROJECT_DIR,
|
||||
})
|
||||
|
||||
await expect(
|
||||
runtime.runGatewaySetupCommand(
|
||||
['node', 'dist/index.js', 'agents', 'list', '--json'],
|
||||
defaultSpec,
|
||||
),
|
||||
).resolves.toBe(0)
|
||||
|
||||
expect(setupCreateCount).toBe(2)
|
||||
expect(deps.shell.waitForContainerNameRelease).toHaveBeenCalledTimes(2)
|
||||
expect(deps.shell.removeContainer).toHaveBeenCalledTimes(3)
|
||||
})
|
||||
|
||||
it('tails and fetches gateway logs through the new transport', async () => {
|
||||
@@ -257,6 +358,7 @@ function createDeps() {
|
||||
stopContainer: mock(async () => {}),
|
||||
removeContainer: mock(async () => {}),
|
||||
containerImageRef: mock(async () => OPENCLAW_IMAGE),
|
||||
waitForContainerNameRelease: mock(async () => {}),
|
||||
exec: mock(async () => 0),
|
||||
runCommand: mock(
|
||||
async (_args: string[], onLog?: (line: string) => void) => {
|
||||
|
||||
@@ -737,6 +737,77 @@ describe('OpenClawService', () => {
     expect(probe).toHaveBeenCalledTimes(2)
   })
 
+  it('serializes start across service instances sharing an OpenClaw dir', async () => {
+    tempDir = await mkdtemp(join(tmpdir(), 'openclaw-service-'))
+    await mkdir(join(tempDir, '.openclaw'), { recursive: true })
+    await writeFile(
+      join(tempDir, '.openclaw', 'openclaw.json'),
+      JSON.stringify({
+        gateway: {
+          auth: {
+            token: 'cli-token',
+          },
+        },
+      }),
+    )
+    let gatewayReady = false
+    let releaseStartGateway!: () => void
+    let notifyStartGatewayEntered!: () => void
+    const startGatewayEntered = new Promise<void>((resolve) => {
+      notifyStartGatewayEntered = resolve
+    })
+    const unblockStartGateway = new Promise<void>((resolve) => {
+      releaseStartGateway = resolve
+    })
+    const firstEnsureReady = mock(async () => {})
+    const secondEnsureReady = mock(async () => {})
+    const startGateway = mock(async () => {
+      notifyStartGatewayEntered()
+      await unblockStartGateway
+      gatewayReady = true
+    })
+    const waitForReady = mock(async () => true)
+    const probe = mock(async () => {})
+    const firstService = new OpenClawService() as MutableOpenClawService
+    const secondService = new OpenClawService() as MutableOpenClawService
+
+    firstService.openclawDir = tempDir
+    secondService.openclawDir = tempDir
+    firstService.runtime = {
+      ensureReady: firstEnsureReady,
+      isReady: async () => gatewayReady,
+      isGatewayCurrent: async () => true,
+      startGateway,
+      waitForReady,
+    }
+    secondService.runtime = {
+      ensureReady: secondEnsureReady,
+      isReady: async () => gatewayReady,
+      isGatewayCurrent: async () => true,
+      startGateway,
+      waitForReady,
+    }
+    firstService.cliClient = { probe }
+    secondService.cliClient = { probe }
+    mockGatewayAuth()
+
+    const firstStart = firstService.start()
+    await startGatewayEntered
+    const secondStart = secondService.start()
+    await Bun.sleep(25)
+    const secondEnteredBeforeFirstFinished = secondEnsureReady.mock.calls.length
+
+    releaseStartGateway()
+    await Promise.all([firstStart, secondStart])
+
+    expect(secondEnteredBeforeFirstFinished).toBe(0)
+    expect(firstEnsureReady).toHaveBeenCalledTimes(1)
+    expect(secondEnsureReady).toHaveBeenCalledTimes(1)
+    expect(startGateway).toHaveBeenCalledTimes(1)
+    expect(waitForReady).toHaveBeenCalledTimes(1)
+    expect(probe).toHaveBeenCalledTimes(2)
+  })
+
   it('does not restart a ready gateway when start is called again', async () => {
     tempDir = await mkdtemp(join(tmpdir(), 'openclaw-service-'))
     await mkdir(join(tempDir, '.openclaw'), { recursive: true })
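The serialization test above pins down the observable contract without fixing the mechanism: two service instances sharing an OpenClaw dir must never run their start sequences concurrently, and the second caller must not even enter `ensureReady` until the first finishes. As a hedged illustration (not the code under test, which likely coordinates through an on-disk lock), a keyed promise chain gives the same in-process guarantee:

```typescript
// Illustrative sketch only: serialize async start() calls per shared directory
// by chaining each new call onto the previous one for the same key.
// `serializedStart` and `startChains` are hypothetical names, not BrowserOS APIs.
const startChains = new Map<string, Promise<void>>()

function serializedStart(
  dirKey: string,
  start: () => Promise<void>,
): Promise<void> {
  const prev = startChains.get(dirKey) ?? Promise.resolve()
  // Swallow the previous failure so one crashed start does not wedge the chain,
  // then run the next start strictly after it.
  const next = prev.catch(() => {}).then(start)
  startChains.set(dirKey, next)
  return next
}
```

An in-process chain only covers callers in one process; the tests' use of a shared directory suggests the real fix also has to hold across processes, which is what the new process-lock module elsewhere in this diff provides.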
@@ -4,10 +4,20 @@
  */
 
 import { afterEach, beforeEach, describe, expect, it } from 'bun:test'
-import { mkdir, mkdtemp, readFile, rm, writeFile } from 'node:fs/promises'
+import {
+  chmod,
+  mkdir,
+  mkdtemp,
+  readFile,
+  rm,
+  writeFile,
+} from 'node:fs/promises'
 import { join } from 'node:path'
 import { ContainerCli } from '../../../src/lib/container/container-cli'
-import { ContainerCliError } from '../../../src/lib/vm/errors'
+import {
+  ContainerCliError,
+  ContainerNameInUseError,
+} from '../../../src/lib/vm/errors'
 import { fakeSsh } from '../../__helpers__/fake-ssh'
 
 describe('ContainerCli', () => {
@@ -163,6 +173,92 @@ describe('ContainerCli', () => {
     )
   })
 
+  it('inspects a container by name', async () => {
+    const sshPath = await fakeSsh(
+      {
+        stdout: JSON.stringify({
+          ID: 'abc123',
+          Name: 'gateway',
+          Config: { Image: 'openclaw:v1' },
+          State: { Status: 'running', Running: true },
+        }),
+      },
+      logPath,
+    )
+    const cli = await createCli(sshPath, tempDir)
+
+    await expect(cli.inspectContainer('gateway')).resolves.toEqual({
+      id: 'abc123',
+      name: 'gateway',
+      image: 'openclaw:v1',
+      status: 'running',
+      running: true,
+    })
+
+    await expect(readFile(logPath, 'utf8')).resolves.toContain(
+      "lima-browseros-vm 'nerdctl' 'container' 'inspect' '--format' '{{json .}}' 'gateway'",
+    )
+  })
+
+  it('returns null when inspected containers are absent', async () => {
+    const sshPath = await fakeSsh(
+      { stderr: 'no such container', exit: 1 },
+      logPath,
+    )
+    const cli = await createCli(sshPath, tempDir)
+
+    await expect(cli.inspectContainer('gateway')).resolves.toBeNull()
+  })
+
+  it('does not treat unrelated not found errors as absent containers', async () => {
+    const sshPath = await fakeSsh(
+      { stderr: 'network interface not found', exit: 1 },
+      logPath,
+    )
+    const cli = await createCli(sshPath, tempDir)
+
+    await expect(cli.inspectContainer('gateway')).rejects.toBeInstanceOf(
+      ContainerCliError,
+    )
+  })
+
+  it('waits until a container name is no longer resolvable', async () => {
+    const sshPath = await fakeSshContainerExistsThenMissing(tempDir, logPath)
+    const cli = await createCli(sshPath, tempDir)
+
+    await expect(
+      cli.waitForContainerNameRelease('gateway', {
+        timeoutMs: 500,
+        intervalMs: 5,
+      }),
+    ).resolves.toBeUndefined()
+
+    const inspectCalls = (await readFile(logPath, 'utf8'))
+      .split('\n')
+      .filter((line) => line.includes("'container' 'inspect'"))
+    expect(inspectCalls).toHaveLength(2)
+  })
+
+  it('classifies create name-store collisions as name-in-use errors', async () => {
+    const sshPath = await fakeSsh(
+      {
+        stderr:
+          'name-store error\nname "gateway" is already used by ID "abc123"',
+        exit: 1,
+      },
+      logPath,
+    )
+    const cli = await createCli(sshPath, tempDir)
+
+    const error = await cli
+      .createContainer({ name: 'gateway', image: 'openclaw:v1' })
+      .catch((err) => err)
+
+    expect(error).toBeInstanceOf(ContainerNameInUseError)
+    expect(error.containerName).toBe('gateway')
+    expect(error.stderr).toContain('name "gateway" is already used')
+  })
+
   it('tolerates removal when the container is already absent', async () => {
     const sshPath = await fakeSsh(
       { stderr: 'no such container', exit: 1 },
@@ -215,3 +311,31 @@ function sshConfigPath(tempDir: string): string {
 function sshPrefix(configPath: string): string {
   return `ARGS:-F ${configPath} lima-browseros-vm`
 }
+
+async function fakeSshContainerExistsThenMissing(
+  tempDir: string,
+  logPath: string,
+): Promise<string> {
+  const path = join(tempDir, 'ssh-container-exists-then-missing')
+  const counterPath = join(tempDir, 'ssh-container-exists-then-missing.count')
+  const body = `#!/usr/bin/env bash
+set -u
+echo "ARGS:$*" >> "${logPath}"
+count="$(cat "${counterPath}" 2>/dev/null || echo 0)"
+next=$((count + 1))
+printf '%s' "$next" > "${counterPath}"
+case "$count" in
+  0)
+    printf '{"ID":"abc123","Name":"gateway","Config":{"Image":"openclaw:v1"},"State":{"Status":"exited","Running":false}}'
+    exit 0
+    ;;
+  *)
+    echo "no such container" >&2
+    exit 1
+    ;;
+esac
+`
+  await writeFile(path, body)
+  await chmod(path, 0o755)
+  return path
+}
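The 'waits until a container name is no longer resolvable' test, together with the counting fake-ssh script, implies a simple poll-until-null loop behind `waitForContainerNameRelease`: re-inspect the container at `intervalMs` until the name stops resolving, giving up after `timeoutMs`. A hedged, self-contained sketch of that loop (the `inspect` callback stands in for the real nerdctl-over-ssh call; names here are illustrative, not the BrowserOS API):

```typescript
// Illustrative polling loop: resolve once `inspect` reports the name is free
// (returns null), throw once the deadline passes. Matches the test's shape of
// one "exists" response followed by one "missing" response = exactly 2 calls.
type Inspect = (name: string) => Promise<unknown | null>

async function waitForNameRelease(
  inspect: Inspect,
  name: string,
  { timeoutMs, intervalMs }: { timeoutMs: number; intervalMs: number },
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  for (;;) {
    // null means the container name no longer resolves, i.e. it was released.
    if ((await inspect(name)) === null) return
    if (Date.now() > deadline) {
      throw new Error(`container name ${name} still in use after ${timeoutMs}ms`)
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}
```

The key design point the tests encode is that "absent" must be detected from the inspect error text (`no such container`) and distinguished from unrelated not-found errors, so the real inspect wrapper returns `null` only for the former.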
@@ -0,0 +1,129 @@
+/**
+ * @license
+ * Copyright 2025 BrowserOS
+ */
+
+import { afterEach, beforeEach, describe, expect, it } from 'bun:test'
+import { mkdtemp, readdir, rm } from 'node:fs/promises'
+import { tmpdir } from 'node:os'
+import { join } from 'node:path'
+import {
+  ProcessLockTimeoutError,
+  resolveProcessLockPath,
+  withProcessLock,
+} from '../../src/lib/process-lock'
+
+describe('process-lock', () => {
+  let tempDir: string
+  let lockDir: string
+
+  beforeEach(async () => {
+    tempDir = await mkdtemp(join(tmpdir(), 'process-lock-'))
+    lockDir = join(tempDir, '.locks')
+  })
+
+  afterEach(async () => {
+    await rm(tempDir, { recursive: true, force: true })
+  })
+
+  it('serializes concurrent callers for the same lock name', async () => {
+    const events: string[] = []
+    let releaseFirst!: () => void
+    const firstMayFinish = new Promise<void>((resolve) => {
+      releaseFirst = resolve
+    })
+
+    const first = withProcessLock(
+      'openclaw-lifecycle',
+      { lockDir },
+      async () => {
+        events.push('first:start')
+        await firstMayFinish
+        events.push('first:end')
+      },
+    )
+
+    while (!events.includes('first:start')) await Bun.sleep(1)
+
+    const second = withProcessLock(
+      'openclaw-lifecycle',
+      {
+        lockDir,
+        retryMinTimeoutMs: 5,
+        retryMaxTimeoutMs: 5,
+      },
+      async () => {
+        events.push('second')
+      },
+    )
+
+    await Bun.sleep(25)
+    expect(events).toEqual(['first:start'])
+
+    releaseFirst()
+    await Promise.all([first, second])
+    expect(events).toEqual(['first:start', 'first:end', 'second'])
+  })
+
+  it('releases the lock when the callback throws', async () => {
+    await expect(
+      withProcessLock('openclaw-lifecycle', { lockDir }, async () => {
+        throw new Error('boom')
+      }),
+    ).rejects.toThrow('boom')
+
+    await expect(
+      withProcessLock('openclaw-lifecycle', { lockDir }, async () => 'ok'),
+    ).resolves.toBe('ok')
+  })
+
+  it('fails with a structured timeout error when acquisition takes too long', async () => {
+    let releaseFirst!: () => void
+    const firstMayFinish = new Promise<void>((resolve) => {
+      releaseFirst = resolve
+    })
+
+    const first = withProcessLock(
+      'openclaw-lifecycle',
+      { lockDir },
+      async () => {
+        await firstMayFinish
+      },
+    )
+
+    await Bun.sleep(10)
+
+    try {
+      await expect(
+        withProcessLock(
+          'openclaw-lifecycle',
+          {
+            lockDir,
+            timeoutMs: 25,
+            retryMinTimeoutMs: 5,
+            retryMaxTimeoutMs: 5,
+          },
+          async () => undefined,
+        ),
+      ).rejects.toBeInstanceOf(ProcessLockTimeoutError)
+    } finally {
+      releaseFirst()
+      await first
+    }
+  })
+
+  it('sanitizes lock names into the lock directory', async () => {
+    const path = resolveProcessLockPath(lockDir, '../OpenClaw Lifecycle!')
+
+    expect(path).toBe(join(lockDir, 'OpenClaw-Lifecycle.lock'))
+
+    await withProcessLock(
+      '../OpenClaw Lifecycle!',
+      { lockDir },
+      async () => undefined,
+    )
+
+    const entries = await readdir(lockDir)
+    expect(entries).not.toContain('..')
+  })
+})
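The new test file, plus the `proper-lockfile` dependency added in package.json below, outlines the `withProcessLock` contract: mutual exclusion per sanitized lock name, release even when the callback throws, a typed `ProcessLockTimeoutError` on slow acquisition, and path-traversal-safe lock file names. A minimal stand-in sketch of that contract, assuming an atomic-`mkdir` lock instead of the real proper-lockfile implementation (all names here mirror the tests but the bodies are hypothetical):

```typescript
import { mkdir, rmdir } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

class ProcessLockTimeoutError extends Error {}

// Strip path separators and punctuation so '../OpenClaw Lifecycle!' becomes
// 'OpenClaw-Lifecycle.lock', as the sanitization test asserts.
function resolveProcessLockPath(lockDir: string, name: string): string {
  const safe = name
    .replace(/[^A-Za-z0-9._-]+/g, '-')
    .replace(/^[-.]+|[-.]+$/g, '')
  return join(lockDir, `${safe}.lock`)
}

async function withProcessLock<T>(
  name: string,
  opts: { lockDir: string; timeoutMs?: number; retryMs?: number },
  fn: () => Promise<T>,
): Promise<T> {
  const lockPath = resolveProcessLockPath(opts.lockDir, name)
  await mkdir(opts.lockDir, { recursive: true })
  const deadline = Date.now() + (opts.timeoutMs ?? 10_000)
  // mkdir without `recursive` is atomic: exactly one caller wins; the others
  // retry until the winner removes the directory or the deadline passes.
  for (;;) {
    try {
      await mkdir(lockPath)
      break
    } catch {
      if (Date.now() > deadline) {
        throw new ProcessLockTimeoutError(`timed out acquiring ${name}`)
      }
      await new Promise((resolve) => setTimeout(resolve, opts.retryMs ?? 5))
    }
  }
  try {
    return await fn()
  } finally {
    await rmdir(lockPath) // released even when fn throws
  }
}
```

Unlike this sketch, proper-lockfile also handles stale locks from crashed processes via mtime compromise checks, which is presumably why the real module depends on it rather than rolling its own.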
@@ -16,7 +16,6 @@
|
||||
"globals": "^16.4.0",
|
||||
"lefthook": "^2.0.12",
|
||||
"picocolors": "^1.1.1",
|
||||
"rimraf": "^6.0.1",
|
||||
"typedoc": "^0.28.15",
|
||||
"typescript": "^5.9.2",
|
||||
},
|
||||
@@ -196,6 +195,7 @@
|
||||
"klavis": "^2.15.0",
|
||||
"pino": "^9.6.0",
|
||||
"posthog-node": "^4.17.0",
|
||||
"proper-lockfile": "^4.1.2",
|
||||
"puppeteer-core": "24.23.0",
|
||||
"ws": "^8.18.0",
|
||||
"zod": "^3.24.2",
|
||||
@@ -205,6 +205,7 @@
|
||||
"@types/bun": "1.3.5",
|
||||
"@types/debug": "^4.1.12",
|
||||
"@types/node": "^24.3.3",
|
||||
"@types/proper-lockfile": "^4.1.4",
|
||||
"@types/sinon": "^21.0.0",
|
||||
"@types/ws": "^8.5.13",
|
||||
"async-mutex": "^0.5.0",
|
||||
@@ -1829,12 +1830,16 @@
|
||||
|
||||
"@types/pg-pool": ["@types/pg-pool@2.0.7", "", { "dependencies": { "@types/pg": "*" } }, "sha512-U4CwmGVQcbEuqpyju8/ptOKg6gEC+Tqsvj2xS9o1g71bUh8twxnC6ZL5rZKCsGN0iyH0CwgUyc9VR5owNQF9Ng=="],
|
||||
|
||||
"@types/proper-lockfile": ["@types/proper-lockfile@4.1.4", "", { "dependencies": { "@types/retry": "*" } }, "sha512-uo2ABllncSqg9F1D4nugVl9v93RmjxF6LJzQLMLDdPaXCUIDPeOJ21Gbqi43xNKzBi/WQ0Q0dICqufzQbMjipQ=="],
|
||||
|
||||
"@types/react": ["@types/react@19.2.9", "", { "dependencies": { "csstype": "^3.2.2" } }, "sha512-Lpo8kgb/igvMIPeNV2rsYKTgaORYdO1XGVZ4Qz3akwOj0ySGYMPlQWa8BaLn0G63D1aSaAQ5ldR06wCpChQCjA=="],
|
||||
|
||||
"@types/react-dom": ["@types/react-dom@19.2.3", "", { "peerDependencies": { "@types/react": "^19.2.0" } }, "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ=="],
|
||||
|
||||
"@types/request": ["@types/request@2.48.13", "", { "dependencies": { "@types/caseless": "*", "@types/node": "*", "@types/tough-cookie": "*", "form-data": "^2.5.5" } }, "sha512-FGJ6udDNUCjd19pp0Q3iTiDkwhYup7J8hpMW9c4k53NrccQFFWKRho6hvtPPEhnXWKvukfwAlB6DbDz4yhH5Gg=="],
|
||||
|
||||
"@types/retry": ["@types/retry@0.12.5", "", {}, "sha512-3xSjTp3v03X/lSQLkczaN9UIEwJMoMCA1+Nb5HfbJEQWogdeQIyVtTvxPXDQjZ5zws8rFQfVfRdz03ARihPJgw=="],
|
||||
|
||||
"@types/sinon": ["@types/sinon@21.0.0", "", { "dependencies": { "@types/sinonjs__fake-timers": "*" } }, "sha512-+oHKZ0lTI+WVLxx1IbJDNmReQaIsQJjN2e7UUrJHEeByG7bFeKJYsv1E75JxTQ9QKJDp21bAa/0W2Xo4srsDnw=="],
|
||||
|
||||
"@types/sinonjs__fake-timers": ["@types/sinonjs__fake-timers@15.0.1", "", {}, "sha512-Ko2tjWJq8oozHzHV+reuvS5KYIRAokHnGbDwGh/J64LntgpbuylF74ipEL24HCyRjf9FOlBiBHWBR1RlVKsI1w=="],
|
||||
@@ -2669,7 +2674,7 @@
|
||||
|
||||
"giscus": ["giscus@1.6.0", "", { "dependencies": { "lit": "^3.2.1" } }, "sha512-Zrsi8r4t1LVW950keaWcsURuZUQwUaMKjvJgTCY125vkW6OiEBkatE7ScJDbpqKHdZwb///7FVC21SE3iFK3PQ=="],
|
||||
|
||||
"glob": ["glob@13.0.0", "", { "dependencies": { "minimatch": "^10.1.1", "minipass": "^7.1.2", "path-scurry": "^2.0.0" } }, "sha512-tvZgpqk6fz4BaNZ66ZsRaZnbHvP/jG3uKJvAZOwEVUL4RTA5nJeeLYfyN9/VA8NX/V3IBG+hkeuGpKjvELkVhA=="],
|
||||
"glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
|
||||
|
||||
"glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
|
||||
|
||||
@@ -3103,7 +3108,7 @@
|
||||
|
||||
"lowercase-keys": ["lowercase-keys@3.0.0", "", {}, "sha512-ozCC6gdQ+glXOQsveKD0YsDy8DSQFjDTz4zyzEHNV5+JP5D62LmfDZ6o1cycFx9ouG940M5dE8C8CTewdj2YWQ=="],
|
||||
|
||||
"lru-cache": ["lru-cache@11.2.4", "", {}, "sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg=="],
|
||||
"lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
|
||||
|
||||
"lucide-react": ["lucide-react@0.562.0", "", { "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-82hOAu7y0dbVuFfmO4bYF1XEwYk/mEbM5E+b1jgci/udUBEE/R7LF5Ip0CCEmXe8AybRM8L+04eP+LGZeDvkiw=="],
|
||||
|
||||
@@ -3479,7 +3484,7 @@
|
||||
|
||||
"path-root-regex": ["path-root-regex@0.1.2", "", {}, "sha512-4GlJ6rZDhQZFE0DPVKh0e9jmZ5egZfxTkp7bcRDuPlJXbAwhxcl2dINPUAsjLdejqaLsCeg8axcLjIbvBjN4pQ=="],
|
||||
|
||||
"path-scurry": ["path-scurry@2.0.1", "", { "dependencies": { "lru-cache": "^11.0.0", "minipass": "^7.1.2" } }, "sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA=="],
|
||||
"path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
|
||||
|
||||
"path-to-regexp": ["path-to-regexp@8.3.0", "", {}, "sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA=="],
|
||||
|
||||
@@ -3569,6 +3574,8 @@
|
||||
|
||||
"prop-types": ["prop-types@15.8.1", "", { "dependencies": { "loose-envify": "^1.4.0", "object-assign": "^4.1.1", "react-is": "^16.13.1" } }, "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg=="],
|
||||
|
||||
"proper-lockfile": ["proper-lockfile@4.1.2", "", { "dependencies": { "graceful-fs": "^4.2.4", "retry": "^0.12.0", "signal-exit": "^3.0.2" } }, "sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA=="],
|
||||
|
||||
"property-information": ["property-information@7.1.0", "", {}, "sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ=="],
|
||||
|
||||
"proto-list": ["proto-list@1.2.4", "", {}, "sha512-vtK/94akxsTMhe0/cbfpR+syPuszcuwhqVjJq26CuNDgFGj682oRBXOP5MJpv2r7JtE8MsiepGIqvvOTBwn2vA=="],
|
||||
@@ -3829,13 +3836,15 @@
|
||||
|
||||
"restore-cursor": ["restore-cursor@5.1.0", "", { "dependencies": { "onetime": "^7.0.0", "signal-exit": "^4.1.0" } }, "sha512-oMA2dcrw6u0YfxJQXm342bFKX/E4sG9rbTzO9ptUcR/e8A33cHuvStiYOwH7fszkZlZ1z/ta9AAoPk2F4qIOHA=="],
|
||||
|
||||
"retry": ["retry@0.12.0", "", {}, "sha512-9LkiTwjUh6rT555DtE9rTX+BKByPfrMzEAtnlEtdEwr3Nkffwiihqe2bWADg+OQRjt9gl6ICdmB/ZFDCGAtSow=="],
|
||||
|
||||
"retry-request": ["retry-request@7.0.2", "", { "dependencies": { "@types/request": "^2.48.8", "extend": "^3.0.2", "teeny-request": "^9.0.0" } }, "sha512-dUOvLMJ0/JJYEn8NrpOaGNE7X3vpI5XlZS/u0ANjqtcZVKnIxP7IgCFwrKTxENw29emmwug53awKtaMm4i9g5w=="],
|
||||
|
||||
"reusify": ["reusify@1.1.0", "", {}, "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw=="],
|
||||
|
||||
"rfdc": ["rfdc@1.4.1", "", {}, "sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA=="],
|
||||
|
||||
"rimraf": ["rimraf@6.1.2", "", { "dependencies": { "glob": "^13.0.0", "package-json-from-dist": "^1.0.1" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-cFCkPslJv7BAXJsYlK1dZsbP8/ZNLkCAQ0bi1hf5EKX2QHegmDFEFA6QhuYJlk7UDdc+02JjO80YSOrWPpw06g=="],
|
||||
"rimraf": ["rimraf@5.0.10", "", { "dependencies": { "glob": "^10.3.7" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-l0OE8wL34P4nJH/H2ffoaniAokM2qSmrtXHmlpvYr5AVVX8msAyW0l8NVJFDxlSK4u3Uh/f41cQheDVdnYijwQ=="],
|
||||
|
||||
"roarr": ["roarr@2.15.4", "", { "dependencies": { "boolean": "^3.0.1", "detect-node": "^2.0.4", "globalthis": "^1.0.1", "json-stringify-safe": "^5.0.1", "semver-compare": "^1.0.0", "sprintf-js": "^1.1.2" } }, "sha512-CHhPh+UNHD2GTXNYhPWLnU8ONHdI+5DI+4EYIAOaiD63rHeYlZvyh8P+in5999TTSFgUYuKUAjzRI4mdh/p+2A=="],
|
||||
|
||||
@@ -3921,7 +3930,7 @@
|
||||
|
||||
"side-channel-weakmap": ["side-channel-weakmap@1.0.2", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3", "side-channel-map": "^1.0.1" } }, "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A=="],
|
||||
|
||||
"signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
"signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
|
||||
|
||||
"signedsource": ["signedsource@1.0.0", "", {}, "sha512-6+eerH9fEnNmi/hyM1DXcRK3pWdoMQtlkQ+ns0ntzunjKqp5i3sKCc80ym8Fib3iaYhdJUOPdhlJWj1tvge2Ww=="],
|
||||
|
||||
@@ -4415,8 +4424,6 @@
|
||||
|
||||
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http": ["@opentelemetry/exporter-logs-otlp-http@0.203.0", "", { "dependencies": { "@opentelemetry/api-logs": "0.203.0", "@opentelemetry/core": "2.0.1", "@opentelemetry/otlp-exporter-base": "0.203.0", "@opentelemetry/otlp-transformer": "0.203.0", "@opentelemetry/sdk-logs": "0.203.0" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-s0hys1ljqlMTbXx2XiplmMJg9wG570Z5lH7wMvrZX6lcODI56sG4HL03jklF63tBeyNwK2RV1/ntXGo3HgG4Qw=="],
|
||||
|
||||
"@google/gemini-cli-core/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
|
||||
|
||||
"@google/gemini-cli-core/https-proxy-agent": ["https-proxy-agent@7.0.6", "", { "dependencies": { "agent-base": "^7.1.2", "debug": "4" } }, "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="],
|
||||
|
||||
"@google/gemini-cli-core/marked": ["marked@15.0.12", "", { "bin": { "marked": "bin/marked.js" } }, "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA=="],
|
||||
@@ -4491,6 +4498,8 @@
|
||||
|
||||
"@hono/zod-validator/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="],
|
||||
|
||||
"@inquirer/core/signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"@inquirer/core/wrap-ansi": ["wrap-ansi@6.2.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="],
|
||||
|
||||
"@isaacs/cliui/string-width": ["string-width@5.1.2", "", { "dependencies": { "eastasianwidth": "^0.2.0", "emoji-regex": "^9.2.2", "strip-ansi": "^7.0.1" } }, "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA=="],
|
||||
@@ -4791,8 +4800,6 @@
|
||||
|
||||
"@sentry/bundler-plugin-core/dotenv": ["dotenv@16.6.1", "", {}, "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow=="],
|
||||
|
||||
"@sentry/bundler-plugin-core/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
|
||||
|
||||
"@sentry/bundler-plugin-core/magic-string": ["magic-string@0.30.8", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.4.15" } }, "sha512-ISQTe55T2ao7XtlAStud6qwYPZjE4GK1S/BeVPus4jrq6JuOnQ00YKQC581RWhR122W7msZV263KzVeLoqidyQ=="],
|
||||
|
||||
"@sentry/node/@opentelemetry/core": ["@opentelemetry/core@2.4.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw=="],
|
||||
@@ -4885,6 +4892,8 @@
|
||||
|
||||
"eventid/uuid": ["uuid@8.3.2", "", { "bin": { "uuid": "dist/bin/uuid" } }, "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg=="],
|
||||
|
||||
"execa/signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"express/cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="],
|
||||
|
||||
"extract-zip/get-stream": ["get-stream@5.2.0", "", { "dependencies": { "pump": "^3.0.0" } }, "sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA=="],
|
||||
@@ -4895,6 +4904,8 @@
|
||||
|
||||
"find-up/path-exists": ["path-exists@4.0.0", "", {}, "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="],
|
||||
|
||||
"foreground-child/signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="],
|
||||
|
||||
"fx-runner/commander": ["commander@2.9.0", "", { "dependencies": { "graceful-readlink": ">= 1.0.0" } }, "sha512-bmkUukX8wAOjHdN26xj5c4ctEV22TQ7dQYhSmuckKhToXrkUn0iIaolHdIxYYqD55nhpSPA9zPQ1yP57GdXP2A=="],
|
||||
@@ -4913,8 +4924,6 @@
|
||||
|
||||
"giget/nypm": ["nypm@0.6.4", "", { "dependencies": { "citty": "^0.2.0", "pathe": "^2.0.3", "tinyexec": "^1.0.2" }, "bin": { "nypm": "dist/cli.mjs" } }, "sha512-1TvCKjZyyklN+JJj2TS3P4uSQEInrM/HkkuSXsEzm1ApPgBffOn8gFguNnZf07r/1X6vlryfIqMUkJKQMzlZiw=="],
|
||||
|
||||
"glob/minimatch": ["minimatch@10.2.4", "", { "dependencies": { "brace-expansion": "^5.0.2" } }, "sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg=="],
|
||||
|
||||
"global-agent/serialize-error": ["serialize-error@7.0.1", "", { "dependencies": { "type-fest": "^0.13.1" } }, "sha512-8I8TjW5KMOKsZQTvoxjuSIa7foAwPWGOts+6o7sgjz41/qMD9VQHEDxi6PBvK2l0MXUmqZyNpUK+T2tQaaElvw=="],
|
||||
|
||||
"global-directory/ini": ["ini@4.1.1", "", {}, "sha512-QQnnxNyfvmHFIsj7gkPcYymR8Jdw/o7mp5ZFihxn6h8Ci6fh3Dx4E1gPjpQEpIuPo9XVNY/ZUwh4BPMjGyL01g=="],
|
||||
@@ -4935,8 +4944,6 @@
|
||||
|
||||
"hoist-non-react-statics/react-is": ["react-is@16.13.1", "", {}, "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="],
|
||||
|
||||
"hosted-git-info/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
|
||||
|
||||
"html-to-text/htmlparser2": ["htmlparser2@8.0.2", "", { "dependencies": { "domelementtype": "^2.3.0", "domhandler": "^5.0.3", "domutils": "^3.0.1", "entities": "^4.4.0" } }, "sha512-GYdjWKDkbRLkZ5geuHs5NY1puJ+PXwP7+fHPRz06Eirsb9ugf6d8kkXav6ADhcODhFFPMIXyxkxSuMf3D6NCFA=="],
|
||||
|
||||
"htmlparser2/entities": ["entities@7.0.1", "", {}, "sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA=="],
|
||||
@@ -5051,6 +5058,8 @@
|
||||
|
||||
"read-pkg/type-fest": ["type-fest@4.41.0", "", {}, "sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA=="],
|
||||
|
||||
"restore-cursor/signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"roarr/sprintf-js": ["sprintf-js@1.1.3", "", {}, "sha512-Oo+0REFV59/rz3gfJNKQiBlwfHaSESl1pcGyABQsnnIfWOFt6JNj5gCog2U6MLZ//IGYD+nA8nI+mTShREReaA=="],
|
||||
|
||||
"sinon/diff": ["diff@8.0.3", "", {}, "sha512-qejHi7bcSD4hQAZE0tNAawRK1ZtafHDmMTMkrrIGgSLl7hTnQHmKCeB45xAcbfTqK2zowkM3j3bHt/4b/ARbYQ=="],
|
||||
@@ -5351,8 +5360,6 @@
|
||||
|
||||
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http/@opentelemetry/sdk-logs": ["@opentelemetry/sdk-logs@0.203.0", "", { "dependencies": { "@opentelemetry/api-logs": "0.203.0", "@opentelemetry/core": "2.0.1", "@opentelemetry/resources": "2.0.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.4.0 <1.10.0" } }, "sha512-vM2+rPq0Vi3nYA5akQD2f3QwossDnTDLvKbea6u/A2NZ3XDkPxMfo/PNrDoXhDUD/0pPo2CdH5ce/thn9K0kLw=="],
|
||||
|
||||
"@google/gemini-cli-core/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
|
||||
|
||||
"@google/gemini-cli-core/https-proxy-agent/agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],
|
||||
|
||||
"@google/gemini-cli-core/open/wsl-utils": ["wsl-utils@0.1.0", "", { "dependencies": { "is-wsl": "^3.1.0" } }, "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw=="],
|
||||
@@ -5529,8 +5536,6 @@
|
||||
|
||||
"@prisma/instrumentation/@opentelemetry/instrumentation/require-in-the-middle": ["require-in-the-middle@8.0.1", "", { "dependencies": { "debug": "^4.3.5", "module-details-from-path": "^1.0.3" } }, "sha512-QT7FVMXfWOYFbeRBF6nu+I6tr2Tf3u0q8RIEjNob/heKY/nh7drD/k7eeMFmSQgnTtCzLDcCu/XEnpW2wk4xCQ=="],
|
||||
|
||||
"@sentry/bundler-plugin-core/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
|
||||
|
||||
"@sentry/node/@opentelemetry/instrumentation/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.210.0", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-CMtLxp+lYDriveZejpBND/2TmadrrhUfChyxzmkFtHaMDdSKfP59MAYyA0ICBvEBdm3iXwLcaj/8Ic/pnGw9Yg=="],
|
||||
|
||||
"@sentry/node/@opentelemetry/instrumentation/require-in-the-middle": ["require-in-the-middle@8.0.1", "", { "dependencies": { "debug": "^4.3.5", "module-details-from-path": "^1.0.3" } }, "sha512-QT7FVMXfWOYFbeRBF6nu+I6tr2Tf3u0q8RIEjNob/heKY/nh7drD/k7eeMFmSQgnTtCzLDcCu/XEnpW2wk4xCQ=="],
|
||||
@@ -5565,8 +5570,6 @@
|
||||
|
||||
"giget/nypm/citty": ["citty@0.2.0", "", {}, "sha512-8csy5IBFI2ex2hTVpaHN2j+LNE199AgiI7y4dMintrr8i0lQiFn+0AWMZrWdHKIgMOer65f8IThysYhoReqjWA=="],
|
||||
|
||||
"glob/minimatch/brace-expansion": ["brace-expansion@5.0.4", "", { "dependencies": { "balanced-match": "^4.0.2" } }, "sha512-h+DEnpVvxmfVefa4jFbCf5HdH5YMDXRsmKflpf1pILZWRFlTbJpxeU55nJl4Smt5HQaGzg1o6RHFPJaOqnmBDg=="],
|
||||
|
||||
"global-agent/serialize-error/type-fest": ["type-fest@0.13.1", "", {}, "sha512-34R7HTnG0XIJcBSn5XhDd7nNFPRcXYRZrBB2O2jdKqYODldSzBAqzsWoZYYvduky73toYS/ESqxPvkDf/F0XMg=="],
|
||||
|
||||
"graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-graphql-ws": ["@graphql-tools/executor-graphql-ws@2.0.7", "", { "dependencies": { "@graphql-tools/executor-common": "^0.0.6", "@graphql-tools/utils": "^10.9.1", "@whatwg-node/disposablestack": "^0.0.6", "graphql-ws": "^6.0.6", "isomorphic-ws": "^5.0.0", "tslib": "^2.8.1", "ws": "^8.18.3" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-J27za7sKF6RjhmvSOwOQFeNhNHyP4f4niqPnerJmq73OtLx9Y2PGOhkXOEB0PjhvPJceuttkD2O1yMgEkTGs3Q=="],
|
||||
@@ -5761,24 +5764,16 @@
|
||||
|
||||
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http/@opentelemetry/sdk-logs/@opentelemetry/resources": ["@opentelemetry/resources@2.0.1", "", { "dependencies": { "@opentelemetry/core": "2.0.1", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-dZOB3R6zvBwDKnHDTB4X1xtMArB/d324VsbiPkX/Yu0Q8T2xceRthoIVFhJdvgVM2QhGVUyX9tzwiNxGtoBJUw=="],
|
||||
|
||||
"@google/gemini-cli-core/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
|
||||
|
||||
"@google/genai/google-auth-library/gaxios/https-proxy-agent": ["https-proxy-agent@7.0.6", "", { "dependencies": { "agent-base": "^7.1.2", "debug": "4" } }, "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="],

    "@google/genai/google-auth-library/gaxios/node-fetch": ["node-fetch@3.3.2", "", { "dependencies": { "data-uri-to-buffer": "^4.0.0", "fetch-blob": "^3.1.4", "formdata-polyfill": "^4.0.10" } }, "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="],

    "@google/genai/google-auth-library/gaxios/rimraf": ["rimraf@5.0.10", "", { "dependencies": { "glob": "^10.3.7" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-l0OE8wL34P4nJH/H2ffoaniAokM2qSmrtXHmlpvYr5AVVX8msAyW0l8NVJFDxlSK4u3Uh/f41cQheDVdnYijwQ=="],

    "@inquirer/core/wrap-ansi/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],

    "@sentry/bundler-plugin-core/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],

    "@types/request/form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],

    "fx-runner/which/is-absolute/is-relative": ["is-relative@0.1.3", "", {}, "sha512-wBOr+rNM4gkAZqoLRJI4myw5WzzIdQosFAAbnvfXP5z1LyzgAI3ivOKehC5KfqlQJZoihVhirgtCBj378Eg8GA=="],

    "glob/minimatch/brace-expansion/balanced-match": ["balanced-match@4.0.4", "", {}, "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA=="],

    "graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-graphql-ws/@graphql-tools/executor-common": ["@graphql-tools/executor-common@0.0.6", "", { "dependencies": { "@envelop/core": "^5.3.0", "@graphql-tools/utils": "^10.9.1" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-JAH/R1zf77CSkpYATIJw+eOJwsbWocdDjY+avY7G+P5HCXxwQjAjWVkJI1QJBQYjPQDVxwf1fmTZlIN3VOadow=="],

    "graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-http/@graphql-hive/signal": ["@graphql-hive/signal@1.0.0", "", {}, "sha512-RiwLMc89lTjvyLEivZ/qxAC5nBHoS2CtsWFSOsN35sxG9zoo5Z+JsFHM8MlvmO9yt+MJNIyC5MLE1rsbOphlag=="],

@@ -5831,8 +5826,6 @@

    "@google/genai/google-auth-library/gaxios/https-proxy-agent/agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],

    "@google/genai/google-auth-library/gaxios/rimraf/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],

    "graphql-config/@graphql-tools/url-loader/@graphql-tools/wrap/@graphql-tools/delegate/@graphql-tools/batch-execute": ["@graphql-tools/batch-execute@9.0.19", "", { "dependencies": { "@graphql-tools/utils": "^10.9.1", "@whatwg-node/promise-helpers": "^1.3.0", "dataloader": "^2.2.3", "tslib": "^2.8.1" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-VGamgY4PLzSx48IHPoblRw0oTaBa7S26RpZXt0Y4NN90ytoE0LutlpB2484RbkfcTjv9wa64QD474+YP1kEgGA=="],

    "publish-browser-extension/listr2/cli-truncate/slice-ansi/ansi-styles": ["ansi-styles@6.2.3", "", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],

@@ -5844,9 +5837,5 @@

    "@browseros/build-tools/@aws-sdk/client-s3/@aws-sdk/core/@aws-sdk/xml-builder/fast-xml-parser/fast-xml-builder": ["fast-xml-builder@1.1.4", "", { "dependencies": { "path-expression-matcher": "^1.1.3" } }, "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg=="],

    "@browseros/eval/@aws-sdk/client-s3/@aws-sdk/core/@aws-sdk/xml-builder/fast-xml-parser/fast-xml-builder": ["fast-xml-builder@1.1.4", "", { "dependencies": { "path-expression-matcher": "^1.1.3" } }, "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg=="],

    "@google/genai/google-auth-library/gaxios/rimraf/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],

    "@google/genai/google-auth-library/gaxios/rimraf/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],

  }
}

@@ -12,10 +12,16 @@
    "dev:watch": "./tools/dev/run.sh watch",
    "dev:watch:new": "./tools/dev/run.sh watch --new",
    "dev:manual": "./tools/dev/run.sh watch --manual",
    "dev:setup": "./tools/dev/setup.sh",
    "dev:setup": "./tools/dev/run.sh setup",
    "dev:cleanup": "./tools/dev/run.sh cleanup --target dev",
    "dev:reset": "./tools/dev/run.sh reset --target dev",
    "dev:cleanup:dogfood": "./tools/dev/run.sh cleanup --target dogfood",
    "dev:reset:dogfood": "./tools/dev/run.sh reset --target dogfood",
    "dev:cleanup:prod": "./tools/dev/run.sh cleanup --target prod",
    "dev:reset:prod": "./tools/dev/run.sh reset --target prod",
    "install:browseros-dogfood": "make -C tools/dogfood install",
    "test:env": "./tools/dev/run.sh test",
    "test:cleanup": "./tools/dev/run.sh cleanup",
    "test:cleanup": "./tools/dev/run.sh cleanup --quick --yes",
    "start:server": "bun run --filter @browseros/server --elide-lines=0 start",
    "start:agent": "bun run --filter @browseros/agent dev",
    "build": "bun run build:server && bun run build:agent",
@@ -34,8 +40,7 @@
    "lint": "bunx biome check",
    "lint:fix": "bunx biome check --write --unsafe",
    "gen:cdp": "bun scripts/codegen/cdp-protocol.ts",
    "generate:models": "bun scripts/generate-models.ts",
    "clean": "rimraf dist"
    "generate:models": "bun scripts/generate-models.ts"
  },
  "repository": "browseros-ai/BrowserOS-server",
  "author": "BrowserOS",
@@ -56,7 +61,6 @@
    "globals": "^16.4.0",
    "lefthook": "^2.0.12",
    "picocolors": "^1.1.1",
    "rimraf": "^6.0.1",
    "typedoc": "^0.28.15",
    "typescript": "^5.9.2"
  },
@@ -1,7 +1,10 @@
package cmd

import (
  "bufio"
  "fmt"
  "io"
  "os"
  "time"

  "browseros-dev/proc"
@@ -11,45 +14,119 @@ import (

var cleanupCmd = &cobra.Command{
  Use:   "cleanup",
  Short: "Kill port processes and remove orphaned temp directories",
  Long:  "Kills processes on dev/test ports and removes orphaned browseros-* temp directories.",
  Short: "Kill target processes and remove orphaned temp directories",
  Long:  "Stops target BrowserOS processes, clears target ports, and removes target temp directories.",
  RunE:  runCleanup,
}

var (
  cleanupPorts bool
  cleanupTemps bool
  cleanupOnlyPorts          bool
  cleanupOnlyTemps          bool
  cleanupQuick              bool
  cleanupYes                bool
  cleanupTarget             string
  cleanupBrowserOSDir       string
  cleanupPortsValue         string
  cleanupBrowserUserDataDir string
)

type safeCleanupOptions struct {
  ports bool
  temps bool
}

func init() {
  cleanupCmd.Flags().BoolVar(&cleanupPorts, "ports", false, "Only kill port processes")
  cleanupCmd.Flags().BoolVar(&cleanupTemps, "temps", false, "Only remove temp directories")
  cleanupCmd.Flags().StringVar(&cleanupTarget, "target", targetDev, "Cleanup target: dev, dogfood, or prod")
  cleanupCmd.Flags().StringVar(&cleanupBrowserOSDir, "browseros-dir", "", "Override target BrowserOS state directory")
  cleanupCmd.Flags().StringVar(&cleanupPortsValue, "ports", "", "Override ports as cdp,server,extension")
  cleanupCmd.Flags().StringVar(&cleanupBrowserUserDataDir, "browser-user-data-dir", "", "Override BrowserOS user-data dir to stop")
  cleanupCmd.Flags().BoolVar(&cleanupOnlyPorts, "only-ports", false, "Only kill port processes")
  cleanupCmd.Flags().BoolVar(&cleanupOnlyTemps, "only-temps", false, "Only remove temp directories")
  cleanupCmd.Flags().BoolVar(&cleanupQuick, "quick", false, "Run safe cleanup only")
  cleanupCmd.Flags().BoolVar(&cleanupYes, "yes", false, "Answer yes to the safe cleanup prompt")
  rootCmd.AddCommand(cleanupCmd)
}

// runCleanup performs the non-destructive daily cleanup path for local dev.
func runCleanup(cmd *cobra.Command, args []string) error {
  doPorts := !cleanupTemps || cleanupPorts
  doTemps := !cleanupPorts || cleanupTemps

  if doPorts {
    ports := proc.DefaultLocalPorts()
    proc.LogMsgf(proc.TagInfo, "Killing processes on ports %d, %d, %d...", ports.CDP, ports.Server, ports.Extension)
    if err := proc.KillPortsAndWait(ports, 3*time.Second); err != nil {
  out := cmd.OutOrStdout()
  root, err := proc.FindMonorepoRoot()
  if err != nil {
    return err
  }
  target, err := resolveResetTarget(root, resetTargetOptions{
    Target:             cleanupTarget,
    BrowserOSDir:       cleanupBrowserOSDir,
    Ports:              cleanupPortsValue,
    BrowserUserDataDir: cleanupBrowserUserDataDir,
  })
  if err != nil {
    return err
  }
  if !cleanupYes && !cleanupQuick {
    ok, err := confirmYesNo(out, bufio.NewReader(os.Stdin), resetPrompt{
      Title:  "Run safe cleanup?",
      Body:   fmt.Sprintf("Stops %s processes, clears target ports, and removes target temp profiles. This does not touch saved BrowserOS data, Lima, containers, or images.", target.Name),
      Action: "Run safe cleanup for " + target.Name,
    })
    if err != nil {
      return err
    }
    proc.LogMsg(proc.TagInfo, "Ports cleared")
    if !ok {
      fmt.Fprintln(out, dimStyle.Sprint("Skipped."))
      return nil
    }
  }
  if err := ensureTargetStopped(out, target); err != nil {
    return err
  }
  return runSafeCleanup(out, target, safeCleanupOptions{
    ports: !cleanupOnlyTemps || cleanupOnlyPorts,
    temps: !cleanupOnlyPorts || cleanupOnlyTemps,
  })
}

  if doTemps {
    n := proc.CleanupTempDirs("browseros-test-", "browseros-dev-")
    if n > 0 {
      proc.LogMsgf(proc.TagInfo, "Removed %d temp directories", n)
    } else {
      proc.LogMsg(proc.TagInfo, "No orphaned temp directories found")
// runSafeCleanup is shared by cleanup and reset before any destructive repair steps.
func runSafeCleanup(out io.Writer, target resetTarget, opts safeCleanupOptions) error {
  if opts.ports {
    if target.WatchRunStateDir != "" {
      stopped, err := proc.StopAllWatchProcessesInDir(target.WatchRunStateDir, 3*time.Second)
      if err != nil {
        return err
      }
      if stopped > 0 {
        fmt.Fprintf(out, "%s stopped %d old %s watch process group(s)\n", successStyle.Sprint("Stopped:"), stopped, target.Name)
      }
    }
    if len(target.BrowserUserDataDirs) > 0 {
      killedBrowsers, err := proc.KillBrowserProcessesForUserDataDirs(target.BrowserUserDataDirs, 3*time.Second)
      if err != nil {
        return err
      }
      if killedBrowsers > 0 {
        fmt.Fprintf(out, "%s stopped %d BrowserOS %s profile process(es)\n", successStyle.Sprint("Stopped:"), killedBrowsers, target.Name)
      }
    }
    if target.Ports != nil {
      ports := *target.Ports
      fmt.Fprintf(out, "%s ports %d, %d, %d\n", labelStyle.Sprint("Clearing:"), ports.CDP, ports.Server, ports.Extension)
      if err := proc.KillPortsAndWait(ports, 3*time.Second); err != nil {
        return err
      }
      fmt.Fprintln(out, successStyle.Sprint("Ports cleared."))
    }
  }

  fmt.Println()
  proc.LogMsg(proc.TagInfo, "Cleanup complete")
  if opts.temps {
    n := proc.CleanupTempDirs(target.TempPrefixes...)
    if n > 0 {
      fmt.Fprintf(out, "%s removed %d temp directories\n", successStyle.Sprint("Removed:"), n)
    } else if len(target.TempPrefixes) > 0 {
      fmt.Fprintln(out, dimStyle.Sprint("No orphaned temp directories found."))
    }
  }

  fmt.Fprintln(out)
  fmt.Fprintln(out, successStyle.Sprint("Cleanup complete."))
  return nil
}

138
packages/browseros-agent/tools/dev/cmd/cleanup_test.go
Normal file
@@ -0,0 +1,138 @@
package cmd

import (
  "bufio"
  "bytes"
  "os"
  "strings"
  "testing"
)

func TestConfirmYesNoDefaultsNoAndExplainsAction(t *testing.T) {
  var out bytes.Buffer
  prompt := resetPrompt{
    Title:  "Stop VM?",
    Body:   "This shuts down browseros-vm. Data stays on disk.",
    Action: "Stop browseros-vm",
  }

  ok, err := confirmYesNo(&out, bufio.NewReader(strings.NewReader("\n")), prompt)
  if err != nil {
    t.Fatal(err)
  }

  if ok {
    t.Fatal("expected empty answer to default to no")
  }
  text := out.String()
  for _, want := range []string{
    "Stop VM?",
    "This shuts down browseros-vm. Data stays on disk.",
    "Stop browseros-vm",
    "[y/N]",
  } {
    if !strings.Contains(text, want) {
      t.Fatalf("missing %q in prompt:\n%s", want, text)
    }
  }
}

func TestConfirmTypedRequiresExactToken(t *testing.T) {
  var out bytes.Buffer
  ok, err := confirmTyped(
    &out,
    bufio.NewReader(strings.NewReader("delete\nDELETE\n")),
    "Delete dev profile?",
    "This removes ~/.browseros-dev.",
    "DELETE",
  )
  if err != nil {
    t.Fatal(err)
  }
  if !ok {
    t.Fatal("expected exact token to confirm")
  }

  text := out.String()
  if !strings.Contains(text, "Type DELETE to continue") {
    t.Fatalf("missing typed confirmation instruction:\n%s", text)
  }
  if !strings.Contains(text, "Confirmation did not match") {
    t.Fatalf("missing retry warning:\n%s", text)
  }
}

func TestResetOverviewTellsUserToUseSmallestReset(t *testing.T) {
  var out bytes.Buffer
  printResetOverview(&out, resetTarget{
    Title:           "BrowserOS dev reset",
    BrowserOSDir:    "/Users/me/.browseros-dev",
    DeleteRootLabel: "Delete dev profile:",
  })

  text := out.String()
  for _, want := range []string{
    "BrowserOS dev reset",
    "Pick the smallest reset",
    "/Users/me/.browseros-dev",
    "Stop VM",
    "Delete VM",
    "Remove OpenClaw container",
    "Remove OpenClaw image",
    "Delete dev profile",
  } {
    if !strings.Contains(text, want) {
      t.Fatalf("missing %q in overview:\n%s", want, text)
    }
  }
}

func TestParseLimaListOutputAcceptsSingleObject(t *testing.T) {
  entries, err := parseLimaListOutput([]byte(`{"name":"browseros-vm","status":"Running"}`))
  if err != nil {
    t.Fatal(err)
  }
  if len(entries) != 1 || entries[0].Name != "browseros-vm" || entries[0].Status != "Running" {
    t.Fatalf("unexpected entries: %#v", entries)
  }
}

func TestParseLimaListOutputAcceptsJSONLines(t *testing.T) {
  entries, err := parseLimaListOutput([]byte("{\"name\":\"one\",\"status\":\"Stopped\"}\n{\"name\":\"browseros-vm\",\"status\":\"Running\"}\n"))
  if err != nil {
    t.Fatal(err)
  }
  if len(entries) != 2 || entries[1].Name != "browseros-vm" || entries[1].Status != "Running" {
    t.Fatalf("unexpected entries: %#v", entries)
  }
}

func TestValidateDevProfileRootRejectsUnsafePaths(t *testing.T) {
  home, err := os.UserHomeDir()
  if err != nil {
    t.Fatal(err)
  }
  for _, path := range []string{"/", home, "/etc"} {
    if err := validateDevProfileRootForDeletion(path); err == nil {
      t.Fatalf("expected %s to be rejected", path)
    }
  }
}

func TestLimactlShellArgsUseGuestWorkdir(t *testing.T) {
  args := limactlShellArgs("sh", "-lc", "true")
  want := []string{"shell", "--workdir", "/", "browseros-vm", "--", "sh", "-lc", "true"}
  if strings.Join(args, "\x00") != strings.Join(want, "\x00") {
    t.Fatalf("expected %#v, got %#v", want, args)
  }
}

func TestParsePodmanMachineList(t *testing.T) {
  machines, err := parsePodmanMachineList([]byte(`[{"Name":"podman-machine-default","Running":true}]`))
  if err != nil {
    t.Fatal(err)
  }
  if len(machines) != 1 || machines[0].Name != "podman-machine-default" || !machines[0].Running {
    t.Fatalf("unexpected machines: %#v", machines)
  }
}
197
packages/browseros-agent/tools/dev/cmd/dogfood_stop.go
Normal file
@@ -0,0 +1,197 @@
package cmd

import (
  "bufio"
  "encoding/json"
  "errors"
  "fmt"
  "io"
  "net"
  "os"
  "syscall"
  "time"
)

const dogfoodStopTimeout = 10 * time.Second

type dogfoodRunState struct {
  PID        int    `json:"pid"`
  Mode       string `json:"mode"`
  SocketPath string `json:"socket_path"`
  LogPath    string `json:"log_path"`
}

type dogfoodIPCRequest struct {
  Command string `json:"command"`
}

type dogfoodIPCResponse struct {
  OK    bool   `json:"ok"`
  Error string `json:"error,omitempty"`
}

func ensureTargetStopped(out io.Writer, target resetTarget) error {
  if target.Dogfood == nil {
    return nil
  }
  return stopDogfoodRun(out, *target.Dogfood, dogfoodStopTimeout)
}

func stopDogfoodRun(out io.Writer, target dogfoodRuntimeTarget, timeout time.Duration) error {
  active, err := dogfoodRunActive(target.LockPath)
  if err != nil {
    return err
  }
  if !active {
    cleanupDogfoodRunFilesWithWarning(out, target)
    return nil
  }

  fmt.Fprintln(out, labelStyle.Sprint("Stopping dogfood run first."))
  if err := stopDogfoodDaemon(target); err == nil {
    if stopped, err := waitForDogfoodStopped(out, target, timeout); err != nil {
      return err
    } else if stopped {
      fmt.Fprintln(out, successStyle.Sprint("Dogfood stopped."))
      return nil
    }
  }

  state, err := readDogfoodRunState(target.StatePath)
  if err != nil {
    return fmt.Errorf("dogfood is running but state is unreadable at %s: %w", target.StatePath, err)
  }
  if state.PID <= 0 {
    return fmt.Errorf("dogfood is running but state has no pid at %s", target.StatePath)
  }
  if err := signalDogfoodPID(state.PID, syscall.SIGTERM); err != nil {
    return err
  }
  if stopped, err := waitForDogfoodStopped(out, target, timeout); err != nil {
    return err
  } else if stopped {
    fmt.Fprintln(out, successStyle.Sprint("Dogfood stopped."))
    return nil
  }
  if err := signalDogfoodPID(state.PID, syscall.SIGKILL); err != nil {
    return err
  }
  if stopped, err := waitForDogfoodStopped(out, target, time.Second); err != nil {
    return err
  } else if stopped {
    fmt.Fprintln(out, successStyle.Sprint("Dogfood force-stopped."))
    return nil
  }
  return fmt.Errorf("dogfood is still running; stop it manually before cleanup/reset")
}

func stopDogfoodDaemon(target dogfoodRuntimeTarget) error {
  socketPath := target.SocketPath
  if state, err := readDogfoodRunState(target.StatePath); err == nil && state.SocketPath != "" {
    socketPath = state.SocketPath
  }
  conn, err := net.DialTimeout("unix", socketPath, 700*time.Millisecond)
  if err != nil {
    return err
  }
  defer conn.Close()

  data, err := json.Marshal(dogfoodIPCRequest{Command: "stop"})
  if err != nil {
    return err
  }
  data = append(data, '\n')
  if _, err := conn.Write(data); err != nil {
    return err
  }
  _ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
  scanner := bufio.NewScanner(conn)
  if !scanner.Scan() {
    if err := scanner.Err(); err != nil {
      return err
    }
    return errors.New("dogfood daemon closed connection without response")
  }
  var response dogfoodIPCResponse
  if err := json.Unmarshal(scanner.Bytes(), &response); err != nil {
    return err
  }
  if response.Error != "" {
    return errors.New(response.Error)
  }
  if !response.OK {
    return errors.New("dogfood daemon did not accept stop request")
  }
  return nil
}

func waitForDogfoodStopped(out io.Writer, target dogfoodRuntimeTarget, timeout time.Duration) (bool, error) {
  deadline := time.Now().Add(timeout)
  for {
    active, err := dogfoodRunActive(target.LockPath)
    if err != nil {
      return false, err
    }
    if !active {
      cleanupDogfoodRunFilesWithWarning(out, target)
      return true, nil
    }
    if time.Now().After(deadline) {
      return false, nil
    }
    time.Sleep(100 * time.Millisecond)
  }
}

func dogfoodRunActive(lockPath string) (bool, error) {
  file, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o644)
  if err != nil {
    return false, err
  }
  defer file.Close()
  if err := syscall.Flock(int(file.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
    if errors.Is(err, syscall.EWOULDBLOCK) || errors.Is(err, syscall.EAGAIN) {
      return true, nil
    }
    return false, err
  }
  return false, syscall.Flock(int(file.Fd()), syscall.LOCK_UN)
}

func readDogfoodRunState(path string) (dogfoodRunState, error) {
  data, err := os.ReadFile(path)
  if err != nil {
    return dogfoodRunState{}, err
  }
  var state dogfoodRunState
  if err := json.Unmarshal(data, &state); err != nil {
    return dogfoodRunState{}, err
  }
  return state, nil
}

func signalDogfoodPID(pid int, sig syscall.Signal) error {
  if pid <= 0 {
    return fmt.Errorf("invalid dogfood pid %d", pid)
  }
  if err := syscall.Kill(pid, sig); err != nil && err != syscall.ESRCH {
    return err
  }
  return nil
}

func cleanupDogfoodRunFilesWithWarning(out io.Writer, target dogfoodRuntimeTarget) {
  if err := cleanupDogfoodRunFiles(target); err != nil {
    fmt.Fprintf(out, "%s could not remove dogfood run files: %v\n", warnStyle.Sprint("Warning:"), err)
  }
}

func cleanupDogfoodRunFiles(target dogfoodRuntimeTarget) error {
  if err := os.Remove(target.SocketPath); err != nil && !os.IsNotExist(err) {
    return err
  }
  if err := os.Remove(target.StatePath); err != nil && !os.IsNotExist(err) {
    return err
  }
  return nil
}
37
packages/browseros-agent/tools/dev/cmd/dogfood_stop_test.go
Normal file
@@ -0,0 +1,37 @@
package cmd

import (
  "bytes"
  "os"
  "path/filepath"
  "strings"
  "testing"
  "time"
)

func TestWaitForDogfoodStoppedWarnsWhenRunFileCleanupFails(t *testing.T) {
  root := t.TempDir()
  socketPath := filepath.Join(root, "dogfood.sock")
  if err := os.Mkdir(socketPath, 0o755); err != nil {
    t.Fatal(err)
  }
  if err := os.WriteFile(filepath.Join(socketPath, "child"), []byte("x"), 0o644); err != nil {
    t.Fatal(err)
  }

  var out bytes.Buffer
  stopped, err := waitForDogfoodStopped(&out, dogfoodRuntimeTarget{
    LockPath:   filepath.Join(root, "run.lock"),
    SocketPath: socketPath,
    StatePath:  filepath.Join(root, "state.json"),
  }, time.Millisecond)
  if err != nil {
    t.Fatal(err)
  }
  if !stopped {
    t.Fatal("expected inactive dogfood run to be treated as stopped")
  }
  if !strings.Contains(out.String(), "Warning:") {
    t.Fatalf("missing cleanup warning:\n%s", out.String())
  }
}
460
packages/browseros-agent/tools/dev/cmd/reset.go
Normal file
@@ -0,0 +1,460 @@
package cmd

import (
  "bufio"
  "encoding/json"
  "fmt"
  "io"
  "os"
  "os/exec"
  "path/filepath"
  "strings"

  "browseros-dev/proc"

  "github.com/spf13/cobra"
)

const (
  limaVMName             = "browseros-vm"
  openClawImage          = "ghcr.io/openclaw/openclaw:2026.4.12"
  openClawContainerName  = "browseros-openclaw-openclaw-gateway-1"
  openClawSetupContainer = openClawContainerName + "-setup"
)

var resetCmd = &cobra.Command{
  Use:   "reset",
  Short: "Guide destructive BrowserOS profile and VM resets",
  Long:  "Walks through safe cleanup, VM shutdown/deletion, OpenClaw container/image removal, and target BrowserOS state reset.",
  RunE:  runReset,
}

type resetPrompt struct {
  Title  string
  Body   string
  Action string
}

type limaListEntry struct {
  Name   string `json:"name"`
  Status string `json:"status"`
}

type podmanMachineEntry struct {
  Name    string `json:"Name"`
  Running bool   `json:"Running"`
}

var (
  resetTargetName         string
  resetBrowserOSDir       string
  resetPortsValue         string
  resetBrowserUserDataDir string
)

func init() {
  resetCmd.Flags().StringVar(&resetTargetName, "target", targetDev, "Reset target: dev, dogfood, or prod")
  resetCmd.Flags().StringVar(&resetBrowserOSDir, "browseros-dir", "", "Override target BrowserOS state directory")
  resetCmd.Flags().StringVar(&resetPortsValue, "ports", "", "Override ports as cdp,server,extension")
  resetCmd.Flags().StringVar(&resetBrowserUserDataDir, "browser-user-data-dir", "", "Override BrowserOS user-data dir to stop")
  rootCmd.AddCommand(resetCmd)
}

// runReset walks developers through escalating reset options without hiding the blast radius.
func runReset(cmd *cobra.Command, args []string) error {
  out := cmd.OutOrStdout()
  reader := bufio.NewReader(os.Stdin)
  root, err := proc.FindMonorepoRoot()
  if err != nil {
    return err
  }
  target, err := resolveResetTarget(root, resetTargetOptions{
    Target:             resetTargetName,
    BrowserOSDir:       resetBrowserOSDir,
    Ports:              resetPortsValue,
    BrowserUserDataDir: resetBrowserUserDataDir,
  })
  if err != nil {
    return err
  }

  printResetOverview(out, target)

  if err := ensureTargetStopped(out, target); err != nil {
    return err
  }

  if ok, err := confirmYesNo(out, reader, resetPrompt{
    Title:  "Run safe cleanup first?",
    Body:   fmt.Sprintf("This stops %s processes, clears target ports, and removes target temp profiles. It does not touch saved BrowserOS data.", target.Name),
    Action: "Run safe cleanup for " + target.Name,
  }); err != nil {
    return err
  } else if ok {
    if err := runSafeCleanup(out, target, safeCleanupOptions{ports: true, temps: true}); err != nil {
      return err
    }
  }

  limactlPath, err := exec.LookPath("limactl")
  if err != nil {
    fmt.Fprintf(out, "%s Lima CLI not found; VM and OpenClaw reset steps are unavailable. Install with %s.\n", warnStyle.Sprint("Skipping:"), commandStyle.Sprint("brew install lima"))
    if err := maybeResetLegacyPodman(out, reader); err != nil {
      return err
    }
    return maybeDeleteTargetRoot(out, reader, target)
  }

  vm, err := findVM(limactlPath, target.LimaHome)
  if err != nil {
    fmt.Fprintf(out, "%s could not inspect Lima VMs: %v\n", warnStyle.Sprint("Warning:"), err)
    if err := maybeResetLegacyPodman(out, reader); err != nil {
      return err
    }
    return maybeDeleteTargetRoot(out, reader, target)
  }
  if vm == nil {
    fmt.Fprintf(out, "%s %s was not found in %s.\n", dimStyle.Sprint("Not found:"), limaVMName, pathStyle.Sprint(target.LimaHome))
    if err := maybeResetLegacyPodman(out, reader); err != nil {
      return err
    }
    return maybeDeleteTargetRoot(out, reader, target)
  }

  fmt.Fprintf(out, "%s %s %s\n", labelStyle.Sprint("Found VM:"), commandStyle.Sprint(vm.Name), dimStyle.Sprintf("(%s)", vm.Status))
  if strings.EqualFold(vm.Status, "Running") {
    if err := maybeResetOpenClaw(out, reader, limactlPath, target.LimaHome); err != nil {
      return err
    }
    if ok, err := confirmYesNo(out, reader, resetPrompt{
      Title:  "Stop VM?",
      Body:   "This shuts down browseros-vm. The VM, containers, images, and profile data stay on disk.",
      Action: "Stop browseros-vm",
    }); err != nil {
      return err
    } else if ok {
      if err := runLimactl(out, limactlPath, target.LimaHome, "stop", limaVMName); err != nil {
        return err
      }
      fmt.Fprintln(out, successStyle.Sprint("VM stopped."))
      vm.Status = "Stopped"
    }
  } else {
    fmt.Fprintln(out, dimStyle.Sprint("OpenClaw container/image reset needs the VM running; skipping those steps."))
  }

  if ok, err := confirmYesNo(out, reader, resetPrompt{
    Title:  "Delete VM?",
    Body:   fmt.Sprintf("This deletes the Lima VM and its container store. %s remains. OpenClaw will be pulled again next time.", target.BrowserOSDir),
    Action: "Delete browseros-vm",
  }); err != nil {
    return err
  } else if ok {
    if err := runLimactl(out, limactlPath, target.LimaHome, "delete", "--force", limaVMName); err != nil {
      return err
    }
    fmt.Fprintln(out, successStyle.Sprint("VM deleted."))
  }

  if err := maybeResetLegacyPodman(out, reader); err != nil {
    return err
  }

  return maybeDeleteTargetRoot(out, reader, target)
}

func printResetOverview(out io.Writer, target resetTarget) {
  fmt.Fprintln(out, headerStyle.Sprint(target.Title))
  fmt.Fprintln(out)
  fmt.Fprintf(out, "This can reset parts of %s. Pick the smallest reset that matches the problem.\n", pathStyle.Sprint(target.BrowserOSDir))
  fmt.Fprintln(out)
  fmt.Fprintf(out, "  %s %s\n", labelStyle.Sprint("Stop VM:"), dimStyle.Sprint("Shuts down browseros-vm. Keeps data."))
  fmt.Fprintf(out, "  %s %s\n", labelStyle.Sprint("Delete VM:"), dimStyle.Sprint("Removes Lima/container state. Keeps the target state root."))
  fmt.Fprintf(out, "  %s %s\n", labelStyle.Sprint("Remove OpenClaw container:"), dimStyle.Sprint("Keeps the downloaded OpenClaw image."))
  fmt.Fprintf(out, "  %s %s\n", labelStyle.Sprint("Remove OpenClaw image:"), dimStyle.Sprint("Next startup pulls it again."))
  fmt.Fprintf(out, "  %s %s\n", warnStyle.Sprint(target.DeleteRootLabel), dimStyle.Sprint("Deletes the target BrowserOS state root."))
  fmt.Fprintln(out)
}

func confirmYesNo(out io.Writer, r *bufio.Reader, prompt resetPrompt) (bool, error) {
  fmt.Fprintln(out, labelStyle.Sprint(prompt.Title))
  fmt.Fprintln(out, prompt.Body)
  if prompt.Action != "" {
    fmt.Fprintf(out, "%s %s\n", labelStyle.Sprint("Action:"), commandStyle.Sprint(prompt.Action))
  }
  fmt.Fprint(out, labelStyle.Sprint("Continue?")+" [y/N]: ")
  line, err := r.ReadString('\n')
  if err != nil && len(line) == 0 {
    return false, err
  }
  line = strings.TrimSpace(strings.ToLower(line))
  fmt.Fprintln(out)
  return line == "y" || line == "yes", nil
}

func confirmTyped(out io.Writer, r *bufio.Reader, title string, body string, token string) (bool, error) {
  fmt.Fprintln(out, warnStyle.Sprint(title))
  fmt.Fprintln(out, body)
  for {
    fmt.Fprintf(out, "%s %s %s: ", labelStyle.Sprint("Type"), commandStyle.Sprint(token), labelStyle.Sprint("to continue"))
    line, err := r.ReadString('\n')
    if err != nil && len(line) == 0 {
      return false, err
    }
    if strings.TrimSpace(line) == token {
      fmt.Fprintln(out)
      return true, nil
    }
    if strings.TrimSpace(line) == "" {
      fmt.Fprintln(out)
      return false, nil
    }
    fmt.Fprintln(out, warnStyle.Sprint("Confirmation did not match. Press Enter to skip or try again."))
  }
}

func maybeResetOpenClaw(out io.Writer, reader *bufio.Reader, limactlPath string, limaHome string) error {
  if ok, err := confirmYesNo(out, reader, resetPrompt{
    Title:  "Remove OpenClaw container?",
    Body:   "This removes the current gateway/setup containers. The downloaded OpenClaw image stays in the VM.",
    Action: "nerdctl rm -f " + openClawContainerName + " " + openClawSetupContainer,
  }); err != nil {
    return err
  } else if ok {
    script := fmt.Sprintf(
      "nerdctl rm -f %s %s >/dev/null 2>&1 || true",
      openClawContainerName,
      openClawSetupContainer,
    )
    if err := runInVM(out, limactlPath, limaHome, "sh", "-lc", script); err != nil {
      return err
    }
    fmt.Fprintln(out, successStyle.Sprint("OpenClaw containers removed if present."))
  }

  if ok, err := confirmYesNo(out, reader, resetPrompt{
    Title:  "Remove OpenClaw image?",
    Body:   "This deletes ghcr.io/openclaw/openclaw:2026.4.12 from the VM. Next startup pulls it again.",
    Action: "nerdctl image rm " + openClawImage,
  }); err != nil {
    return err
  } else if ok {
    script := fmt.Sprintf("nerdctl image rm %s >/dev/null 2>&1 || true", openClawImage)
    if err := runInVM(out, limactlPath, limaHome, "sh", "-lc", script); err != nil {
      return err
    }
    fmt.Fprintln(out, successStyle.Sprint("OpenClaw image removed if present."))
  }
  return nil
}

func maybeDeleteTargetRoot(out io.Writer, reader *bufio.Reader, target resetTarget) error {
  ok, err := confirmTyped(
    out,
    reader,
    target.DeleteRootLabel,
    fmt.Sprintf("This deletes %s. %s", pathStyle.Sprint(target.BrowserOSDir), target.DeleteRootBody),
    "DELETE",
  )
  if err != nil || !ok {
    return err
  }
  if err := validateDevProfileRootForDeletion(target.BrowserOSDir); err != nil {
    return err
  }
  if err := os.RemoveAll(target.BrowserOSDir); err != nil {
    return err
  }
  fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), pathStyle.Sprint(target.BrowserOSDir))
  return nil
}

func maybeResetLegacyPodman(out io.Writer, reader *bufio.Reader) error {
  podmanPath, err := exec.LookPath("podman")
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
machines, err := listPodmanMachines(podmanPath)
|
||||
if err != nil {
|
||||
fmt.Fprintf(out, "%s could not inspect legacy Podman machines: %v\n", warnStyle.Sprint("Warning:"), err)
|
||||
return nil
|
||||
}
|
||||
if len(machines) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Fprintln(out, headerStyle.Sprint("Legacy Podman VM cleanup"))
|
||||
fmt.Fprintln(out, "BrowserOS used Podman before the Lima VM runtime. These machines are legacy for this dev flow.")
|
||||
for _, machine := range machines {
|
||||
state := "Stopped"
|
||||
if machine.Running {
|
||||
state = "Running"
|
||||
}
|
||||
fmt.Fprintf(out, " %s %s\n", commandStyle.Sprint(machine.Name), dimStyle.Sprintf("(%s)", state))
|
||||
}
|
||||
fmt.Fprintln(out, dimStyle.Sprint("Future reset flows can add more legacy cleanup checks here."))
|
||||
fmt.Fprintln(out)
|
||||
|
||||
for i := range machines {
|
||||
machine := machines[i]
|
||||
if machine.Running {
|
||||
if ok, err := confirmYesNo(out, reader, resetPrompt{
|
||||
Title: "Stop legacy Podman machine?",
|
||||
Body: fmt.Sprintf("This stops legacy Podman machine %s. It does not delete the machine.", machine.Name),
|
||||
Action: "podman machine stop " + machine.Name,
|
||||
}); err != nil {
|
||||
return err
|
||||
} else if ok {
|
||||
if err := runCommand(out, podmanPath, "machine", "stop", machine.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Stopped:"), commandStyle.Sprint(machine.Name))
|
||||
machines[i].Running = false
|
||||
}
|
||||
}
|
||||
|
||||
if ok, err := confirmYesNo(out, reader, resetPrompt{
|
||||
Title: "Delete legacy Podman machine?",
|
||||
Body: fmt.Sprintf("This deletes legacy Podman machine %s. Use this when cleaning up the old VM runtime.", machine.Name),
|
||||
Action: "podman machine rm --force " + machine.Name,
|
||||
}); err != nil {
|
||||
return err
|
||||
} else if ok {
|
||||
if err := runCommand(out, podmanPath, "machine", "rm", "--force", machine.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), commandStyle.Sprint(machine.Name))
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func listPodmanMachines(podmanPath string) ([]podmanMachineEntry, error) {
|
||||
cmd := exec.Command(podmanPath, "machine", "ls", "--format", "json")
|
||||
output, err := cmd.Output()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parsePodmanMachineList(output)
|
||||
}
|
||||
|
||||
func parsePodmanMachineList(output []byte) ([]podmanMachineEntry, error) {
|
||||
if strings.TrimSpace(string(output)) == "" {
|
||||
return nil, nil
|
||||
}
|
||||
var machines []podmanMachineEntry
|
||||
if err := json.Unmarshal(output, &machines); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return machines, nil
|
||||
}
|
||||
|
||||
func validateDevProfileRootForDeletion(root string) error {
|
||||
cleanRoot, err := filepath.Abs(root)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if cleanRoot == string(filepath.Separator) {
|
||||
return fmt.Errorf("refusing to delete filesystem root")
|
||||
}
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
cleanHome, err := filepath.Abs(home)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if cleanRoot == cleanHome {
|
||||
return fmt.Errorf("refusing to delete home directory %s", cleanRoot)
|
||||
}
|
||||
if !isPathInside(cleanRoot, cleanHome) {
|
||||
return fmt.Errorf("refusing to delete path outside home directory: %s", cleanRoot)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func isPathInside(path string, parent string) bool {
|
||||
rel, err := filepath.Rel(parent, path)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
return rel != "." && rel != "" && !strings.HasPrefix(rel, "..") && !filepath.IsAbs(rel)
|
||||
}
|
||||
|
||||
func findVM(limactlPath string, limaHome string) (*limaListEntry, error) {
	cmd := limactlCommand(limactlPath, limaHome, "list", "--format", "json")
	output, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	entries, err := parseLimaListOutput(output)
	if err != nil {
		return nil, err
	}
	for i := range entries {
		if entries[i].Name == limaVMName {
			return &entries[i], nil
		}
	}
	return nil, nil
}

func parseLimaListOutput(output []byte) ([]limaListEntry, error) {
	trimmed := strings.TrimSpace(string(output))
	if trimmed == "" {
		return nil, nil
	}

	var entries []limaListEntry
	if err := json.Unmarshal([]byte(trimmed), &entries); err == nil {
		return entries, nil
	}

	var single limaListEntry
	if err := json.Unmarshal([]byte(trimmed), &single); err == nil {
		return []limaListEntry{single}, nil
	}

	for _, line := range strings.Split(trimmed, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		var entry limaListEntry
		if err := json.Unmarshal([]byte(line), &entry); err != nil {
			return nil, err
		}
		entries = append(entries, entry)
	}
	return entries, nil
}
func runLimactl(out io.Writer, limactlPath string, limaHome string, args ...string) error {
	cmd := limactlCommand(limactlPath, limaHome, args...)
	cmd.Stdout = out
	cmd.Stderr = out
	return cmd.Run()
}

func runInVM(out io.Writer, limactlPath string, limaHome string, args ...string) error {
	shellArgs := limactlShellArgs(args...)
	return runLimactl(out, limactlPath, limaHome, shellArgs...)
}

func limactlShellArgs(args ...string) []string {
	return append([]string{"shell", "--workdir", "/", limaVMName, "--"}, args...)
}

func limactlCommand(limactlPath string, limaHome string, args ...string) *exec.Cmd {
	cmd := exec.Command(limactlPath, args...)
	cmd.Env = append(os.Environ(), "LIMA_HOME="+limaHome)
	return cmd
}

func runCommand(out io.Writer, path string, args ...string) error {
	cmd := exec.Command(path, args...)
	cmd.Stdout = out
	cmd.Stderr = out
	return cmd.Run()
}
81
packages/browseros-agent/tools/dev/cmd/setup.go
Normal file
@@ -0,0 +1,81 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"browseros-dev/proc"

	"github.com/spf13/cobra"
)

var setupIfNeeded bool

const setupModeIfNeeded = true

var setupCmd = &cobra.Command{
	Use:   "setup",
	Short: "Install dev dependencies and generate required code",
	Long:  "Installs Bun dependencies and generates agent GraphQL code needed by the dev environment.",
	RunE: func(cmd *cobra.Command, args []string) error {
		root, err := proc.FindMonorepoRoot()
		if err != nil {
			return err
		}
		return runDevSetup(cmd.Context(), root, setupIfNeeded)
	},
}

type setupPlan struct {
	RunInstall bool
	RunCodegen bool
}

func init() {
	setupCmd.Flags().BoolVar(&setupIfNeeded, "if-needed", false, "Skip generated code refresh when it already exists")
	rootCmd.AddCommand(setupCmd)
}

func buildSetupPlan(root string, ifNeeded bool) setupPlan {
	return setupPlan{
		RunInstall: true,
		RunCodegen: !ifNeeded || !generatedGraphQLExists(root),
	}
}

func generatedGraphQLExists(root string) bool {
	for _, file := range []string{"gql.ts", "graphql.ts", "schema.graphql"} {
		info, err := os.Stat(filepath.Join(root, "apps/agent/generated/graphql", file))
		if err != nil || info.IsDir() {
			return false
		}
	}
	return true
}

// runDevSetup prepares the repo for local development. Dependency install always
// runs because Bun is fast and this keeps watch resilient after branch changes.
func runDevSetup(ctx context.Context, root string, ifNeeded bool) error {
	plan := buildSetupPlan(root, ifNeeded)

	if plan.RunInstall {
		proc.LogMsg(proc.TagSetup, "Installing dependencies...")
		if err := proc.RunBlocking(ctx, root, proc.TagSetup, "bun", "install", "--frozen-lockfile"); err != nil {
			return fmt.Errorf("installing dependencies: %w", err)
		}
	}

	if plan.RunCodegen {
		proc.LogMsg(proc.TagSetup, "Generating agent code...")
		if err := proc.RunBlocking(ctx, root, proc.TagSetup, "bun", "run", "codegen:agent"); err != nil {
			return fmt.Errorf("generating agent code: %w", err)
		}
	} else {
		proc.LogMsg(proc.TagSetup, "Agent code already generated")
	}

	proc.LogMsg(proc.TagSetup, "Setup ready")
	return nil
}
76
packages/browseros-agent/tools/dev/cmd/setup_test.go
Normal file
@@ -0,0 +1,76 @@
package cmd

import (
	"os"
	"path/filepath"
	"testing"
)

func TestBuildSetupPlanAlwaysInstallsDependencies(t *testing.T) {
	root := t.TempDir()

	plan := buildSetupPlan(root, true)

	if !plan.RunInstall {
		t.Fatal("expected dependency install to always run")
	}
}

func TestBuildSetupPlanIfNeededSkipsExistingGeneratedGraphQL(t *testing.T) {
	root := t.TempDir()
	writeGeneratedGraphQLSentinels(t, root)

	plan := buildSetupPlan(root, true)

	if plan.RunCodegen {
		t.Fatal("expected --if-needed setup to skip codegen when generated GraphQL exists")
	}
}

func TestBuildSetupPlanIfNeededRunsCodegenWhenGeneratedGraphQLEmpty(t *testing.T) {
	root := t.TempDir()
	generatedDir := filepath.Join(root, "apps/agent/generated/graphql")
	if err := os.MkdirAll(generatedDir, 0o755); err != nil {
		t.Fatal(err)
	}

	plan := buildSetupPlan(root, true)

	if !plan.RunCodegen {
		t.Fatal("expected --if-needed setup to run codegen when generated GraphQL is empty")
	}
}

func TestBuildSetupPlanIfNeededRunsCodegenWhenGeneratedGraphQLMissing(t *testing.T) {
	root := t.TempDir()

	plan := buildSetupPlan(root, true)

	if !plan.RunCodegen {
		t.Fatal("expected --if-needed setup to run codegen when generated GraphQL is missing")
	}
}

func TestBuildSetupPlanExplicitSetupRunsCodegen(t *testing.T) {
	root := t.TempDir()
	writeGeneratedGraphQLSentinels(t, root)

	plan := buildSetupPlan(root, false)

	if !plan.RunCodegen {
		t.Fatal("expected explicit setup to refresh codegen")
	}
}

func writeGeneratedGraphQLSentinels(t *testing.T, root string) {
	t.Helper()
	generatedDir := filepath.Join(root, "apps/agent/generated/graphql")
	if err := os.MkdirAll(generatedDir, 0o755); err != nil {
		t.Fatal(err)
	}
	for _, file := range []string{"gql.ts", "graphql.ts", "schema.graphql"} {
		if err := os.WriteFile(filepath.Join(generatedDir, file), []byte("generated"), 0o644); err != nil {
			t.Fatal(err)
		}
	}
}
13
packages/browseros-agent/tools/dev/cmd/style.go
Normal file
@@ -0,0 +1,13 @@
package cmd

import "github.com/fatih/color"

var (
	headerStyle  = color.New(color.Bold, color.FgCyan)
	commandStyle = color.New(color.FgHiGreen)
	successStyle = color.New(color.FgGreen, color.Bold)
	warnStyle    = color.New(color.FgYellow, color.Bold)
	labelStyle   = color.New(color.Bold)
	pathStyle    = color.New(color.FgCyan)
	dimStyle     = color.New(color.Faint)
)
365
packages/browseros-agent/tools/dev/cmd/target.go
Normal file
@@ -0,0 +1,365 @@
package cmd

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"

	"browseros-dev/proc"

	"gopkg.in/yaml.v3"
)

const (
	targetDev     = "dev"
	targetDogfood = "dogfood"
	targetProd    = "prod"

	devDirName  = ".browseros-dev"
	prodDirName = ".browseros"
)

type resetTargetOptions struct {
	Target             string
	BrowserOSDir       string
	Ports              string
	BrowserUserDataDir string
}

type resetTarget struct {
	Name                string
	Title               string
	BrowserOSDir        string
	LimaHome            string
	Ports               *proc.Ports
	BrowserUserDataDirs []string
	TempPrefixes        []string
	WatchRunStateDir    string
	DeleteRootLabel     string
	DeleteRootBody      string
	Dogfood             *dogfoodRuntimeTarget
}

type dogfoodRuntimeTarget struct {
	ConfigDir  string
	LockPath   string
	StatePath  string
	SocketPath string
}

type dogfoodConfigFile struct {
	BrowserOSDir   string `yaml:"browseros_dir"`
	DevUserDataDir string `yaml:"dev_user_data_dir"`
	Ports          struct {
		CDP       int `yaml:"cdp"`
		Server    int `yaml:"server"`
		Extension int `yaml:"extension"`
	} `yaml:"ports"`
}

func resolveResetTarget(root string, opts resetTargetOptions) (resetTarget, error) {
	target := strings.TrimSpace(opts.Target)
	if target == "" {
		target = targetDev
	}
	switch target {
	case targetDev:
		return resolveDevTarget(root, opts)
	case targetDogfood:
		return resolveDogfoodTarget(opts)
	case targetProd:
		return resolveProdTarget(opts)
	default:
		return resetTarget{}, fmt.Errorf("unsupported reset target %q", target)
	}
}

func resolveDevTarget(root string, opts resetTargetOptions) (resetTarget, error) {
	browserosDir, err := resolveBrowserOSDir(opts.BrowserOSDir, devDirName)
	if err != nil {
		return resetTarget{}, err
	}
	ports, err := resolveTargetPorts(root, opts.Ports)
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:                targetDev,
		Title:               "BrowserOS dev reset",
		BrowserOSDir:        browserosDir,
		LimaHome:            filepath.Join(browserosDir, "lima"),
		Ports:               &ports,
		BrowserUserDataDirs: []string{"/tmp/browseros-dev"},
		TempPrefixes:        []string{"browseros-test-", "browseros-dev-"},
		WatchRunStateDir:    filepath.Join(browserosDir, "runs"),
		DeleteRootLabel:     "Delete dev profile?",
		DeleteRootBody:      "It removes BrowserOS dev data plus VM/OpenClaw state.",
	}, nil
}

func resolveDogfoodTarget(opts resetTargetOptions) (resetTarget, error) {
	cfgDir, err := dogfoodConfigDir()
	if err != nil {
		return resetTarget{}, err
	}
	cfg, err := loadDogfoodConfig(filepath.Join(cfgDir, "config.yaml"))
	if err != nil {
		return resetTarget{}, err
	}
	applyDogfoodDefaults(&cfg, cfgDir)
	browserosDir := firstNonEmpty(opts.BrowserOSDir, cfg.BrowserOSDir)
	if browserosDir == "" {
		return resetTarget{}, fmt.Errorf("dogfood browseros_dir is empty")
	}
	browserosDir, err = filepath.Abs(expandTilde(browserosDir))
	if err != nil {
		return resetTarget{}, err
	}
	ports, err := parsePorts(firstNonEmpty(opts.Ports, formatPorts(proc.Ports{
		CDP:       cfg.Ports.CDP,
		Server:    cfg.Ports.Server,
		Extension: cfg.Ports.Extension,
	})))
	if err != nil {
		return resetTarget{}, err
	}
	browserUserDataDir := firstNonEmpty(opts.BrowserUserDataDir, cfg.DevUserDataDir)
	if browserUserDataDir == "" {
		return resetTarget{}, fmt.Errorf("dogfood dev_user_data_dir is empty")
	}
	browserUserDataDir, err = filepath.Abs(expandTilde(browserUserDataDir))
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:                targetDogfood,
		Title:               "BrowserOS dogfood reset",
		BrowserOSDir:        browserosDir,
		LimaHome:            filepath.Join(browserosDir, "lima"),
		Ports:               &ports,
		BrowserUserDataDirs: []string{browserUserDataDir},
		DeleteRootLabel:     "Delete dogfood BrowserOS state?",
		DeleteRootBody:      "It removes dogfood-local BrowserOS server data plus VM/OpenClaw state. It does not touch your source BrowserOS browser profile.",
		Dogfood: &dogfoodRuntimeTarget{
			ConfigDir:  cfgDir,
			LockPath:   filepath.Join(cfgDir, "run.lock"),
			StatePath:  filepath.Join(cfgDir, "state.json"),
			SocketPath: filepath.Join(cfgDir, "daemon.sock"),
		},
	}, nil
}

func applyDogfoodDefaults(cfg *dogfoodConfigFile, cfgDir string) {
	if cfg.BrowserOSDir == "" {
		if home, err := os.UserHomeDir(); err == nil {
			cfg.BrowserOSDir = filepath.Join(home, ".browseros-dogfood")
		}
	}
	if cfg.DevUserDataDir == "" {
		cfg.DevUserDataDir = filepath.Join(cfgDir, "profile")
	}
	if cfg.Ports.CDP == 0 {
		cfg.Ports.CDP = 9015
	}
	if cfg.Ports.Server == 0 {
		cfg.Ports.Server = 9115
	}
	if cfg.Ports.Extension == 0 {
		cfg.Ports.Extension = 9315
	}
}

func resolveProdTarget(opts resetTargetOptions) (resetTarget, error) {
	browserosDir, err := resolveBrowserOSDir(opts.BrowserOSDir, prodDirName)
	if err != nil {
		return resetTarget{}, err
	}
	return resetTarget{
		Name:            targetProd,
		Title:           "BrowserOS prod reset",
		BrowserOSDir:    browserosDir,
		LimaHome:        filepath.Join(browserosDir, "lima"),
		DeleteRootLabel: "Delete prod BrowserOS state?",
		DeleteRootBody:  "It removes ~/.browseros server data plus VM/OpenClaw state. It does not delete your BrowserOS browser profile.",
	}, nil
}

func resolveBrowserOSDir(override string, dirName string) (string, error) {
	if strings.TrimSpace(override) != "" {
		return filepath.Abs(expandTilde(strings.TrimSpace(override)))
	}
	if dirName == devDirName {
		if env := strings.TrimSpace(os.Getenv("BROWSEROS_DIR")); env != "" {
			return filepath.Abs(expandTilde(env))
		}
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, dirName), nil
}

func resolveTargetPorts(root string, explicit string) (proc.Ports, error) {
	if strings.TrimSpace(explicit) != "" {
		return parsePorts(explicit)
	}
	for _, path := range []string{
		filepath.Join(root, "apps/server/.env.development"),
		filepath.Join(root, "apps/server/.env.example"),
	} {
		ports, ok, err := readPortsFromEnvFile(path)
		if err != nil {
			return proc.Ports{}, err
		}
		if ok {
			return ports, nil
		}
	}
	return proc.DefaultLocalPorts(), nil
}

func readPortsFromEnvFile(path string) (proc.Ports, bool, error) {
	file, err := os.Open(path)
	if os.IsNotExist(err) {
		return proc.Ports{}, false, nil
	}
	if err != nil {
		return proc.Ports{}, false, err
	}
	defer file.Close()

	values := map[string]int{}
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		key, value, ok := parseEnvLine(scanner.Text())
		if !ok {
			continue
		}
		switch key {
		case "BROWSEROS_CDP_PORT", "BROWSEROS_SERVER_PORT", "BROWSEROS_EXTENSION_PORT":
			port, err := strconv.Atoi(value)
			if err != nil {
				return proc.Ports{}, false, fmt.Errorf("parse %s in %s: %w", key, path, err)
			}
			values[key] = port
		}
	}
	if err := scanner.Err(); err != nil {
		return proc.Ports{}, false, err
	}
	if len(values) != 3 {
		return proc.Ports{}, false, nil
	}
	return proc.Ports{
		CDP:       values["BROWSEROS_CDP_PORT"],
		Server:    values["BROWSEROS_SERVER_PORT"],
		Extension: values["BROWSEROS_EXTENSION_PORT"],
	}, true, nil
}

func parseEnvLine(line string) (string, string, bool) {
	line = strings.TrimSpace(line)
	if line == "" || strings.HasPrefix(line, "#") {
		return "", "", false
	}
	key, value, ok := strings.Cut(line, "=")
	if !ok {
		return "", "", false
	}
	key = strings.TrimSpace(key)
	value = strings.TrimSpace(stripInlineComment(value))
	value = strings.Trim(value, `"'`)
	return key, value, key != "" && value != ""
}

func stripInlineComment(value string) string {
	quote := byte(0)
	for index := 0; index < len(value); index++ {
		switch value[index] {
		case '\'', '"':
			if quote == 0 {
				quote = value[index]
			} else if quote == value[index] {
				quote = 0
			}
		case '#':
			if quote == 0 {
				return value[:index]
			}
		}
	}
	return value
}
func parsePorts(value string) (proc.Ports, error) {
	parts := strings.Split(value, ",")
	if len(parts) != 3 {
		return proc.Ports{}, fmt.Errorf("ports must be cdp,server,extension")
	}
	parsed := [3]int{}
	for i, part := range parts {
		port, err := strconv.Atoi(strings.TrimSpace(part))
		if err != nil {
			return proc.Ports{}, fmt.Errorf("parse port %q: %w", part, err)
		}
		if port <= 0 || port > 65535 {
			return proc.Ports{}, fmt.Errorf("port %d out of range", port)
		}
		parsed[i] = port
	}
	return proc.Ports{CDP: parsed[0], Server: parsed[1], Extension: parsed[2]}, nil
}

func formatPorts(ports proc.Ports) string {
	return fmt.Sprintf("%d,%d,%d", ports.CDP, ports.Server, ports.Extension)
}
func dogfoodConfigDir() (string, error) {
	if xdg := strings.TrimSpace(os.Getenv("XDG_CONFIG_HOME")); xdg != "" {
		return filepath.Join(expandTilde(xdg), "browseros-dogfood"), nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".config", "browseros-dogfood"), nil
}

func loadDogfoodConfig(path string) (dogfoodConfigFile, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return dogfoodConfigFile{}, fmt.Errorf("read dogfood config at %s: %w", path, err)
	}
	var cfg dogfoodConfigFile
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return dogfoodConfigFile{}, fmt.Errorf("parse dogfood config: %w", err)
	}
	return cfg, nil
}

func expandTilde(path string) string {
	if path == "~" {
		if home, err := os.UserHomeDir(); err == nil {
			return home
		}
	}
	if strings.HasPrefix(path, "~/") {
		if home, err := os.UserHomeDir(); err == nil {
			return filepath.Join(home, path[2:])
		}
	}
	return path
}

func firstNonEmpty(values ...string) string {
	for _, value := range values {
		if strings.TrimSpace(value) != "" {
			return strings.TrimSpace(value)
		}
	}
	return ""
}
166
packages/browseros-agent/tools/dev/cmd/target_test.go
Normal file
@@ -0,0 +1,166 @@
package cmd

import (
	"os"
	"path/filepath"
	"testing"
)

func TestResolveDevTargetReadsDevelopmentEnvPorts(t *testing.T) {
	root := t.TempDir()
	serverDir := filepath.Join(root, "apps/server")
	if err := os.MkdirAll(serverDir, 0o755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(serverDir, ".env.development"), []byte(
		"BROWSEROS_CDP_PORT=9101\nBROWSEROS_SERVER_PORT=9201\nBROWSEROS_EXTENSION_PORT=9301\n",
	), 0o644); err != nil {
		t.Fatal(err)
	}
	home := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("BROWSEROS_DIR", "")

	target, err := resolveResetTarget(root, resetTargetOptions{Target: "dev"})
	if err != nil {
		t.Fatal(err)
	}

	if target.Ports == nil || target.Ports.CDP != 9101 || target.Ports.Server != 9201 || target.Ports.Extension != 9301 {
		t.Fatalf("unexpected ports: %#v", target.Ports)
	}
	if target.BrowserOSDir != filepath.Join(home, ".browseros-dev") {
		t.Fatalf("unexpected browseros dir: %s", target.BrowserOSDir)
	}
}

func TestResolveDevTargetFallsBackToExampleEnvPorts(t *testing.T) {
	root := t.TempDir()
	serverDir := filepath.Join(root, "apps/server")
	if err := os.MkdirAll(serverDir, 0o755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(serverDir, ".env.example"), []byte(
		"BROWSEROS_CDP_PORT=9000\nBROWSEROS_SERVER_PORT=9100\nBROWSEROS_EXTENSION_PORT=9300\n",
	), 0o644); err != nil {
		t.Fatal(err)
	}
	t.Setenv("HOME", t.TempDir())
	t.Setenv("BROWSEROS_DIR", "")

	target, err := resolveResetTarget(root, resetTargetOptions{Target: "dev"})
	if err != nil {
		t.Fatal(err)
	}

	if target.Ports == nil || target.Ports.CDP != 9000 || target.Ports.Server != 9100 || target.Ports.Extension != 9300 {
		t.Fatalf("unexpected ports: %#v", target.Ports)
	}
}

func TestReadPortsFromEnvFileStripsHashComments(t *testing.T) {
	path := filepath.Join(t.TempDir(), ".env")
	if err := os.WriteFile(path, []byte(
		"BROWSEROS_CDP_PORT=9005#comment\nBROWSEROS_SERVER_PORT=9105 # comment\nBROWSEROS_EXTENSION_PORT=9305\n",
	), 0o644); err != nil {
		t.Fatal(err)
	}

	ports, ok, err := readPortsFromEnvFile(path)
	if err != nil {
		t.Fatal(err)
	}
	if !ok {
		t.Fatal("expected ports to be found")
	}
	if ports.CDP != 9005 || ports.Server != 9105 || ports.Extension != 9305 {
		t.Fatalf("unexpected ports: %#v", ports)
	}
}

func TestResolveDogfoodTargetReadsDogfoodConfig(t *testing.T) {
	root := t.TempDir()
	xdgConfig := t.TempDir()
	t.Setenv("XDG_CONFIG_HOME", xdgConfig)
	cfgDir := filepath.Join(xdgConfig, "browseros-dogfood")
	if err := os.MkdirAll(cfgDir, 0o755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(cfgDir, "config.yaml"), []byte(`
browseros_dir: /tmp/browseros-dogfood-state
dev_user_data_dir: /tmp/browseros-dogfood-profile
ports:
  cdp: 9015
  server: 9115
  extension: 9315
`), 0o644); err != nil {
		t.Fatal(err)
	}

	target, err := resolveResetTarget(root, resetTargetOptions{Target: "dogfood"})
	if err != nil {
		t.Fatal(err)
	}

	if target.BrowserOSDir != "/tmp/browseros-dogfood-state" {
		t.Fatalf("unexpected browseros dir: %s", target.BrowserOSDir)
	}
	if target.Ports == nil || target.Ports.CDP != 9015 || target.Ports.Server != 9115 || target.Ports.Extension != 9315 {
		t.Fatalf("unexpected ports: %#v", target.Ports)
	}
	if len(target.BrowserUserDataDirs) != 1 || target.BrowserUserDataDirs[0] != "/tmp/browseros-dogfood-profile" {
		t.Fatalf("unexpected browser user data dirs: %#v", target.BrowserUserDataDirs)
	}
	if target.Dogfood == nil || target.Dogfood.StatePath != filepath.Join(cfgDir, "state.json") {
		t.Fatalf("unexpected dogfood runtime paths: %#v", target.Dogfood)
	}
}

func TestResolveDogfoodTargetAppliesDogfoodDefaults(t *testing.T) {
	root := t.TempDir()
	home := t.TempDir()
	xdgConfig := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("XDG_CONFIG_HOME", xdgConfig)
	cfgDir := filepath.Join(xdgConfig, "browseros-dogfood")
	if err := os.MkdirAll(cfgDir, 0o755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(cfgDir, "config.yaml"), []byte("{}\n"), 0o644); err != nil {
		t.Fatal(err)
	}

	target, err := resolveResetTarget(root, resetTargetOptions{Target: "dogfood"})
	if err != nil {
		t.Fatal(err)
	}

	if target.BrowserOSDir != filepath.Join(home, ".browseros-dogfood") {
		t.Fatalf("unexpected browseros dir: %s", target.BrowserOSDir)
	}
	if target.Ports == nil || target.Ports.CDP != 9015 || target.Ports.Server != 9115 || target.Ports.Extension != 9315 {
		t.Fatalf("unexpected ports: %#v", target.Ports)
	}
	if len(target.BrowserUserDataDirs) != 1 || target.BrowserUserDataDirs[0] != filepath.Join(cfgDir, "profile") {
		t.Fatalf("unexpected browser user data dirs: %#v", target.BrowserUserDataDirs)
	}
}

func TestResolveProdTargetUsesBrowserosStateRoot(t *testing.T) {
	root := t.TempDir()
	home := t.TempDir()
	t.Setenv("HOME", home)
	t.Setenv("BROWSEROS_DIR", "")

	target, err := resolveResetTarget(root, resetTargetOptions{Target: "prod"})
	if err != nil {
		t.Fatal(err)
	}

	if target.BrowserOSDir != filepath.Join(home, ".browseros") {
		t.Fatalf("unexpected browseros dir: %s", target.BrowserOSDir)
	}
	if target.Ports != nil {
		t.Fatalf("prod target should not clear ports by default: %#v", target.Ports)
	}
}
@@ -41,7 +41,10 @@ func runTest(cmd *cobra.Command, args []string) error {
		return err
	}

	p := proc.DefaultLocalPorts()
	p, err := resolveTargetPorts(root, "")
	if err != nil {
		return err
	}

	proc.LogMsg(proc.TagInfo, "Killing processes on test ports...")
	proc.KillPorts(p)
@@ -44,10 +44,33 @@ func runWatch(cmd *cobra.Command, args []string) error {
		return err
	}

	defaultPorts := proc.DefaultLocalPorts()
	defaultPorts, err := resolveTargetPorts(root, "")
	if err != nil {
		return err
	}
	p := defaultPorts
	var reservations *proc.PortReservations
	userDataDir := "/tmp/browseros-dev"
	mode := "watch"
	if watchManual {
		mode = "manual"
	}
	var runLock *proc.WatchRunLock
	acquireRunLock := func(ports proc.Ports) error {
		lock, stopped, err := proc.AcquireWatchRunLock(proc.WatchRunIdentity{
			Mode:    mode,
			Profile: userDataDir,
			Ports:   ports,
		}, 3*time.Second)
		if err != nil {
			return err
		}
		runLock = lock
		if stopped {
			proc.LogMsgf(proc.TagInfo, "Stopped existing dev watch for profile %s", userDataDir)
		}
		return nil
	}

	if watchNew {
		proc.LogMsg(proc.TagInfo, "Selecting random available ports...")
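`resolveTargetPorts` itself is not shown in this diff; a plausible shape, assuming an empty target name falls back to the package defaults and the dogfood target applies the 9015/9115/9315 ports seen in the tests above (the function and behavior here are assumptions, not the repo's actual implementation):

```go
package main

import "fmt"

type Ports struct{ CDP, Server, Extension int }

var defaultLocalPorts = Ports{CDP: 9005, Server: 9105, Extension: 9305}

// resolvePorts is a hypothetical stand-in for resolveTargetPorts: an empty
// target means "use the defaults"; "dogfood" applies that target's ports.
func resolvePorts(target string) (Ports, error) {
	switch target {
	case "":
		return defaultLocalPorts, nil
	case "dogfood":
		// The dogfood ports asserted in the tests above.
		return Ports{CDP: 9015, Server: 9115, Extension: 9315}, nil
	default:
		return Ports{}, fmt.Errorf("unknown target %q", target)
	}
}

func main() {
	p, _ := resolvePorts("")
	fmt.Println(p.CDP, p.Server, p.Extension)
}
```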
@@ -62,17 +85,16 @@ func runWatch(cmd *cobra.Command, args []string) error {
		}
		userDataDir = dir
		proc.LogMsgf(proc.TagInfo, "Created fresh profile: %s", userDataDir)
		if err := acquireRunLock(p); err != nil {
			return err
		}
	} else {
		if err := os.MkdirAll(userDataDir, 0o755); err != nil {
			return fmt.Errorf("creating user-data dir: %w", err)
		}
		stopped, err := proc.StopExistingWatchProcesses(3 * time.Second)
		if err != nil {
		if err := acquireRunLock(p); err != nil {
			return err
		}
		if stopped > 0 {
			proc.LogMsgf(proc.TagInfo, "Stopped %d existing dev watch process group(s)", stopped)
		}
		proc.LogMsg(proc.TagInfo, "Killing processes on preferred ports...")
		if err := proc.KillPortsAndWait(defaultPorts, 3*time.Second); err != nil {
			return err
@@ -89,13 +111,18 @@ func runWatch(cmd *cobra.Command, args []string) error {
			p.CDP, p.Server, p.Extension)
		}
	}
	defer func() {
		if err := runLock.Close(); err != nil {
			proc.LogMsgf(proc.TagInfo, "Warning: closing run lock: %v", err)
		}
	}()
	defer reservations.ReleaseAll()

	fmt.Println()
	mode := "watch"
	if watchManual {
		mode = "manual"
	if err := runDevSetup(cmd.Context(), root, setupModeIfNeeded); err != nil {
		return err
	}

	fmt.Println()
	proc.LogMsgf(proc.TagInfo, "Mode: %s", proc.BoldColor.Sprint(mode))
	proc.LogMsgf(proc.TagInfo, "Ports: CDP=%d Server=%d Extension=%d", p.CDP, p.Server, p.Extension)
	proc.LogMsgf(proc.TagInfo, "Profile: %s", userDataDir)
@@ -5,6 +5,7 @@ go 1.25.7
require (
	github.com/fatih/color v1.18.0
	github.com/spf13/cobra v1.10.2
	gopkg.in/yaml.v3 v3.0.1
)

require (
@@ -18,4 +18,7 @@ golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
@@ -14,6 +14,7 @@ type Tag struct {

var (
	TagBuild   = Tag{"build", color.New(color.FgYellow)}
	TagSetup   = Tag{"setup", color.New(color.FgHiYellow)}
	TagAgent   = Tag{"agent", color.New(color.FgMagenta)}
	TagServer  = Tag{"server", color.New(color.FgCyan)}
	TagBrowser = Tag{"browser", color.New(color.FgBlue)}
@@ -27,7 +27,7 @@ const (
	randomPortMax = 9999
)

var defaultLocalPorts = Ports{CDP: 9005, Server: 9105, Extension: 9305}
var defaultLocalPorts = Ports{CDP: 9000, Server: 9100, Extension: 9300}

func DefaultLocalPorts() Ports {
	return defaultLocalPorts
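Both variants of the defaults above keep the same spacing: the Server port sits 100 above the CDP base and the Extension port 300 above it. A sketch of deriving the trio from one base, assuming that spacing is intentional (the `portsFromBase` helper is hypothetical, not in the repo):

```go
package main

import "fmt"

type Ports struct{ CDP, Server, Extension int }

// portsFromBase derives the three service ports from one base, mirroring
// the 9005/9105/9305 and 9000/9100/9300 spacing seen above.
func portsFromBase(base int) Ports {
	return Ports{CDP: base, Server: base + 100, Extension: base + 300}
}

func main() {
	fmt.Printf("%+v\n", portsFromBase(9005))
}
```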
@@ -1,7 +1,12 @@
package proc

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"sort"
@@ -11,24 +16,134 @@ import (
	"time"
)

// StopExistingWatchProcesses terminates older default-profile watch supervisors.
// Port cleanup cannot see a previous watch process while it is still waiting
// for CDP, but that process will wake up later and race the new supervisor.
func StopExistingWatchProcesses(timeout time.Duration) (int, error) {
	currentPGID, err := syscall.Getpgid(0)
var errWatchRunLocked = errors.New("dev watch run is already locked")

const maxTCPPort = 65535

type WatchRunIdentity struct {
	Mode    string `json:"mode"`
	Profile string `json:"profile"`
	Ports   Ports  `json:"ports"`
}

type WatchRunState struct {
	PID       int              `json:"pid"`
	PGID      int              `json:"pgid"`
	StartedAt time.Time        `json:"started_at"`
	Identity  WatchRunIdentity `json:"identity"`
}

type WatchRunLock struct {
	file      *os.File
	statePath string
}

type watchRunPathsResult struct {
	Lock  string
	State string
}

// AcquireWatchRunLock claims ownership of the current dev watch identity.
// If the same run identity is already active, it terminates the recorded
// process group from the state file and waits for the OS lock to be released.
func AcquireWatchRunLock(identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, bool, error) {
	baseDir, err := DefaultWatchRunBaseDir()
	if err != nil {
		return 0, fmt.Errorf("reading current process group: %w", err)
		return nil, false, err
	}
	return AcquireWatchRunLockInDir(baseDir, identity, timeout)
}

// AcquireWatchRunLockInDir is AcquireWatchRunLock with an explicit base
// directory so tests can exercise flock behavior without touching user state.
func AcquireWatchRunLockInDir(baseDir string, identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, bool, error) {
	identity = normalizeWatchRunIdentity(identity)
	if err := validateWatchRunIdentity(identity); err != nil {
		return nil, false, err
	}
	if baseDir == "" {
		return nil, false, fmt.Errorf("watch run base dir is empty")
	}

	groups, err := currentWatchProcessGroups(currentPGID)
	paths := watchRunPaths(baseDir, identity)
	lock, err := tryAcquireWatchRunLock(paths.Lock, paths.State)
	if err == nil {
		if err := lock.writeState(identity); err != nil {
			lock.Close()
			return nil, false, err
		}
		return lock, false, nil
	}
	if !errors.Is(err, errWatchRunLocked) {
		return nil, false, err
	}

	state, err := readWatchRunStateWithRetry(paths.State, 250*time.Millisecond)
	if err != nil {
		return nil, false, fmt.Errorf("dev watch lock is held but state is unreadable at %s: %w", paths.State, err)
	}
	if state.Identity != identity {
		return nil, false, fmt.Errorf("dev watch lock state identity mismatch at %s", paths.State)
	}
	if state.PGID <= 0 {
		return nil, false, fmt.Errorf("dev watch lock state is missing a process group at %s", paths.State)
	}

	if err := signalProcessGroup(state.PGID, syscall.SIGTERM); err != nil {
		return nil, false, err
	}

	lock, err = waitForWatchRunLock(paths, identity, timeout)
	if err == nil {
		return lock, true, nil
	}
	if !errors.Is(err, errWatchRunLocked) {
		return nil, false, err
	}

	if err := signalProcessGroup(state.PGID, syscall.SIGKILL); err != nil {
		return nil, false, err
	}
	lock, err = waitForWatchRunLock(paths, identity, time.Second)
	if err != nil {
		if errors.Is(err, errWatchRunLocked) {
			return nil, false, fmt.Errorf("previous dev watch process group %d did not exit after SIGKILL; inspect %s before retrying", state.PGID, paths.Lock)
		}
		return nil, false, err
	}
	return lock, true, nil
}

// DefaultWatchRunBaseDir returns the shared location for dev watch lock files.
// Individual runs are separated by a hash of profile, ports, and mode.
func DefaultWatchRunBaseDir() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".browseros-dev", "runs"), nil
}

// StopAllWatchProcesses terminates every recorded dev watch run.
func StopAllWatchProcesses(timeout time.Duration) (int, error) {
	baseDir, err := DefaultWatchRunBaseDir()
	if err != nil {
		return 0, err
	}
	if len(groups) == 0 {
	return StopAllWatchProcessesInDir(baseDir, timeout)
}

// StopAllWatchProcessesInDir is StopAllWatchProcesses with an explicit state directory for tests.
func StopAllWatchProcessesInDir(baseDir string, timeout time.Duration) (int, error) {
	pgids, err := liveWatchRunPGIDs(baseDir)
	if err != nil {
		return 0, err
	}
	if len(pgids) == 0 {
		return 0, nil
	}

	for _, pgid := range groups {
	for _, pgid := range pgids {
		if err := signalProcessGroup(pgid, syscall.SIGTERM); err != nil {
			return 0, err
		}
@@ -36,12 +151,9 @@ func StopExistingWatchProcesses(timeout time.Duration) (int, error) {

	deadline := time.Now().Add(timeout)
	for {
		remaining, err := currentWatchProcessGroups(currentPGID)
		if err != nil {
			return 0, err
		}
		remaining := livePGIDs(pgids)
		if len(remaining) == 0 {
			return len(groups), nil
			return len(pgids), nil
		}
		if time.Now().After(deadline) {
			for _, pgid := range remaining {
@@ -49,68 +161,315 @@ func StopExistingWatchProcesses(timeout time.Duration) (int, error) {
				return 0, err
			}
		}
		return len(groups), nil
		return len(pgids), nil
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func currentWatchProcessGroups(currentPGID int) ([]int, error) {
// KillBrowserProcessesForDevProfiles kills BrowserOS instances using temporary dev/test profiles.
func KillBrowserProcessesForDevProfiles(timeout time.Duration) (int, error) {
	return killBrowserProcesses([]string{"/tmp/browseros-dev"}, true, timeout)
}

// KillBrowserProcessesForUserDataDirs kills BrowserOS instances using the given user-data dirs.
func KillBrowserProcessesForUserDataDirs(userDataDirs []string, timeout time.Duration) (int, error) {
	return killBrowserProcesses(userDataDirs, false, timeout)
}

func killBrowserProcesses(userDataDirs []string, includeDevTempProfiles bool, timeout time.Duration) (int, error) {
	pids, err := currentBrowserProfilePIDs(userDataDirs, includeDevTempProfiles)
	if err != nil {
		return 0, err
	}
	if len(pids) == 0 {
		return 0, nil
	}
	for _, pid := range pids {
		if err := signalProcess(pid, syscall.SIGTERM); err != nil {
			return 0, err
		}
	}

	deadline := time.Now().Add(timeout)
	for {
		remaining, err := currentBrowserProfilePIDs(userDataDirs, includeDevTempProfiles)
		if err != nil {
			return 0, err
		}
		if len(remaining) == 0 {
			return len(pids), nil
		}
		if time.Now().After(deadline) {
			for _, pid := range remaining {
				if err := signalProcess(pid, syscall.SIGKILL); err != nil {
					return 0, err
				}
			}
			return len(pids), nil
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func (l *WatchRunLock) Close() error {
	if l == nil || l.file == nil {
		return nil
	}

	// Keep the lock file path stable. Unlinking it during handoff can let
	// another opener lock a different inode while an owner still holds this one.
	removeErr := os.Remove(l.statePath)
	unlockErr := syscall.Flock(int(l.file.Fd()), syscall.LOCK_UN)
	closeErr := l.file.Close()
	l.file = nil
	if removeErr != nil && !os.IsNotExist(removeErr) {
		return removeErr
	}
	if unlockErr != nil {
		return unlockErr
	}
	return closeErr
}

// ReadWatchRunState reads the metadata used to terminate a previous owner.
// The state file is not the lock; it is only trusted after flock says a run is active.
func ReadWatchRunState(path string) (WatchRunState, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return WatchRunState{}, err
	}
	var state WatchRunState
	if err := json.Unmarshal(data, &state); err != nil {
		return WatchRunState{}, fmt.Errorf("parse watch run state: %w", err)
	}
	return state, nil
}

func readWatchRunStateWithRetry(path string, timeout time.Duration) (WatchRunState, error) {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for {
		state, err := ReadWatchRunState(path)
		if err == nil {
			return state, nil
		}
		lastErr = err
		if time.Now().After(deadline) {
			return WatchRunState{}, lastErr
		}
		time.Sleep(50 * time.Millisecond)
	}
}

func liveWatchRunPGIDs(baseDir string) ([]int, error) {
	statePaths, err := filepath.Glob(filepath.Join(baseDir, "watch-*.json"))
	if err != nil {
		return nil, err
	}
	seen := map[int]struct{}{}
	for _, statePath := range statePaths {
		state, err := ReadWatchRunState(statePath)
		if err != nil || state.PGID <= 0 || !processGroupLive(state.PGID) {
			continue
		}
		seen[state.PGID] = struct{}{}
	}
	pgids := make([]int, 0, len(seen))
	for pgid := range seen {
		pgids = append(pgids, pgid)
	}
	sort.Ints(pgids)
	return pgids, nil
}

func livePGIDs(pgids []int) []int {
	remaining := make([]int, 0, len(pgids))
	for _, pgid := range pgids {
		if processGroupLive(pgid) {
			remaining = append(remaining, pgid)
		}
	}
	return remaining
}

func processGroupLive(pgid int) bool {
	if pgid <= 0 {
		return false
	}
	err := syscall.Kill(-pgid, 0)
	return err == nil || err == syscall.EPERM
}

func currentBrowserProfilePIDs(userDataDirs []string, includeDevTempProfiles bool) ([]int, error) {
	output, err := exec.Command("ps", "-axo", "pid=,pgid=,command=").Output()
	if err != nil {
		return nil, fmt.Errorf("listing processes: %w", err)
	}
	return watchProcessGroupsFromPS(string(output), currentPGID), nil
	return browserProfilePIDsFromPSForUserDataDirs(string(output), userDataDirs, includeDevTempProfiles), nil
}

func watchProcessGroupsFromPS(output string, currentPGID int) []int {
	seen := map[int]struct{}{}
func browserProfilePIDsFromPS(output string) []int {
	return browserProfilePIDsFromPSForUserDataDirs(output, []string{"/tmp/browseros-dev"}, true)
}

func browserProfilePIDsFromPSForUserDataDirs(output string, userDataDirs []string, includeDevTempProfiles bool) []int {
	var pids []int
	for _, line := range strings.Split(output, "\n") {
		fields := strings.Fields(line)
		if len(fields) < 3 {
			continue
		}
		pgid, err := strconv.Atoi(fields[1])
		if err != nil || pgid == currentPGID {
		pid, err := strconv.Atoi(fields[0])
		if err != nil {
			continue
		}
		if isDefaultWatchCommand(fields[2:]) {
			seen[pgid] = struct{}{}
		command := strings.Join(fields[2:], " ")
		if isBrowserProcessForUserDataDir(command, userDataDirs, includeDevTempProfiles) {
			pids = append(pids, pid)
		}
	}

	groups := make([]int, 0, len(seen))
	for pgid := range seen {
		groups = append(groups, pgid)
	}
	sort.Ints(groups)
	return groups
	sort.Ints(pids)
	return pids
}

func isDefaultWatchCommand(commandFields []string) bool {
	if len(commandFields) < 2 {
func isDevBrowserProcess(command string) bool {
	return isBrowserProcessForUserDataDir(command, []string{"/tmp/browseros-dev"}, true)
}

func isBrowserProcessForUserDataDir(command string, userDataDirs []string, includeDevTempProfiles bool) bool {
	if !strings.Contains(command, "BrowserOS.app/Contents/MacOS/BrowserOS") {
		return false
	}
	if filepath.Base(commandFields[0]) != "browseros-dev" {
		return false
	}
	if commandFields[1] != "watch" {
		return false
	}
	for _, field := range commandFields[2:] {
		if field == "--new" {
			return false
	for _, dir := range userDataDirs {
		if dir == "" {
			continue
		}
		if strings.Contains(command, "--user-data-dir="+dir) {
			return true
		}
	}
	return true
	return includeDevTempProfiles &&
		(strings.Contains(command, "browseros-dev-") ||
			strings.Contains(command, "browseros-test-"))
}

func watchRunPaths(baseDir string, identity WatchRunIdentity) watchRunPathsResult {
	identity = normalizeWatchRunIdentity(identity)
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s\x00%s\x00%d\x00%d\x00%d",
		identity.Mode,
		identity.Profile,
		identity.Ports.CDP,
		identity.Ports.Server,
		identity.Ports.Extension,
	)))
	key := hex.EncodeToString(sum[:])
	return watchRunPathsResult{
		Lock:  filepath.Join(baseDir, "watch-"+key+".lock"),
		State: filepath.Join(baseDir, "watch-"+key+".json"),
	}
}

func normalizeWatchRunIdentity(identity WatchRunIdentity) WatchRunIdentity {
	identity.Profile = filepath.Clean(identity.Profile)
	return identity
}

func tryAcquireWatchRunLock(lockPath string, statePath string) (*WatchRunLock, error) {
	if err := os.MkdirAll(filepath.Dir(lockPath), 0o755); err != nil {
		return nil, err
	}
	file, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(file.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		file.Close()
		if errors.Is(err, syscall.EWOULDBLOCK) || errors.Is(err, syscall.EAGAIN) {
			return nil, errWatchRunLocked
		}
		return nil, err
	}
	return &WatchRunLock{file: file, statePath: statePath}, nil
}

func (l *WatchRunLock) writeState(identity WatchRunIdentity) error {
	pgid, err := syscall.Getpgid(0)
	if err != nil {
		return fmt.Errorf("reading current process group: %w", err)
	}
	state := WatchRunState{
		PID:       os.Getpid(),
		PGID:      pgid,
		StartedAt: time.Now(),
		Identity:  identity,
	}
	data, err := json.MarshalIndent(state, "", "  ")
	if err != nil {
		return err
	}
	data = append(data, '\n')
	tmp := l.statePath + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, l.statePath)
}

func waitForWatchRunLock(paths watchRunPathsResult, identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, error) {
	deadline := time.Now().Add(timeout)
	for {
		lock, err := tryAcquireWatchRunLock(paths.Lock, paths.State)
		if err == nil {
			if err := lock.writeState(identity); err != nil {
				lock.Close()
				return nil, err
			}
			return lock, nil
		}
		if !errors.Is(err, errWatchRunLocked) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, errWatchRunLocked
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func validateWatchRunIdentity(identity WatchRunIdentity) error {
	if identity.Mode == "" {
		return fmt.Errorf("watch run mode is empty")
	}
	if identity.Profile == "" {
		return fmt.Errorf("watch run profile is empty")
	}
	if !isValidTCPPort(identity.Ports.CDP) || !isValidTCPPort(identity.Ports.Server) || !isValidTCPPort(identity.Ports.Extension) {
		return fmt.Errorf("watch run ports are invalid: %+v", identity.Ports)
	}
	return nil
}

func isValidTCPPort(port int) bool {
	return port > 0 && port <= maxTCPPort
}

func signalProcessGroup(pgid int, signal syscall.Signal) error {
	if pgid <= 0 {
		return nil
		return fmt.Errorf("invalid process group %d", pgid)
	}
	if err := syscall.Kill(-pgid, signal); err != nil && err != syscall.ESRCH {
		return fmt.Errorf("signaling process group %d: %w", pgid, err)
	}
	return nil
}

func signalProcess(pid int, signal syscall.Signal) error {
	if pid <= 0 {
		return fmt.Errorf("invalid process %d", pid)
	}
	if err := syscall.Kill(pid, signal); err != nil && err != syscall.ESRCH {
		return fmt.Errorf("signaling process %d: %w", pid, err)
	}
	return nil
}
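The `ps -axo pid=,pgid=,command=` parsing above reduces to a small, testable pattern: split each line into fields, parse the numeric columns, rejoin the rest as the command line, and filter on a marker. A miniature version of the same approach (the `pidsMatching` helper is illustrative, not the repo's function):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// pidsMatching parses `ps -axo pid=,pgid=,command=` output and returns the
// PIDs whose command line contains marker.
func pidsMatching(psOutput, marker string) []int {
	var pids []int
	for _, line := range strings.Split(psOutput, "\n") {
		fields := strings.Fields(line)
		if len(fields) < 3 {
			continue // blank line or malformed row
		}
		pid, err := strconv.Atoi(fields[0])
		if err != nil {
			continue
		}
		// Rejoin everything after pid/pgid: command lines contain spaces.
		if strings.Contains(strings.Join(fields[2:], " "), marker) {
			pids = append(pids, pid)
		}
	}
	return pids
}

func main() {
	out := `
 111 111 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/tmp/browseros-dev
 222 222 rg browseros-dev
`
	fmt.Println(pidsMatching(out, "--user-data-dir=/tmp/browseros-dev"))
}
```

Note the false-positive problem the real `isBrowserProcessForUserDataDir` also handles: a `rg browseros-dev` process mentions the marker-adjacent string without being a browser, so the real filter first requires the BrowserOS binary path in the command.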
@@ -1,32 +1,204 @@
package proc

import "testing"
import (
	"encoding/json"
	"os"
	"os/exec"
	"path/filepath"
	"syscall"
	"testing"
	"time"
)

func TestWatchProcessGroupsFromPSSelectsOtherWatchGroups(t *testing.T) {
	output := `
 111 111 /tmp/one/browseros-dev watch
 222 222 /tmp/two/browseros-dev watch --new
 333 333 /tmp/one/browseros-dev cleanup
 444 444 rg browseros-dev watch
 555 555 bun run dev:watch
`
const watchLockHelperEnv = "BROWSEROS_DEV_WATCH_LOCK_HELPER"

	groups := watchProcessGroupsFromPS(output, 999)
func TestMain(m *testing.M) {
	if os.Getenv(watchLockHelperEnv) == "1" {
		runWatchLockHelper()
		return
	}
	os.Exit(m.Run())
}

	if len(groups) != 1 || groups[0] != 111 {
		t.Fatalf("expected only pgid 111, got %#v", groups)
func TestWatchRunPathsStableAndDistinct(t *testing.T) {
	baseDir := t.TempDir()
	identity := WatchRunIdentity{
		Mode:    "watch",
		Profile: "/tmp/browseros-dev",
		Ports:   Ports{CDP: 9005, Server: 9105, Extension: 9305},
	}

	first := watchRunPaths(baseDir, identity)
	second := watchRunPaths(baseDir, identity)
	if first != second {
		t.Fatalf("expected stable paths, got %#v and %#v", first, second)
	}

	withDifferentPort := identity
	withDifferentPort.Ports.Server = 9106
	third := watchRunPaths(baseDir, withDifferentPort)
	if third.Lock == first.Lock || third.State == first.State {
		t.Fatalf("expected distinct paths for different ports, got %#v and %#v", first, third)
	}
}

func TestWatchProcessGroupsFromPSDedupesProcessGroups(t *testing.T) {
func TestBrowserProfilePIDsFromPSSelectsOnlyDevAndTestProfiles(t *testing.T) {
	output := `
 111 111 /tmp/one/browseros-dev watch
 112 111 /tmp/one/browseros-dev watch
 111 111 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/tmp/browseros-dev
 222 222 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/tmp/browseros-dev-abcd
 333 333 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/var/folders/x/browseros-test-abcd
 444 444 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/Users/me/Library/Application Support/BrowserOS
 555 555 rg browseros-test-
`

	groups := watchProcessGroupsFromPS(output, 999)
	pids := browserProfilePIDsFromPS(output)

	if len(groups) != 1 || groups[0] != 111 {
		t.Fatalf("expected one pgid 111, got %#v", groups)
	if len(pids) != 3 || pids[0] != 111 || pids[1] != 222 || pids[2] != 333 {
		t.Fatalf("expected dev/test browser pids, got %#v", pids)
	}
}

func TestAcquireWatchRunLockWritesStateAndReleases(t *testing.T) {
	baseDir := t.TempDir()
	identity := WatchRunIdentity{
		Mode:    "watch",
		Profile: "/tmp/browseros-dev",
		Ports:   Ports{CDP: 9005, Server: 9105, Extension: 9305},
	}

	lock, stopped, err := AcquireWatchRunLockInDir(baseDir, identity, time.Second)
	if err != nil {
		t.Fatalf("AcquireWatchRunLockInDir returned error: %v", err)
	}
	if stopped {
		t.Fatal("expected first acquisition not to stop another run")
	}

	paths := watchRunPaths(baseDir, identity)
	state, err := ReadWatchRunState(paths.State)
	if err != nil {
		t.Fatalf("ReadWatchRunState returned error: %v", err)
	}
	if state.PID != os.Getpid() {
		t.Fatalf("expected state PID %d, got %d", os.Getpid(), state.PID)
	}
	if state.PGID <= 0 {
		t.Fatalf("expected positive PGID, got %d", state.PGID)
	}
	if state.Identity != identity {
		t.Fatalf("expected identity %#v, got %#v", identity, state.Identity)
	}
	if err := lock.Close(); err != nil {
		t.Fatalf("closing lock: %v", err)
	}
	if _, err := os.Stat(paths.State); !os.IsNotExist(err) {
		t.Fatalf("expected state file to be removed on close, got %v", err)
	}
	if _, err := os.Stat(paths.Lock); err != nil {
		t.Fatalf("expected lock file path to remain reusable, got %v", err)
	}

	lock, stopped, err = AcquireWatchRunLockInDir(baseDir, identity, time.Second)
	if err != nil {
		t.Fatalf("reacquiring lock returned error: %v", err)
	}
	if stopped {
		t.Fatal("expected reacquisition after close not to stop another run")
	}
	if err := lock.Close(); err != nil {
		t.Fatalf("closing reacquired lock: %v", err)
	}
}

func TestAcquireWatchRunLockRejectsInvalidPorts(t *testing.T) {
	identity := WatchRunIdentity{
		Mode:    "watch",
		Profile: "/tmp/browseros-dev",
		Ports:   Ports{CDP: 9005, Server: 65536, Extension: 9305},
	}

	if _, _, err := AcquireWatchRunLockInDir(t.TempDir(), identity, time.Second); err == nil {
		t.Fatal("expected invalid port error")
	}
}

func TestAcquireWatchRunLockStopsExistingOwnerByStatePGID(t *testing.T) {
	baseDir := t.TempDir()
	readyPath := filepath.Join(baseDir, "ready")
	identity := WatchRunIdentity{
		Mode:    "watch",
		Profile: "/tmp/browseros-dev",
		Ports:   Ports{CDP: 9005, Server: 9105, Extension: 9305},
	}
	identityJSON, err := json.Marshal(identity)
	if err != nil {
		t.Fatal(err)
	}

	cmd := exec.Command(os.Args[0], "-test.run=TestMain")
	cmd.Env = append(os.Environ(),
		watchLockHelperEnv+"=1",
		"BROWSEROS_DEV_WATCH_LOCK_BASE="+baseDir,
		"BROWSEROS_DEV_WATCH_LOCK_READY="+readyPath,
		"BROWSEROS_DEV_WATCH_LOCK_IDENTITY="+string(identityJSON),
	)
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		t.Fatalf("starting helper: %v", err)
	}
	defer cmd.Process.Kill()

	waitForFile(t, readyPath, 3*time.Second)

	lock, stopped, err := AcquireWatchRunLockInDir(baseDir, identity, 3*time.Second)
	if err != nil {
		t.Fatalf("AcquireWatchRunLockInDir returned error: %v", err)
	}
	defer lock.Close()
	if !stopped {
		t.Fatal("expected takeover to stop existing owner")
	}

	done := make(chan error, 1)
	go func() {
		done <- cmd.Wait()
	}()
	select {
	case <-done:
	case <-time.After(3 * time.Second):
		t.Fatal("expected helper process to exit after takeover")
	}
}

func runWatchLockHelper() {
	baseDir := os.Getenv("BROWSEROS_DEV_WATCH_LOCK_BASE")
	readyPath := os.Getenv("BROWSEROS_DEV_WATCH_LOCK_READY")
	var identity WatchRunIdentity
	if err := json.Unmarshal([]byte(os.Getenv("BROWSEROS_DEV_WATCH_LOCK_IDENTITY")), &identity); err != nil {
		os.Exit(2)
	}

	lock, _, err := AcquireWatchRunLockInDir(baseDir, identity, time.Second)
	if err != nil {
		os.Exit(3)
	}
	defer lock.Close()
	if err := os.WriteFile(readyPath, []byte("ready\n"), 0o644); err != nil {
		os.Exit(4)
	}
	time.Sleep(30 * time.Second)
}

func waitForFile(t *testing.T, path string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return
		}
		if time.Now().After(deadline) {
			t.Fatalf("timed out waiting for %s", path)
		}
		time.Sleep(50 * time.Millisecond)
	}
}
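`TestWatchRunPathsStableAndDistinct` above exercises the hash-keyed lock paths; the derivation itself is small enough to verify standalone. This sketch reproduces the same scheme as `watchRunPaths`: the identity fields joined with NUL separators, hashed with SHA-256, and hex-encoded into a filename key:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// runKey hashes mode, profile, and the three ports (NUL-separated)
// and returns the hex-encoded SHA-256 digest used as a filename key.
func runKey(mode, profile string, cdp, server, ext int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s\x00%s\x00%d\x00%d\x00%d",
		mode, profile, cdp, server, ext)))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := runKey("watch", "/tmp/browseros-dev", 9005, 9105, 9305)
	b := runKey("watch", "/tmp/browseros-dev", 9005, 9106, 9305)
	// Stable 64-char key; any port change yields a different key.
	fmt.Println(len(a), a != b)
}
```

The NUL separators matter: plain concatenation would let different field boundaries collide (e.g. mode "a" with profile "b/tmp" vs. mode "ab" with profile "/tmp"), whereas `\x00` cannot appear inside either string.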
@@ -2,14 +2,4 @@
set -euo pipefail

DIR="$(cd "$(dirname "$0")" && pwd)"
ROOT="$(cd "$DIR/../.." && pwd)"

cd "$ROOT"

echo "[setup] Installing dependencies..."
bun install --frozen-lockfile

echo "[setup] Generating agent code..."
bun run codegen:agent

echo "[setup] Ready"
exec "$DIR/run.sh" setup "$@"