Compare commits

..

8 Commits

Author SHA1 Message Date
Nikhil Sonti
bcf0e6f990 test(openclaw): align serialization mock with image check 2026-04-30 11:23:49 -07:00
Nikhil Sonti
d21befc509 fix(openclaw): address review feedback 2026-04-30 11:22:39 -07:00
Nikhil Sonti
b355b88433 fix(server): satisfy process lock error override 2026-04-30 11:21:55 -07:00
Nikhil Sonti
673ac0ad68 test(openclaw): cover lifecycle race recovery 2026-04-30 11:21:55 -07:00
Nikhil Sonti
114c3c3796 fix(openclaw): reconcile fixed gateway container startup 2026-04-30 11:21:55 -07:00
Nikhil Sonti
a32a073d43 feat(openclaw): serialize lifecycle across processes 2026-04-30 11:21:20 -07:00
Nikhil Sonti
054056017f feat(container): add container name reconciliation helpers 2026-04-30 11:21:19 -07:00
Nikhil Sonti
fc014c37b8 feat(server): add shared process lock helper 2026-04-30 11:19:24 -07:00
20 changed files with 128 additions and 1985 deletions

View File

@@ -1,152 +0,0 @@
---
name: ask-internal
description: Answer questions about BrowserOS internal stuff (setup, features, architecture, design decisions) by reading the private internal-docs submodule and the codebase. Use for "how do I X", "where is Y", "what is the deal with Z", or any question that mixes ops/setup knowledge with code knowledge. Can execute steps with per-command confirmation.
allowed-tools: Bash, Read, Grep, Glob, Edit, Write
---
# Ask Internal
Answer team-internal questions by reading `.internal-docs/` and the codebase, synthesizing a direct answer with file:line citations, and optionally running surfaced commands with confirmation.
**Announce at start:** "I'm using the ask-internal skill to answer this from internal-docs and the codebase."
## When to use
- "How do I reset my dogfood profile?"
- "What's the deal with the OpenClaw VM startup?"
- "Where do we configure release signing?"
- Any question whose answer lives in setup runbooks, feature notes, architecture docs, or the code that produced them.
## Hard rules: never do these
- NEVER execute a state-mutating command without per-command `y` confirmation from the user.
- NEVER edit BrowserOS code in response to an ask-internal question. The skill answers; it does not modify code. Use `/document-internal` for writes.
- NEVER guess. If grep finds nothing useful in docs or code, say so plainly.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER cite a file or line number you have not actually read.
## Voice rules
Apply the same voice rules as `document-internal` to the synthesized answer:
- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences. Active voice. No em dashes.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- No filler intros.
## Workflow
### Step 0: Pre-flight
```bash
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] && [ -n "$(ls -A .internal-docs 2>/dev/null)" ] || {
  echo ".internal-docs/ missing or empty. Submodule not configured?"
  exit 0
}
```
### Step 1: Parse the question
Pull the keywords from the user's question. Drop stop words. Identify intent:
- **Setup-question** ("how do I", "how to", "where do I configure"): bias the search toward `setup/`.
- **Feature-question** ("what is X", "why does X work this way"): bias toward `features/` and `architecture/`.
- **Free-form** ("anything about Y"): search all categories.
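The intent routing above can be sketched as a small classifier. The `classify_intent` helper and its exact patterns are illustrative, not part of the skill:

```shell
#!/usr/bin/env bash
# Hypothetical helper: map a question to a search bias
# (setup/, features+architecture, or everything).
classify_intent() {
  local q
  q=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$q" in
    "how do i"*|"how to"*|*"where do i configure"*) echo "setup" ;;
    "what is"*|"why does"*)                         echo "feature" ;;
    *)                                              echo "all" ;;
  esac
}

classify_intent "How do I reset my dogfood profile?"   # setup
classify_intent "What is the OpenClaw VM startup?"     # feature
classify_intent "anything about release signing"       # all
```

In practice the skill judges intent from context; a pattern match like this only covers the obvious phrasings.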
### Step 2: Multi-source search
Run grep in parallel across two sources.
**Internal docs:**
```bash
grep -rni --include='*.md' '<keyword>' .internal-docs/
```
Search each keyword separately. Collect top hits by relevance (more keyword matches = higher).
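The per-keyword search and relevance count can be sketched like this. The temp files stand in for `.internal-docs/`; the `rank_docs` helper is an illustration, not skill code:

```shell
#!/usr/bin/env bash
# Sketch: rank doc files by how many distinct keywords match them.
set -e
DOCS=$(mktemp -d)
printf 'reset your dogfood profile with...\n' > "$DOCS/dogfood-profile.md"
printf 'release signing lives in...\n'        > "$DOCS/release-signing.md"

rank_docs() {  # usage: rank_docs DIR keyword...
  local dir=$1; shift
  for kw in "$@"; do
    grep -rli --include='*.md' "$kw" "$dir" 2>/dev/null
  done | sort | uniq -c | sort -rn   # count = number of keywords matched
}

rank_docs "$DOCS" dogfood profile
rm -rf "$DOCS"
```

Here `dogfood-profile.md` surfaces with count 2 (both keywords hit) while `release-signing.md` does not appear at all.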
**Codebase (skip vendored Chromium and `node_modules`):**
```bash
grep -rni --include='*.ts' --include='*.tsx' --include='*.js' --include='*.json' --include='*.sh' \
--exclude-dir=node_modules --exclude-dir=chromium --exclude-dir=.grove \
'<keyword>' packages/ scripts/ .config/ .github/
```
Read the top 3-5 doc hits and top 3-5 code hits. Do not skim; read the relevant section fully so citations are accurate.
### Step 3: Synthesize answer
Structure the response:
1. **Direct answer.** First sentence answers the question. No preamble.
2. **Steps if applicable.** Numbered list with exact commands.
3. **Citations.** Every factual claim references `path/to/file.md:42` or `path/to/code.ts:117`. Run the voice self-check before printing.
If multiple docs cover the topic at different layers (e.g., a setup runbook and a feature note both mention dogfood profiles), reconcile them in the answer rather than dumping both.
### Step 4: Offer execution (only if commands surfaced)
If Step 3 produced executable commands the user could run, ask:
> Run these for you? (y / n / dry-run)
- **y:** Execute one at a time. For any command that mutates state (writes a file, modifies config, kills a process, deletes anything), ask "run this? <command>" before each. Read-only commands (`ls`, `cat`, `git status`) run without per-command confirmation but still print before running.
- **n:** Skip. Done.
- **dry-run:** Print the full sequence as a `bash` block. Do not execute.
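One way to sketch the per-mutation confirmation loop. The `is_mutation` classifier here is deliberately naive (anything not on a read-only allowlist counts as a mutation); the skill's real judgment is contextual:

```shell
#!/usr/bin/env bash
# Naive sketch: unknown commands are assumed to mutate state
# and require a per-command "y" before running.
is_mutation() {
  case "$1" in
    ls|ls\ *|cat\ *|git\ status|grep\ *) return 1 ;;  # read-only
    *)                                   return 0 ;;  # assume mutation
  esac
}

run_sequence() {
  for cmd in "$@"; do
    if is_mutation "$cmd"; then
      printf 'run this? %s [y/N] ' "$cmd"
      read -r ans
      [ "$ans" = "y" ] || { echo "skipped: $cmd"; continue; }
    else
      echo "+ $cmd"   # read-only: print, but no confirmation
    fi
    eval "$cmd"
  done
}
```

The allowlist errs on the side of asking: a command it does not recognize gets a confirmation prompt even if it happens to be read-only.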
### Step 5: Doc-not-found path
If Step 2 returned nothing useful (no doc hits AND no clear code answer):
1. Tell the user: "No doc covers this. Tangentially relevant files: <list>."
2. Ask: "Draft a new doc and open a PR to internal-docs?"
3. On yes: invoke the full `/document-internal` flow (four sharp questions, draft, voice check, PR), forced to `setup/` doc type, with the code-grep findings handed in as initial context.
### Step 6: Completion status
Report one of:
- **DONE**: answer delivered, citations verified.
- **DONE_WITH_CONCERNS**: answered, but flag uncertainty (e.g., docs and code disagreed; user should reconcile).
- **BLOCKED**: submodule missing or other pre-flight failure.
- **NEEDS_CONTEXT**: question too vague to search effectively. Ask one clarifying question.
## Citation discipline
Every "X is at Y" claim in the answer must point to a file:line that the skill actually read. Do not approximate. If you didn't read it, don't cite it.
If a doc says one thing and the code says another, surface the conflict explicitly:
> The setup runbook (`setup/dogfood-profile.md:23`) says to delete `~/.cache/browseros/dogfood`, but the actual code path in `packages/cli/src/cleanup.ts:47` removes `~/.local/share/browseros/dogfood`. The doc looks stale. Recommend updating it.
## Common Mistakes
**Skimming and then citing**
- **Problem:** Citation points to a line that doesn't actually contain the claim.
- **Fix:** Read the section fully before citing. If you didn't read line 117, don't cite line 117.
**Executing without per-command confirmation for mutations**
- **Problem:** User says "y" to "run all", skill blasts through `rm -rf`-style commands.
- **Fix:** "y" means "run this sequence with per-mutation confirmations". Per-command y is required for writes.
**Searching only docs, not code**
- **Problem:** Doc says X but code does Y; answer is wrong.
- **Fix:** Always grep both sources in Step 2.
## Red Flags
**Never:**
- Cite a file:line you haven't read.
- Run mutations without per-command confirmation.
- Modify BrowserOS code from this skill (use `/document-internal` for writes).
**Always:**
- Pre-flight check before any search.
- Reconcile doc vs code conflicts in the answer, don't hide them.
- Plain "no doc covers this" when grep is empty. Never invent.

View File

@@ -1,208 +0,0 @@
---
name: document-internal
description: Draft a 1-page internal doc (feature, architecture, or design) for the private browseros-ai/internal-docs repo. Use when wrapping up a feature on a branch, after the PR is open or about to be opened. Skill drafts from the diff, asks four sharp questions, enforces voice rules, and opens a PR to internal-docs.
allowed-tools: Bash, Read, Write, Edit, Grep, Glob
---
# Document Internal
Draft a 1-page internal doc (feature note, architecture note, or design spec) from the current branch's diff and open a PR to `browseros-ai/internal-docs`.
**Announce at start:** "I'm using the document-internal skill to draft a doc for internal-docs."
## When to use
After finishing implementation on a feature branch, when the work is doc-worthy (a major feature, a new subsystem, a setup runbook for something internal, or a design decision that future engineers need to know).
## Hard rules: never do these
- NEVER `git add -A` or `git add .` inside the tmp clone of internal-docs. Always specific paths.
- NEVER write outside the tmp clone (no spillover into the OSS repo's working tree).
- NEVER fabricate filler content for empty template sections. Empty stays empty.
- NEVER touch the OSS repo's `.gitmodules` or submodule pointer — the sync workflow handles that.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER push to `internal-docs/main` directly. Always a feature branch + PR.
## Voice rules: enforced by Step 4
The skill MUST follow these and refuse to draft otherwise. After generation, scan for violations and regenerate offending sentences (max 3 attempts).
- Lead with the point. First sentence answers "what is this?"
- Concrete nouns. Name files, functions, commands. Not "the system" or "the component".
- Short sentences. Average <20 words. No deeply nested clauses.
- Active voice. "X does Y" not "Y is done by X".
- No em dashes. Use commas, periods, or rephrase.
- Banned words: delve, crucial, robust, comprehensive, nuanced, multifaceted, furthermore, moreover, additionally, pivotal, landscape, tapestry, underscore, foster, showcase, intricate, vibrant, fundamental, significant, leverage, utilize.
- "110 IQ" target. Write for a smart engineer who has not seen this code yet.
- No filler intros ("This document describes..."). Start with the substance.
- Empty sections stay empty. Do not write "N/A" or fabricate content.
## Workflow
### Step 0: Pre-flight
Bail with a clear message on any failure.
```bash
# Submodule must be initialized
if git submodule status .internal-docs 2>/dev/null | grep -q '^-'; then
  echo "internal-docs submodule not initialized. Run: git submodule update --init .internal-docs"
  exit 0
fi
[ -d .internal-docs ] || { echo ".internal-docs/ missing. Submodule not configured?"; exit 0; }
# Must be on a feature branch
BRANCH=$(git branch --show-current)
if [ "$BRANCH" = "main" ] || [ "$BRANCH" = "dev" ]; then
  echo "On $BRANCH. Run from a feature branch."
  exit 0
fi
# Determine base branch (default: dev for this repo, fall back to main).
# Suppress rev-parse's SHA output on stdout so it doesn't get captured into BASE.
BASE=$(git rev-parse --verify origin/dev >/dev/null 2>&1 && echo dev || echo main)
# Gather context
git log "$BASE..HEAD" --oneline
git diff "$BASE...HEAD" --stat
gh pr view --json body -q .body 2>/dev/null # may be empty if no PR yet
```
### Step 1: Identify the doc
Ask the user for three things in one prompt:
1. **Doc type:** `feature` (default for `feat/*` branches), `architecture`, or `design`
2. **Slug:** kebab-case, short (e.g., `cowork-mcp`, `auto-skill-suggest`)
3. **Owner:** GitHub handle (default = `git config user.name` or current `gh api user --jq .login`)
### Step 2: Decision brief — four sharp questions
Ask one question at a time. Each answer constrains the next. These force compression before drafting.
1. "In one sentence: what can someone now DO that they could not before?"
2. "What is the one design decision a future engineer needs to know?"
3. "Which 3-5 files are the heart of this change?" (suggest candidates from the diff)
4. "Any sharp edges or gotchas? (or 'none')"
Skip any question that is N/A for the doc type. Architecture notes don't need question 1; design specs don't need question 4.
### Step 3: Draft from the template
Read the matching template from `.internal-docs/_templates/`:
- `feature` → `feature-note.md`
- `architecture` → `architecture-note.md`
- `design` → `design-spec.md`
If `.internal-docs/_templates/` does not exist (first run, before seeding), fall back to the seeds bundled with this skill at `.claude/skills/document-internal/seeds/_templates/`.
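The lookup with seed fallback can be sketched as a helper. The function name and the directory arguments are illustrative; the template filenames match the mapping above:

```shell
#!/usr/bin/env bash
# Sketch: resolve a doc type to a template path, preferring the
# submodule's _templates/ and falling back to the bundled seeds.
resolve_template() {  # usage: resolve_template TYPE PRIMARY_DIR SEED_DIR
  local name
  case "$1" in
    feature)      name=feature-note.md ;;
    architecture) name=architecture-note.md ;;
    design)       name=design-spec.md ;;
    *)            return 1 ;;
  esac
  if [ -f "$2/$name" ]; then
    echo "$2/$name"
  elif [ -f "$3/$name" ]; then
    echo "$3/$name"
  else
    return 1
  fi
}
```

Failing when neither location has the template (rather than inventing one) matches the BLOCKED path in Step 7.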
Generate the 1-pager from the template, the four answers, and the diff context.
### Step 4: Voice self-check
Scan the draft for violations:
- Em dash present (`—`).
- Any banned word from the list.
- Average sentence length > 20 words.
- Body line count > 60 (feature notes only; architecture/design have no cap).
If any violation found, regenerate the offending sentences in place. Max 3 attempts. If still failing after 3 attempts, stop and report which rules are violated.
If the body is over 60 lines for a feature note, ask: "This is N lines, target is 60. Trim, or promote to `architecture/` (no length cap)?"
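The mechanical half of the scan (em dash, banned words, line count) greps directly; average sentence length needs real parsing and is omitted here. A sketch, with the banned-word list trimmed for brevity:

```shell
#!/usr/bin/env bash
# Sketch: flag mechanical voice violations in a draft file.
# The banned-word alternation is trimmed; extend it to the full list.
check_voice() {  # usage: check_voice FILE [MAX_BODY_LINES]
  local file=$1 max=${2:-60} fail=0
  grep -n -- '—' "$file" && { echo "violation: em dash"; fail=1; }
  grep -niwE 'delve|crucial|robust|leverage|utilize' "$file" \
    && { echo "violation: banned word"; fail=1; }
  [ "$(wc -l < "$file")" -gt "$max" ] \
    && { echo "violation: body over $max lines"; fail=1; }
  return $fail
}
```

A nonzero return means at least one violation, with the offending lines already printed by grep.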
### Step 5: Show + iterate
Print the full draft. Ask:
> Edit needed? Paste any changes, or say "looks good".
Apply user edits with the Edit tool. Re-run Step 4. Loop until the user approves.
### Step 6: Open PR to internal-docs
Use a tmp clone, never the user's `.internal-docs` checkout. This keeps the user's submodule clean.
```bash
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT # cleans up even if any step below fails
git clone -b main git@github.com:browseros-ai/internal-docs.git "$TMP"
cd "$TMP"
git checkout -b "docs/<slug>"
# Write the doc
mkdir -p "<type>" # features, architecture, designs, or setup
cat > "<type>/$(date -u +%Y-%m)-<slug>.md" <<'DOC'
<draft content>
DOC
# Update the root README index — insert one line under the matching section
# Use Edit tool to add: "- [<title>](<type>/YYYY-MM-<slug>.md): <one-line description>"
git add "<type>/$(date -u +%Y-%m)-<slug>.md" README.md
git commit -m "docs(<type>): <slug>"
git push -u origin "docs/<slug>"
PR_URL=$(gh pr create -R browseros-ai/internal-docs --base main \
--head "docs/<slug>" \
--title "docs(<type>): <slug>" \
--body "$(cat <<'BODY'
## Summary
<one-line of what this doc covers>
## Source
- BrowserOS branch: <branch>
- Related PR: <#NNN if any>
BODY
)")
cd -
echo "PR opened: $PR_URL"
# trap above cleans up $TMP on EXIT
```
If the slug contains characters that won't shell-escape cleanly, sanitize before substitution.
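A defensive sanitizer, assuming the kebab-case convention from Step 1 (the helper name is illustrative):

```shell
#!/usr/bin/env bash
# Lowercase, collapse runs of non [a-z0-9] into '-', trim edge dashes.
sanitize_slug() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

sanitize_slug "Cowork MCP!"         # cowork-mcp
sanitize_slug "auto_skill suggest"  # auto-skill-suggest
```

The result is safe to substitute into the branch name, filename, and PR title without quoting surprises.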
### Step 7: Completion status
Report one of:
- **DONE**: file written, branch pushed, PR opened. Print PR URL.
- **DONE_WITH_CONCERNS**: same as DONE but list concerns (e.g., voice check needed multiple regens, user skipped a question).
- **BLOCKED**: submodule missing, auth fail, or template missing. State exactly what's needed.
## Doc type defaults
| Branch pattern | Default doc type | Default location |
|----------------|------------------|------------------|
| `feat/*` | feature | `features/` |
| `arch/*` or refactor branches with >10 files in `packages/` | architecture | `architecture/` |
| `rfc/*` or `design/*` | design | `designs/` |
| Otherwise | ask | ask |
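The table's defaults reduce to a branch-name match. A sketch; the ">10 files in `packages/`" half of the architecture rule needs the diff, so it is approximated here with a plain `arch/*` match:

```shell
#!/usr/bin/env bash
# Sketch: default doc type from the current branch name.
default_doc_type() {  # usage: default_doc_type BRANCH
  case "$1" in
    feat/*)         echo "feature" ;;
    arch/*)         echo "architecture" ;;
    rfc/*|design/*) echo "design" ;;
    *)              echo "ask" ;;
  esac
}
```

"ask" means the skill should prompt the user rather than guess a type.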
## Common Mistakes
**Drafting before asking the four questions**
- **Problem:** Output is generic filler that says nothing concrete.
- **Fix:** Always ask Step 2 first, even if the diff "looks obvious".
**Touching `.internal-docs/` directly**
- **Problem:** User's submodule HEAD moves, parent repo shows dirty state.
- **Fix:** Always use the tmp clone in Step 6.
**Skipping voice check on user edits**
- **Problem:** User pastes prose with em dashes or filler; ships as-is.
- **Fix:** Re-run Step 4 after every user edit.
## Red Flags
**Never:**
- Push to `internal-docs/main`. Always branch + PR.
- Modify the OSS repo's `.gitmodules` or submodule pointer.
- Fabricate content for empty template sections.
**Always:**
- Pre-flight check before doing any work.
- One-pager rule for feature notes (60-line body cap).
- File:line citations when referencing code.

View File

@@ -1,51 +0,0 @@
# BrowserOS Internal Docs
Private team docs for `browseros-ai`. Mounted as a submodule into the public OSS repo at `.internal-docs/`.
If you are reading this from a public clone of BrowserOS without team access: this submodule is for the BrowserOS internal team. Nothing here is required to build or use BrowserOS.
## How to find what you need
- Setup task ("how do I X locally") → look in [`setup/`](setup/)
- Recently shipped feature → look in [`features/`](features/)
- Cross-cutting subsystem → look in [`architecture/`](architecture/)
- A design decision or RFC → look in [`designs/`](designs/)
Or run `/ask-internal "<your question>"` from any BrowserOS checkout. The skill greps these docs and the codebase, then synthesizes an answer with citations.
## How to add a doc
Run `/document-internal` from a feature branch. The skill drafts a 1-pager from your branch's diff, asks four sharp questions, enforces voice rules, and opens a PR back to this repo.
## Index
### Setup
<!-- one line per setup runbook: -->
<!-- - [Dev environment](setup/dev-environment.md): first-time machine setup -->
### Features
<!-- one line per shipped feature, newest first: -->
<!-- - [Cowork MCP](features/2026-04-cowork-mcp.md): bring outside MCPs into the BrowserOS agent -->
### Architecture
<!-- one line per cross-cutting subsystem: -->
<!-- - [Chrome fork overview](architecture/chrome-fork-overview.md): what we patched and why -->
### Designs
<!-- one line per design spec, newest first: -->
<!-- - [Internal docs submodule](designs/2026-04-30-internal-docs-submodule.md): this system -->
## Templates
When `/document-internal` runs, it reads from [`_templates/`](_templates/). Edit the templates here when the team's preferred shape changes.
## Voice
Docs in this repo follow these rules. The `/document-internal` skill enforces them; humans editing by hand should match.
- Lead with the point.
- Concrete nouns. Name files, functions, commands.
- Short sentences, active voice, no em dashes.
- No filler words: delve, crucial, robust, comprehensive, nuanced, multifaceted, leverage, utilize, etc.
- Empty sections stay empty. Do not write "N/A" or fake content.
- Feature notes target one screen, body 60 lines max.

View File

@@ -1,31 +0,0 @@
---
title: <subsystem name>
owner: <github handle>
status: current | deprecated
date: YYYY-MM-DD
related-features: [feature-slug-1, feature-slug-2]
---
# <subsystem name>
## What this subsystem does
<1-2 paragraphs. The top-level responsibility. Boundaries.>
## Architecture
<Diagram (ASCII or mermaid) plus prose. Components and how they talk.>
## Constraints
<Hard rules the design enforces. "X must never call Y" type statements.>
## Decisions made
<Numbered list of non-obvious decisions and the reason for each.>
## Key files
- `path/to/file.ts`: role
- `path/to/dir/`: what lives here
## How to evolve this
<Where to add things. Which tests to expect to update. What NOT to touch.>
## Open questions
<What is still being figured out. Empty if none.>

View File

@@ -1,34 +0,0 @@
---
title: <design name>
owner: <github handle>
status: proposed | accepted | rejected | superseded
date: YYYY-MM-DD
supersedes: <design-slug or none>
---
# <design name>
## Goal
<2-4 sentences. What this design is trying to accomplish.>
## Context
<1-2 paragraphs. The current state, what is failing, why this needs to change.>
## Selected Approach
<The chosen design at a high level. Architecture, components, data flow.>
## Alternatives Considered
### 1. <name>
<2-3 sentences on what this would look like, then pro/con and why rejected (or deferred).>
### 2. <name>
<Same shape.>
## Out of Scope
<What this design does NOT cover. Defer references.>
## Rollout
<Numbered steps from "nothing exists" to "fully shipped".>
## Open Questions
<Resolved during design? Empty. Unresolved? List with owner.>

View File

@@ -1,29 +0,0 @@
---
title: <feature name>
owner: <github handle>
status: shipped | wip | deprecated
date: YYYY-MM-DD
prs: ["#NNN"]
tags: [agent, browser, mcp]
---
# <feature name>
## What it does
<2-3 sentences. What can someone now do that they could not before. Lead with user-facing impact, not implementation.>
## Why we built it
<1-2 sentences. Motivation. What pain it removed or what unlocked.>
## How it works
<3-6 sentences. The flow at a high level. Name the key files.>
## Key files
- `path/to/file.ts`: what it does
- `path/to/other.ts`: what it does
## How to run / test it locally
<Bullet list of commands. Empty section if N/A; do not fake.>
## Gotchas
<Known sharp edges. "If you see X, that's why." Empty if N/A.>

View File

@@ -1,53 +0,0 @@
name: Sync internal-docs submodule
on:
  schedule:
    - cron: '0 */4 * * *'
  workflow_dispatch:
jobs:
  sync:
    name: Bump internal-docs submodule pointer on dev
    runs-on: ubuntu-latest
    steps:
      - name: Rewrite SSH submodule URL to HTTPS-with-token
        env:
          TOKEN: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
        run: |
          git config --global "url.https://x-access-token:${TOKEN}@github.com/.insteadOf" "git@github.com:"
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
          submodules: true
          ref: dev
          fetch-depth: 50
      - name: Bump submodule pointer if internal-docs has new commits
        env:
          GH_TOKEN: ${{ secrets.INTERNAL_DOCS_SYNC_TOKEN }}
        run: |
          set -e
          # Skip if submodule not yet configured (handoff window before someone adds it).
          # The submodule is named ".internal-docs", so the key has two literal dots.
          if ! git config --file .gitmodules --get-regexp '^submodule\.\.internal-docs\.path$' >/dev/null 2>&1; then
            echo "internal-docs submodule not yet configured in .gitmodules. Skipping."
            exit 0
          fi
          git submodule update --remote --merge .internal-docs
          if git diff --quiet .internal-docs; then
            echo "No internal-docs changes to sync."
            exit 0
          fi
          git config user.name "browseros-bot"
          git config user.email "bot@browseros.ai"
          git add .internal-docs
          git commit -m "chore: sync internal-docs submodule"
          # Rebase onto latest dev to absorb any commits that landed during the run,
          # then push. set -e takes care of failing the run on rebase conflict.
          git pull --rebase origin dev
          git push origin dev

View File

@@ -16,6 +16,7 @@
"globals": "^16.4.0",
"lefthook": "^2.0.12",
"picocolors": "^1.1.1",
"rimraf": "^6.0.1",
"typedoc": "^0.28.15",
"typescript": "^5.9.2",
},
@@ -2674,7 +2675,7 @@
"giscus": ["giscus@1.6.0", "", { "dependencies": { "lit": "^3.2.1" } }, "sha512-Zrsi8r4t1LVW950keaWcsURuZUQwUaMKjvJgTCY125vkW6OiEBkatE7ScJDbpqKHdZwb///7FVC21SE3iFK3PQ=="],
"glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
"glob": ["glob@13.0.0", "", { "dependencies": { "minimatch": "^10.1.1", "minipass": "^7.1.2", "path-scurry": "^2.0.0" } }, "sha512-tvZgpqk6fz4BaNZ66ZsRaZnbHvP/jG3uKJvAZOwEVUL4RTA5nJeeLYfyN9/VA8NX/V3IBG+hkeuGpKjvELkVhA=="],
"glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
@@ -3108,7 +3109,7 @@
"lowercase-keys": ["lowercase-keys@3.0.0", "", {}, "sha512-ozCC6gdQ+glXOQsveKD0YsDy8DSQFjDTz4zyzEHNV5+JP5D62LmfDZ6o1cycFx9ouG940M5dE8C8CTewdj2YWQ=="],
"lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"lru-cache": ["lru-cache@11.2.4", "", {}, "sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg=="],
"lucide-react": ["lucide-react@0.562.0", "", { "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-82hOAu7y0dbVuFfmO4bYF1XEwYk/mEbM5E+b1jgci/udUBEE/R7LF5Ip0CCEmXe8AybRM8L+04eP+LGZeDvkiw=="],
@@ -3484,7 +3485,7 @@
"path-root-regex": ["path-root-regex@0.1.2", "", {}, "sha512-4GlJ6rZDhQZFE0DPVKh0e9jmZ5egZfxTkp7bcRDuPlJXbAwhxcl2dINPUAsjLdejqaLsCeg8axcLjIbvBjN4pQ=="],
"path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
"path-scurry": ["path-scurry@2.0.1", "", { "dependencies": { "lru-cache": "^11.0.0", "minipass": "^7.1.2" } }, "sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA=="],
"path-to-regexp": ["path-to-regexp@8.3.0", "", {}, "sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA=="],
@@ -3844,7 +3845,7 @@
"rfdc": ["rfdc@1.4.1", "", {}, "sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA=="],
"rimraf": ["rimraf@5.0.10", "", { "dependencies": { "glob": "^10.3.7" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-l0OE8wL34P4nJH/H2ffoaniAokM2qSmrtXHmlpvYr5AVVX8msAyW0l8NVJFDxlSK4u3Uh/f41cQheDVdnYijwQ=="],
"rimraf": ["rimraf@6.1.2", "", { "dependencies": { "glob": "^13.0.0", "package-json-from-dist": "^1.0.1" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-cFCkPslJv7BAXJsYlK1dZsbP8/ZNLkCAQ0bi1hf5EKX2QHegmDFEFA6QhuYJlk7UDdc+02JjO80YSOrWPpw06g=="],
"roarr": ["roarr@2.15.4", "", { "dependencies": { "boolean": "^3.0.1", "detect-node": "^2.0.4", "globalthis": "^1.0.1", "json-stringify-safe": "^5.0.1", "semver-compare": "^1.0.0", "sprintf-js": "^1.1.2" } }, "sha512-CHhPh+UNHD2GTXNYhPWLnU8ONHdI+5DI+4EYIAOaiD63rHeYlZvyh8P+in5999TTSFgUYuKUAjzRI4mdh/p+2A=="],
@@ -4424,6 +4425,8 @@
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http": ["@opentelemetry/exporter-logs-otlp-http@0.203.0", "", { "dependencies": { "@opentelemetry/api-logs": "0.203.0", "@opentelemetry/core": "2.0.1", "@opentelemetry/otlp-exporter-base": "0.203.0", "@opentelemetry/otlp-transformer": "0.203.0", "@opentelemetry/sdk-logs": "0.203.0" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-s0hys1ljqlMTbXx2XiplmMJg9wG570Z5lH7wMvrZX6lcODI56sG4HL03jklF63tBeyNwK2RV1/ntXGo3HgG4Qw=="],
"@google/gemini-cli-core/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
"@google/gemini-cli-core/https-proxy-agent": ["https-proxy-agent@7.0.6", "", { "dependencies": { "agent-base": "^7.1.2", "debug": "4" } }, "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="],
"@google/gemini-cli-core/marked": ["marked@15.0.12", "", { "bin": { "marked": "bin/marked.js" } }, "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA=="],
@@ -4800,6 +4803,8 @@
"@sentry/bundler-plugin-core/dotenv": ["dotenv@16.6.1", "", {}, "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow=="],
"@sentry/bundler-plugin-core/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
"@sentry/bundler-plugin-core/magic-string": ["magic-string@0.30.8", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.4.15" } }, "sha512-ISQTe55T2ao7XtlAStud6qwYPZjE4GK1S/BeVPus4jrq6JuOnQ00YKQC581RWhR122W7msZV263KzVeLoqidyQ=="],
"@sentry/node/@opentelemetry/core": ["@opentelemetry/core@2.4.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw=="],
@@ -4924,6 +4929,8 @@
"giget/nypm": ["nypm@0.6.4", "", { "dependencies": { "citty": "^0.2.0", "pathe": "^2.0.3", "tinyexec": "^1.0.2" }, "bin": { "nypm": "dist/cli.mjs" } }, "sha512-1TvCKjZyyklN+JJj2TS3P4uSQEInrM/HkkuSXsEzm1ApPgBffOn8gFguNnZf07r/1X6vlryfIqMUkJKQMzlZiw=="],
"glob/minimatch": ["minimatch@10.2.4", "", { "dependencies": { "brace-expansion": "^5.0.2" } }, "sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg=="],
"global-agent/serialize-error": ["serialize-error@7.0.1", "", { "dependencies": { "type-fest": "^0.13.1" } }, "sha512-8I8TjW5KMOKsZQTvoxjuSIa7foAwPWGOts+6o7sgjz41/qMD9VQHEDxi6PBvK2l0MXUmqZyNpUK+T2tQaaElvw=="],
"global-directory/ini": ["ini@4.1.1", "", {}, "sha512-QQnnxNyfvmHFIsj7gkPcYymR8Jdw/o7mp5ZFihxn6h8Ci6fh3Dx4E1gPjpQEpIuPo9XVNY/ZUwh4BPMjGyL01g=="],
@@ -4944,6 +4951,8 @@
"hoist-non-react-statics/react-is": ["react-is@16.13.1", "", {}, "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="],
"hosted-git-info/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"html-to-text/htmlparser2": ["htmlparser2@8.0.2", "", { "dependencies": { "domelementtype": "^2.3.0", "domhandler": "^5.0.3", "domutils": "^3.0.1", "entities": "^4.4.0" } }, "sha512-GYdjWKDkbRLkZ5geuHs5NY1puJ+PXwP7+fHPRz06Eirsb9ugf6d8kkXav6ADhcODhFFPMIXyxkxSuMf3D6NCFA=="],
"htmlparser2/entities": ["entities@7.0.1", "", {}, "sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA=="],
@@ -5360,6 +5369,8 @@
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http/@opentelemetry/sdk-logs": ["@opentelemetry/sdk-logs@0.203.0", "", { "dependencies": { "@opentelemetry/api-logs": "0.203.0", "@opentelemetry/core": "2.0.1", "@opentelemetry/resources": "2.0.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.4.0 <1.10.0" } }, "sha512-vM2+rPq0Vi3nYA5akQD2f3QwossDnTDLvKbea6u/A2NZ3XDkPxMfo/PNrDoXhDUD/0pPo2CdH5ce/thn9K0kLw=="],
"@google/gemini-cli-core/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
"@google/gemini-cli-core/https-proxy-agent/agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],
"@google/gemini-cli-core/open/wsl-utils": ["wsl-utils@0.1.0", "", { "dependencies": { "is-wsl": "^3.1.0" } }, "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw=="],
@@ -5536,6 +5547,8 @@
"@prisma/instrumentation/@opentelemetry/instrumentation/require-in-the-middle": ["require-in-the-middle@8.0.1", "", { "dependencies": { "debug": "^4.3.5", "module-details-from-path": "^1.0.3" } }, "sha512-QT7FVMXfWOYFbeRBF6nu+I6tr2Tf3u0q8RIEjNob/heKY/nh7drD/k7eeMFmSQgnTtCzLDcCu/XEnpW2wk4xCQ=="],
"@sentry/bundler-plugin-core/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
"@sentry/node/@opentelemetry/instrumentation/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.210.0", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-CMtLxp+lYDriveZejpBND/2TmadrrhUfChyxzmkFtHaMDdSKfP59MAYyA0ICBvEBdm3iXwLcaj/8Ic/pnGw9Yg=="],
"@sentry/node/@opentelemetry/instrumentation/require-in-the-middle": ["require-in-the-middle@8.0.1", "", { "dependencies": { "debug": "^4.3.5", "module-details-from-path": "^1.0.3" } }, "sha512-QT7FVMXfWOYFbeRBF6nu+I6tr2Tf3u0q8RIEjNob/heKY/nh7drD/k7eeMFmSQgnTtCzLDcCu/XEnpW2wk4xCQ=="],
@@ -5570,6 +5583,8 @@
"giget/nypm/citty": ["citty@0.2.0", "", {}, "sha512-8csy5IBFI2ex2hTVpaHN2j+LNE199AgiI7y4dMintrr8i0lQiFn+0AWMZrWdHKIgMOer65f8IThysYhoReqjWA=="],
"glob/minimatch/brace-expansion": ["brace-expansion@5.0.4", "", { "dependencies": { "balanced-match": "^4.0.2" } }, "sha512-h+DEnpVvxmfVefa4jFbCf5HdH5YMDXRsmKflpf1pILZWRFlTbJpxeU55nJl4Smt5HQaGzg1o6RHFPJaOqnmBDg=="],
"global-agent/serialize-error/type-fest": ["type-fest@0.13.1", "", {}, "sha512-34R7HTnG0XIJcBSn5XhDd7nNFPRcXYRZrBB2O2jdKqYODldSzBAqzsWoZYYvduky73toYS/ESqxPvkDf/F0XMg=="],
"graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-graphql-ws": ["@graphql-tools/executor-graphql-ws@2.0.7", "", { "dependencies": { "@graphql-tools/executor-common": "^0.0.6", "@graphql-tools/utils": "^10.9.1", "@whatwg-node/disposablestack": "^0.0.6", "graphql-ws": "^6.0.6", "isomorphic-ws": "^5.0.0", "tslib": "^2.8.1", "ws": "^8.18.3" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-J27za7sKF6RjhmvSOwOQFeNhNHyP4f4niqPnerJmq73OtLx9Y2PGOhkXOEB0PjhvPJceuttkD2O1yMgEkTGs3Q=="],
@@ -5764,16 +5779,24 @@
"@google/gemini-cli-core/@opentelemetry/exporter-logs-otlp-http/@opentelemetry/sdk-logs/@opentelemetry/resources": ["@opentelemetry/resources@2.0.1", "", { "dependencies": { "@opentelemetry/core": "2.0.1", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-dZOB3R6zvBwDKnHDTB4X1xtMArB/d324VsbiPkX/Yu0Q8T2xceRthoIVFhJdvgVM2QhGVUyX9tzwiNxGtoBJUw=="],
"@google/gemini-cli-core/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"@google/genai/google-auth-library/gaxios/https-proxy-agent": ["https-proxy-agent@7.0.6", "", { "dependencies": { "agent-base": "^7.1.2", "debug": "4" } }, "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="],
"@google/genai/google-auth-library/gaxios/node-fetch": ["node-fetch@3.3.2", "", { "dependencies": { "data-uri-to-buffer": "^4.0.0", "fetch-blob": "^3.1.4", "formdata-polyfill": "^4.0.10" } }, "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="],
"@google/genai/google-auth-library/gaxios/rimraf": ["rimraf@5.0.10", "", { "dependencies": { "glob": "^10.3.7" }, "bin": { "rimraf": "dist/esm/bin.mjs" } }, "sha512-l0OE8wL34P4nJH/H2ffoaniAokM2qSmrtXHmlpvYr5AVVX8msAyW0l8NVJFDxlSK4u3Uh/f41cQheDVdnYijwQ=="],
"@inquirer/core/wrap-ansi/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"@sentry/bundler-plugin-core/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"@types/request/form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
"fx-runner/which/is-absolute/is-relative": ["is-relative@0.1.3", "", {}, "sha512-wBOr+rNM4gkAZqoLRJI4myw5WzzIdQosFAAbnvfXP5z1LyzgAI3ivOKehC5KfqlQJZoihVhirgtCBj378Eg8GA=="],
"glob/minimatch/brace-expansion/balanced-match": ["balanced-match@4.0.4", "", {}, "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA=="],
"graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-graphql-ws/@graphql-tools/executor-common": ["@graphql-tools/executor-common@0.0.6", "", { "dependencies": { "@envelop/core": "^5.3.0", "@graphql-tools/utils": "^10.9.1" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-JAH/R1zf77CSkpYATIJw+eOJwsbWocdDjY+avY7G+P5HCXxwQjAjWVkJI1QJBQYjPQDVxwf1fmTZlIN3VOadow=="],
"graphql-config/@graphql-tools/url-loader/@graphql-tools/executor-http/@graphql-hive/signal": ["@graphql-hive/signal@1.0.0", "", {}, "sha512-RiwLMc89lTjvyLEivZ/qxAC5nBHoS2CtsWFSOsN35sxG9zoo5Z+JsFHM8MlvmO9yt+MJNIyC5MLE1rsbOphlag=="],
@@ -5826,6 +5849,8 @@
"@google/genai/google-auth-library/gaxios/https-proxy-agent/agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],
"@google/genai/google-auth-library/gaxios/rimraf/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
"graphql-config/@graphql-tools/url-loader/@graphql-tools/wrap/@graphql-tools/delegate/@graphql-tools/batch-execute": ["@graphql-tools/batch-execute@9.0.19", "", { "dependencies": { "@graphql-tools/utils": "^10.9.1", "@whatwg-node/promise-helpers": "^1.3.0", "dataloader": "^2.2.3", "tslib": "^2.8.1" }, "peerDependencies": { "graphql": "^14.0.0 || ^15.0.0 || ^16.0.0 || ^17.0.0" } }, "sha512-VGamgY4PLzSx48IHPoblRw0oTaBa7S26RpZXt0Y4NN90ytoE0LutlpB2484RbkfcTjv9wa64QD474+YP1kEgGA=="],
"publish-browser-extension/listr2/cli-truncate/slice-ansi/ansi-styles": ["ansi-styles@6.2.3", "", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],
@@ -5837,5 +5862,9 @@
"@browseros/build-tools/@aws-sdk/client-s3/@aws-sdk/core/@aws-sdk/xml-builder/fast-xml-parser/fast-xml-builder": ["fast-xml-builder@1.1.4", "", { "dependencies": { "path-expression-matcher": "^1.1.3" } }, "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg=="],
"@browseros/eval/@aws-sdk/client-s3/@aws-sdk/core/@aws-sdk/xml-builder/fast-xml-parser/fast-xml-builder": ["fast-xml-builder@1.1.4", "", { "dependencies": { "path-expression-matcher": "^1.1.3" } }, "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg=="],
"@google/genai/google-auth-library/gaxios/rimraf/glob/path-scurry": ["path-scurry@1.11.1", "", { "dependencies": { "lru-cache": "^10.2.0", "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" } }, "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="],
"@google/genai/google-auth-library/gaxios/rimraf/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
}
}

View File

@@ -12,12 +12,10 @@
"dev:watch": "./tools/dev/run.sh watch",
"dev:watch:new": "./tools/dev/run.sh watch --new",
"dev:manual": "./tools/dev/run.sh watch --manual",
"dev:setup": "./tools/dev/run.sh setup",
"dev:cleanup": "./tools/dev/run.sh cleanup",
"dev:reset": "./tools/dev/run.sh reset",
"dev:setup": "./tools/dev/setup.sh",
"install:browseros-dogfood": "make -C tools/dogfood install",
"test:env": "./tools/dev/run.sh test",
"test:cleanup": "./tools/dev/run.sh cleanup --quick --yes",
"test:cleanup": "./tools/dev/run.sh cleanup",
"start:server": "bun run --filter @browseros/server --elide-lines=0 start",
"start:agent": "bun run --filter @browseros/agent dev",
"build": "bun run build:server && bun run build:agent",
@@ -36,7 +34,8 @@
"lint": "bunx biome check",
"lint:fix": "bunx biome check --write --unsafe",
"gen:cdp": "bun scripts/codegen/cdp-protocol.ts",
"generate:models": "bun scripts/generate-models.ts"
"generate:models": "bun scripts/generate-models.ts",
"clean": "rimraf dist"
},
"repository": "browseros-ai/BrowserOS-server",
"author": "BrowserOS",
@@ -57,6 +56,7 @@
"globals": "^16.4.0",
"lefthook": "^2.0.12",
"picocolors": "^1.1.1",
"rimraf": "^6.0.1",
"typedoc": "^0.28.15",
"typescript": "^5.9.2"
},

View File

@@ -1,10 +1,7 @@
package cmd
import (
"bufio"
"fmt"
"io"
"os"
"time"
"browseros-dev/proc"
@@ -15,88 +12,44 @@ import (
var cleanupCmd = &cobra.Command{
Use: "cleanup",
Short: "Kill port processes and remove orphaned temp directories",
Long: "Stops old dev watch processes, clears dev/test ports, and removes orphaned browseros-* temp directories.",
Long: "Kills processes on dev/test ports and removes orphaned browseros-* temp directories.",
RunE: runCleanup,
}
var (
cleanupPorts bool
cleanupTemps bool
cleanupQuick bool
cleanupYes bool
)
type safeCleanupOptions struct {
ports bool
temps bool
}
func init() {
cleanupCmd.Flags().BoolVar(&cleanupPorts, "ports", false, "Only kill port processes")
cleanupCmd.Flags().BoolVar(&cleanupTemps, "temps", false, "Only remove temp directories")
cleanupCmd.Flags().BoolVar(&cleanupQuick, "quick", false, "Run safe cleanup only")
cleanupCmd.Flags().BoolVar(&cleanupYes, "yes", false, "Answer yes to the safe cleanup prompt")
rootCmd.AddCommand(cleanupCmd)
}
// runCleanup performs the non-destructive daily cleanup path for local dev.
func runCleanup(cmd *cobra.Command, args []string) error {
out := cmd.OutOrStdout()
if !cleanupYes && !cleanupQuick {
ok, err := confirmYesNo(out, bufio.NewReader(os.Stdin), resetPrompt{
Title: "Run safe cleanup?",
Body: "Stops old dev watch processes, clears dev ports, and removes temporary /tmp browser profiles. This does not touch ~/.browseros-dev, Lima, containers, images, or saved dev data.",
Action: "Run safe cleanup",
})
if err != nil {
return err
}
if !ok {
fmt.Fprintln(out, dimStyle.Sprint("Skipped."))
return nil
}
}
return runSafeCleanup(out, safeCleanupOptions{
ports: !cleanupTemps || cleanupPorts,
temps: !cleanupPorts || cleanupTemps,
})
}
doPorts := !cleanupTemps || cleanupPorts
doTemps := !cleanupPorts || cleanupTemps
// runSafeCleanup is shared by cleanup and reset before any destructive repair steps.
func runSafeCleanup(out io.Writer, opts safeCleanupOptions) error {
if opts.ports {
if doPorts {
ports := proc.DefaultLocalPorts()
stopped, err := proc.StopAllWatchProcesses(3 * time.Second)
if err != nil {
return err
}
if stopped > 0 {
fmt.Fprintf(out, "%s stopped %d old dev watch process group(s)\n", successStyle.Sprint("Stopped:"), stopped)
}
killedBrowsers, err := proc.KillBrowserProcessesForDevProfiles(3 * time.Second)
if err != nil {
return err
}
if killedBrowsers > 0 {
fmt.Fprintf(out, "%s stopped %d BrowserOS dev/test profile process(es)\n", successStyle.Sprint("Stopped:"), killedBrowsers)
}
fmt.Fprintf(out, "%s ports %d, %d, %d\n", labelStyle.Sprint("Clearing:"), ports.CDP, ports.Server, ports.Extension)
proc.LogMsgf(proc.TagInfo, "Killing processes on ports %d, %d, %d...", ports.CDP, ports.Server, ports.Extension)
if err := proc.KillPortsAndWait(ports, 3*time.Second); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("Ports cleared."))
proc.LogMsg(proc.TagInfo, "Ports cleared")
}
if opts.temps {
if doTemps {
n := proc.CleanupTempDirs("browseros-test-", "browseros-dev-")
if n > 0 {
fmt.Fprintf(out, "%s removed %d temp directories\n", successStyle.Sprint("Removed:"), n)
proc.LogMsgf(proc.TagInfo, "Removed %d temp directories", n)
} else {
fmt.Fprintln(out, dimStyle.Sprint("No orphaned temp directories found."))
proc.LogMsg(proc.TagInfo, "No orphaned temp directories found")
}
}
fmt.Fprintln(out)
fmt.Fprintln(out, successStyle.Sprint("Cleanup complete."))
fmt.Println()
proc.LogMsg(proc.TagInfo, "Cleanup complete")
return nil
}

View File

@@ -1,134 +0,0 @@
package cmd
import (
"bufio"
"bytes"
"os"
"strings"
"testing"
)
func TestConfirmYesNoDefaultsNoAndExplainsAction(t *testing.T) {
var out bytes.Buffer
prompt := resetPrompt{
Title: "Stop VM?",
Body: "This shuts down browseros-vm. Data stays on disk.",
Action: "Stop browseros-vm",
}
ok, err := confirmYesNo(&out, bufio.NewReader(strings.NewReader("\n")), prompt)
if err != nil {
t.Fatal(err)
}
if ok {
t.Fatal("expected empty answer to default to no")
}
text := out.String()
for _, want := range []string{
"Stop VM?",
"This shuts down browseros-vm. Data stays on disk.",
"Stop browseros-vm",
"[y/N]",
} {
if !strings.Contains(text, want) {
t.Fatalf("missing %q in prompt:\n%s", want, text)
}
}
}
func TestConfirmTypedRequiresExactToken(t *testing.T) {
var out bytes.Buffer
ok, err := confirmTyped(
&out,
bufio.NewReader(strings.NewReader("delete\nDELETE\n")),
"Delete dev profile?",
"This removes ~/.browseros-dev.",
"DELETE",
)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("expected exact token to confirm")
}
text := out.String()
if !strings.Contains(text, "Type DELETE to continue") {
t.Fatalf("missing typed confirmation instruction:\n%s", text)
}
if !strings.Contains(text, "Confirmation did not match") {
t.Fatalf("missing retry warning:\n%s", text)
}
}
func TestResetOverviewTellsUserToUseSmallestReset(t *testing.T) {
var out bytes.Buffer
printResetOverview(&out, devPaths{Root: "/Users/me/.browseros-dev"})
text := out.String()
for _, want := range []string{
"BrowserOS dev reset",
"Pick the smallest reset",
"/Users/me/.browseros-dev",
"Stop VM",
"Delete VM",
"Remove OpenClaw container",
"Remove OpenClaw image",
"Delete dev profile",
} {
if !strings.Contains(text, want) {
t.Fatalf("missing %q in overview:\n%s", want, text)
}
}
}
func TestParseLimaListOutputAcceptsSingleObject(t *testing.T) {
entries, err := parseLimaListOutput([]byte(`{"name":"browseros-vm","status":"Running"}`))
if err != nil {
t.Fatal(err)
}
if len(entries) != 1 || entries[0].Name != "browseros-vm" || entries[0].Status != "Running" {
t.Fatalf("unexpected entries: %#v", entries)
}
}
func TestParseLimaListOutputAcceptsJSONLines(t *testing.T) {
entries, err := parseLimaListOutput([]byte("{\"name\":\"one\",\"status\":\"Stopped\"}\n{\"name\":\"browseros-vm\",\"status\":\"Running\"}\n"))
if err != nil {
t.Fatal(err)
}
if len(entries) != 2 || entries[1].Name != "browseros-vm" || entries[1].Status != "Running" {
t.Fatalf("unexpected entries: %#v", entries)
}
}
func TestValidateDevProfileRootRejectsUnsafePaths(t *testing.T) {
home, err := os.UserHomeDir()
if err != nil {
t.Fatal(err)
}
for _, path := range []string{"/", home, "/etc"} {
if err := validateDevProfileRootForDeletion(path); err == nil {
t.Fatalf("expected %s to be rejected", path)
}
}
}
func TestLimactlShellArgsUseGuestWorkdir(t *testing.T) {
args := limactlShellArgs("sh", "-lc", "true")
want := []string{"shell", "--workdir", "/", "browseros-vm", "--", "sh", "-lc", "true"}
if strings.Join(args, "\x00") != strings.Join(want, "\x00") {
t.Fatalf("expected %#v, got %#v", want, args)
}
}
func TestParsePodmanMachineList(t *testing.T) {
machines, err := parsePodmanMachineList([]byte(`[{"Name":"podman-machine-default","Running":true}]`))
if err != nil {
t.Fatal(err)
}
if len(machines) != 1 || machines[0].Name != "podman-machine-default" || !machines[0].Running {
t.Fatalf("unexpected machines: %#v", machines)
}
}

View File

@@ -1,456 +0,0 @@
package cmd
import (
"bufio"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/spf13/cobra"
)
const (
devDirName = ".browseros-dev"
limaVMName = "browseros-vm"
openClawImage = "ghcr.io/openclaw/openclaw:2026.4.12"
openClawContainerName = "browseros-openclaw-openclaw-gateway-1"
openClawSetupContainer = openClawContainerName + "-setup"
)
var resetCmd = &cobra.Command{
Use: "reset",
Short: "Guide destructive BrowserOS dev profile and VM resets",
Long: "Walks through safe cleanup, VM shutdown/deletion, OpenClaw container/image removal, and full ~/.browseros-dev reset.",
RunE: runReset,
}
type devPaths struct {
Root string
LimaHome string
}
type resetPrompt struct {
Title string
Body string
Action string
}
type limaListEntry struct {
Name string `json:"name"`
Status string `json:"status"`
}
type podmanMachineEntry struct {
Name string `json:"Name"`
Running bool `json:"Running"`
}
func init() {
rootCmd.AddCommand(resetCmd)
}
// runReset walks developers through escalating reset options without hiding the blast radius.
func runReset(cmd *cobra.Command, args []string) error {
out := cmd.OutOrStdout()
reader := bufio.NewReader(os.Stdin)
paths, err := resolveDevPaths()
if err != nil {
return err
}
printResetOverview(out, paths)
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Run safe cleanup first?",
Body: "This stops old dev watch processes, clears dev ports, and removes temporary /tmp browser profiles. It does not touch saved dev data.",
Action: "Run safe cleanup",
}); err != nil {
return err
} else if ok {
if err := runSafeCleanup(out, safeCleanupOptions{ports: true, temps: true}); err != nil {
return err
}
}
limactlPath, err := exec.LookPath("limactl")
if err != nil {
fmt.Fprintf(out, "%s Lima CLI not found; VM and OpenClaw reset steps are unavailable. Install with %s.\n", warnStyle.Sprint("Skipping:"), commandStyle.Sprint("brew install lima"))
if err := maybeResetLegacyPodman(out, reader); err != nil {
return err
}
return maybeDeleteDevProfile(out, reader, paths)
}
vm, err := findVM(limactlPath, paths.LimaHome)
if err != nil {
fmt.Fprintf(out, "%s could not inspect Lima VMs: %v\n", warnStyle.Sprint("Warning:"), err)
if err := maybeResetLegacyPodman(out, reader); err != nil {
return err
}
return maybeDeleteDevProfile(out, reader, paths)
}
if vm == nil {
fmt.Fprintf(out, "%s %s was not found in %s.\n", dimStyle.Sprint("Not found:"), limaVMName, pathStyle.Sprint(paths.LimaHome))
if err := maybeResetLegacyPodman(out, reader); err != nil {
return err
}
return maybeDeleteDevProfile(out, reader, paths)
}
fmt.Fprintf(out, "%s %s %s\n", labelStyle.Sprint("Found VM:"), commandStyle.Sprint(vm.Name), dimStyle.Sprintf("(%s)", vm.Status))
if strings.EqualFold(vm.Status, "Running") {
if err := maybeResetOpenClaw(out, reader, limactlPath, paths.LimaHome); err != nil {
return err
}
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Stop VM?",
Body: "This shuts down browseros-vm. The VM, containers, images, and profile data stay on disk.",
Action: "Stop browseros-vm",
}); err != nil {
return err
} else if ok {
if err := runLimactl(out, limactlPath, paths.LimaHome, "stop", limaVMName); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("VM stopped."))
vm.Status = "Stopped"
}
} else {
fmt.Fprintln(out, dimStyle.Sprint("OpenClaw container/image reset needs the VM running; skipping those steps."))
}
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Delete VM?",
Body: "This deletes the Lima VM and its container store. ~/.browseros-dev remains. OpenClaw will be pulled again next time.",
Action: "Delete browseros-vm",
}); err != nil {
return err
} else if ok {
if err := runLimactl(out, limactlPath, paths.LimaHome, "delete", "--force", limaVMName); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("VM deleted."))
}
if err := maybeResetLegacyPodman(out, reader); err != nil {
return err
}
return maybeDeleteDevProfile(out, reader, paths)
}
func resolveDevPaths() (devPaths, error) {
if override := strings.TrimSpace(os.Getenv("BROWSEROS_DIR")); override != "" {
root, err := filepath.Abs(override)
if err != nil {
return devPaths{}, err
}
return devPaths{Root: root, LimaHome: filepath.Join(root, "lima")}, nil
}
home, err := os.UserHomeDir()
if err != nil {
return devPaths{}, err
}
root := filepath.Join(home, devDirName)
return devPaths{Root: root, LimaHome: filepath.Join(root, "lima")}, nil
}
func printResetOverview(out io.Writer, paths devPaths) {
fmt.Fprintln(out, headerStyle.Sprint("BrowserOS dev reset"))
fmt.Fprintln(out)
fmt.Fprintf(out, "This can reset parts of %s. Pick the smallest reset that matches the problem.\n", pathStyle.Sprint(paths.Root))
fmt.Fprintln(out)
fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Stop VM:"), dimStyle.Sprint("Shuts down browseros-vm. Keeps data."))
fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Delete VM:"), dimStyle.Sprint("Removes Lima/container state. Keeps the dev profile."))
fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Remove OpenClaw container:"), dimStyle.Sprint("Keeps the downloaded OpenClaw image."))
fmt.Fprintf(out, " %s %s\n", labelStyle.Sprint("Remove OpenClaw image:"), dimStyle.Sprint("Next startup pulls it again."))
fmt.Fprintf(out, " %s %s\n", warnStyle.Sprint("Delete dev profile:"), dimStyle.Sprint("Deletes the dev profile root and dev-local BrowserOS data."))
fmt.Fprintln(out)
}
func confirmYesNo(out io.Writer, r *bufio.Reader, prompt resetPrompt) (bool, error) {
fmt.Fprintln(out, labelStyle.Sprint(prompt.Title))
fmt.Fprintln(out, prompt.Body)
if prompt.Action != "" {
fmt.Fprintf(out, "%s %s\n", labelStyle.Sprint("Action:"), commandStyle.Sprint(prompt.Action))
}
fmt.Fprint(out, labelStyle.Sprint("Continue?")+" [y/N]: ")
line, err := r.ReadString('\n')
if err != nil && len(line) == 0 {
return false, err
}
line = strings.TrimSpace(strings.ToLower(line))
fmt.Fprintln(out)
return line == "y" || line == "yes", nil
}
func confirmTyped(out io.Writer, r *bufio.Reader, title string, body string, token string) (bool, error) {
fmt.Fprintln(out, warnStyle.Sprint(title))
fmt.Fprintln(out, body)
for {
fmt.Fprintf(out, "%s %s %s: ", labelStyle.Sprint("Type"), commandStyle.Sprint(token), labelStyle.Sprint("to continue"))
line, err := r.ReadString('\n')
if err != nil && len(line) == 0 {
return false, err
}
if strings.TrimSpace(line) == token {
fmt.Fprintln(out)
return true, nil
}
if strings.TrimSpace(line) == "" {
fmt.Fprintln(out)
return false, nil
}
fmt.Fprintln(out, warnStyle.Sprint("Confirmation did not match. Press Enter to skip or try again."))
}
}
func maybeResetOpenClaw(out io.Writer, reader *bufio.Reader, limactlPath string, limaHome string) error {
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Remove OpenClaw container?",
Body: "This removes the current gateway/setup containers. The downloaded OpenClaw image stays in the VM.",
Action: "nerdctl rm -f " + openClawContainerName + " " + openClawSetupContainer,
}); err != nil {
return err
} else if ok {
script := fmt.Sprintf(
"nerdctl rm -f %s %s >/dev/null 2>&1 || true",
openClawContainerName,
openClawSetupContainer,
)
if err := runInVM(out, limactlPath, limaHome, "sh", "-lc", script); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("OpenClaw containers removed if present."))
}
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Remove OpenClaw image?",
Body: "This deletes ghcr.io/openclaw/openclaw:2026.4.12 from the VM. Next startup pulls it again.",
Action: "nerdctl image rm " + openClawImage,
}); err != nil {
return err
} else if ok {
script := fmt.Sprintf("nerdctl image rm %s >/dev/null 2>&1 || true", openClawImage)
if err := runInVM(out, limactlPath, limaHome, "sh", "-lc", script); err != nil {
return err
}
fmt.Fprintln(out, successStyle.Sprint("OpenClaw image removed if present."))
}
return nil
}
func maybeDeleteDevProfile(out io.Writer, reader *bufio.Reader, paths devPaths) error {
ok, err := confirmTyped(
out,
reader,
"Delete dev profile?",
fmt.Sprintf("This deletes %s. It removes BrowserOS dev data plus VM/OpenClaw state.", pathStyle.Sprint(paths.Root)),
"DELETE",
)
if err != nil || !ok {
return err
}
if err := validateDevProfileRootForDeletion(paths.Root); err != nil {
return err
}
if err := os.RemoveAll(paths.Root); err != nil {
return err
}
fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), pathStyle.Sprint(paths.Root))
return nil
}
func maybeResetLegacyPodman(out io.Writer, reader *bufio.Reader) error {
podmanPath, err := exec.LookPath("podman")
if err != nil {
return nil
}
machines, err := listPodmanMachines(podmanPath)
if err != nil {
fmt.Fprintf(out, "%s could not inspect legacy Podman machines: %v\n", warnStyle.Sprint("Warning:"), err)
return nil
}
if len(machines) == 0 {
return nil
}
fmt.Fprintln(out, headerStyle.Sprint("Legacy Podman VM cleanup"))
fmt.Fprintln(out, "BrowserOS used Podman before the Lima VM runtime. These machines are legacy for this dev flow.")
for _, machine := range machines {
state := "Stopped"
if machine.Running {
state = "Running"
}
fmt.Fprintf(out, " %s %s\n", commandStyle.Sprint(machine.Name), dimStyle.Sprintf("(%s)", state))
}
fmt.Fprintln(out, dimStyle.Sprint("Future reset flows can add more legacy cleanup checks here."))
fmt.Fprintln(out)
for i := range machines {
machine := machines[i]
if machine.Running {
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Stop legacy Podman machine?",
Body: fmt.Sprintf("This stops legacy Podman machine %s. It does not delete the machine.", machine.Name),
Action: "podman machine stop " + machine.Name,
}); err != nil {
return err
} else if ok {
if err := runCommand(out, podmanPath, "machine", "stop", machine.Name); err != nil {
return err
}
fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Stopped:"), commandStyle.Sprint(machine.Name))
machines[i].Running = false
}
}
if ok, err := confirmYesNo(out, reader, resetPrompt{
Title: "Delete legacy Podman machine?",
Body: fmt.Sprintf("This deletes legacy Podman machine %s. Use this when cleaning up the old VM runtime.", machine.Name),
Action: "podman machine rm --force " + machine.Name,
}); err != nil {
return err
} else if ok {
if err := runCommand(out, podmanPath, "machine", "rm", "--force", machine.Name); err != nil {
return err
}
fmt.Fprintf(out, "%s %s\n", successStyle.Sprint("Deleted:"), commandStyle.Sprint(machine.Name))
}
}
return nil
}
func listPodmanMachines(podmanPath string) ([]podmanMachineEntry, error) {
cmd := exec.Command(podmanPath, "machine", "ls", "--format", "json")
output, err := cmd.Output()
if err != nil {
return nil, err
}
return parsePodmanMachineList(output)
}
func parsePodmanMachineList(output []byte) ([]podmanMachineEntry, error) {
if strings.TrimSpace(string(output)) == "" {
return nil, nil
}
var machines []podmanMachineEntry
if err := json.Unmarshal(output, &machines); err != nil {
return nil, err
}
return machines, nil
}
func validateDevProfileRootForDeletion(root string) error {
cleanRoot, err := filepath.Abs(root)
if err != nil {
return err
}
if cleanRoot == string(filepath.Separator) {
return fmt.Errorf("refusing to delete filesystem root")
}
home, err := os.UserHomeDir()
if err != nil {
return err
}
cleanHome, err := filepath.Abs(home)
if err != nil {
return err
}
if cleanRoot == cleanHome {
return fmt.Errorf("refusing to delete home directory %s", cleanRoot)
}
if !isPathInside(cleanRoot, cleanHome) {
return fmt.Errorf("refusing to delete path outside home directory: %s", cleanRoot)
}
return nil
}
func isPathInside(path string, parent string) bool {
rel, err := filepath.Rel(parent, path)
if err != nil {
return false
}
return rel != "." && rel != "" && !strings.HasPrefix(rel, "..") && !filepath.IsAbs(rel)
}
func findVM(limactlPath string, limaHome string) (*limaListEntry, error) {
cmd := limactlCommand(limactlPath, limaHome, "list", "--format", "json")
output, err := cmd.Output()
if err != nil {
return nil, err
}
entries, err := parseLimaListOutput(output)
if err != nil {
return nil, err
}
for i := range entries {
if entries[i].Name == limaVMName {
return &entries[i], nil
}
}
return nil, nil
}
func parseLimaListOutput(output []byte) ([]limaListEntry, error) {
trimmed := strings.TrimSpace(string(output))
if trimmed == "" {
return nil, nil
}
var entries []limaListEntry
if err := json.Unmarshal([]byte(trimmed), &entries); err == nil {
return entries, nil
}
var single limaListEntry
if err := json.Unmarshal([]byte(trimmed), &single); err == nil {
return []limaListEntry{single}, nil
}
for _, line := range strings.Split(trimmed, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
var entry limaListEntry
if err := json.Unmarshal([]byte(line), &entry); err != nil {
return nil, err
}
entries = append(entries, entry)
}
return entries, nil
}
func runLimactl(out io.Writer, limactlPath string, limaHome string, args ...string) error {
cmd := limactlCommand(limactlPath, limaHome, args...)
cmd.Stdout = out
cmd.Stderr = out
return cmd.Run()
}
func runInVM(out io.Writer, limactlPath string, limaHome string, args ...string) error {
shellArgs := limactlShellArgs(args...)
return runLimactl(out, limactlPath, limaHome, shellArgs...)
}
func limactlShellArgs(args ...string) []string {
return append([]string{"shell", "--workdir", "/", limaVMName, "--"}, args...)
}
func limactlCommand(limactlPath string, limaHome string, args ...string) *exec.Cmd {
cmd := exec.Command(limactlPath, args...)
cmd.Env = append(os.Environ(), "LIMA_HOME="+limaHome)
return cmd
}
func runCommand(out io.Writer, path string, args ...string) error {
cmd := exec.Command(path, args...)
cmd.Stdout = out
cmd.Stderr = out
return cmd.Run()
}

View File

@@ -1,81 +0,0 @@
package cmd
import (
"context"
"fmt"
"os"
"path/filepath"
"browseros-dev/proc"
"github.com/spf13/cobra"
)
var setupIfNeeded bool
const setupModeIfNeeded = true
var setupCmd = &cobra.Command{
Use: "setup",
Short: "Install dev dependencies and generate required code",
Long: "Installs Bun dependencies and generates agent GraphQL code needed by the dev environment.",
RunE: func(cmd *cobra.Command, args []string) error {
root, err := proc.FindMonorepoRoot()
if err != nil {
return err
}
return runDevSetup(cmd.Context(), root, setupIfNeeded)
},
}
type setupPlan struct {
RunInstall bool
RunCodegen bool
}
func init() {
setupCmd.Flags().BoolVar(&setupIfNeeded, "if-needed", false, "Skip generated code refresh when it already exists")
rootCmd.AddCommand(setupCmd)
}
func buildSetupPlan(root string, ifNeeded bool) setupPlan {
return setupPlan{
RunInstall: true,
RunCodegen: !ifNeeded || !generatedGraphQLExists(root),
}
}
func generatedGraphQLExists(root string) bool {
for _, file := range []string{"gql.ts", "graphql.ts", "schema.graphql"} {
info, err := os.Stat(filepath.Join(root, "apps/agent/generated/graphql", file))
if err != nil || info.IsDir() {
return false
}
}
return true
}
// runDevSetup prepares the repo for local development. Dependency install always
// runs because Bun is fast and this keeps watch resilient after branch changes.
func runDevSetup(ctx context.Context, root string, ifNeeded bool) error {
plan := buildSetupPlan(root, ifNeeded)
if plan.RunInstall {
proc.LogMsg(proc.TagSetup, "Installing dependencies...")
if err := proc.RunBlocking(ctx, root, proc.TagSetup, "bun", "install", "--frozen-lockfile"); err != nil {
return fmt.Errorf("installing dependencies: %w", err)
}
}
if plan.RunCodegen {
proc.LogMsg(proc.TagSetup, "Generating agent code...")
if err := proc.RunBlocking(ctx, root, proc.TagSetup, "bun", "run", "codegen:agent"); err != nil {
return fmt.Errorf("generating agent code: %w", err)
}
} else {
proc.LogMsg(proc.TagSetup, "Agent code already generated")
}
proc.LogMsg(proc.TagSetup, "Setup ready")
return nil
}

View File

@@ -1,76 +0,0 @@
package cmd
import (
"os"
"path/filepath"
"testing"
)
func TestBuildSetupPlanAlwaysInstallsDependencies(t *testing.T) {
root := t.TempDir()
plan := buildSetupPlan(root, true)
if !plan.RunInstall {
t.Fatal("expected dependency install to always run")
}
}
func TestBuildSetupPlanIfNeededSkipsExistingGeneratedGraphQL(t *testing.T) {
root := t.TempDir()
writeGeneratedGraphQLSentinels(t, root)
plan := buildSetupPlan(root, true)
if plan.RunCodegen {
t.Fatal("expected --if-needed setup to skip codegen when generated GraphQL exists")
}
}
func TestBuildSetupPlanIfNeededRunsCodegenWhenGeneratedGraphQLEmpty(t *testing.T) {
root := t.TempDir()
generatedDir := filepath.Join(root, "apps/agent/generated/graphql")
if err := os.MkdirAll(generatedDir, 0o755); err != nil {
t.Fatal(err)
}
plan := buildSetupPlan(root, true)
if !plan.RunCodegen {
t.Fatal("expected --if-needed setup to run codegen when generated GraphQL is empty")
}
}
func TestBuildSetupPlanIfNeededRunsCodegenWhenGeneratedGraphQLMissing(t *testing.T) {
root := t.TempDir()
plan := buildSetupPlan(root, true)
if !plan.RunCodegen {
t.Fatal("expected --if-needed setup to run codegen when generated GraphQL is missing")
}
}
func TestBuildSetupPlanExplicitSetupRunsCodegen(t *testing.T) {
root := t.TempDir()
writeGeneratedGraphQLSentinels(t, root)
plan := buildSetupPlan(root, false)
if !plan.RunCodegen {
t.Fatal("expected explicit setup to refresh codegen")
}
}
func writeGeneratedGraphQLSentinels(t *testing.T, root string) {
t.Helper()
generatedDir := filepath.Join(root, "apps/agent/generated/graphql")
if err := os.MkdirAll(generatedDir, 0o755); err != nil {
t.Fatal(err)
}
for _, file := range []string{"gql.ts", "graphql.ts", "schema.graphql"} {
if err := os.WriteFile(filepath.Join(generatedDir, file), []byte("generated"), 0o644); err != nil {
t.Fatal(err)
}
}
}

View File

@@ -1,13 +0,0 @@
package cmd
import "github.com/fatih/color"
var (
headerStyle = color.New(color.Bold, color.FgCyan)
commandStyle = color.New(color.FgHiGreen)
successStyle = color.New(color.FgGreen, color.Bold)
warnStyle = color.New(color.FgYellow, color.Bold)
labelStyle = color.New(color.Bold)
pathStyle = color.New(color.FgCyan)
dimStyle = color.New(color.Faint)
)


@@ -48,26 +48,6 @@ func runWatch(cmd *cobra.Command, args []string) error {
p := defaultPorts
var reservations *proc.PortReservations
userDataDir := "/tmp/browseros-dev"
mode := "watch"
if watchManual {
mode = "manual"
}
var runLock *proc.WatchRunLock
acquireRunLock := func(ports proc.Ports) error {
lock, stopped, err := proc.AcquireWatchRunLock(proc.WatchRunIdentity{
Mode: mode,
Profile: userDataDir,
Ports: ports,
}, 3*time.Second)
if err != nil {
return err
}
runLock = lock
if stopped {
proc.LogMsgf(proc.TagInfo, "Stopped existing dev watch for profile %s", userDataDir)
}
return nil
}
if watchNew {
proc.LogMsg(proc.TagInfo, "Selecting random available ports...")
@@ -82,16 +62,17 @@ func runWatch(cmd *cobra.Command, args []string) error {
}
userDataDir = dir
proc.LogMsgf(proc.TagInfo, "Created fresh profile: %s", userDataDir)
if err := acquireRunLock(p); err != nil {
return err
}
} else {
if err := os.MkdirAll(userDataDir, 0o755); err != nil {
return fmt.Errorf("creating user-data dir: %w", err)
}
if err := acquireRunLock(p); err != nil {
stopped, err := proc.StopExistingWatchProcesses(3 * time.Second)
if err != nil {
return err
}
if stopped > 0 {
proc.LogMsgf(proc.TagInfo, "Stopped %d existing dev watch process group(s)", stopped)
}
proc.LogMsg(proc.TagInfo, "Killing processes on preferred ports...")
if err := proc.KillPortsAndWait(defaultPorts, 3*time.Second); err != nil {
return err
@@ -108,18 +89,13 @@ func runWatch(cmd *cobra.Command, args []string) error {
p.CDP, p.Server, p.Extension)
}
}
defer func() {
if err := runLock.Close(); err != nil {
proc.LogMsgf(proc.TagInfo, "Warning: closing run lock: %v", err)
}
}()
defer reservations.ReleaseAll()
if err := runDevSetup(cmd.Context(), root, setupModeIfNeeded); err != nil {
return err
}
fmt.Println()
mode := "watch"
if watchManual {
mode = "manual"
}
proc.LogMsgf(proc.TagInfo, "Mode: %s", proc.BoldColor.Sprint(mode))
proc.LogMsgf(proc.TagInfo, "Ports: CDP=%d Server=%d Extension=%d", p.CDP, p.Server, p.Extension)
proc.LogMsgf(proc.TagInfo, "Profile: %s", userDataDir)


@@ -14,7 +14,6 @@ type Tag struct {
var (
TagBuild = Tag{"build", color.New(color.FgYellow)}
TagSetup = Tag{"setup", color.New(color.FgHiYellow)}
TagAgent = Tag{"agent", color.New(color.FgMagenta)}
TagServer = Tag{"server", color.New(color.FgCyan)}
TagBrowser = Tag{"browser", color.New(color.FgBlue)}


@@ -1,12 +1,7 @@
package proc
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
@@ -16,134 +11,24 @@ import (
"time"
)
var errWatchRunLocked = errors.New("dev watch run is already locked")
const maxTCPPort = 65535
type WatchRunIdentity struct {
Mode string `json:"mode"`
Profile string `json:"profile"`
Ports Ports `json:"ports"`
}
type WatchRunState struct {
PID int `json:"pid"`
PGID int `json:"pgid"`
StartedAt time.Time `json:"started_at"`
Identity WatchRunIdentity `json:"identity"`
}
type WatchRunLock struct {
file *os.File
statePath string
}
type watchRunPathsResult struct {
Lock string
State string
}
// AcquireWatchRunLock claims ownership of the current dev watch identity.
// If the same run identity is already active, it terminates the recorded
// process group from the state file and waits for the OS lock to be released.
func AcquireWatchRunLock(identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, bool, error) {
baseDir, err := DefaultWatchRunBaseDir()
// StopExistingWatchProcesses terminates older default-profile watch supervisors.
// Port cleanup cannot see a previous watch process while it is still waiting
// for CDP, but that process will wake up later and race the new supervisor.
func StopExistingWatchProcesses(timeout time.Duration) (int, error) {
currentPGID, err := syscall.Getpgid(0)
if err != nil {
return nil, false, err
}
return AcquireWatchRunLockInDir(baseDir, identity, timeout)
}
// AcquireWatchRunLockInDir is AcquireWatchRunLock with an explicit base
// directory so tests can exercise flock behavior without touching user state.
func AcquireWatchRunLockInDir(baseDir string, identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, bool, error) {
identity = normalizeWatchRunIdentity(identity)
if err := validateWatchRunIdentity(identity); err != nil {
return nil, false, err
}
if baseDir == "" {
return nil, false, fmt.Errorf("watch run base dir is empty")
return 0, fmt.Errorf("reading current process group: %w", err)
}
paths := watchRunPaths(baseDir, identity)
lock, err := tryAcquireWatchRunLock(paths.Lock, paths.State)
if err == nil {
if err := lock.writeState(identity); err != nil {
lock.Close()
return nil, false, err
}
return lock, false, nil
}
if !errors.Is(err, errWatchRunLocked) {
return nil, false, err
}
state, err := readWatchRunStateWithRetry(paths.State, 250*time.Millisecond)
if err != nil {
return nil, false, fmt.Errorf("dev watch lock is held but state is unreadable at %s: %w", paths.State, err)
}
if state.Identity != identity {
return nil, false, fmt.Errorf("dev watch lock state identity mismatch at %s", paths.State)
}
if state.PGID <= 0 {
return nil, false, fmt.Errorf("dev watch lock state is missing a process group at %s", paths.State)
}
if err := signalProcessGroup(state.PGID, syscall.SIGTERM); err != nil {
return nil, false, err
}
lock, err = waitForWatchRunLock(paths, identity, timeout)
if err == nil {
return lock, true, nil
}
if !errors.Is(err, errWatchRunLocked) {
return nil, false, err
}
if err := signalProcessGroup(state.PGID, syscall.SIGKILL); err != nil {
return nil, false, err
}
lock, err = waitForWatchRunLock(paths, identity, time.Second)
if err != nil {
if errors.Is(err, errWatchRunLocked) {
return nil, false, fmt.Errorf("previous dev watch process group %d did not exit after SIGKILL; inspect %s before retrying", state.PGID, paths.Lock)
}
return nil, false, err
}
return lock, true, nil
}
// DefaultWatchRunBaseDir returns the shared location for dev watch lock files.
// Individual runs are separated by a hash of profile, ports, and mode.
func DefaultWatchRunBaseDir() (string, error) {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
return filepath.Join(home, ".browseros-dev", "runs"), nil
}
// StopAllWatchProcesses terminates every recorded dev watch run.
func StopAllWatchProcesses(timeout time.Duration) (int, error) {
baseDir, err := DefaultWatchRunBaseDir()
groups, err := currentWatchProcessGroups(currentPGID)
if err != nil {
return 0, err
}
return StopAllWatchProcessesInDir(baseDir, timeout)
}
// StopAllWatchProcessesInDir is StopAllWatchProcesses with an explicit state directory for tests.
func StopAllWatchProcessesInDir(baseDir string, timeout time.Duration) (int, error) {
pgids, err := liveWatchRunPGIDs(baseDir)
if err != nil {
return 0, err
}
if len(pgids) == 0 {
if len(groups) == 0 {
return 0, nil
}
for _, pgid := range pgids {
for _, pgid := range groups {
if err := signalProcessGroup(pgid, syscall.SIGTERM); err != nil {
return 0, err
}
@@ -151,9 +36,12 @@ func StopAllWatchProcessesInDir(baseDir string, timeout time.Duration) (int, err
deadline := time.Now().Add(timeout)
for {
remaining := livePGIDs(pgids)
remaining, err := currentWatchProcessGroups(currentPGID)
if err != nil {
return 0, err
}
if len(remaining) == 0 {
return len(pgids), nil
return len(groups), nil
}
if time.Now().After(deadline) {
for _, pgid := range remaining {
@@ -161,290 +49,68 @@ func StopAllWatchProcessesInDir(baseDir string, timeout time.Duration) (int, err
return 0, err
}
}
return len(pgids), nil
return len(groups), nil
}
time.Sleep(100 * time.Millisecond)
}
}
// KillBrowserProcessesForDevProfiles kills BrowserOS instances using temporary dev/test profiles.
func KillBrowserProcessesForDevProfiles(timeout time.Duration) (int, error) {
pids, err := currentBrowserProfilePIDs()
if err != nil {
return 0, err
}
if len(pids) == 0 {
return 0, nil
}
for _, pid := range pids {
if err := signalProcess(pid, syscall.SIGTERM); err != nil {
return 0, err
}
}
deadline := time.Now().Add(timeout)
for {
remaining, err := currentBrowserProfilePIDs()
if err != nil {
return 0, err
}
if len(remaining) == 0 {
return len(pids), nil
}
if time.Now().After(deadline) {
for _, pid := range remaining {
if err := signalProcess(pid, syscall.SIGKILL); err != nil {
return 0, err
}
}
return len(pids), nil
}
time.Sleep(100 * time.Millisecond)
}
}
func (l *WatchRunLock) Close() error {
if l == nil || l.file == nil {
return nil
}
// Keep the lock file path stable. Unlinking it during handoff can let
// another opener lock a different inode while an owner still holds this one.
removeErr := os.Remove(l.statePath)
unlockErr := syscall.Flock(int(l.file.Fd()), syscall.LOCK_UN)
closeErr := l.file.Close()
l.file = nil
if removeErr != nil && !os.IsNotExist(removeErr) {
return removeErr
}
if unlockErr != nil {
return unlockErr
}
return closeErr
}
// ReadWatchRunState reads the metadata used to terminate a previous owner.
// The state file is not the lock; it is only trusted after flock says a run is active.
func ReadWatchRunState(path string) (WatchRunState, error) {
data, err := os.ReadFile(path)
if err != nil {
return WatchRunState{}, err
}
var state WatchRunState
if err := json.Unmarshal(data, &state); err != nil {
return WatchRunState{}, fmt.Errorf("parse watch run state: %w", err)
}
return state, nil
}
func readWatchRunStateWithRetry(path string, timeout time.Duration) (WatchRunState, error) {
deadline := time.Now().Add(timeout)
var lastErr error
for {
state, err := ReadWatchRunState(path)
if err == nil {
return state, nil
}
lastErr = err
if time.Now().After(deadline) {
return WatchRunState{}, lastErr
}
time.Sleep(50 * time.Millisecond)
}
}
func liveWatchRunPGIDs(baseDir string) ([]int, error) {
statePaths, err := filepath.Glob(filepath.Join(baseDir, "watch-*.json"))
if err != nil {
return nil, err
}
seen := map[int]struct{}{}
for _, statePath := range statePaths {
state, err := ReadWatchRunState(statePath)
if err != nil || state.PGID <= 0 || !processGroupLive(state.PGID) {
continue
}
seen[state.PGID] = struct{}{}
}
pgids := make([]int, 0, len(seen))
for pgid := range seen {
pgids = append(pgids, pgid)
}
sort.Ints(pgids)
return pgids, nil
}
func livePGIDs(pgids []int) []int {
remaining := make([]int, 0, len(pgids))
for _, pgid := range pgids {
if processGroupLive(pgid) {
remaining = append(remaining, pgid)
}
}
return remaining
}
func processGroupLive(pgid int) bool {
if pgid <= 0 {
return false
}
err := syscall.Kill(-pgid, 0)
return err == nil || err == syscall.EPERM
}
func currentBrowserProfilePIDs() ([]int, error) {
func currentWatchProcessGroups(currentPGID int) ([]int, error) {
output, err := exec.Command("ps", "-axo", "pid=,pgid=,command=").Output()
if err != nil {
return nil, fmt.Errorf("listing processes: %w", err)
}
return browserProfilePIDsFromPS(string(output)), nil
return watchProcessGroupsFromPS(string(output), currentPGID), nil
}
func browserProfilePIDsFromPS(output string) []int {
var pids []int
func watchProcessGroupsFromPS(output string, currentPGID int) []int {
seen := map[int]struct{}{}
for _, line := range strings.Split(output, "\n") {
fields := strings.Fields(line)
if len(fields) < 3 {
continue
}
pid, err := strconv.Atoi(fields[0])
if err != nil {
pgid, err := strconv.Atoi(fields[1])
if err != nil || pgid == currentPGID {
continue
}
command := strings.Join(fields[2:], " ")
if isDevBrowserProcess(command) {
pids = append(pids, pid)
if isDefaultWatchCommand(fields[2:]) {
seen[pgid] = struct{}{}
}
}
sort.Ints(pids)
return pids
groups := make([]int, 0, len(seen))
for pgid := range seen {
groups = append(groups, pgid)
}
sort.Ints(groups)
return groups
}
func isDevBrowserProcess(command string) bool {
if !strings.Contains(command, "BrowserOS.app/Contents/MacOS/BrowserOS") {
func isDefaultWatchCommand(commandFields []string) bool {
if len(commandFields) < 2 {
return false
}
return strings.Contains(command, "--user-data-dir=/tmp/browseros-dev") ||
strings.Contains(command, "browseros-dev-") ||
strings.Contains(command, "browseros-test-")
}
func watchRunPaths(baseDir string, identity WatchRunIdentity) watchRunPathsResult {
identity = normalizeWatchRunIdentity(identity)
sum := sha256.Sum256([]byte(fmt.Sprintf("%s\x00%s\x00%d\x00%d\x00%d",
identity.Mode,
identity.Profile,
identity.Ports.CDP,
identity.Ports.Server,
identity.Ports.Extension,
)))
key := hex.EncodeToString(sum[:])
return watchRunPathsResult{
Lock: filepath.Join(baseDir, "watch-"+key+".lock"),
State: filepath.Join(baseDir, "watch-"+key+".json"),
if filepath.Base(commandFields[0]) != "browseros-dev" {
return false
}
}
func normalizeWatchRunIdentity(identity WatchRunIdentity) WatchRunIdentity {
identity.Profile = filepath.Clean(identity.Profile)
return identity
}
func tryAcquireWatchRunLock(lockPath string, statePath string) (*WatchRunLock, error) {
if err := os.MkdirAll(filepath.Dir(lockPath), 0o755); err != nil {
return nil, err
if commandFields[1] != "watch" {
return false
}
file, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o644)
if err != nil {
return nil, err
}
if err := syscall.Flock(int(file.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
file.Close()
if errors.Is(err, syscall.EWOULDBLOCK) || errors.Is(err, syscall.EAGAIN) {
return nil, errWatchRunLocked
for _, field := range commandFields[2:] {
if field == "--new" {
return false
}
return nil, err
}
return &WatchRunLock{file: file, statePath: statePath}, nil
}
func (l *WatchRunLock) writeState(identity WatchRunIdentity) error {
pgid, err := syscall.Getpgid(0)
if err != nil {
return fmt.Errorf("reading current process group: %w", err)
}
state := WatchRunState{
PID: os.Getpid(),
PGID: pgid,
StartedAt: time.Now(),
Identity: identity,
}
data, err := json.MarshalIndent(state, "", " ")
if err != nil {
return err
}
data = append(data, '\n')
tmp := l.statePath + ".tmp"
if err := os.WriteFile(tmp, data, 0o644); err != nil {
return err
}
return os.Rename(tmp, l.statePath)
}
func waitForWatchRunLock(paths watchRunPathsResult, identity WatchRunIdentity, timeout time.Duration) (*WatchRunLock, error) {
deadline := time.Now().Add(timeout)
for {
lock, err := tryAcquireWatchRunLock(paths.Lock, paths.State)
if err == nil {
if err := lock.writeState(identity); err != nil {
lock.Close()
return nil, err
}
return lock, nil
}
if !errors.Is(err, errWatchRunLocked) {
return nil, err
}
if time.Now().After(deadline) {
return nil, errWatchRunLocked
}
time.Sleep(100 * time.Millisecond)
}
}
func validateWatchRunIdentity(identity WatchRunIdentity) error {
if identity.Mode == "" {
return fmt.Errorf("watch run mode is empty")
}
if identity.Profile == "" {
return fmt.Errorf("watch run profile is empty")
}
if !isValidTCPPort(identity.Ports.CDP) || !isValidTCPPort(identity.Ports.Server) || !isValidTCPPort(identity.Ports.Extension) {
return fmt.Errorf("watch run ports are invalid: %+v", identity.Ports)
}
return nil
}
func isValidTCPPort(port int) bool {
return port > 0 && port <= maxTCPPort
return true
}
func signalProcessGroup(pgid int, signal syscall.Signal) error {
if pgid <= 0 {
return fmt.Errorf("invalid process group %d", pgid)
return nil
}
if err := syscall.Kill(-pgid, signal); err != nil && err != syscall.ESRCH {
return fmt.Errorf("signaling process group %d: %w", pgid, err)
}
return nil
}
func signalProcess(pid int, signal syscall.Signal) error {
if pid <= 0 {
return fmt.Errorf("invalid process %d", pid)
}
if err := syscall.Kill(pid, signal); err != nil && err != syscall.ESRCH {
return fmt.Errorf("signaling process %d: %w", pid, err)
}
return nil
}


@@ -1,204 +1,32 @@
package proc
import (
"encoding/json"
"os"
"os/exec"
"path/filepath"
"syscall"
"testing"
"time"
)
import "testing"
const watchLockHelperEnv = "BROWSEROS_DEV_WATCH_LOCK_HELPER"
func TestMain(m *testing.M) {
if os.Getenv(watchLockHelperEnv) == "1" {
runWatchLockHelper()
return
}
os.Exit(m.Run())
}
func TestWatchRunPathsStableAndDistinct(t *testing.T) {
baseDir := t.TempDir()
identity := WatchRunIdentity{
Mode: "watch",
Profile: "/tmp/browseros-dev",
Ports: Ports{CDP: 9005, Server: 9105, Extension: 9305},
}
first := watchRunPaths(baseDir, identity)
second := watchRunPaths(baseDir, identity)
if first != second {
t.Fatalf("expected stable paths, got %#v and %#v", first, second)
}
withDifferentPort := identity
withDifferentPort.Ports.Server = 9106
third := watchRunPaths(baseDir, withDifferentPort)
if third.Lock == first.Lock || third.State == first.State {
t.Fatalf("expected distinct paths for different ports, got %#v and %#v", first, third)
}
}
func TestBrowserProfilePIDsFromPSSelectsOnlyDevAndTestProfiles(t *testing.T) {
func TestWatchProcessGroupsFromPSSelectsOtherWatchGroups(t *testing.T) {
output := `
111 111 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/tmp/browseros-dev
222 222 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/tmp/browseros-dev-abcd
333 333 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/var/folders/x/browseros-test-abcd
444 444 /Applications/BrowserOS.app/Contents/MacOS/BrowserOS --user-data-dir=/Users/me/Library/Application Support/BrowserOS
555 555 rg browseros-test-
111 111 /tmp/one/browseros-dev watch
222 222 /tmp/two/browseros-dev watch --new
333 333 /tmp/one/browseros-dev cleanup
444 444 rg browseros-dev watch
555 555 bun run dev:watch
`
pids := browserProfilePIDsFromPS(output)
groups := watchProcessGroupsFromPS(output, 999)
if len(pids) != 3 || pids[0] != 111 || pids[1] != 222 || pids[2] != 333 {
t.Fatalf("expected dev/test browser pids, got %#v", pids)
if len(groups) != 1 || groups[0] != 111 {
t.Fatalf("expected only pgid 111, got %#v", groups)
}
}
func TestAcquireWatchRunLockWritesStateAndReleases(t *testing.T) {
baseDir := t.TempDir()
identity := WatchRunIdentity{
Mode: "watch",
Profile: "/tmp/browseros-dev",
Ports: Ports{CDP: 9005, Server: 9105, Extension: 9305},
}
func TestWatchProcessGroupsFromPSDedupesProcessGroups(t *testing.T) {
output := `
111 111 /tmp/one/browseros-dev watch
112 111 /tmp/one/browseros-dev watch
`
lock, stopped, err := AcquireWatchRunLockInDir(baseDir, identity, time.Second)
if err != nil {
t.Fatalf("AcquireWatchRunLockInDir returned error: %v", err)
}
if stopped {
t.Fatal("expected first acquisition not to stop another run")
}
groups := watchProcessGroupsFromPS(output, 999)
paths := watchRunPaths(baseDir, identity)
state, err := ReadWatchRunState(paths.State)
if err != nil {
t.Fatalf("ReadWatchRunState returned error: %v", err)
}
if state.PID != os.Getpid() {
t.Fatalf("expected state PID %d, got %d", os.Getpid(), state.PID)
}
if state.PGID <= 0 {
t.Fatalf("expected positive PGID, got %d", state.PGID)
}
if state.Identity != identity {
t.Fatalf("expected identity %#v, got %#v", identity, state.Identity)
}
if err := lock.Close(); err != nil {
t.Fatalf("closing lock: %v", err)
}
if _, err := os.Stat(paths.State); !os.IsNotExist(err) {
t.Fatalf("expected state file to be removed on close, got %v", err)
}
if _, err := os.Stat(paths.Lock); err != nil {
t.Fatalf("expected lock file path to remain reusable, got %v", err)
}
lock, stopped, err = AcquireWatchRunLockInDir(baseDir, identity, time.Second)
if err != nil {
t.Fatalf("reacquiring lock returned error: %v", err)
}
if stopped {
t.Fatal("expected reacquisition after close not to stop another run")
}
if err := lock.Close(); err != nil {
t.Fatalf("closing reacquired lock: %v", err)
}
}
func TestAcquireWatchRunLockRejectsInvalidPorts(t *testing.T) {
identity := WatchRunIdentity{
Mode: "watch",
Profile: "/tmp/browseros-dev",
Ports: Ports{CDP: 9005, Server: 65536, Extension: 9305},
}
if _, _, err := AcquireWatchRunLockInDir(t.TempDir(), identity, time.Second); err == nil {
t.Fatal("expected invalid port error")
}
}
func TestAcquireWatchRunLockStopsExistingOwnerByStatePGID(t *testing.T) {
baseDir := t.TempDir()
readyPath := filepath.Join(baseDir, "ready")
identity := WatchRunIdentity{
Mode: "watch",
Profile: "/tmp/browseros-dev",
Ports: Ports{CDP: 9005, Server: 9105, Extension: 9305},
}
identityJSON, err := json.Marshal(identity)
if err != nil {
t.Fatal(err)
}
cmd := exec.Command(os.Args[0], "-test.run=TestMain")
cmd.Env = append(os.Environ(),
watchLockHelperEnv+"=1",
"BROWSEROS_DEV_WATCH_LOCK_BASE="+baseDir,
"BROWSEROS_DEV_WATCH_LOCK_READY="+readyPath,
"BROWSEROS_DEV_WATCH_LOCK_IDENTITY="+string(identityJSON),
)
cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
if err := cmd.Start(); err != nil {
t.Fatalf("starting helper: %v", err)
}
defer cmd.Process.Kill()
waitForFile(t, readyPath, 3*time.Second)
lock, stopped, err := AcquireWatchRunLockInDir(baseDir, identity, 3*time.Second)
if err != nil {
t.Fatalf("AcquireWatchRunLockInDir returned error: %v", err)
}
defer lock.Close()
if !stopped {
t.Fatal("expected takeover to stop existing owner")
}
done := make(chan error, 1)
go func() {
done <- cmd.Wait()
}()
select {
case <-done:
case <-time.After(3 * time.Second):
t.Fatal("expected helper process to exit after takeover")
}
}
func runWatchLockHelper() {
baseDir := os.Getenv("BROWSEROS_DEV_WATCH_LOCK_BASE")
readyPath := os.Getenv("BROWSEROS_DEV_WATCH_LOCK_READY")
var identity WatchRunIdentity
if err := json.Unmarshal([]byte(os.Getenv("BROWSEROS_DEV_WATCH_LOCK_IDENTITY")), &identity); err != nil {
os.Exit(2)
}
lock, _, err := AcquireWatchRunLockInDir(baseDir, identity, time.Second)
if err != nil {
os.Exit(3)
}
defer lock.Close()
if err := os.WriteFile(readyPath, []byte("ready\n"), 0o644); err != nil {
os.Exit(4)
}
time.Sleep(30 * time.Second)
}
func waitForFile(t *testing.T, path string, timeout time.Duration) {
t.Helper()
deadline := time.Now().Add(timeout)
for {
if _, err := os.Stat(path); err == nil {
return
}
if time.Now().After(deadline) {
t.Fatalf("timed out waiting for %s", path)
}
time.Sleep(50 * time.Millisecond)
if len(groups) != 1 || groups[0] != 111 {
t.Fatalf("expected one pgid 111, got %#v", groups)
}
}


@@ -2,4 +2,14 @@
set -euo pipefail
DIR="$(cd "$(dirname "$0")" && pwd)"
exec "$DIR/run.sh" setup "$@"
ROOT="$(cd "$DIR/../.." && pwd)"
cd "$ROOT"
echo "[setup] Installing dependencies..."
bun install --frozen-lockfile
echo "[setup] Generating agent code..."
bun run codegen:agent
echo "[setup] Ready"