Compare commits


14 Commits

Author SHA1 Message Date
Nikhil Sonti
d63fa91a3b chore: pin release.linux.yaml to arm64-only for sysroot bootstrap test
x64 already builds cleanly — the failing leg is arm64 cross-compile from
an x64 host. Pin the config to arm64 to exercise the new
install-sysroot.py path in configure without burning time on x64.
Flip back to [x64, arm64] once arm64 is green.
2026-04-06 18:14:19 -07:00
Nikhil Sonti
c0889577ba fix: install linux sysroot in configure, not via gclient hook
`gn gen` was failing on the arm64 leg with `Missing sysroot
(//build/linux/debian_bullseye_arm64-sysroot)`. The previous design
relied on `git_setup` writing `target_cpus` to `.gclient` so that
`gclient sync`'s DEPS hook would download the cross-arch sysroot. That
chain breaks for any chromium_src that was synced before cross-arch
support landed (the hook is gated on .gclient state at sync time) and
for partial pipeline runs that skip git_setup entirely. Nothing in
configure declared or verified its sysroot precondition.

Make configure self-healing: on Linux, invoke
`build/linux/sysroot_scripts/install-sysroot.py --arch=<target>`
directly before `gn gen`. install-sysroot.py is idempotent (stamp file
+ SHA check), fast when already installed, and decoupled from .gclient
— it's exactly what the failing assertion's error message recommends.
The script accepts our arch names directly: `x64` translates to `amd64`
internally via ARCH_TRANSLATIONS, and `arm64` is a valid pass-through.

Also temporarily pin release.linux.yaml to x64 only while we validate
the sysroot bootstrap end-to-end. Flip back to `[x64, arm64]` once
arm64 is green.
2026-04-06 18:14:19 -07:00
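The self-healing bootstrap this commit describes can be sketched as follows. This is a hypothetical simplification of the configure step, not the actual implementation; the helper names are invented, and only install-sysroot.py's `--arch` interface and its stamp-file idempotence are taken from the commit message:

```python
import subprocess
from pathlib import Path


def sysroot_command(chromium_src: Path, target_arch: str) -> list[str]:
    # install-sysroot.py accepts our arch names directly: x64 translates
    # to amd64 internally via ARCH_TRANSLATIONS; arm64 passes through.
    script = chromium_src / "build/linux/sysroot_scripts/install-sysroot.py"
    return ["python3", str(script), f"--arch={target_arch}"]


def ensure_linux_sysroot(chromium_src: Path, target_arch: str) -> None:
    # Idempotent: the script keeps a stamp file and SHA-checks the tarball,
    # so running this before every `gn gen` is cheap when the sysroot is
    # already installed.
    subprocess.run(
        sysroot_command(chromium_src, target_arch),
        check=True,
        cwd=chromium_src,
    )
```

Calling `ensure_linux_sysroot(src, ctx.architecture)` immediately before `gn gen` makes the sysroot a declared precondition of configure rather than a side effect of `.gclient` state at sync time.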
Nikhil
8de2bf984f feat: build linux x64 + arm64 in a single invocation (#652)
`release.linux.yaml` now declares `architecture: [x64, arm64]` and the
runner loops the entire pipeline once per architecture. depot_tools
fetches both Linux sysroots automatically — `git_setup` idempotently
ensures `target_cpus = ['x64', 'arm64']` is in `.gclient` before
`gclient sync`, so cross-compiling arm64 from an x64 host just works.

The resolver returns `List[Context]` (single-element for the common
single-arch case), and `build/cli/build.py` loops `execute_pipeline` over
the per-arch contexts. Modules stay 100% arch-agnostic — no new
orchestration module, no new YAML schema beyond the list form.

Also fix a cross-compile bug in `build/modules/package/linux.py`: the
appimagetool binary must match the BUILD machine's arch (it executes
locally), not the target arch. Split into a host-keyed
`LINUX_HOST_APPIMAGETOOL` lookup vs the existing target-keyed
`LINUX_ARCHITECTURE_CONFIG`. Target arch is still passed to appimagetool
via the `ARCH` env var.

- build/common/resolver.py: scalar OR list `architecture` -> List[Context]
- build/cli/build.py: loop pipeline per arch, log multi-arch headers
- build/config/release.linux.yaml: `architecture: [x64, arm64]`
- build/modules/setup/git.py: idempotent `target_cpus` edit on Linux
- build/modules/package/linux.py: host vs target appimagetool split
- build/modules/package/linux_test.py: cover the host/target split
2026-04-06 13:08:06 -07:00
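The scalar-or-list resolution this commit describes can be sketched as below. This is a reduced stand-in for the real resolver in `build/common/resolver.py`: `Context` is trimmed to two fields and the precedence (CLI override > YAML > platform default) is the only behavior modeled:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Context:
    architecture: str
    build_type: str


def resolve_architectures(
    yaml_arch: Any, cli_arch: Optional[str], default: str
) -> list[str]:
    # CLI override wins and is always a scalar; YAML may be a string OR a
    # list like [x64, arm64]; otherwise fall back to the platform default.
    if cli_arch:
        return [cli_arch]
    if yaml_arch is not None:
        return yaml_arch if isinstance(yaml_arch, list) else [yaml_arch]
    return [default]


def resolve_contexts(
    yaml_arch: Any, cli_arch: Optional[str], build_type: str
) -> list[Context]:
    # One Context per architecture; the runner loops the pipeline over
    # these, so modules stay arch-agnostic.
    return [
        Context(architecture=arch, build_type=build_type)
        for arch in resolve_architectures(yaml_arch, cli_arch, "x64")
    ]
```

The single-arch case degenerates to a one-element list, which is why no new YAML schema or orchestration module is needed.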
Nikhil
1b8720740c feat: add linux arm64 release support (#651)
* feat: support linux arm64 release artifacts

* fix: address PR review comments for 0406-linux_arm64_support
2026-04-06 10:20:38 -07:00
Nikhil
91be726381 refactor: remove --compile-only flag, consolidate into --ci (#646)
The --compile-only and --ci flags served overlapping purposes for CI
builds. Remove --compile-only entirely since --ci already handles the
CI use case (skip R2, skip prod env validation, local zip packaging)
and --no-upload covers the upload-skipping use case for full builds.
2026-04-03 14:58:52 -07:00
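The resulting flag semantics can be sketched as below. The actual implementation is TypeScript in `scripts/build/cli.ts`; this Python rendering only mirrors the precedence the commit describes (`--ci` forces no upload, `--upload`/`--no-upload` govern full builds, upload defaults on):

```python
from typing import Optional


def resolve_upload(ci: bool, upload_flag: Optional[bool]) -> bool:
    # --ci implies local-only artifacts; combining it with --upload is
    # rejected rather than silently ignored.
    if ci and upload_flag:
        raise ValueError("--ci cannot be combined with --upload")
    if ci:
        return False
    # Full builds upload by default; --no-upload opts out.
    return upload_flag if upload_flag is not None else True
```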
Nikhil
ff5386a24a fix: agent storage issue on update (#643)
* fix: agent storage erase issue fix

* fix: remove the guard against remote
2026-04-03 14:50:14 -07:00
Nikhil
a5f3c4da65 fix: skip windows exe patching in ci mode to avoid wine dependency (#645)
The server release CI workflow fails on ubuntu-latest because
patch-windows-exe.ts requires Wine to run rcedit. Thread the existing
--ci flag through compileServerBinaries so Windows PE metadata patching
is skipped in CI mode with a warning log.
2026-04-03 14:46:33 -07:00
Nikhil
e5a852dd3d chore: update server version (#644) 2026-04-03 14:29:07 -07:00
Felarof
aee30ce8e1 Update README.md (#638) 2026-04-02 13:00:11 -07:00
Nikhil
0833c8d42d fix: windows app-data location fix (#637) 2026-04-02 08:53:04 -07:00
Nikhil
036c7f280b fix: tab-grouping cdp crash (#635)
* fix: tab group crash + history fix

* fix: tab group crash + history fix
2026-04-01 15:06:41 -07:00
Nikhil
000429277d fix: isolate server release packaging to ci mode (#629)
* fix: relax compile-only release env requirements

* refactor: add ci mode for server release builds
2026-03-31 20:57:44 -07:00
Nikhil
f8535fd96d fix: exclude eval framework from language stats via gitattributes (#630) 2026-03-31 20:44:06 -07:00
Nikhil
f0cbf77924 feat: add server release workflow (#627)
* feat: add server release workflow

* fix: address PR review comments for 0331-add_server_release_workflow

* refactor: rework 0331-add_server_release_workflow based on feedback

* refactor: rework 0331-add_server_release_workflow based on feedback
2026-03-31 17:37:06 -07:00
33 changed files with 1094 additions and 154 deletions

2
.gitattributes vendored
View File

@@ -9,4 +9,6 @@ packages/browseros/chromium_patches/**/*.py linguist-generated
scripts/*.py linguist-generated
# Mark build directories as generated
build/* linguist-generated
# Mark eval/test framework as vendored so it's excluded from language stats
packages/browseros-agent/apps/eval/** linguist-vendored
docs/videos/** filter=lfs diff=lfs merge=lfs -text

147
.github/workflows/release-server.yml vendored Normal file
View File

@@ -0,0 +1,147 @@
name: Release BrowserOS Server
on:
workflow_dispatch:
inputs:
version:
description: "Release version (e.g. 0.0.80)"
required: true
type: string
concurrency:
group: release-server
cancel-in-progress: false
jobs:
release:
if: github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
environment: release-core
permissions:
contents: write
defaults:
run:
working-directory: packages/browseros-agent
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- uses: oven-sh/setup-bun@v2
with:
bun-version: "1.3.6"
- name: Install dependencies
run: bun ci
- name: Prepare production env file
run: cp apps/server/.env.production.example apps/server/.env.production
- name: Validate version
id: version
env:
REQUESTED_VERSION: ${{ inputs.version }}
run: |
PACKAGE_VERSION=$(node -p "require('./apps/server/package.json').version")
echo "package_version=$PACKAGE_VERSION" >> "$GITHUB_OUTPUT"
echo "release_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
if [ "$PACKAGE_VERSION" != "$REQUESTED_VERSION" ]; then
echo "Requested version $REQUESTED_VERSION does not match apps/server/package.json ($PACKAGE_VERSION)"
exit 1
fi
- name: Build release artifacts
run: bun run build:server:ci
- name: Verify release artifacts
run: |
mapfile -t ZIP_FILES < <(find dist/prod/server -maxdepth 1 -type f -name 'browseros-server-resources-*.zip' | sort)
if [ "${#ZIP_FILES[@]}" -eq 0 ]; then
echo "No server release zip files were produced"
exit 1
fi
printf 'Found release artifacts:\n%s\n' "${ZIP_FILES[@]}"
- name: Generate release notes
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PACKAGE_VERSION: ${{ steps.version.outputs.package_version }}
run: |
SERVER_APP_PATH="packages/browseros-agent/apps/server"
SERVER_BUILD_DIR="packages/browseros-agent/scripts/build/server"
SERVER_BUILD_ENTRY="packages/browseros-agent/scripts/build/server.ts"
SERVER_RESOURCE_MANIFEST="packages/browseros-agent/scripts/build/config/server-prod-resources.json"
SERVER_WORKSPACE_PKG="packages/browseros-agent/package.json"
CURRENT_TAG="browseros-server-v$PACKAGE_VERSION"
PREV_TAG=$(git tag -l "browseros-server-v*" --sort=-v:refname | grep -v "^${CURRENT_TAG}$" | head -n 1)
if [ -z "$PREV_TAG" ]; then
echo "Initial release of browseros-server." > /tmp/release-notes.md
else
COMMITS=$(git log "$PREV_TAG"..HEAD --pretty=format:"%H" -- \
"$SERVER_APP_PATH" \
"$SERVER_BUILD_DIR" \
"$SERVER_BUILD_ENTRY" \
"$SERVER_RESOURCE_MANIFEST" \
"$SERVER_WORKSPACE_PKG")
if [ -z "$COMMITS" ]; then
echo "No notable changes." > /tmp/release-notes.md
else
echo "## What's Changed" > /tmp/release-notes.md
echo "" >> /tmp/release-notes.md
while IFS= read -r SHA; do
SUBJECT=$(git log -1 --pretty=format:"%s" "$SHA")
PR_NUM=$(gh api "/repos/${{ github.repository }}/commits/${SHA}/pulls" --jq '.[0].number // empty' 2>/dev/null)
if [ -n "$PR_NUM" ] && ! echo "$SUBJECT" | grep -qF "(#${PR_NUM})"; then
echo "- ${SUBJECT} (#${PR_NUM})" >> /tmp/release-notes.md
else
echo "- ${SUBJECT}" >> /tmp/release-notes.md
fi
done <<< "$COMMITS"
fi
fi
working-directory: ${{ github.workspace }}
- name: Create GitHub release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PACKAGE_VERSION: ${{ steps.version.outputs.package_version }}
RELEASE_SHA: ${{ steps.version.outputs.release_sha }}
run: |
TAG="browseros-server-v$PACKAGE_VERSION"
TITLE="BrowserOS Server - v$PACKAGE_VERSION"
mapfile -t ZIP_FILES < <(find packages/browseros-agent/dist/prod/server -maxdepth 1 -type f -name 'browseros-server-resources-*.zip' | sort)
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
if git rev-parse "$TAG" >/dev/null 2>&1; then
echo "Tag $TAG already exists, skipping tag creation"
else
git tag -a "$TAG" -m "browseros-server v$PACKAGE_VERSION" "$RELEASE_SHA"
fi
if git ls-remote --tags origin "$TAG" | grep -q "$TAG"; then
echo "Tag $TAG already on remote, skipping push"
else
git push origin "$TAG"
fi
if gh release view "$TAG" >/dev/null 2>&1; then
echo "Release $TAG already exists, updating"
gh release edit "$TAG" --title "$TITLE" --notes-file /tmp/release-notes.md
gh release upload "$TAG" "${ZIP_FILES[@]}" --clobber
else
gh release create "$TAG" \
--title "$TITLE" \
--notes-file /tmp/release-notes.md \
"${ZIP_FILES[@]}"
fi
working-directory: ${{ github.workspace }}

View File

@@ -192,7 +192,7 @@ We'd love your help making BrowserOS better! See our [Contributing Guide](CONTRI
BrowserOS is open source under the [AGPL-3.0 license](LICENSE).
Copyright &copy; 2025 Felafax, Inc.
Copyright &copy; 2026 Felafax, Inc.
## Stargazers

View File

@@ -1,6 +1,6 @@
{
"name": "@browseros/server",
"version": "0.0.80",
"version": "0.0.81",
"description": "BrowserOS server",
"type": "module",
"main": "./src/index.ts",

View File

@@ -8,11 +8,16 @@
import { afterAll, describe, it } from 'bun:test'
import assert from 'node:assert'
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs'
import {
existsSync,
mkdtempSync,
readFileSync,
rmSync,
writeFileSync,
} from 'node:fs'
import { tmpdir } from 'node:os'
import { join, resolve } from 'node:path'
// Derive the build target from the current platform so the test is portable
function getNativeTarget(): { id: string; ext: string } {
const os =
process.platform === 'darwin'
@@ -24,12 +29,30 @@ function getNativeTarget(): { id: string; ext: string } {
return { id: `${os}-${cpu}`, ext: process.platform === 'win32' ? '.exe' : '' }
}
// Stub values so the build config validation passes without real secrets
const BUILD_ENV_STUBS: Record<string, string> = {
const REQUIRED_INLINE_ENV_KEYS = [
'BROWSEROS_CONFIG_URL',
'CODEGEN_SERVICE_URL',
'POSTHOG_API_KEY',
'SENTRY_DSN',
] as const
const R2_ENV_KEYS = [
'R2_ACCOUNT_ID',
'R2_ACCESS_KEY_ID',
'R2_SECRET_ACCESS_KEY',
'R2_BUCKET',
] as const
const PROD_SECRET_KEYS = [...REQUIRED_INLINE_ENV_KEYS, ...R2_ENV_KEYS]
const INLINE_ENV_STUBS: Record<string, string> = {
BROWSEROS_CONFIG_URL: 'https://stub.test/config',
CODEGEN_SERVICE_URL: 'https://stub.test/codegen',
POSTHOG_API_KEY: 'phc_test_stub',
SENTRY_DSN: 'https://stub@sentry.test/0',
}
const R2_ENV_STUBS: Record<string, string> = {
R2_ACCOUNT_ID: 'test',
R2_ACCESS_KEY_ID: 'test',
R2_SECRET_ACCESS_KEY: 'test',
@@ -39,23 +62,58 @@ const BUILD_ENV_STUBS: Record<string, string> = {
describe('server build', () => {
const rootDir = resolve(import.meta.dir, '../../..')
const serverPkgPath = resolve(rootDir, 'apps/server/package.json')
const prodEnvPath = resolve(rootDir, 'apps/server/.env.production')
const prodEnvTemplatePath = resolve(
rootDir,
'apps/server/.env.production.example',
)
const originalProdEnv = existsSync(prodEnvPath)
? readFileSync(prodEnvPath, 'utf-8')
: null
const prodEnvTemplate = readFileSync(prodEnvTemplatePath, 'utf-8')
const buildScript = resolve(rootDir, 'scripts/build/server.ts')
const target = getNativeTarget()
const binaryPath = resolve(
rootDir,
`dist/prod/server/.tmp/binaries/browseros-server-${target.id}${target.ext}`,
)
// Empty manifest so the build skips R2 resource downloads
const zipPath = resolve(
rootDir,
`dist/prod/server/browseros-server-resources-${target.id}.zip`,
)
const tempDir = mkdtempSync(join(tmpdir(), 'browseros-build-test-'))
const emptyManifestPath = join(tempDir, 'empty-manifest.json')
writeFileSync(emptyManifestPath, JSON.stringify({ resources: [] }))
function buildEnv(
extraEnv: Record<string, string>,
omitKeys: readonly string[] = [],
): NodeJS.ProcessEnv {
const env: NodeJS.ProcessEnv = {
...process.env,
...extraEnv,
}
for (const key of omitKeys) {
delete env[key]
}
return env
}
function resetProdEnvToTemplate(): void {
writeFileSync(prodEnvPath, prodEnvTemplate)
}
afterAll(() => {
rmSync(tempDir, { recursive: true, force: true })
if (originalProdEnv === null) {
rmSync(prodEnvPath, { force: true })
return
}
writeFileSync(prodEnvPath, originalProdEnv)
})
it('compiles and --version outputs correct version', async () => {
resetProdEnvToTemplate()
const pkg = await Bun.file(serverPkgPath).json()
const expectedVersion: string = pkg.version
@@ -71,7 +129,7 @@ describe('server build', () => {
cwd: rootDir,
stdout: 'pipe',
stderr: 'pipe',
env: { ...process.env, ...BUILD_ENV_STUBS },
env: buildEnv({ ...INLINE_ENV_STUBS, ...R2_ENV_STUBS }),
},
)
const buildExit = await build.exited
@@ -97,4 +155,26 @@ describe('server build', () => {
)
assert.strictEqual(versionOutput.trim(), expectedVersion)
}, 300_000)
it('archives CI builds without R2 config or production env secrets', async () => {
resetProdEnvToTemplate()
rmSync(zipPath, { force: true })
const build = Bun.spawn(
['bun', buildScript, `--target=${target.id}`, '--ci'],
{
cwd: rootDir,
stdout: 'pipe',
stderr: 'pipe',
env: buildEnv({}, PROD_SECRET_KEYS),
},
)
const buildExit = await build.exited
if (buildExit !== 0) {
const stderr = await new Response(build.stderr).text()
assert.fail(`CI build failed (exit ${buildExit}):\n${stderr}`)
}
assert.ok(existsSync(zipPath), `Expected archive at ${zipPath}`)
}, 300_000)
})

View File

@@ -152,7 +152,7 @@
},
"apps/server": {
"name": "@browseros/server",
"version": "0.0.80",
"version": "0.0.81",
"bin": {
"browseros-server": "./src/index.ts",
},

View File

@@ -19,7 +19,7 @@
"start:agent": "bun run --filter @browseros/agent dev",
"build": "bun run build:server && bun run build:agent",
"build:server": "FORCE_COLOR=1 bun scripts/build/server.ts --target=all",
"build:server:ci": "FORCE_COLOR=1 bun scripts/build/server.ts --target=all --compile-only",
"build:server:ci": "FORCE_COLOR=1 bun scripts/build/server.ts --target=all --ci",
"build:server:test": "FORCE_COLOR=1 bun scripts/build/server.ts --target=darwin-arm64 --no-upload",
"upload:cli-installers": "bun scripts/build/cli.ts",
"start:server:test": "bun run build:server:test && set -a && . apps/server/.env.development && set +a && dist/prod/server/.tmp/binaries/browseros-server-darwin-arm64",

View File

@@ -37,29 +37,39 @@ export async function archiveAndUploadArtifacts(
r2: R2Config,
upload: boolean,
): Promise<UploadResult[]> {
const results: UploadResult[] = []
const results = await archiveArtifacts(artifacts)
if (!upload) {
return results
}
for (const artifact of artifacts) {
const zipPath = zipPathForArtifact(artifact)
await zipArtifactRoot(artifact.rootDir, zipPath)
if (!upload) {
results.push({ targetId: artifact.target.id, zipPath })
continue
}
const fileName = basename(zipPath)
const uploadedResults: UploadResult[] = []
for (const result of results) {
const fileName = basename(result.zipPath)
const latestR2Key = joinObjectKey(r2.uploadPrefix, 'latest', fileName)
const versionR2Key = joinObjectKey(r2.uploadPrefix, version, fileName)
await uploadFileToObject(client, r2, latestR2Key, zipPath)
await uploadFileToObject(client, r2, versionR2Key, zipPath)
results.push({
targetId: artifact.target.id,
zipPath,
await uploadFileToObject(client, r2, latestR2Key, result.zipPath)
await uploadFileToObject(client, r2, versionR2Key, result.zipPath)
uploadedResults.push({
targetId: result.targetId,
zipPath: result.zipPath,
latestR2Key,
versionR2Key,
})
}
return uploadedResults
}
export async function archiveArtifacts(
artifacts: StagedArtifact[],
): Promise<UploadResult[]> {
const results: UploadResult[] = []
for (const artifact of artifacts) {
const zipPath = zipPathForArtifact(artifact)
await zipArtifactRoot(artifact.rootDir, zipPath)
results.push({ targetId: artifact.target.id, zipPath })
}
return results
}

View File

@@ -22,23 +22,26 @@ export function parseBuildArgs(argv: string[]): BuildArgs {
.option('--upload', 'Upload artifact zips to R2')
.option('--no-upload', 'Skip zip upload to R2')
.option(
'--compile-only',
'Compile binaries only (skip R2 staging and upload)',
'--ci',
'Build local release zip artifacts for CI without R2 and without requiring production env secrets',
)
program.parse(argv, { from: 'user' })
const options = program.opts<{
target: string
manifest: string
upload: boolean
compileOnly: boolean
ci: boolean
}>()
const compileOnly = options.compileOnly ?? false
const ci = options.ci ?? false
if (ci && options.upload) {
throw new Error('--ci cannot be combined with --upload')
}
return {
targets: resolveTargets(options.target),
manifestPath: options.manifest,
upload: compileOnly ? false : (options.upload ?? true),
compileOnly,
upload: ci ? false : (options.upload ?? true),
ci,
}
}

View File

@@ -1,6 +1,7 @@
import { mkdirSync, rmSync } from 'node:fs'
import { join } from 'node:path'
import { log } from '../log'
import { wasmBinaryPlugin } from '../plugins/wasm-binary'
import { runCommand } from './command'
import type { BuildTarget, CompiledServerBinary } from './types'
@@ -52,6 +53,7 @@ async function bundleServer(
async function compileTarget(
target: BuildTarget,
env: NodeJS.ProcessEnv,
ci: boolean,
): Promise<string> {
const binaryPath = compiledBinaryPath(target)
const args = [
@@ -66,11 +68,15 @@ async function compileTarget(
await runCommand('bun', args, env)
if (target.os === 'windows') {
await runCommand(
'bun',
['scripts/patch-windows-exe.ts', binaryPath],
process.env,
)
if (ci) {
log.warn('Skipping Windows exe metadata patching in CI mode')
} else {
await runCommand(
'bun',
['scripts/patch-windows-exe.ts', binaryPath],
process.env,
)
}
}
return binaryPath
@@ -81,14 +87,16 @@ export async function compileServerBinaries(
envVars: Record<string, string>,
processEnv: NodeJS.ProcessEnv,
version: string,
options?: { ci?: boolean },
): Promise<CompiledServerBinary[]> {
const ci = options?.ci ?? false
rmSync(TMP_ROOT, { recursive: true, force: true })
mkdirSync(BINARIES_DIR, { recursive: true })
await bundleServer(envVars, version)
const compiled: CompiledServerBinary[] = []
for (const target of targets) {
const binaryPath = await compileTarget(target, processEnv)
const binaryPath = await compileTarget(target, processEnv, ci)
compiled.push({ target, binaryPath })
}

View File

@@ -75,7 +75,7 @@ function validateProductionEnv(envVars: Record<string, string>): void {
}
export interface LoadBuildConfigOptions {
compileOnly?: boolean
ci?: boolean
}
export function loadBuildConfig(
@@ -84,7 +84,9 @@ export function loadBuildConfig(
): BuildConfig {
const fileEnv = loadProdEnv(rootDir)
const envVars = buildInlineEnv(fileEnv)
validateProductionEnv(envVars)
if (!options.ci) {
validateProductionEnv(envVars)
}
const processEnv: NodeJS.ProcessEnv = {
PATH: process.env.PATH ?? '',
@@ -92,7 +94,7 @@ export function loadBuildConfig(
...process.env,
}
if (options.compileOnly) {
if (options.ci) {
return { version: readServerVersion(rootDir), envVars, processEnv }
}

View File

@@ -2,13 +2,17 @@ import { existsSync } from 'node:fs'
import { resolve } from 'node:path'
import { log } from '../log'
import { archiveAndUploadArtifacts } from './archive'
import { archiveAndUploadArtifacts, archiveArtifacts } from './archive'
import { parseBuildArgs } from './cli'
import { compileServerBinaries, getDistProdRoot } from './compile'
import { loadBuildConfig } from './config'
import { getTargetRules, loadManifest } from './manifest'
import { createR2Client } from './r2'
import { stageTargetArtifact } from './stage'
import { stageCompiledArtifact, stageTargetArtifact } from './stage'
function buildModeLabel(ci: boolean): string {
return ci ? 'ci' : 'full'
}
export async function runProdResourceBuild(argv: string[]): Promise<void> {
const rootDir = resolve(import.meta.dir, '../../..')
@@ -16,25 +20,40 @@ export async function runProdResourceBuild(argv: string[]): Promise<void> {
const args = parseBuildArgs(argv)
const buildConfig = loadBuildConfig(rootDir, {
compileOnly: args.compileOnly,
})
const buildConfig = loadBuildConfig(rootDir, { ci: args.ci })
log.header(`Building BrowserOS server artifacts v${buildConfig.version}`)
log.info(`Targets: ${args.targets.map((target) => target.id).join(', ')}`)
log.info(`Mode: ${args.compileOnly ? 'compile-only' : 'full'}`)
log.info(`Mode: ${buildModeLabel(args.ci)}`)
const compiled = await compileServerBinaries(
args.targets,
buildConfig.envVars,
buildConfig.processEnv,
buildConfig.version,
{ ci: args.ci },
)
if (args.compileOnly) {
log.done('Compile-only build completed')
if (args.ci) {
const distRoot = getDistProdRoot()
const localArtifacts = []
for (const binary of compiled) {
log.info(`${binary.target.id}: ${binary.binaryPath}`)
log.step(`Packaging ${binary.target.name}`)
const staged = await stageCompiledArtifact(
distRoot,
binary.binaryPath,
binary.target,
buildConfig.version,
)
localArtifacts.push(staged)
log.success(`Packaged ${binary.target.id}`)
}
const archiveResults = await archiveArtifacts(localArtifacts)
log.done('CI build completed')
for (const result of archiveResults) {
log.info(`${result.targetId}: ${result.zipPath}`)
}
return
}

View File

@@ -32,6 +32,36 @@ async function copyServerBinary(
}
}
async function createArtifactRoot(
distRoot: string,
compiledBinaryPath: string,
target: BuildTarget,
): Promise<string> {
const rootDir = artifactRoot(distRoot, target)
await rm(rootDir, { recursive: true, force: true })
await mkdir(rootDir, { recursive: true })
await copyServerBinary(
compiledBinaryPath,
serverDestinationPath(rootDir, target),
target,
)
return rootDir
}
async function finalizeArtifact(
rootDir: string,
target: BuildTarget,
version: string,
): Promise<StagedArtifact> {
const metadataPath = await writeArtifactMetadata(rootDir, target, version)
return {
target,
rootDir,
resourcesDir: join(rootDir, 'resources'),
metadataPath,
}
}
function resolveDestination(rootDir: string, destination: string): string {
const outputPath = join(rootDir, destination)
const relativePath = relative(rootDir, outputPath)
@@ -67,25 +97,21 @@ export async function stageTargetArtifact(
r2: R2Config,
version: string,
): Promise<StagedArtifact> {
const rootDir = artifactRoot(distRoot, target)
await rm(rootDir, { recursive: true, force: true })
await mkdir(rootDir, { recursive: true })
await copyServerBinary(
compiledBinaryPath,
serverDestinationPath(rootDir, target),
target,
)
const rootDir = await createArtifactRoot(distRoot, compiledBinaryPath, target)
for (const rule of rules) {
await stageRule(rootDir, rule, target, client, r2)
}
const metadataPath = await writeArtifactMetadata(rootDir, target, version)
return {
target,
rootDir,
resourcesDir: join(rootDir, 'resources'),
metadataPath,
}
return finalizeArtifact(rootDir, target, version)
}
export async function stageCompiledArtifact(
distRoot: string,
compiledBinaryPath: string,
target: BuildTarget,
version: string,
): Promise<StagedArtifact> {
const rootDir = await createArtifactRoot(distRoot, compiledBinaryPath, target)
return finalizeArtifact(rootDir, target, version)
}

View File

@@ -21,7 +21,7 @@ export interface BuildArgs {
targets: BuildTarget[]
manifestPath: string
upload: boolean
compileOnly: boolean
ci: boolean
}
export interface R2Config {

View File

@@ -402,9 +402,11 @@ def main(
"upload": upload,
}
# Resolve build context (CONFIG mode or DIRECT mode)
# Resolve build context (CONFIG mode or DIRECT mode).
# Returns one Context per architecture — single-element for normal
# builds, multi-element when YAML declares `architecture: [x64, arm64]`.
try:
ctx = resolve_config(cli_args, config_data)
arch_ctxs = resolve_config(cli_args, config_data)
except ValueError as e:
log_error(str(e))
raise typer.Exit(1)
@@ -459,20 +461,40 @@ def main(
os.environ["DEPOT_TOOLS_WIN_TOOLCHAIN"] = "0"
log_info("Set DEPOT_TOOLS_WIN_TOOLCHAIN=0 for Windows build")
# Print build summary using the first context — versions and paths
# are identical across per-arch contexts. Architecture is logged again
# inside the loop below for multi-arch runs.
summary_ctx = arch_ctxs[0]
log_info(f"📍 Root: {root_dir}")
log_info(f"📍 Chromium: {ctx.chromium_src}")
log_info(f"📍 Architecture: {ctx.architecture}")
log_info(f"📍 Build type: {ctx.build_type}")
log_info(f"📍 Output: {ctx.out_dir}")
log_info(f"📍 Semantic version: {ctx.semantic_version}")
log_info(f"📍 Chromium version: {ctx.chromium_version}")
log_info(f"📍 Build offset: {ctx.browseros_build_offset}")
log_info(f"📍 Chromium: {summary_ctx.chromium_src}")
if len(arch_ctxs) > 1:
log_info(
f"📍 Architectures: {[c.architecture for c in arch_ctxs]} (multi-arch loop)"
)
else:
log_info(f"📍 Architecture: {summary_ctx.architecture}")
log_info(f"📍 Build type: {summary_ctx.build_type}")
log_info(f"📍 Semantic version: {summary_ctx.semantic_version}")
log_info(f"📍 Chromium version: {summary_ctx.chromium_version}")
log_info(f"📍 Build offset: {summary_ctx.browseros_build_offset}")
log_info(f"📍 Pipeline: {''.join(pipeline)}")
log_info("=" * 70)
# Set notification context for OS and architecture
os_name = "macOS" if IS_MACOS() else "Windows" if IS_WINDOWS() else "Linux"
set_build_context(os_name, ctx.architecture)
# Execute pipeline
execute_pipeline(ctx, pipeline, AVAILABLE_MODULES, pipeline_name="build")
# Execute the pipeline once per architecture. Modules see a normal
# single-arch ctx; the runner is the only thing that knows about the
# multi-arch loop.
for i, arch_ctx in enumerate(arch_ctxs, start=1):
if len(arch_ctxs) > 1:
log_info("\n" + "#" * 70)
log_info(
f"# Architecture {i}/{len(arch_ctxs)}: {arch_ctx.architecture}"
)
log_info(f"# Output: {arch_ctx.out_dir}")
log_info("#" * 70)
set_build_context(os_name, arch_ctx.architecture)
execute_pipeline(
arch_ctx, pipeline, AVAILABLE_MODULES, pipeline_name="build"
)

View File

@@ -26,11 +26,13 @@ from .context import Context
from .env import EnvConfig
from .utils import get_platform_arch, log_info
VALID_ARCHITECTURES = {"x64", "arm64", "universal"}
def resolve_config(
cli_args: Dict[str, Any],
yaml_config: Optional[Dict[str, Any]] = None,
) -> Context:
) -> List[Context]:
"""Resolve build configuration - single entry point.
Args:
@@ -38,7 +40,9 @@ def resolve_config(
yaml_config: Optional YAML configuration (triggers CONFIG mode)
Returns:
Fully resolved Context object
List of fully resolved Context objects. Single-element for the
common single-arch case; multi-element when YAML declares
`architecture: [x64, arm64]` (Linux multi-arch).
Raises:
ValueError: If required fields missing or invalid
@@ -59,7 +63,7 @@ def resolve_config(
def _resolve_config_mode(
yaml_config: Dict[str, Any], cli_args: Dict[str, Any]
) -> Context:
) -> List[Context]:
"""CONFIG MODE: YAML is base, CLI can override.
Args:
@@ -67,7 +71,7 @@ def _resolve_config_mode(
cli_args: CLI arguments (can override YAML values)
Returns:
Context with values from YAML, optionally overridden by CLI
List of Contexts. One per architecture when YAML provides a list.
Raises:
ValueError: If required fields missing from both YAML and CLI
@@ -94,41 +98,66 @@ def _resolve_config_mode(
f"Expected directory with Chromium source code"
)
# architecture: CLI override > YAML > platform default
architecture = (
cli_args.get("arch")
or build_section.get("architecture")
or build_section.get("arch")
)
arch_source = "cli" if cli_args.get("arch") else "yaml"
if not architecture:
architecture = get_platform_arch()
# architecture: CLI override > YAML > platform default.
# YAML may be a string OR a list (e.g. [x64, arm64]) — list form runs
# the entire pipeline once per arch.
cli_arch = cli_args.get("arch")
yaml_arch = build_section.get("architecture") or build_section.get("arch")
if cli_arch:
architectures = [cli_arch]
arch_source = "cli"
elif yaml_arch is not None:
architectures = yaml_arch if isinstance(yaml_arch, list) else [yaml_arch]
arch_source = "yaml"
else:
architectures = [get_platform_arch()]
arch_source = "default"
log_info(f"CONFIG MODE: Using platform default architecture: {architecture}")
log_info(
f"CONFIG MODE: Using platform default architecture: {architectures[0]}"
)
for arch in architectures:
if arch not in VALID_ARCHITECTURES:
raise ValueError(
f"CONFIG MODE: invalid architecture '{arch}'. "
f"Valid: {sorted(VALID_ARCHITECTURES)}"
)
# build_type: CLI override > YAML > debug
build_type = cli_args.get("build_type") or build_section.get("type", "debug")
build_type_source = "cli" if cli_args.get("build_type") else "yaml"
log_info(f"✓ CONFIG MODE: chromium_src={chromium_src} ({chromium_src_source})")
log_info(f"✓ CONFIG MODE: architecture={architecture} ({arch_source})")
if len(architectures) > 1:
log_info(
f"✓ CONFIG MODE: architectures={architectures} ({arch_source}, multi-arch loop)"
)
else:
log_info(
f"✓ CONFIG MODE: architecture={architectures[0]} ({arch_source})"
)
log_info(f"✓ CONFIG MODE: build_type={build_type} ({build_type_source})")
return Context(
chromium_src=chromium_src,
architecture=architecture,
build_type=build_type,
)
return [
Context(
chromium_src=chromium_src,
architecture=arch,
build_type=build_type,
)
for arch in architectures
]
def _resolve_direct_mode(cli_args: Dict[str, Any]) -> Context:
def _resolve_direct_mode(cli_args: Dict[str, Any]) -> List[Context]:
"""DIRECT MODE: CLI > Env > Defaults.
Args:
cli_args: CLI arguments (None if not provided by user)
Returns:
Context with resolved values
Single-element list with the resolved Context. DIRECT mode is
always single-arch (CLI --arch is a scalar).
Raises:
ValueError: If chromium_src not provided
@@ -160,6 +189,12 @@ def _resolve_direct_mode(cli_args: Dict[str, Any]) -> Context:
architecture = get_platform_arch()
log_info(f"DIRECT MODE: Using platform default architecture: {architecture}")
if architecture not in VALID_ARCHITECTURES:
raise ValueError(
f"DIRECT MODE: invalid architecture '{architecture}'. "
f"Valid: {sorted(VALID_ARCHITECTURES)}"
)
# build_type: CLI > Default
build_type = cli_args.get("build_type") or "debug"
@@ -167,11 +202,13 @@ def _resolve_direct_mode(cli_args: Dict[str, Any]) -> Context:
log_info(f"✓ DIRECT MODE: architecture={architecture} (cli/env/default)")
log_info(f"✓ DIRECT MODE: build_type={build_type} (cli/default)")
return Context(
chromium_src=chromium_src,
architecture=architecture,
build_type=build_type,
)
return [
Context(
chromium_src=chromium_src,
architecture=architecture,
build_type=build_type,
)
]
def resolve_pipeline(

View File

@@ -3,7 +3,10 @@
# This config packages an already-built Linux application.
# Use this when you have a pre-built app and only need to package it.
#
# Expects: out/Default/chrome (Linux binary)
# Expects: out/Default_<arch>/browseros
# Invoke with:
# browseros build --config build/config/package.linux.yaml --arch x64
# browseros build --config build/config/package.linux.yaml --arch arm64
#
# Environment Variables:
# Use !env tag to reference environment variables:
@@ -11,7 +14,6 @@
build:
type: release
architecture: x64 # Linux x64
gn_flags:
file: build/config/gn/flags.linux.release.gn

View File

@@ -1,17 +1,24 @@
# BrowserOS Linux Release Build Configuration
#
# Pinned to arm64-only to validate the cross-compile sysroot bootstrap
# end-to-end on a Linux x64 host. Flip back to `[x64, arm64]` once arm64
# is green.
#
# Run:
# browseros build --config build/config/release.linux.yaml
#
# Environment Variables:
# Use !env tag to reference environment variables:
# Example: chromium_src: !env CHROMIUM_SRC
build:
type: release
architecture: x64 # Linux x64
architecture: arm64
gn_flags:
file: build/config/gn/flags.linux.release.gn
# Explicit module execution order
# Explicit module execution order. Runs once per architecture above.
modules:
# Phase 1: Setup
- clean

View File

@@ -17,10 +17,64 @@ from ...common.utils import (
run_command,
safe_rmtree,
join_paths,
get_platform_arch,
IS_LINUX,
)
from ...common.notify import get_notifier, COLOR_GREEN
# Target-arch packaging metadata. These describe the artifact we're
# producing, not the build machine. `appimage_arch` is passed to
# appimagetool via the ARCH env var; `deb_arch` is written into the
# .deb control file.
LINUX_ARCHITECTURE_CONFIG = {
"x64": {
"appimage_arch": "x86_64",
"deb_arch": "amd64",
},
"arm64": {
"appimage_arch": "aarch64",
"deb_arch": "arm64",
},
}
# Host-arch tool selection. appimagetool is a normal binary that runs on
# the build machine — when cross-compiling arm64 from an x64 host, we
# still need the x86_64 tool to actually execute. Keyed on
# get_platform_arch() (BUILD machine arch), NOT ctx.architecture.
LINUX_HOST_APPIMAGETOOL = {
"x64": (
"appimagetool-x86_64.AppImage",
"https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage",
),
"arm64": (
"appimagetool-aarch64.AppImage",
"https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-aarch64.AppImage",
),
}
def get_linux_architecture_config(architecture: str) -> dict[str, str]:
config = LINUX_ARCHITECTURE_CONFIG.get(architecture)
if not config:
supported = ", ".join(sorted(LINUX_ARCHITECTURE_CONFIG))
raise ValueError(
f"Unsupported Linux architecture: {architecture}. Supported: {supported}"
)
return config
def get_host_appimagetool() -> tuple[str, str]:
"""Return (filename, url) for the appimagetool binary that runs on
the current build machine. Critical for cross-compile correctness."""
host_arch = get_platform_arch()
tool = LINUX_HOST_APPIMAGETOOL.get(host_arch)
if not tool:
supported = ", ".join(sorted(LINUX_HOST_APPIMAGETOOL))
raise ValueError(
f"No appimagetool binary for host arch '{host_arch}'. Supported: {supported}"
)
return tool
class LinuxPackageModule(CommandModule):
produces = ["appimage", "deb"]
@@ -30,6 +84,10 @@ class LinuxPackageModule(CommandModule):
def validate(self, ctx: Context) -> None:
if not IS_LINUX():
raise ValidationError("Linux packaging requires Linux")
try:
get_linux_architecture_config(ctx.architecture)
except ValueError as exc:
raise ValidationError(str(exc)) from exc
out_dir = join_paths(ctx.chromium_src, ctx.out_dir)
chrome_binary = join_paths(out_dir, ctx.BROWSEROS_APP_NAME)
@@ -73,7 +131,7 @@ class LinuxPackageModule(CommandModule):
artifacts.append(deb_path.name)
notifier.notify(
"📦 Package Created",
f"Linux packages created successfully",
"Linux packages created successfully",
{
"Artifacts": ", ".join(artifacts),
"Version": ctx.semantic_version,
@@ -284,25 +342,30 @@ export CHROME_WRAPPER="${{THIS}}"
def download_appimagetool(ctx: Context) -> Optional[Path]:
"""Download appimagetool if not available"""
"""Download the appimagetool binary that runs on the build machine.
Note: this is keyed on the HOST arch, not ctx.architecture. When
cross-compiling arm64 packages from an x64 host, we still need the
x86_64 appimagetool because the tool executes locally; the target
arch is communicated via the ARCH env var in create_appimage().
"""
tool_dir = Path(join_paths(ctx.root_dir, "build", "tools"))
tool_dir.mkdir(exist_ok=True)
tool_path = Path(join_paths(tool_dir, "appimagetool-x86_64.AppImage"))
tool_filename, url = get_host_appimagetool()
tool_path = Path(join_paths(tool_dir, tool_filename))
if tool_path.exists():
log_info("✓ appimagetool already available")
log_info(f"✓ appimagetool already available ({tool_filename})")
return tool_path
log_info("📥 Downloading appimagetool...")
url = "https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage"
log_info(f"📥 Downloading {tool_filename}...")
cmd = ["wget", "-O", str(tool_path), url]
result = run_command(cmd, check=False)
if result.returncode == 0:
tool_path.chmod(0o755)
log_success("✓ Downloaded appimagetool")
log_success(f"✓ Downloaded {tool_filename}")
return tool_path
else:
log_error("Failed to download appimagetool")
@@ -312,6 +375,7 @@ def download_appimagetool(ctx: Context) -> Optional[Path]:
def create_appimage(ctx: Context, appdir: Path, output_path: Path) -> bool:
"""Create AppImage from AppDir"""
log_info("📦 Creating AppImage...")
arch_config = get_linux_architecture_config(ctx.architecture)
# Download appimagetool if needed
appimagetool = download_appimagetool(ctx)
@@ -319,7 +383,7 @@ def create_appimage(ctx: Context, appdir: Path, output_path: Path) -> bool:
return False
# Set architecture environment variable (required by appimagetool)
arch = "x86_64" if ctx.architecture == "x64" else "aarch64"
arch = arch_config["appimage_arch"]
# Create AppImage with ARCH env var set for this command only
cmd = [
@@ -384,7 +448,7 @@ def create_control_file(ctx: Context, debian_dir: Path) -> None:
version = version.lstrip("v").replace(" ", "").replace("_", ".")
# Architecture mapping
deb_arch = "amd64" if ctx.architecture == "x64" else "arm64"
deb_arch = get_linux_architecture_config(ctx.architecture)["deb_arch"]
control_content = f"""Package: browseros
Version: {version}
@@ -653,7 +717,9 @@ def package_appimage(ctx: Context, package_dir: Path) -> Optional[Path]:
"""
log_info("🖼️ Building AppImage...")
appdir = Path(join_paths(package_dir, f"{ctx.BROWSEROS_APP_BASE_NAME}.AppDir"))
appdir = Path(
join_paths(package_dir, f"{ctx.BROWSEROS_APP_BASE_NAME}-{ctx.architecture}.AppDir")
)
if appdir.exists():
safe_rmtree(appdir)
@@ -683,7 +749,9 @@ def package_deb(ctx: Context, package_dir: Path) -> Optional[Path]:
"""
log_info("📦 Building .deb package...")
debdir = Path(join_paths(package_dir, f"{ctx.BROWSEROS_APP_BASE_NAME}_deb"))
debdir = Path(
join_paths(package_dir, f"{ctx.BROWSEROS_APP_BASE_NAME}_{ctx.architecture}_deb")
)
if debdir.exists():
safe_rmtree(debdir)
@@ -703,6 +771,8 @@ def package_deb(ctx: Context, package_dir: Path) -> Optional[Path]:
return output_path
return None
def package_universal(contexts: List[Context]) -> bool:
"""Linux doesn't support universal binaries"""
log_warning("Universal binaries are not supported on Linux")

View File

@@ -0,0 +1,63 @@
#!/usr/bin/env python3
"""Tests for Linux packaging architecture helpers."""
import unittest
from unittest.mock import patch
from build.modules.package.linux import (
LINUX_HOST_APPIMAGETOOL,
get_host_appimagetool,
get_linux_architecture_config,
)
class LinuxArchitectureConfigTest(unittest.TestCase):
def test_returns_x64_packaging_config(self) -> None:
config = get_linux_architecture_config("x64")
self.assertEqual(config["appimage_arch"], "x86_64")
self.assertEqual(config["deb_arch"], "amd64")
def test_returns_arm64_packaging_config(self) -> None:
config = get_linux_architecture_config("arm64")
self.assertEqual(config["appimage_arch"], "aarch64")
self.assertEqual(config["deb_arch"], "arm64")
def test_rejects_unsupported_architecture(self) -> None:
with self.assertRaisesRegex(ValueError, "Unsupported Linux architecture"):
get_linux_architecture_config("universal")
class HostAppImageToolTest(unittest.TestCase):
"""The appimagetool binary must match the BUILD machine's arch, not
the target arch — otherwise cross-compiling arm64 packages from an x64
host fails because the aarch64 tool can't execute on x64."""
def test_x64_host_picks_x86_64_tool(self) -> None:
with patch(
"build.modules.package.linux.get_platform_arch", return_value="x64"
):
filename, url = get_host_appimagetool()
self.assertEqual(filename, "appimagetool-x86_64.AppImage")
self.assertIn("x86_64", url)
def test_arm64_host_picks_aarch64_tool(self) -> None:
with patch(
"build.modules.package.linux.get_platform_arch", return_value="arm64"
):
filename, url = get_host_appimagetool()
self.assertEqual(filename, "appimagetool-aarch64.AppImage")
self.assertIn("aarch64", url)
def test_host_lookup_independent_of_target(self) -> None:
# Both architectures must be present in the host lookup so cross
# builds work in either direction.
self.assertIn("x64", LINUX_HOST_APPIMAGETOOL)
self.assertIn("arm64", LINUX_HOST_APPIMAGETOOL)
if __name__ == "__main__":
unittest.main()

View File

@@ -6,7 +6,6 @@ from datetime import datetime
from typing import Dict, List, Optional
from ...common.env import EnvConfig
from ...common.utils import log_warning
from ..storage import get_release_json, get_r2_client, BOTO3_AVAILABLE
PLATFORMS = ["macos", "win", "linux"]
@@ -24,6 +23,8 @@ DOWNLOAD_PATH_MAPPING = {
"linux": {
"x64_appimage": "download/BrowserOS.AppImage",
"x64_deb": "download/BrowserOS.deb",
"arm64_appimage": "download/BrowserOS-arm64.AppImage",
"arm64_deb": "download/BrowserOS-arm64.deb",
},
}

View File

@@ -1,9 +1,19 @@
#!/usr/bin/env python3
"""Build configuration module for BrowserOS build system"""
import sys
from ...common.module import CommandModule, ValidationError
from ...common.context import Context
from ...common.utils import run_command, log_info, log_success, join_paths, IS_WINDOWS
from ...common.utils import (
run_command,
log_info,
log_warning,
log_success,
join_paths,
IS_LINUX,
IS_WINDOWS,
)
class ConfigureModule(CommandModule):
@@ -25,6 +35,16 @@ class ConfigureModule(CommandModule):
def execute(self, ctx: Context) -> None:
log_info(f"\n⚙️ Configuring {ctx.build_type} build for {ctx.architecture}...")
# Linux: ensure the target-arch Debian sysroot is installed before
# `gn gen`. sysroot.gni asserts on missing sysroots, and relying on
# `gclient sync` DEPS hooks is fragile — the hook only fires when
# .gclient declared the right `target_cpus` *before* sync, which
# isn't guaranteed for chromium_src checkouts that predate
# cross-arch support. install-sysroot.py is idempotent and fast,
# so call it unconditionally for the target arch.
if IS_LINUX():
self._ensure_linux_sysroot(ctx)
out_path = join_paths(ctx.chromium_src, ctx.out_dir)
out_path.mkdir(parents=True, exist_ok=True)
@@ -43,3 +63,26 @@ class ConfigureModule(CommandModule):
run_command(gn_args, cwd=ctx.chromium_src)
log_success("Build configured")
def _ensure_linux_sysroot(self, ctx: Context) -> None:
install_script = (
ctx.chromium_src / "build" / "linux" / "sysroot_scripts" / "install-sysroot.py"
)
if not install_script.exists():
log_warning(
f"⚠️ install-sysroot.py not found at {install_script}; "
f"skipping sysroot bootstrap. gn gen will fail if the "
f"{ctx.architecture} sysroot is missing."
)
return
# install-sysroot.py accepts our arch names directly: it translates
# `x64`→`amd64` internally via ARCH_TRANSLATIONS, and `arm64` is a
# valid pass-through value.
log_info(
f"📦 Ensuring Linux sysroot for {ctx.architecture} (idempotent)..."
)
run_command(
[sys.executable, str(install_script), f"--arch={ctx.architecture}"],
cwd=ctx.chromium_src,
)

View File

@@ -1,12 +1,24 @@
#!/usr/bin/env python3
"""Git operations module for BrowserOS build system"""
import re
import subprocess
import tarfile
import urllib.request
from typing import List
from ...common.module import CommandModule, ValidationError
from ...common.context import Context
from ...common.utils import run_command, log_info, log_error, log_success, IS_WINDOWS, safe_rmtree
from ...common.utils import (
run_command,
log_info,
log_warning,
log_error,
log_success,
IS_LINUX,
IS_WINDOWS,
safe_rmtree,
)
class GitSetupModule(CommandModule):
@@ -32,6 +44,12 @@ class GitSetupModule(CommandModule):
log_info(f"🔀 Checking out tag: {ctx.chromium_version}")
run_command(["git", "checkout", f"tags/{ctx.chromium_version}"], cwd=ctx.chromium_src)
# On Linux, depot_tools fetches per-arch sysroots automatically when
# `.gclient` declares `target_cpus`. Ensure both x64 and arm64 are
# listed before sync so cross-compilation just works on x64 hosts.
if IS_LINUX():
self._ensure_gclient_target_cpus(ctx, ["x64", "arm64"])
log_info("📥 Syncing dependencies (this may take a while)...")
if IS_WINDOWS():
run_command(["gclient.bat", "sync", "-D", "--no-history", "--shallow"], cwd=ctx.chromium_src)
@@ -40,6 +58,49 @@ class GitSetupModule(CommandModule):
log_success("Git setup complete")
def _ensure_gclient_target_cpus(self, ctx: Context, required: List[str]) -> None:
"""Idempotently add `target_cpus` to .gclient so depot_tools fetches
the matching Linux sysroots for cross-compilation.
depot_tools convention: .gclient lives one directory above
chromium_src (i.e. ../.gclient). It is a Python file with a list
of solution dicts followed by optional top-level assignments.
We append a `target_cpus = [...]` line if missing or merge in any
archs that aren't already present.
"""
gclient_path = ctx.chromium_src.parent / ".gclient"
if not gclient_path.exists():
log_warning(
f"⚠️ .gclient not found at {gclient_path}; "
f"skipping target_cpus bootstrap. "
f"Cross-arch builds may fail until you run `fetch chromium`."
)
return
content = gclient_path.read_text()
match = re.search(r"^\s*target_cpus\s*=\s*\[([^\]]*)\]", content, re.MULTILINE)
if match:
existing = re.findall(r"['\"]([^'\"]+)['\"]", match.group(1))
missing = [arch for arch in required if arch not in existing]
if not missing:
log_info(f"✓ .gclient target_cpus already includes {required}")
return
merged = sorted(set(existing) | set(required))
new_line = f"target_cpus = {merged!r}"
content = (
content[: match.start()] + new_line + content[match.end() :]
)
log_info(
                f"📝 Updating .gclient target_cpus: {existing} → {merged}"
)
else:
new_line = f"\ntarget_cpus = {required!r}\n"
content = content.rstrip() + "\n" + new_line
log_info(f"📝 Adding target_cpus = {required} to .gclient")
gclient_path.write_text(content)
def _verify_tag_exists(self, ctx: Context) -> None:
result = subprocess.run(
["git", "tag", "-l", ctx.chromium_version],

View File

@@ -4,7 +4,7 @@
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional, Tuple
from typing import Any, Dict, List, Optional, Tuple, cast
from ...common.module import CommandModule, ValidationError
from ...common.context import Context
@@ -21,6 +21,7 @@ from ...common.notify import get_notifier, COLOR_GREEN
from .r2 import (
BOTO3_AVAILABLE,
get_r2_client,
get_release_json,
upload_file_to_r2,
)
@@ -58,7 +59,10 @@ class UploadModule(CommandModule):
log_info("\nUploading package artifacts to R2...")
extra_metadata = {}
sparkle_signatures = ctx.artifacts.get("sparkle_signatures")
sparkle_signatures = cast(
Optional[dict[str, tuple[str, int]]],
ctx.artifacts.get("sparkle_signatures"),
)
if sparkle_signatures:
for filename, (sig, length) in sparkle_signatures.items():
extra_metadata[filename] = {
@@ -120,6 +124,36 @@ def generate_release_json(
return release_data
def merge_release_metadata(existing: Optional[Dict], new: Dict) -> Dict:
if not existing:
return new
merged = dict(existing)
merged.update({key: value for key, value in new.items() if key != "artifacts"})
artifacts = dict(existing.get("artifacts", {}))
artifacts.update(new.get("artifacts", {}))
merged["artifacts"] = artifacts
return merged
def _get_linux_artifact_key(filename: str) -> Optional[str]:
lower = filename.lower()
if ".appimage" in lower:
if "arm64" in lower or "aarch64" in lower:
return "arm64_appimage"
if "x64" in lower or "x86_64" in lower:
return "x64_appimage"
elif ".deb" in lower:
if "arm64" in lower or "aarch64" in lower:
return "arm64_deb"
if "amd64" in lower or "x64" in lower or "x86_64" in lower:
return "x64_deb"
return None
def _get_artifact_key(filename: str, platform: str) -> str:
"""Get artifact key name from filename
@@ -147,10 +181,10 @@ def _get_artifact_key(filename: str, platform: str) -> str:
return "x64_zip"
elif platform == "linux":
if ".appimage" in lower:
return "x64_appimage"
elif ".deb" in lower:
return "x64_deb"
artifact_key = _get_linux_artifact_key(filename)
if artifact_key:
return artifact_key
        log_warning(f"Unrecognized Linux artifact name: {filename}; using stem key")
return Path(filename).stem
@@ -181,7 +215,7 @@ def detect_artifacts(ctx: Context) -> List[Path]:
def upload_release_artifacts(
ctx: Context,
extra_metadata: Optional[Dict[str, Dict[str, any]]] = None,
extra_metadata: Optional[Dict[str, Dict[str, Any]]] = None,
) -> Tuple[bool, Optional[Dict]]:
"""Upload release artifacts to R2 and generate release.json
@@ -240,6 +274,13 @@ def upload_release_artifacts(
artifact_metadata.append(metadata)
release_data = generate_release_json(ctx, artifact_metadata, platform)
if platform == "linux":
# Linux x64 and arm64 release jobs must be sequenced. A parallel
# fetch-merge-upload flow can still race and drop one architecture.
existing_release_data = get_release_json(
ctx.get_semantic_version(), platform, env
)
release_data = merge_release_metadata(existing_release_data, release_data)
release_json_path = ctx.get_dist_dir() / "release.json"
release_json_path.write_text(json.dumps(release_data, indent=2))
@@ -248,7 +289,7 @@ def upload_release_artifacts(
return False, None
log_success(f"\nSuccessfully uploaded {len(artifacts)} artifact(s) to R2")
log_info(f"\nRelease metadata:")
log_info("\nRelease metadata:")
log_info(f" Version: {release_data['version']}")
if platform == "macos":
log_info(f" Sparkle version: {release_data.get('sparkle_version', 'N/A')}")

View File

@@ -0,0 +1,85 @@
#!/usr/bin/env python3
"""Tests for release artifact upload metadata helpers."""
import unittest
from build.modules.storage.upload import _get_artifact_key, merge_release_metadata
class UploadMetadataTest(unittest.TestCase):
def test_linux_x64_artifacts_use_x64_keys(self) -> None:
self.assertEqual(
_get_artifact_key("BrowserOS_v1.2.3_x64.AppImage", "linux"),
"x64_appimage",
)
self.assertEqual(
_get_artifact_key("BrowserOS_v1.2.3_amd64.deb", "linux"),
"x64_deb",
)
def test_linux_arm64_artifacts_use_arm64_keys(self) -> None:
self.assertEqual(
_get_artifact_key("BrowserOS_v1.2.3_arm64.AppImage", "linux"),
"arm64_appimage",
)
self.assertEqual(
_get_artifact_key("BrowserOS_v1.2.3_arm64.deb", "linux"),
"arm64_deb",
)
self.assertEqual(
_get_artifact_key("BrowserOS_v1.2.3_aarch64.deb", "linux"),
"arm64_deb",
)
def test_merge_release_metadata_preserves_existing_artifacts(self) -> None:
existing = {
"platform": "linux",
"version": "1.2.3",
"build_date": "old",
"artifacts": {
"x64_appimage": {"filename": "BrowserOS_v1.2.3_x64.AppImage"},
"x64_deb": {"filename": "BrowserOS_v1.2.3_amd64.deb"},
},
}
new = {
"platform": "linux",
"version": "1.2.3",
"build_date": "new",
"artifacts": {
"arm64_appimage": {"filename": "BrowserOS_v1.2.3_arm64.AppImage"},
"arm64_deb": {"filename": "BrowserOS_v1.2.3_arm64.deb"},
},
}
merged = merge_release_metadata(existing, new)
self.assertEqual(merged["build_date"], "new")
self.assertEqual(
sorted(merged["artifacts"]),
["arm64_appimage", "arm64_deb", "x64_appimage", "x64_deb"],
)
def test_merge_release_metadata_overwrites_matching_artifact_keys(self) -> None:
existing = {
"platform": "linux",
"version": "1.2.3",
"artifacts": {
"x64_appimage": {"filename": "old.AppImage", "size": 1},
},
}
new = {
"platform": "linux",
"version": "1.2.3",
"artifacts": {
"x64_appimage": {"filename": "new.AppImage", "size": 2},
},
}
merged = merge_release_metadata(existing, new)
self.assertEqual(merged["artifacts"]["x64_appimage"]["filename"], "new.AppImage")
self.assertEqual(merged["artifacts"]["x64_appimage"]["size"], 2)
if __name__ == "__main__":
unittest.main()

View File

@@ -1,9 +1,9 @@
diff --git a/chrome/browser/browseros/extensions/browseros_extension_loader.cc b/chrome/browser/browseros/extensions/browseros_extension_loader.cc
new file mode 100644
index 0000000000000..e61b45d08b7e2
index 0000000000000..fdb6be443f25b
--- /dev/null
+++ b/chrome/browser/browseros/extensions/browseros_extension_loader.cc
@@ -0,0 +1,226 @@
@@ -0,0 +1,269 @@
+// Copyright 2024 The Chromium Authors
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
@@ -89,10 +89,53 @@ index 0000000000000..e61b45d08b7e2
+ extension_ids_.merge(result.extension_ids);
+ last_config_ = std::move(result.config);
+
+ LoadFinished(std::move(result.prefs));
+ base::DictValue prefs_to_load = std::move(result.prefs);
+
+ if (prefs_to_load.empty()) {
+ LOG(WARNING) << "browseros: Install returned empty prefs, "
+ << "reconstructing from installed extensions";
+ prefs_to_load = ReconstructPrefsFromInstalledExtensions();
+ LOG(INFO) << "browseros: Reconstructed prefs for "
+ << prefs_to_load.size() << " installed extensions";
+ }
+
+ LoadFinished(std::move(prefs_to_load));
+ OnStartupComplete(result.from_bundled);
+}
+
+base::DictValue
+BrowserOSExtensionLoader::ReconstructPrefsFromInstalledExtensions() {
+ base::DictValue prefs;
+
+ extensions::ExtensionRegistry* registry =
+ extensions::ExtensionRegistry::Get(profile_);
+ if (!registry) {
+ return prefs;
+ }
+
+ const std::string update_url =
+ base::FeatureList::IsEnabled(features::kBrowserOsAlphaFeatures)
+ ? kBrowserOSAlphaUpdateUrl
+ : kBrowserOSUpdateUrl;
+
+ for (const std::string& id : GetBrowserOSExtensionIds()) {
+ const extensions::Extension* ext = registry->GetInstalledExtension(id);
+ if (!ext) {
+ continue;
+ }
+
+ base::DictValue ext_pref;
+ ext_pref.Set(extensions::ExternalProviderImpl::kExternalUpdateUrl,
+ update_url);
+ prefs.Set(id, std::move(ext_pref));
+
+ LOG(INFO) << "browseros: Reconstructed pref for installed extension "
+ << id << " v" << ext->version().GetString();
+ }
+
+ return prefs;
+}
+
+const base::FilePath BrowserOSExtensionLoader::GetBaseCrxFilePath() {
+ return bundled_crx_base_path_;
+}

View File

@@ -1,9 +1,9 @@
diff --git a/chrome/browser/browseros/extensions/browseros_extension_loader.h b/chrome/browser/browseros/extensions/browseros_extension_loader.h
new file mode 100644
index 0000000000000..2a14e9068156e
index 0000000000000..ea2c856556f5f
--- /dev/null
+++ b/chrome/browser/browseros/extensions/browseros_extension_loader.h
@@ -0,0 +1,81 @@
@@ -0,0 +1,86 @@
+// Copyright 2024 The Chromium Authors
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
@@ -62,6 +62,11 @@ index 0000000000000..2a14e9068156e
+ // Convergence point for both startup paths.
+ void OnStartupComplete(bool from_bundled);
+
+ // Reconstructs minimal prefs from already-installed BrowserOS extensions.
+ // Used as a fallback when both bundled CRX and remote config fail,
+ // preventing orphan detection from uninstalling existing extensions.
+ base::DictValue ReconstructPrefsFromInstalledExtensions();
+
+ // Installs remote extensions immediately via PendingExtensionManager + updater.
+ void InstallRemoteExtensionsNow(base::DictValue config);
+

View File

@@ -1,6 +1,6 @@
diff --git a/chrome/browser/browseros/extensions/browseros_extension_maintainer.cc b/chrome/browser/browseros/extensions/browseros_extension_maintainer.cc
new file mode 100644
index 0000000000000..bb33ae5d3b156
index 0000000000000..5804d54696e8f
--- /dev/null
+++ b/chrome/browser/browseros/extensions/browseros_extension_maintainer.cc
@@ -0,0 +1,395 @@

View File

@@ -1,8 +1,13 @@
diff --git a/chrome/browser/devtools/protocol/browser_handler.cc b/chrome/browser/devtools/protocol/browser_handler.cc
index 30bd52d09c3fc..33c7d6d8455fc 100644
index 30bd52d09c3fc..dd9ef4e3b7cbb 100644
--- a/chrome/browser/devtools/protocol/browser_handler.cc
+++ b/chrome/browser/devtools/protocol/browser_handler.cc
@@ -8,19 +8,32 @@
@@ -4,23 +4,37 @@
#include "chrome/browser/devtools/protocol/browser_handler.h"
+#include <algorithm>
#include <set>
#include <vector>
#include "base/functional/bind.h"
@@ -35,7 +40,7 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
#include "content/public/browser/browser_task_traits.h"
#include "content/public/browser/browser_thread.h"
#include "content/public/browser/devtools_agent_host.h"
@@ -30,10 +43,21 @@
@@ -30,10 +44,21 @@
#include "ui/gfx/image/image.h"
#include "ui/gfx/image/image_png_rep.h"
@@ -57,7 +62,7 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
BrowserWindow* GetBrowserWindow(int window_id) {
BrowserWindow* result = nullptr;
ForEachCurrentBrowserWindowInterfaceOrderedByActivation(
@@ -72,17 +96,411 @@ std::unique_ptr<protocol::Browser::Bounds> GetBrowserWindowBounds(
@@ -72,17 +97,419 @@ std::unique_ptr<protocol::Browser::Bounds> GetBrowserWindowBounds(
.Build();
}
@@ -437,6 +442,14 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
+ out_indices->push_back(found_index);
+ }
+
+ if (!(*out_bwi)->GetTabStripModel()->SupportsTabGroups()) {
+ return Response::ServerError("Tab grouping not supported for this window");
+ }
+
+ std::ranges::sort(*out_indices);
+ out_indices->erase(std::ranges::unique(*out_indices).begin(),
+ out_indices->end());
+
+ return Response::Success();
+}
+
@@ -471,7 +484,7 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
Response BrowserHandler::GetWindowForTarget(
std::optional<std::string> target_id,
@@ -120,6 +538,65 @@ Response BrowserHandler::GetWindowForTarget(
@@ -120,6 +547,65 @@ Response BrowserHandler::GetWindowForTarget(
return Response::Success();
}
@@ -537,7 +550,7 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
Response BrowserHandler::GetWindowBounds(
int window_id,
std::unique_ptr<protocol::Browser::Bounds>* out_bounds) {
@@ -297,3 +774,910 @@ protocol::Response BrowserHandler::AddPrivacySandboxEnrollmentOverride(
@@ -297,3 +783,909 @@ protocol::Response BrowserHandler::AddPrivacySandboxEnrollmentOverride(
net::SchemefulSite(url_to_add));
return Response::Success();
}
@@ -1447,4 +1460,3 @@ index 30bd52d09c3fc..33c7d6d8455fc 100644
+bool BrowserHandler::IsHiddenWindow(int window_id) const {
+ return hidden_window_ids_.contains(window_id);
+}
+

View File

@@ -0,0 +1,123 @@
diff --git a/chrome/browser/devtools/protocol/devtools_protocol_browsertest.cc b/chrome/browser/devtools/protocol/devtools_protocol_browsertest.cc
index e57b0883b725f..58bfa8d8f5412 100644
--- a/chrome/browser/devtools/protocol/devtools_protocol_browsertest.cc
+++ b/chrome/browser/devtools/protocol/devtools_protocol_browsertest.cc
@@ -20,6 +20,7 @@
#include "base/test/test_switches.h"
#include "base/test/values_test_util.h"
#include "base/threading/thread_restrictions.h"
+#include "base/time/time.h"
#include "base/values.h"
#include "build/build_config.h"
#include "chrome/browser/apps/app_service/app_service_proxy.h"
@@ -30,6 +31,7 @@
#include "chrome/browser/data_saver/data_saver.h"
#include "chrome/browser/devtools/devtools_window.h"
#include "chrome/browser/devtools/protocol/devtools_protocol_test_support.h"
+#include "chrome/browser/history/history_service_factory.h"
#include "chrome/browser/preloading/preloading_prefs.h"
#include "chrome/browser/privacy_sandbox/privacy_sandbox_attestations/privacy_sandbox_attestations_mixin.h"
#include "chrome/browser/profiles/profile.h"
@@ -43,6 +45,8 @@
#include "components/content_settings/core/browser/cookie_settings.h"
#include "components/content_settings/core/common/pref_names.h"
#include "components/custom_handlers/protocol_handler_registry.h"
+#include "components/history/core/browser/history_service.h"
+#include "components/history/core/test/history_service_test_util.h"
#include "components/infobars/content/content_infobar_manager.h"
#include "components/infobars/core/infobar.h"
#include "components/infobars/core/infobar_delegate.h"
@@ -2202,6 +2206,93 @@ IN_PROC_BROWSER_TEST_F(DevToolsProtocolTest,
SendCommandSync("Target.getTargets");
EXPECT_EQ(2u, result()->FindList("targetInfos")->size());
}
+
+IN_PROC_BROWSER_TEST_F(DevToolsProtocolTest,
+ CreateTabGroupAcceptsUnsortedTabIds) {
+ AttachToBrowserTarget();
+
+ ASSERT_EQ(1, browser()->tab_strip_model()->count());
+
+ base::DictValue params;
+ params.Set("url", "about:blank");
+ params.Set("background", true);
+ ASSERT_TRUE(SendCommandSync("Browser.createTab", params.Clone()));
+ ASSERT_TRUE(SendCommandSync("Browser.createTab", std::move(params)));
+
+ const base::DictValue* tabs_result = SendCommandSync("Browser.getTabs");
+ ASSERT_TRUE(tabs_result);
+ const base::ListValue* tabs = tabs_result->FindList("tabs");
+ ASSERT_TRUE(tabs);
+ ASSERT_EQ(3u, tabs->size());
+
+ std::vector<int> tab_ids;
+ tab_ids.reserve(tabs->size());
+ for (const auto& tab : *tabs) {
+ tab_ids.push_back(*tab.GetDict().FindInt("tabId"));
+ }
+
+ base::ListValue unsorted_tab_ids;
+ unsorted_tab_ids.Append(tab_ids[2]);
+ unsorted_tab_ids.Append(tab_ids[0]);
+
+ base::DictValue create_group_params;
+ create_group_params.Set("tabIds", std::move(unsorted_tab_ids));
+ create_group_params.Set("title", "Unsorted");
+
+ const base::DictValue* create_group_result =
+ SendCommandSync("Browser.createTabGroup", std::move(create_group_params));
+ ASSERT_TRUE(create_group_result);
+ ASSERT_FALSE(error());
+
+ const base::DictValue* group = create_group_result->FindDict("group");
+ ASSERT_TRUE(group);
+ const base::ListValue* grouped_tab_ids = group->FindList("tabIds");
+ ASSERT_TRUE(grouped_tab_ids);
+ ASSERT_EQ(2u, grouped_tab_ids->size());
+ EXPECT_EQ(tab_ids[0], *grouped_tab_ids->front().GetIfInt());
+ EXPECT_EQ(tab_ids[2], *grouped_tab_ids->back().GetIfInt());
+ EXPECT_EQ("Unsorted", *group->FindString("title"));
+}
+
+IN_PROC_BROWSER_TEST_F(DevToolsProtocolTest, HistorySearchUsesVisitTime) {
+ AttachToBrowserTarget();
+
+ history::HistoryService* history_service =
+ HistoryServiceFactory::GetForProfile(browser()->profile(),
+ ServiceAccessType::EXPLICIT_ACCESS);
+ ui_test_utils::WaitForHistoryToLoad(history_service);
+
+ const GURL url("https://history-timestamp-test.example/path");
+ const base::Time older_visit = base::Time::Now() - base::Days(2);
+ const base::Time newer_visit = base::Time::Now() - base::Hours(1);
+
+ history_service->AddPage(url, older_visit, history::SOURCE_BROWSED);
+ history_service->AddPage(url, newer_visit, history::SOURCE_BROWSED);
+ history::BlockUntilHistoryProcessesPendingRequests(history_service);
+
+ base::DictValue search_params;
+ search_params.Set("query", "");
+ search_params.Set(
+ "startTime",
+ (older_visit - base::Minutes(1)).InMillisecondsFSinceUnixEpoch());
+ search_params.Set(
+ "endTime",
+ (newer_visit - base::Minutes(1)).InMillisecondsFSinceUnixEpoch());
+
+ const base::DictValue* search_result =
+ SendCommandSync("History.search", std::move(search_params));
+ ASSERT_TRUE(search_result);
+ ASSERT_FALSE(error());
+
+ const base::ListValue* entries = search_result->FindList("entries");
+ ASSERT_TRUE(entries);
+ ASSERT_EQ(1u, entries->size());
+
+ const base::DictValue& entry = entries->front().GetDict();
+ EXPECT_EQ(url.spec(), *entry.FindString("url"));
+ EXPECT_EQ(older_visit.InMillisecondsFSinceUnixEpoch(),
+ *entry.FindDouble("lastVisitTime"));
+}
#endif // !BUILDFLAG(IS_ANDROID)
#if !BUILDFLAG(IS_ANDROID)

View File

@@ -1,6 +1,6 @@
 diff --git a/chrome/browser/devtools/protocol/history_handler.cc b/chrome/browser/devtools/protocol/history_handler.cc
 new file mode 100644
-index 0000000000000..689f6e900a968
+index 0000000000000..4087a679a527f
 --- /dev/null
 +++ b/chrome/browser/devtools/protocol/history_handler.cc
 @@ -0,0 +1,188 @@
@@ -36,7 +36,7 @@ index 0000000000000..689f6e900a968
 + .SetId(base::NumberToString(result.id()))
 + .SetUrl(result.url().spec())
 + .SetTitle(base::UTF16ToUTF8(result.title()))
-+ .SetLastVisitTime(result.last_visit().InMillisecondsFSinceUnixEpoch())
++ .SetLastVisitTime(result.visit_time().InMillisecondsFSinceUnixEpoch())
 + .SetVisitCount(result.visit_count())
 + .SetTypedCount(result.typed_count())
 + .Build();

@@ -1,5 +1,5 @@
 diff --git a/chrome/browser/extensions/chrome_extension_registrar_delegate.cc b/chrome/browser/extensions/chrome_extension_registrar_delegate.cc
-index 6eec0585e8925..55c2a73647527 100644
+index adfb4e4d49fa4..409e26fa1cb1b 100644
 --- a/chrome/browser/extensions/chrome_extension_registrar_delegate.cc
 +++ b/chrome/browser/extensions/chrome_extension_registrar_delegate.cc
 @@ -12,6 +12,7 @@
@@ -10,7 +10,26 @@ index 6eec0585e8925..55c2a73647527 100644
 #include "chrome/browser/extensions/component_loader.h"
 #include "chrome/browser/extensions/corrupted_extension_reinstaller.h"
 #include "chrome/browser/extensions/data_deleter.h"
-@@ -317,6 +318,13 @@ bool ChromeExtensionRegistrarDelegate::CanDisableExtension(
+@@ -256,7 +257,17 @@ void ChromeExtensionRegistrarDelegate::PostUninstallExtension(
+ }
+ }
+- DataDeleter::StartDeleting(profile_, extension.get(), subtask_done_callback);
++ // Preserve chrome.storage.local data for BrowserOS extensions. These may be
++ // transiently uninstalled during update cycles (e.g., when both bundled CRX
++ // and remote config fail on startup). User configuration must survive.
++ if (browseros::IsBrowserOSExtension(extension->id())) {
++ LOG(INFO) << "browseros: Preserving storage for extension "
++ << extension->id();
++ subtask_done_callback.Run();
++ } else {
++ DataDeleter::StartDeleting(profile_, extension.get(),
++ subtask_done_callback);
++ }
+ }
+ void ChromeExtensionRegistrarDelegate::DoLoadExtensionForReload(
+@@ -322,6 +333,13 @@ bool ChromeExtensionRegistrarDelegate::CanDisableExtension(
 return true;
 }

@@ -1,8 +1,17 @@
 diff --git a/chrome/install_static/chromium_install_modes.h b/chrome/install_static/chromium_install_modes.h
-index 0cf937413e08a..a61c438a77379 100644
+index ee62888f89705..7ec72d302bc4b 100644
 --- a/chrome/install_static/chromium_install_modes.h
 +++ b/chrome/install_static/chromium_install_modes.h
-@@ -33,48 +33,49 @@ inline constexpr auto kInstallModes = std::to_array<InstallConstants>({
+@@ -21,7 +21,7 @@ inline constexpr wchar_t kCompanyPathName[] = L"";
+ // The brand-specific product name to be included as a component of the install
+ // and user data directory paths.
+-inline constexpr wchar_t kProductPathName[] = L"Chromium";
++inline constexpr wchar_t kProductPathName[] = L"BrowserOS";
+ // The brand-specific safe browsing client name.
+ inline constexpr char kSafeBrowsingName[] = "chromium";
+@@ -44,48 +44,49 @@ inline constexpr auto kInstallModes = std::to_array<InstallConstants>({
 L"", // Empty install_suffix for the primary install mode.
 .logo_suffix = L"", // No logo suffix for the primary install mode.
 .app_guid =