feat: improve logging and tool execution; add task board files and email skill

- Enhanced tool execution logging in  with exit codes and success status
- Updated success/failure handling for events in  and
- Added comprehensive task board markdown files in
- Initialized  directory
Author: larchanka
Date: 2026-02-22 20:52:19 +01:00
Committed by: Mikhail Larchanka
Parent: 1b9bed4c54
Commit: ee896becb3
26 changed files with 833 additions and 439 deletions


@@ -1,102 +1,23 @@
# 🗂️ File Processing Pipeline — Implementation Plan
# Architecture Overhaul: Process Isolation & Cron-Driven AI
## Overview
Add file processing capabilities to ManBot. Users can send text documents, photos, and audio messages via Telegram. The system processes each file type independently: text is read and injected into context (or indexed into RAG if large), images are OCR'd or described using Ollama vision (`glm-ocr:q8_0`), and audio is transcribed using Whisper (`nodejs-whisper` + `ffmpeg-static`). All processing happens in a new independent `file-processor` process, consistent with the existing process-isolation architecture.
This plan describes the transition to an "Advanced" architecture where all components are managed by a dedicated Supervisor, communication is routed via a formalized Message Bus, and Cron jobs can trigger full AI task pipelines.
---
## Goals
- **Fault Tolerance**: Standardize process lifecycle and implement auto-restart.
- **Observability**: Formalize IPC routing and add real-time dashboard monitoring.
- **Autonomous Action**: Enable scheduled AI queries (Synthetic User Input).
## Architecture Decisions
## Phase 1: Infrastructure & Supervision (AO-01 to AO-03)
We will enhance `BaseProcess` to support heartbeats and health reporting, then implement a Supervisor role in the Orchestrator to monitor and restart crashed child processes.
| Decision | Choice | Rationale |
|---|---|---|
| Audio format conversion | `ffmpeg-static` (npm) | No system ffmpeg installed; bundled binary is self-contained |
| Audio transcription | `nodejs-whisper` (base.en) | ~200MB, runs locally, no cloud dependency |
| Image model | `glm-ocr:q8_0` via Ollama | Already connected to local Ollama; on-demand |
| Image behaviour | Auto OCR + describe | Model decides: text → extract verbatim, no text → describe |
| Long text handling | Chunk → Summarize → RAG | Preserves content accessible via semantic search |
| Large file failures | Warn user, continue | Partial failures don't block the task pipeline |
| File cleanup | Delete after processing | Stateless processor; no persistent upload storage |
## Phase 2: Router & Message Bus (AO-04 to AO-05)
Introduce a dedicated `Router` process to handle message distribution, decoupling the Orchestrator's supervision logic from its routing logic.
---
## Phase 3: Advanced Cron AI Queries (AO-08 to AO-10)
Extend the Cron system to support `ai_query` tasks that feed directly into the Planner, allowing the agent to perform scheduled research or maintenance autonomously.
## Phases
## Phase 4: Monitoring & Control (AO-06, AO-07, AO-11)
Upgrade the Dashboard to provide a "Mission Control" experience, including process metrics, real-time IPC logs, and cron management.
### Phase 1 — Foundation (Tasks FP-01 to FP-03)
Set up infrastructure: new npm packages, config types, TypeScript interfaces.
No runtime changes — pure setup.
### Phase 2 — Core Services (Tasks FP-04 to FP-06)
Build the three low-level utilities:
- `OllamaAdapter.chatWithImage()` for vision
- `convertToWav()` for audio conversion
- `transcribeAudio()` for Whisper transcription
Each utility is independently testable and has no dependency on the new process.
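The audio path can be sketched as below. This is a hedged sketch, not the actual implementation: `wavArgs` is a hypothetical helper introduced here so the flag list is testable without invoking ffmpeg, and the binary path is passed in rather than imported from `ffmpeg-static` (which exports its bundled binary path as the default export). The 16 kHz mono output and the 60 s timeout follow the plan's own notes.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Pure helper: builds the ffmpeg argument list for a 16 kHz mono WAV,
// the format Whisper expects.
function wavArgs(inputPath: string, outputPath: string): string[] {
  return ["-y", "-i", inputPath, "-ar", "16000", "-ac", "1", outputPath];
}

// Sketch of convertToWav(); ffmpegPath would come from ffmpeg-static.
async function convertToWav(
  ffmpegPath: string,
  inputPath: string,
  outputPath: string,
): Promise<string> {
  await run(ffmpegPath, wavArgs(inputPath, outputPath), { timeout: 60_000 });
  return outputPath;
}
```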
### Phase 3 — File Processor Process (Tasks FP-07 to FP-08)
Build the `file-processor` service as a new `BaseProcess` subprocess and register it in the Orchestrator. At this point the service exists and responds to `file.process` envelopes.
### Phase 4 — Telegram Integration (Tasks FP-09 to FP-11)
Wire the Telegram side: file detection, download, `file.ingest` emission, and the Orchestrator handler that calls `file-processor`, collects results, handles warnings, and builds the enriched goal. Long-text RAG indexing pipeline implemented here.
### Phase 5 — Planner Integration + Cleanup (Tasks FP-12 to FP-13)
Update the planner prompt to handle enriched goals and indexed file references. Set up upload directory lifecycle (init + orphan cleanup).
### Phase 6 — Documentation and Verification (Tasks FP-14 to FP-15)
Update all docs to reflect the new subsystem, then perform end-to-end manual verification across all file types and edge cases.
---
## Data Flow
```
User sends file on Telegram
[telegram-adapter]
Detect attachment → download → classify → emit file.ingest
[core / Orchestrator]
file.ingest handler
↓ (parallel)
┌─ text → [file-processor] → inline or text_long
│ └─ text_long → chunk → summarize (model-router) → insert (rag-service)
├─ image → [file-processor] → OllamaAdapter.chatWithImage → OCR/description
└─ audio → [file-processor] → convertToWav → transcribeAudio → transcript
Build enrichedGoal
runTaskPipeline → Planner → Executor → ...
[telegram-adapter] → User
```
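The "Build enrichedGoal" step above can be sketched as follows. The 32k-character inline cap matches the board note for FP-10; the fence format and the `ProcessedFile` field names are assumptions for illustration, and long files are expected to be diverted to RAG before ever reaching this cap.

```typescript
// Inline context cap to prevent planner prompt overflow (see Risk Notes).
const INLINE_CAP = 32_000;

interface ProcessedFile {
  name: string;
  kind: "text" | "image" | "audio";
  content: string; // extracted text, OCR/description, or transcript
}

// Combine the user's message with per-file sections, then truncate.
function buildEnrichedGoal(userText: string, files: ProcessedFile[]): string {
  const sections = files.map((f) => `--- ${f.kind}: ${f.name} ---\n${f.content}`);
  const combined = [userText, ...sections].join("\n\n");
  return combined.length > INLINE_CAP ? combined.slice(0, INLINE_CAP) : combined;
}
```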
---
## New Files
- `src/services/file-processor.ts` — new independent service process
- `src/utils/audio-converter.ts` — ffmpeg-static conversion utility
- `src/utils/whisper-transcriber.ts` — nodejs-whisper transcription utility
## Modified Files
- `src/services/ollama-adapter.ts` — add `chatWithImage()`
- `src/adapters/telegram-adapter.ts` — file detection, download, `file.ingest`
- `src/core/orchestrator.ts` — register file-processor, add `file.ingest` handler, long-text pipeline
- `src/agents/prompts/planner.ts` — file context instructions + examples
- `src/shared/config.ts` — add `WhisperConfig`, `FileProcessorConfig`
- `config.json`, `config.json.example` — new config sections
- `_docs/COMPONENTS.md`, `_docs/TECH.md`, `_docs/MESSAGE PROTOCOL SPEC.md`, `README.md`
---
## New npm Dependencies
| Package | Type | Purpose |
|---|---|---|
| `nodejs-whisper` | runtime | Speech-to-text transcription |
| `ffmpeg-static` | runtime | Audio format conversion (ogg → wav) |
---
## Risk Notes
- **Whisper model download**: ~200MB on first audio request. User will see a delay and a descriptive error on first attempt. Subsequent uses are fast.
- **OCR model availability**: `glm-ocr:q8_0` must be pulled in Ollama before use (`ollama pull glm-ocr:q8_0`). If missing, image processing returns an Ollama error — the Orchestrator warns the user and skips.
- **Context length**: Inline file content is capped to prevent planner prompt overflow. Long files go through RAG instead.
## Phase 5: Verification (AO-12 to AO-14)
Comprehensive testing of the new supervisor patterns and the end-to-end cron-to-ai flow.


@@ -1,135 +1,93 @@
# 📋 Tasks — File Processing Pipeline
# 📋 Tasks — Architecture Overhaul (AO)
All tasks are prefixed `FP-` (File Processing). They are ordered by implementation dependency.
All tasks are prefixed `AO-` (Architecture Overhaul). They are ordered by implementation dependency.
---
## Phase 1 — Foundation
## Phase 1 — Infrastructure & Supervision
### FP-01 Add npm Dependencies
**File**: `package.json`
**Deps**: None
Install `nodejs-whisper` and `ffmpeg-static` runtime packages.
→ [FP-01_ADD_DEPENDENCIES.md](./TASKS/FP-01_ADD_DEPENDENCIES.md)
### AO-01 Standardize BaseProcess Lifecycle
**File**: `src/shared/base-process.ts`
Add health checks, heartbeats, and standardized status reporting.
→ [AO-01_BASE_PROCESS_LIFECYCLE.md](./TASKS/AO-01_BASE_PROCESS_LIFECYCLE.md)
---
### FP-02 Add Config Types and Defaults
**File**: `src/shared/config.ts`, `config.json`
**Deps**: None
Add `WhisperConfig` and `FileProcessorConfig` interfaces, defaults, and env var overrides.
→ [FP-02_CONFIG_TYPES.md](./TASKS/FP-02_CONFIG_TYPES.md)
---
### FP-03 Define File Processing Protocol Types
**File**: `src/shared/protocol.ts`
**Deps**: FP-02
Define `FileDescriptor`, `FileIngestPayload`, `FileProcessRequest`, `ProcessedFile` interfaces.
→ [FP-03_PROTOCOL_TYPES.md](./TASKS/FP-03_PROTOCOL_TYPES.md)
---
## Phase 2 — Core Services
### FP-04 Extend OllamaAdapter with Vision Support
**File**: `src/services/ollama-adapter.ts`
**Deps**: FP-02, FP-03
Add `chatWithImage(messages, model, imagePath)` method using Ollama multimodal API.
→ [FP-04_OLLAMA_VISION.md](./TASKS/FP-04_OLLAMA_VISION.md)
---
### FP-05 Implement Audio Conversion Utility
**File**: `src/utils/audio-converter.ts`
**Deps**: FP-01, FP-02
`convertToWav(inputPath, outputPath)` — wraps `ffmpeg-static` to produce 16kHz mono WAV.
→ [FP-05_AUDIO_CONVERTER.md](./TASKS/FP-05_AUDIO_CONVERTER.md)
---
### FP-06 Implement Whisper Transcription Utility
**File**: `src/utils/whisper-transcriber.ts`
**Deps**: FP-01, FP-02, FP-05
`transcribeAudio(wavPath)` — calls `nodejs-whisper` with configured model and language.
→ [FP-06_WHISPER_TRANSCRIBER.md](./TASKS/FP-06_WHISPER_TRANSCRIBER.md)
---
## Phase 3 — File Processor Process
### FP-07 Build the File Processor Service
**File**: `src/services/file-processor.ts`
**Deps**: FP-03, FP-04, FP-05, FP-06
New `BaseProcess` subprocess. Routes files by category: text read, image OCR, audio transcription, unknown ignored. Deletes files after processing.
→ [FP-07_FILE_PROCESSOR_SERVICE.md](./TASKS/FP-07_FILE_PROCESSOR_SERVICE.md)
---
### FP-08 Register File Processor in Orchestrator
### AO-02 Implement Process Supervisor
**File**: `src/core/orchestrator.ts`
**Deps**: FP-07
Add `file-processor` to `PROCESS_SCRIPTS` and spawn it at startup.
→ [FP-08_REGISTER_FILE_PROCESSOR.md](./TASKS/FP-08_REGISTER_FILE_PROCESSOR.md)
Implement monitoring and auto-restart logic for child processes.
→ [AO-02_PROCESS_SUPERVISOR.md](./TASKS/AO-02_PROCESS_SUPERVISOR.md)
### AO-03 CLI Interactive Mode
**File**: `src/shared/base-process.ts`
Support `--interactive` mode to allow manual stdin/stdout testing of any service.
→ [AO-03_CLI_INTERACTIVE_MODE.md](./TASKS/AO-03_CLI_INTERACTIVE_MODE.md)
---
## Phase 4 — Telegram Integration
## Phase 2 — Message Bus & Router
### FP-09 Telegram Adapter — File Detection and Download
**File**: `src/adapters/telegram-adapter.ts`
**Deps**: FP-02, FP-03
Detect photo/document/voice/audio, download to sandbox, classify MIME type, emit `file.ingest`.
→ [FP-09_TELEGRAM_FILE_DOWNLOAD.md](./TASKS/FP-09_TELEGRAM_FILE_DOWNLOAD.md)
### AO-04 Standalone Router Service
**File**: `src/core/router-service.ts`
Create a lightweight dedicated process for message routing.
→ [AO-04_ROUTER_SERVICE.md](./TASKS/AO-04_ROUTER_SERVICE.md)
---
### FP-10 Orchestrator — file.ingest Handler and Context Building
### AO-05 Integrate Router in Orchestrator
**File**: `src/core/orchestrator.ts`
**Deps**: FP-08, FP-09
Handle `file.ingest`: dispatch to file-processor in parallel, collect results, warn on failures, build enrichedGoal, call `runTaskPipeline`.
→ [FP-10_ORCHESTRATOR_FILE_INGEST.md](./TASKS/FP-10_ORCHESTRATOR_FILE_INGEST.md)
Refactor Orchestrator to use the Router for IPC distribution.
→ [AO-05_INTEGRATE_ROUTER.md](./TASKS/AO-05_INTEGRATE_ROUTER.md)
---
### FP-11 Long Text Chunking, Summarization, and RAG Indexing
## Phase 3 — Cron-Driven AI Queries
### AO-08 SQLite Schema Update for Cron
**File**: `src/services/cron-manager.ts`
Add `ai_query` task type support to the database and types.
→ [AO-08_CRON_SCHEMA_UPDATE.md](./TASKS/AO-08_CRON_SCHEMA_UPDATE.md)
### AO-09 CronManager AI Query Support
**File**: `src/services/cron-manager.ts`
Implement `event.cron.ai_query` emission when scheduled query fires.
→ [AO-09_CRON_AI_QUERY.md](./TASKS/AO-09_CRON_AI_QUERY.md)
### AO-10 Orchestrator Synthetic Task Pipeline
**File**: `src/core/orchestrator.ts`
**Deps**: FP-10, rag-service
Chunk long text → summarize each chunk via model-router → insert summaries into rag-service with file metadata.
→ [FP-11_LONG_TEXT_RAG_INDEXING.md](./TASKS/FP-11_LONG_TEXT_RAG_INDEXING.md)
Implement `handleCronAiQuery` to trigger full AI task pipeline from cron events.
→ [AO-10_SYNTHETIC_TASK_PIPELINE.md](./TASKS/AO-10_SYNTHETIC_TASK_PIPELINE.md)
---
## Phase 5 — Planner Integration and Cleanup
## Phase 4 — Monitoring & UI
### FP-12 Update Planner Prompt for File Context Awareness
**File**: `src/agents/prompts/planner.ts`
**Deps**: FP-10
Add `<file_context_instructions>` section and two new few-shot examples (inline context, indexed file).
→ [FP-12_PLANNER_PROMPT_FILE_CONTEXT.md](./TASKS/FP-12_PLANNER_PROMPT_FILE_CONTEXT.md)
### AO-06 Dashboard Process Health Monitoring
**File**: `src/services/dashboard-service.ts`
Extend UI to show status, restarts, and metrics for all child processes.
→ [AO-06_DASHBOARD_HEALTH.md](./TASKS/AO-06_DASHBOARD_HEALTH.md)
### AO-07 Real-time IPC Log Viewer
**File**: `src/services/dashboard-service.ts`
Add a live log streaming interface to view cross-process communication.
→ [AO-07_IPC_LOG_VIEWER.md](./TASKS/AO-07_IPC_LOG_VIEWER.md)
### AO-11 Cron Job Management UI
**File**: `src/services/dashboard-service.ts`
Add UI section to list, add, and manage scheduled AI queries.
→ [AO-11_CRON_MGMT_UI.md](./TASKS/AO-11_CRON_MGMT_UI.md)
---
### FP-13 Upload Directory Initialization and Cleanup
**File**: `src/core/orchestrator.ts`
**Deps**: FP-02, FP-07
Create upload dir at startup; clear orphaned files (older than 1h) at startup.
→ [FP-13_UPLOAD_DIR_CLEANUP.md](./TASKS/FP-13_UPLOAD_DIR_CLEANUP.md)
## Phase 5 — Verification
---
### AO-12 Test: Supervisor Auto-Restart
**File**: `src/tests/supervisor.test.ts`
Verify that killing a process triggers an automatic restart by the supervisor.
→ [AO-12_TEST_AUTO_RESTART.md](./TASKS/AO-12_TEST_AUTO_RESTART.md)
## Phase 6 — Documentation and Verification
### AO-13 Test: Cron-Driven AI Task
**File**: `src/tests/cron-ai.test.ts`
Verify the full flow from cron trigger to task completion and Telegram notification.
→ [AO-13_TEST_CRON_AI_FLOW.md](./TASKS/AO-13_TEST_CRON_AI_FLOW.md)
### FP-14 Update Documentation
**Files**: `_docs/COMPONENTS.md`, `_docs/TECH.md`, `_docs/MESSAGE PROTOCOL SPEC.md`, `README.md`
**Deps**: FP-07, FP-09, FP-10
Document the new subsystem in all architecture and user-facing docs.
→ [FP-14_UPDATE_DOCS.md](./TASKS/FP-14_UPDATE_DOCS.md)
---
### FP-15 End-to-End Verification
**Files**: Manual testing
**Deps**: FP-01 through FP-14
Manual verification of all file types: text (short + long), image (OCR + description), audio (voice + mp3), mixed, and edge/failure cases.
→ [FP-15_E2E_VERIFICATION.md](./TASKS/FP-15_E2E_VERIFICATION.md)
### AO-14 E2E Verification
**File**: Manual
Final validation of all "Advanced Architecture" features.
→ [AO-14_E2E_VERIFICATION.md](./TASKS/AO-14_E2E_VERIFICATION.md)


@@ -0,0 +1,14 @@
# AO-01 Standardize BaseProcess Lifecycle
## Context
Every child process should follow a strict lifecycle protocol to ensure the Supervisor can monitor health.
## Proposed Changes
- [ ] Implement `status` enum in `BaseProcess`.
- [ ] Add `heartbeat` mechanism (emit `event.system.heartbeat` periodically).
- [ ] Add `getMetrics()` method to report memory/resource usage.
- [ ] Ensure all existing services benefit from these changes.
## Verification
- Unit tests for lifecycle transitions.
- Verify heartbeat events in logs.
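The heartbeat mechanism above could be sketched as follows. This is a hedged sketch, not the project's code: `ProcessStatus`, `buildHeartbeat`, `startHeartbeat`, and the 5 s interval are all assumptions introduced for illustration; only the `event.system.heartbeat` name comes from the task.

```typescript
// Hypothetical lifecycle additions to BaseProcess.
type ProcessStatus = "starting" | "running" | "degraded" | "stopping";

interface HeartbeatPayload {
  type: "event.system.heartbeat";
  from: string;
  status: ProcessStatus;
  memoryRss: number; // bytes, from getMetrics()
  uptimeSec: number;
}

function buildHeartbeat(name: string, status: ProcessStatus): HeartbeatPayload {
  return {
    type: "event.system.heartbeat",
    from: name,
    status,
    memoryRss: process.memoryUsage().rss,
    uptimeSec: Math.floor(process.uptime()),
  };
}

// Called from BaseProcess.start(); returns a stop function.
function startHeartbeat(
  name: string,
  emit: (p: HeartbeatPayload) => void,
  intervalMs = 5000,
): () => void {
  const timer = setInterval(() => emit(buildHeartbeat(name, "running")), intervalMs);
  timer.unref(); // heartbeats alone should not keep the process alive
  return () => clearInterval(timer);
}
```

A supervisor can then treat a missed heartbeat window as a health failure.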


@@ -0,0 +1,14 @@
# AO-02 Implement Process Supervisor
## Context
The Orchestrator currently spawns processes but doesn't actively manage their lifecycle or recover from crashes.
## Proposed Changes
- [ ] Implement a `Supervisor` module within the Orchestrator.
- [ ] Track child process uptime and exit codes.
- [ ] Implement auto-restart logic (with backoff).
- [ ] Emit `event.system.process_restart` for observability.
## Verification
- Kill a child process (e.g., `model-router`) and verify it restarts automatically.
- Check logs for restart events.
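The restart policy with backoff could look like this minimal sketch. The exponential backoff cap (30 s) and the restart budget (10) are placeholder numbers, not decisions the task has made.

```typescript
interface ChildRecord {
  restarts: number;
  lastExitCode: number | null;
}

// Exponential backoff: 1 s, 2 s, 4 s, ... capped at 30 s.
function restartDelayMs(restarts: number): number {
  return Math.min(1000 * 2 ** restarts, 30_000);
}

function shouldRestart(rec: ChildRecord, maxRestarts = 10): boolean {
  // Exit code 0 means a deliberate shutdown; don't resurrect it.
  if (rec.lastExitCode === 0) return false;
  return rec.restarts < maxRestarts;
}
```

The Supervisor would consult `shouldRestart` on every child `exit` event, schedule the respawn after `restartDelayMs`, and emit `event.system.process_restart`.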


@@ -0,0 +1,13 @@
# AO-03 CLI Interactive Mode
## Context
Debugging IPC is hard. Services should be testable via direct stdin.
## Proposed Changes
- [ ] Detect `--interactive` flag in `main()` of services.
- [ ] When in interactive mode, use a simpler `readline` interface for manual input.
- [ ] Pretty-print outgoing envelopes to stderr for human readability.
## Verification
- Run `node dist/services/rag-service.js --interactive` and manually type a JSON envelope.
- Verify response is printed correctly.
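A sketch of the interactive entry point, under stated assumptions: `handleEnvelope` stands in for the service's real message handler, and `prettyEnvelope`/`runInteractive` are names invented here.

```typescript
import * as readline from "node:readline";

function prettyEnvelope(raw: unknown): string {
  return JSON.stringify(raw, null, 2);
}

function runInteractive(handleEnvelope: (env: object) => object): void {
  const rl = readline.createInterface({ input: process.stdin });
  rl.on("line", (line) => {
    try {
      const reply = handleEnvelope(JSON.parse(line));
      // Pretty-print to stderr so stdout stays machine-readable.
      process.stderr.write(prettyEnvelope(reply) + "\n");
    } catch (e) {
      process.stderr.write(`parse error: ${(e as Error).message}\n`);
    }
  });
}

// In a service's main(): only enter interactive mode when flagged.
function maybeRunInteractive(): void {
  if (process.argv.includes("--interactive")) {
    runInteractive((env) => ({ type: "ack", received: env }));
  }
}
```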


@@ -0,0 +1,12 @@
# AO-04 Standalone Router Service
## Context
Orchestrator is doing too much. Routing should be a dumb, high-speed pipe.
## Proposed Changes
- [ ] Create `src/core/router-service.ts`.
- [ ] Implement simple routing logic: read from any child, write to the target named in `to`.
- [ ] Standardize the "Hub and Spoke" IPC model.
## Verification
- Test routing between two mock processes via the Router.
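The Router's hot path can be sketched as a single lookup-and-write. The `to` field is named by the task itself; the rest of the `Envelope` shape and the `Writer` registry are assumptions for illustration.

```typescript
interface Envelope {
  to: string;
  from: string;
  type: string;
  payload?: unknown;
}

type Writer = (line: string) => void;

// Returns false for an unknown destination so the caller can log and drop.
function route(env: Envelope, registry: Map<string, Writer>): boolean {
  const write = registry.get(env.to);
  if (!write) return false;
  write(JSON.stringify(env) + "\n"); // newline-delimited JSON over stdio
  return true;
}
```

Keeping the Router this "dumb" is what makes it a high-speed pipe: no business logic, just name resolution and a write.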


@@ -0,0 +1,11 @@
# AO-05 Integrate Router in Orchestrator
## Context
Move routing logic out of `orchestrator.ts` and into the new Router service.
## Proposed Changes
- [ ] Refactor `Orchestrator.handleLine` to delegate to Router where appropriate.
- [ ] Ensure all services are connected to the central Router bus.
## Verification
- Full system smoke test: original Telegram task pipeline should still work.


@@ -0,0 +1,12 @@
# AO-06 Dashboard Process Health Monitoring
## Context
The dashboard should show the status of all child processes managed by the supervisor.
## Proposed Changes
- [ ] Add "Process Status" table to dashboard.
- [ ] Display: Process Name, Status (Running/Restarting), Restarts Count, Uptime.
- [ ] Use `heartbeat` events to update status in real-time.
## Verification
- Open dashboard and verify process list matches actual running system.


@@ -0,0 +1,12 @@
# AO-07 Real-time IPC Log Viewer
## Context
Debugging cross-process flows is currently dependent on console logs.
## Proposed Changes
- [ ] Implement a WebSocket-based (or polling) live log viewer in the dashboard.
- [ ] Show envelopes as they pass through the Router.
- [ ] Add filtering by process name and message type.
## Verification
- Send a message in Telegram and watch the log entries appear in the dashboard.
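The filtering step could be applied server-side before streaming, as in this sketch; the `LogEntry` shape and prefix-matching on message type are assumptions, not the dashboard's actual API.

```typescript
interface LogEntry {
  from: string;
  to: string;
  type: string;
  ts: number; // epoch millis
}

// A filter matches if the process appears on either end of the envelope
// and the message type starts with the requested prefix.
function matches(entry: LogEntry, filter: { process?: string; type?: string }): boolean {
  if (filter.process && entry.from !== filter.process && entry.to !== filter.process) return false;
  if (filter.type && !entry.type.startsWith(filter.type)) return false;
  return true;
}
```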


@@ -0,0 +1,12 @@
# AO-08 SQLite Schema Update for Cron
## Context
Prepare the cron database for AI-driven tasks.
## Proposed Changes
- [ ] Update `SCHEMA` in `cron-manager.ts`.
- [ ] Add a migration/check for the `task_type` field and ensure `ai_query` is a valid option.
- [ ] Update types and interfaces.
## Verification
- Check `cron.sqlite` schema after restart.
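An additive migration keeps old rows working, as in this sketch. The `cron_jobs` table and column names are assumptions about the existing schema.

```typescript
const VALID_TASK_TYPES = ["reminder", "ai_query"] as const;

// Returns the ALTER statements still needed, given the current column list.
function migrationStatements(existingColumns: string[]): string[] {
  const stmts: string[] = [];
  if (!existingColumns.includes("task_type")) {
    // Additive change: legacy rows default to the original behaviour.
    stmts.push(
      "ALTER TABLE cron_jobs ADD COLUMN task_type TEXT NOT NULL DEFAULT 'reminder'",
    );
  }
  return stmts;
}

function isValidTaskType(t: string): t is (typeof VALID_TASK_TYPES)[number] {
  return (VALID_TASK_TYPES as readonly string[]).includes(t);
}
```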


@@ -0,0 +1,11 @@
# AO-09 CronManager AI Query Support
## Context
Enable CronManager to handle AI query tasks.
## Proposed Changes
- [ ] Implement trigger logic for `task_type === 'ai_query'`.
- [ ] Emit `event.cron.ai_query` with the prompt and context.
## Verification
- Verify `event.cron.ai_query` is emitted when a scheduled job fires.
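The trigger path could look like this sketch; the event name is from the task, while the `CronJob` fields and payload shape are assumptions.

```typescript
interface CronJob {
  id: number;
  taskType: "reminder" | "ai_query";
  prompt: string;
  chatId: string;
}

// Called by CronManager when a scheduled job fires.
function onJobFired(job: CronJob, emit: (env: object) => void): void {
  if (job.taskType !== "ai_query") return; // reminders keep their existing path
  emit({
    type: "event.cron.ai_query",
    from: "cron-manager",
    payload: { jobId: job.id, prompt: job.prompt, chatId: job.chatId },
  });
}
```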


@@ -0,0 +1,13 @@
# AO-10 Orchestrator Synthetic Task Pipeline
## Context
Bridge cron events to the AI Agent reasoning pipeline.
## Proposed Changes
- [ ] Implement `handleCronAiQuery()` in Orchestrator.
- [ ] This should wrap the `ai_query` prompt and inject it into `runTaskPipeline`.
- [ ] Map the cron task result back to the specific Telegram `chatId`.
## Verification
- Create an `ai_query` cron job.
- Verify the agent plans and executes the task, then sends the result to Telegram.
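The bridge could be as thin as this sketch. `runTaskPipeline` is named by the plan; `sendToTelegram` and the event payload shape are assumptions, and both are injected here so the flow is testable.

```typescript
interface AiQueryEvent {
  payload: { jobId: number; prompt: string; chatId: string };
}

// Synthetic user input: the cron prompt enters the pipeline as if typed,
// and the result is routed back to the chat that owns the schedule.
async function handleCronAiQuery(
  ev: AiQueryEvent,
  runTaskPipeline: (goal: string) => Promise<string>,
  sendToTelegram: (chatId: string, text: string) => Promise<void>,
): Promise<void> {
  const result = await runTaskPipeline(ev.payload.prompt);
  await sendToTelegram(ev.payload.chatId, result);
}
```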


@@ -0,0 +1,13 @@
# AO-11 Cron Job Management UI
## Context
Provide a user-friendly way to schedule AI queries.
## Proposed Changes
- [ ] Add "Schedules" tab to dashboard.
- [ ] List current cron jobs with their type (`reminder` vs `ai_query`).
- [ ] Add "New AI Query" form: Input prompt + Cron expression.
- [ ] Implement deletion of cron jobs from the UI.
## Verification
- Add a job via the UI and verify it appears in the `cron.sqlite` database.


@@ -0,0 +1,12 @@
# AO-12 Test: Supervisor Auto-Restart
## Context
Automated verification of the Supervisor's ability to maintain system uptime.
## Proposed Changes
- [ ] Create `src/tests/supervisor.test.ts`.
- [ ] Test case: Spawn child, sigkill child, wait for supervisor to restart it.
- [ ] Test case: Verify backoff strategy if child keeps crashing.
## Verification
- Run `npm test src/tests/supervisor.test.ts`.


@@ -0,0 +1,14 @@
# AO-13 Test: Cron-Driven AI Task
## Context
End-to-end integration test for the autonomous cron-to-ai flow.
## Proposed Changes
- [ ] Create `src/tests/cron-ai.test.ts`.
- [ ] Mock a cron-fire event.
- [ ] Verify Orchestrator starts the pipeline.
- [ ] Mock tool-host/model-router responses.
- [ ] Verify final event emission.
## Verification
- Run `npm test src/tests/cron-ai.test.ts`.


@@ -0,0 +1,13 @@
# AO-14 E2E Verification
## Context
Manual end-to-end testing of the fully integrated advanced architecture.
## Proposed Changes
- [ ] Verify process dashboard health reporting.
- [ ] Verify live log stream.
- [ ] Verify manual cron job addition via dashboard and subsequent AI execution.
- [ ] Verify system stability under simulated failures.
## Verification
- Comprehensive manual walkthrough documented with screenshots/logs.


@@ -2,147 +2,119 @@
## To Do
### AO-01 Standardize BaseProcess Lifecycle
- tags: [todo, infra, core]
- defaultExpanded: true
```md
Add health checks, heartbeats, and standardized status reporting to BaseProcess.
Source: _board/TASKS/AO-01_BASE_PROCESS_LIFECYCLE.md
```
### AO-02 Implement Process Supervisor
- tags: [todo, orchestrator, core]
- defaultExpanded: false
```md
Implement monitoring and auto-restart logic for child processes in Orchestrator.
Source: _board/TASKS/AO-02_PROCESS_SUPERVISOR.md
```
### AO-03 CLI Interactive Mode
- tags: [todo, shared, devtools]
- defaultExpanded: false
```md
Support --interactive flag for manual service testing via stdin.
Source: _board/TASKS/AO-03_CLI_INTERACTIVE_MODE.md
```
### AO-04 Standalone Router Service
- tags: [todo, service, bus]
- defaultExpanded: false
```md
Create a lightweight dedicated process for message routing.
Source: _board/TASKS/AO-04_ROUTER_SERVICE.md
```
### AO-05 Integrate Router in Orchestrator
- tags: [todo, orchestrator, bus]
- defaultExpanded: false
```md
Refactor Orchestrator to use the Router for IPC distribution.
Source: _board/TASKS/AO-05_INTEGRATE_ROUTER.md
```
### AO-06 Dashboard Process Health Monitoring
- tags: [todo, dashboard, admin]
- defaultExpanded: false
```md
Extend UI to show status, restarts, and metrics for all child processes.
Source: _board/TASKS/AO-06_DASHBOARD_HEALTH.md
```
### AO-07 Real-time IPC Log Viewer
- tags: [todo, dashboard, debug]
- defaultExpanded: false
```md
Add a live log streaming interface to dashboard for IPC debugging.
Source: _board/TASKS/AO-07_IPC_LOG_VIEWER.md
```
### AO-08 SQLite Schema Update for Cron
- tags: [todo, services, database]
- defaultExpanded: false
```md
Add ai_query task type support to the cron database.
Source: _board/TASKS/AO-08_CRON_SCHEMA_UPDATE.md
```
### AO-09 CronManager AI Query Support
- tags: [todo, services, cron]
- defaultExpanded: false
```md
Implement event.cron.ai_query emission for scheduled AI tasks.
Source: _board/TASKS/AO-09_CRON_AI_QUERY.md
```
### AO-10 Orchestrator Synthetic Task Pipeline
- tags: [todo, orchestrator, ai]
- defaultExpanded: false
```md
Connect cron AI events to the full agent task pipeline.
Source: _board/TASKS/AO-10_SYNTHETIC_TASK_PIPELINE.md
```
### AO-11 Cron Job Management UI
- tags: [todo, dashboard, cron]
- defaultExpanded: false
```md
Add UI section to manage scheduled AI queries.
Source: _board/TASKS/AO-11_CRON_MGMT_UI.md
```
### AO-12 Test: Supervisor Auto-Restart
- tags: [todo, testing, qa]
- defaultExpanded: false
```md
Automated verification of process restart capability.
Source: _board/TASKS/AO-12_TEST_AUTO_RESTART.md
```
### AO-13 Test: Cron-Driven AI Task
- tags: [todo, testing, integration]
- defaultExpanded: false
```md
Verify full flow from cron trigger to autonomous AI execution.
Source: _board/TASKS/AO-13_TEST_CRON_AI_FLOW.md
```
### AO-14 E2E Verification
- tags: [todo, testing, e2e]
- defaultExpanded: false
```md
Final manual verification of the new architecture and features.
Source: _board/TASKS/AO-14_E2E_VERIFICATION.md
```
## In Progress
### FP-15 End-to-End Verification
- tags: [in-progress, qa, e2e]
- defaultExpanded: true
```md
Manual verification of all file types, edge cases, and failure scenarios via Telegram.
Source: FP-15_E2E_VERIFICATION.md
```
## Done
### FP-14 Update Documentation
- tags: [done, docs]
- defaultExpanded: false
```md
Updated COMPONENTS.md, TECH.md, MESSAGE PROTOCOL SPEC.md, and README.md.
Source: FP-14_UPDATE_DOCS.md
```
### FP-13 Upload Directory Init and Cleanup
- tags: [done, orchestrator, infra]
- defaultExpanded: false
```md
initUploadDirectory() creates upload dir and purges orphaned files (>1h) on startup.
Source: FP-13_UPLOAD_DIR_CLEANUP.md
```
### FP-12 Update Planner Prompt for File Context
- tags: [done, planner, prompt]
- defaultExpanded: false
```md
Added <file_context_awareness> block to PLANNER_SYSTEM_PROMPT.
Documents text/image/audio/indexed file fences and guidance.
Source: FP-12_PLANNER_PROMPT_FILE_CONTEXT.md
```
### FP-11 Long Text Chunking and RAG Indexing
- tags: [done, orchestrator, rag]
- defaultExpanded: false
```md
indexLongText(): 2k-char chunks, 3-at-a-time summarisation, RAG insert with metadata.
Source: FP-11_LONG_TEXT_RAG_INDEXING.md
```
### FP-10 Orchestrator — file.ingest Handler
- tags: [done, orchestrator, core]
- defaultExpanded: false
```md
handleFileIngest(): parallel processing, enrichedGoal builder, 32k char cap, user warnings.
Source: FP-10_ORCHESTRATOR_FILE_INGEST.md
```
### FP-09 Telegram Adapter — File Detection and Download
- tags: [done, telegram, adapter]
- defaultExpanded: false
```md
Detects photo/document/voice/audio, size-guards, downloads, classifies, emits file.ingest.
Source: FP-09_TELEGRAM_FILE_DOWNLOAD.md
```
### FP-08 Register File Processor in Orchestrator
- tags: [done, orchestrator, infra]
- defaultExpanded: false
```md
Added 'file-processor' to PROCESS_SCRIPTS; spawned at startup alongside other services.
Source: FP-08_REGISTER_FILE_PROCESSOR.md
```
### FP-07 Build the File Processor Service
- tags: [done, service, core]
- defaultExpanded: false
```md
file-processor.ts BaseProcess: routes text/image/audio/unknown, deletes files, emits audit events.
Source: FP-07_FILE_PROCESSOR_SERVICE.md
```
### FP-06 Implement Whisper Transcription Utility
- tags: [done, util, audio]
- defaultExpanded: false
```md
Created src/utils/whisper-transcriber.ts. transcribeAudio() with 5-min timeout,
auto-download, first-run UX. Build clean, 156 tests pass.
Source: FP-06_WHISPER_TRANSCRIBER.md
```
### FP-05 Implement Audio Conversion Utility
- tags: [done, util, audio]
- defaultExpanded: false
```md
Created src/utils/audio-converter.ts. convertToWav() with ffmpeg-static,
60s timeout, stderr capture. Build clean, 156 tests pass.
Source: FP-05_AUDIO_CONVERTER.md
```
### FP-04 Extend OllamaAdapter with Vision Support
- tags: [done, service, ollama]
- defaultExpanded: false
```md
Added chatWithImage() with base64 image injection into Ollama multimodal messages.
Reuses fetchWithRetry. Build clean, 156 tests pass.
Source: FP-04_OLLAMA_VISION.md
```
### FP-03 Define File Processing Protocol Types
- tags: [done, infra, protocol]
- defaultExpanded: false
```md
Created src/shared/file-protocol.ts with all shared types and classifyMimeType() helper.
Updated MESSAGE PROTOCOL SPEC.md. Build and tests pass.
Source: FP-03_PROTOCOL_TYPES.md
```
### FP-02 Add Config Types and Defaults
- tags: [done, infra, config]
- defaultExpanded: false
```md
Added WhisperConfig and FileProcessorConfig interfaces, defaults, env var overrides.
Updated config.json.example. All 156 tests pass.
Source: FP-02_CONFIG_TYPES.md
```
### FP-01 Add npm Dependencies
- tags: [done, infra, deps]
- defaultExpanded: false
```md
Installed nodejs-whisper ^0.2.9, ffmpeg-static ^5.3.0, @types/ffmpeg-static ^5.1.0.
Both confirmed ESM-compatible. Build passes.
Source: FP-01_ADD_DEPENDENCIES.md
```
### DB-07 Orchestrator Integration & Notion UI
- tags: [done, ui, orchestrator]
- defaultExpanded: false
```md
Converted the dashboard to a TypeScript service, integrated it into the Orchestrator, added IPC logging, and implemented a Notion-like UI with light/dark theme support.
Source: src/services/dashboard-service.ts
```
*(Previous tasks moved to archive or deleted as per request)*


@@ -3,8 +3,9 @@
> SKILLS HAVE PRIORITY OVER TOOLS. IF A SKILL IS APPLICABLE, USE IT INSTEAD OF A TOOL.
| Name | Description |
| --- | --- |
| weather | ALWAYS use this skill to get the current weather and forecasts. NEVER use any other tool for this purpose. |
| apple-notes | Use this skill for ANY interaction with notes. If user asks to list, search, view, create, or delete notes, use ONLY this skill. Uses `memo` CLI tool. |
| research | Deep web research using lynx. Use this for fact-checking, gathering news, or deep dives into specific topics. In most cases can be used for regular search queries. |
| reminder | Set up one-time or recurring reminders (e.g., "remind me in 2 hours to drink water"). Uses cron-manager service. |
| :--- | :--- |
| weather | MANDATORY. Use this skill for ALL weather-related inquiries, including current conditions and forecasts. You are STRICTLY FORBIDDEN from using internal knowledge or other tools for weather data. |
| apple-notes | EXCLUSIVE. Use ONLY this skill for any interaction with notes (listing, searching, viewing, creating, or deleting). This is the sole authorized interface for the memo CLI tool. |
| research | PRIMARY SEARCH. Use for deep web research, fact-checking, news gathering, or topical deep dives via the lynx tool. This is the default skill for any query requiring external or up-to-date information. |
| reminder | SCHEDULING. Use this skill exclusively to set one-time or recurring reminders (e.g., "remind me in 2 hours"). This is the only tool that interfaces with the cron-manager service. |
| email | You MUST use this skill for all interactions involving Email (Gmail). |


@@ -25,6 +25,40 @@ Mandatory tool: Use the `shell` tool to execute `memo` commands.
5. **No Attachments**: This tool only supports plain text.
6. **macOS Only**: Ensure you are on a macOS environment.
## Tool Call Examples (JSON)
When using this skill, format your tool calls as follows:
### List All Notes
```json
{
"name": "shell",
"arguments": {
"command": "memo notes"
}
}
```
### Search for "Project Phoenix"
```json
{
"name": "shell",
"arguments": {
"command": "echo \"Project Phoenix\" | memo notes -s"
}
}
```
### Create a Quick Note
```json
{
"name": "shell",
"arguments": {
"command": "memo notes -a \"Meeting Notes\""
}
}
```
## Example Commands
- `memo notes` (List everything - Default action)
- `echo "project X" | memo notes -s` (Search for project X)

skills/email/SKILL.md (new file, 113 lines)

@@ -0,0 +1,113 @@
# Email Skill
Manage Gmail communications using the `gog` CLI.
Mandatory tool: Use the `shell` tool to execute `gog` commands.
## When to Use
**USE this skill when:**
- Searching for specific emails, threads, or attachments.
- Sending, drafting, or replying to email messages.
- Finding information within the inbox (e.g., "Find my flight number").
- Checking for unread messages or recent updates.
## When NOT to Use
**DON'T use this skill when:**
- Sending bulk marketing/spam campaigns.
- Editing contact details or labels.
## Commands
### 📧 Searching
```bash
# Search threads (grouped by conversation)
gog gmail search 'newer_than:2d' --max 5
# Search individual messages (raw list)
gog gmail messages search "from:updates@example.com" --max 10
# Search for unread emails
gog gmail search "is:unread"
```
### 📩 Sending & Replying
```bash
# Quick one-line email
gog gmail send --to "user@example.com" --subject "Update" --body "Everything is on track."
# Send with multi-line body (Heredoc)
gog gmail send --to "user@example.com" --subject "Meeting Notes" --body-file - <<'EOF'
Hi Team,
Notes from today:
1. Budget approved
2. Deadline moved to Friday
Best,
AI Agent
EOF
# Reply to a specific message ID
gog gmail send --to "a@b.com" --subject "Re: Hello" --reply-to-message-id <msgId> --body "Got it, thanks!"
```
### 📝 Drafting
```bash
# Create a draft instead of sending
gog gmail drafts create --to "client@example.com" --subject "Proposal" --body-file - <<'EOF'
Draft content goes here.
EOF
```
## Email Formatting
### Plain Text vs HTML
- **Default:** Use `--body` for short strings or `--body-file -` for multi-line text.
- **Rich Text:** Use `--body-html` only when formatting (bold, links) is required.
- Supported tags: `<p>`, `<strong>`, `<ul>/<li>`, `<a href="...">`.
```bash
# HTML Example
gog gmail send --to "a@b.com" --subject "Links" --body-html "Click <a href='https://google.com'>here</a> to view."
```
## Tool Call Examples (JSON)
When using this skill, format your tool calls as follows:
### Search for Unread Emails
```json
{
"name": "shell",
"arguments": {
"command": "gog gmail search 'is:unread'"
}
}
```
### Send a Simple Email
```json
{
"name": "shell",
"arguments": {
"command": "gog gmail send --to \"user@example.com\" --subject \"Update\" --body \"The report is ready.\""
}
}
```
### Reply to a Message
```json
{
"name": "shell",
"arguments": {
"command": "gog gmail send --to \"user@example.com\" --subject \"Re: Hello\" --reply-to-message-id \"msg-123\" --body \"Got it!\""
}
}
```
## Notes
- **Confirmation:** Always show the user the recipient and subject before executing a `send` command.
- **Message IDs:** When replying, ensure the `<msgId>` is the specific ID from a `messages search`.
- **JSON Output:** Use the `--json` flag to parse specific fields like `snippet`, `from`, or `date`.

View File

@@ -24,8 +24,9 @@ Web research is a multi-step process. Do not stop at the first page of search re
2. **Search**: Execute search queries one by one using the `lynx -dump` command.
3. **Analyze**: Scan the search results after each query and identify the 2-10 most promising links. If needed, go to the next page of search results.
4. **Browse**: Visit those links one by one.
5. **Recurse**: If a page contains a "References" section or promising links to deeper info, follow them.
6. **Summarize**: Gather all key findings and provide a comprehensive response.
5. **Data Analysis**: Analyze the collected data and identify key findings and insights. Find missing information, plan additional search queries, and repeat this step until you have enough information to fulfill the user's request.
6. **Recurse**: If a page contains a "References" section or promising links to deeper info, follow them.
7. **Summarize**: Gather all key findings, insights, valuable details and provide a comprehensive response.
**!!! IMPORTANT !!!** If you don't have enough information to answer the user's request, generate new search queries and repeat the process.

View File

@@ -326,18 +326,26 @@ function main(): void {
// /new — reset session and trigger archiving
if (text === "/new") {
const oldConversationId = conversationIdByChat.get(chatId);
const oldConversationId = conversationIdByChat.get(chatId) || String(chatId);
const newConversationId = randomUUID();
conversationIdByChat.set(chatId, newConversationId);
if (oldConversationId != null) {
base.send(
createEnvelope<ChatNewPayload>("chat.new", "core", {
chatId,
conversationId: oldConversationId,
})
);
}
sendToUser(chatId, "New session started. Previous conversation has been archived.");
// Emit event for logging
base.send(createEnvelope("event.chat.reset", "logger", {
chatId,
oldConversationId,
newConversationId
}));
// Trigger archiving in Core
base.send(
createEnvelope<ChatNewPayload>("chat.new", "core", {
chatId,
conversationId: oldConversationId,
})
);
sendToUser(chatId, "🔄 New session started. Previous conversation is being archived...");
return;
}

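The rotation logic in the new `/new` handler can be isolated as a small sketch. The map and function names here are illustrative, not the actual module API; the point is the fallback to `String(chatId)` when no conversation was tracked yet, which guarantees the archiving event always carries a usable id:

```typescript
import { randomUUID } from "node:crypto";

// Illustrative stand-in for the telegram process's per-chat session map.
const conversationIdByChat = new Map<number, string>();

// Rotate the conversation id for a chat, falling back to the chat id
// itself when the chat had no tracked conversation yet.
function resetSession(chatId: number): { oldConversationId: string; newConversationId: string } {
  const oldConversationId = conversationIdByChat.get(chatId) || String(chatId);
  const newConversationId = randomUUID();
  conversationIdByChat.set(chatId, newConversationId);
  return { oldConversationId, newConversationId };
}
```

With this fallback the `chat.new` envelope no longer needs the `if (oldConversationId != null)` guard that the previous version required.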
View File

@@ -209,6 +209,25 @@ User: "think of a 3-day workout plan and save it to my notes"
{ "from": "gen-plan", "to": "save-notes" }
]
}
## Example: Email/Calendar
User: "check my inbox for unread messages"
{
"taskId": "task-gog-01",
"complexity": "small",
"reflectionMode": "OFF",
"nodes": [
{
"id": "check-mail",
"type": "skill",
"service": "executor",
"input": {
"skillName": "gog",
"task": "check my inbox for unread messages"
}
}
],
"edges": []
}
</examples>`;
export interface PlannerPromptOptions {
@@ -230,6 +249,12 @@ export function buildPlannerPrompt(userMessage: string, options?: PlannerPromptO
[CRITICAL] Use these instead of raw tools whenever possible:
${options.skills.map(s => `- **${s.name}**: ${s.description}`).join("\n")}
</available_skills>
<skill_variables>
${Object.entries(process.env)
.filter(([key]) => key.startsWith("SKILL_"))
.map(([key, value]) => `${key}: ${value}`)
.join("\n")}
</skill_variables>
<skill_node_template>
{

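The `<skill_variables>` block above is built by filtering the process environment for `SKILL_`-prefixed keys. A minimal sketch of that filter, with the environment passed in as a parameter so it can be exercised in isolation (the function name is illustrative):

```typescript
// Render the <skill_variables> body: keep only SKILL_* entries,
// one "KEY: value" line per variable.
function renderSkillVariables(env: Record<string, string | undefined>): string {
  return Object.entries(env)
    .filter(([key]) => key.startsWith("SKILL_"))
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
}
```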
View File

@@ -60,10 +60,20 @@ export class Orchestrator {
}
private spawnProcess(name: string, scriptPath: string): ChildEntry {
const child = spawn("node", [scriptPath], {
const env = { ...process.env };
// FP-PATH: Ensure common macOS paths are present for tool execution
// Prepend homebrew paths but preserve the current Node version path at the very front
const commonPaths = ["/opt/homebrew/bin", "/opt/homebrew/sbin", "/usr/local/bin"];
const existingPath = env.PATH || "/usr/bin:/bin:/usr/sbin:/sbin";
// Ensure we don't lose the path to the current node process
const nodeDir = dirname(process.execPath);
env.PATH = Array.from(new Set([nodeDir, ...commonPaths, ...existingPath.split(":")])).join(":");
const child = spawn(process.execPath, [scriptPath], {
cwd: ROOT,
stdio: ["pipe", "pipe", "pipe"],
env: { ...process.env },
env,
});
const stdin = child.stdin!;
const rl = createInterface({ input: child.stdout!, terminal: false });
@@ -141,6 +151,16 @@ export class Orchestrator {
}
}
private send(envelope: Envelope): void {
const target = this.children.get(envelope.to);
if (target?.stdin.writable) {
target.stdin.write(JSON.stringify(envelope) + "\n");
ConsoleLogger.ipc("core", "→", envelope);
} else {
ConsoleLogger.warn("core", `Unknown target or process not writable: ${envelope.to}`, envelope);
}
}
private sendErrorToSender(to: string, request: Envelope, code: string, message: string): void {
const target = this.children.get(to);
if (!target?.stdin.writable) return;
@@ -630,9 +650,12 @@ export class Orchestrator {
const tasksPayload = tasksEnv.payload as { status?: string; result?: { tasks?: Array<{ id: string; goal: string; status: string }> } };
const tasks = tasksPayload.result?.tasks ?? [];
if (tasks.length === 0) {
this.sendToTelegram(chatId, "Archived. (No previous tasks in this conversation.)");
this.sendToTelegram(chatId, "Archived. (No previous tasks found in this session.)");
return;
}
this.sendToTelegram(chatId, `🧠 Archiving ${tasks.length} task(s)...`, true);
const historyParts: string[] = [];
for (const t of tasks) {
let taskDetail: Envelope;
@@ -671,10 +694,22 @@ export class Orchestrator {
metadata: { conversationId, chatId, archivedAt: Date.now(), source: "archiving" },
});
} catch {
this.sendToTelegram(chatId, "Summary produced but storage failed. Check logs.");
this.sendToTelegram(chatId, "⚠️ Summary produced but RAG storage failed. Check logs.");
return;
}
this.sendToTelegram(chatId, "Archived. Conversation summary has been stored for later retrieval.");
// Emit success event
this.send({
id: randomUUID(),
timestamp: Date.now(),
from: "core",
to: "logger",
type: "event.archiving.completed",
version: "1.0",
payload: { chatId, conversationId, taskCount: tasks.length }
});
this.sendToTelegram(chatId, "✅ Archived. Conversation summary has been stored in your long-term memory.");
}
private handleCronReminderEvent(envelope: Envelope): void {

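The FP-PATH merge in `spawnProcess` above can be sketched as a pure function: the current Node binary's directory goes first, then the common Homebrew paths, then the inherited `PATH`, with duplicates removed while preserving first-seen order via `Set`. The function name is illustrative; the Orchestrator inlines this logic:

```typescript
import { dirname } from "node:path";

// Build the child process PATH: node dir first, then Homebrew paths,
// then the existing PATH, deduplicated in first-seen order.
function buildChildPath(execPath: string, existingPath: string): string {
  const commonPaths = ["/opt/homebrew/bin", "/opt/homebrew/sbin", "/usr/local/bin"];
  const nodeDir = dirname(execPath);
  return Array.from(new Set([nodeDir, ...commonPaths, ...existingPath.split(":")])).join(":");
}
```

Putting `nodeDir` at the very front matters when a version manager (e.g. nvm) is in use: child processes spawned with `process.execPath` then resolve `node` to the same version as the parent.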
View File

@@ -245,20 +245,31 @@ const CSS = `
.node-map {
display: flex;
flex-wrap: wrap;
gap: 8px;
gap: 12px;
margin-top: 8px;
align-items: center;
}
.node-stage {
display: flex;
flex-wrap: wrap;
gap: 8px;
align-items: center;
padding: 4px;
border-radius: 6px;
border: 1px dashed var(--border);
background: rgba(0,0,0,0.02);
}
.node-chip {
font-size: 11px;
padding: 2px 8px;
border-radius: 4px;
border: 1px solid var(--border);
background: var(--subtle);
background: var(--bg);
color: var(--text-muted);
display: flex;
align-items: center;
gap: 6px;
white-space: nowrap;
}
.node-chip.running {
border-color: var(--primary);
@@ -277,6 +288,17 @@ const CSS = `
background: rgba(223, 42, 95, 0.05);
}
.goal-text {
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
text-overflow: ellipsis;
font-weight: 500;
line-height: 1.4;
max-height: 2.8em;
}
.btn-refresh {
font-size: 14px;
font-weight: 500;
@@ -333,6 +355,43 @@ const CSS = `
padding: 40px 20px;
}
}
/* Tooltip bubble using data-title */
[data-title] {
position: relative;
}
[data-title]:hover::after {
content: attr(data-title);
position: absolute;
bottom: 125%;
left: 50%;
transform: translateX(-50%);
padding: 6px 10px;
background: #1e1e1e;
color: #fff;
font-size: 11px;
font-weight: 500;
border-radius: 6px;
white-space: nowrap;
z-index: 1000;
box-shadow: 0 4px 12px rgba(0,0,0,0.2);
border: 1px solid rgba(255,255,255,0.1);
pointer-events: none;
}
[data-title]:hover::before {
content: '';
position: absolute;
bottom: 115%;
left: 50%;
transform: translateX(-50%);
border-width: 5px;
border-style: solid;
border-color: #1e1e1e transparent transparent transparent;
z-index: 1001;
pointer-events: none;
}
`;
export class DashboardService extends BaseProcess {
@@ -349,7 +408,6 @@ export class DashboardService extends BaseProcess {
this.logEvent('info', `Dashboard server started on port ${PORT}`);
});
// Send initial log announcement to orchestrator
this.send({
id: randomUUID(),
timestamp: Date.now(),
@@ -401,7 +459,7 @@ export class DashboardService extends BaseProcess {
const files = fs.readdirSync(logDir)
.filter(f => f.startsWith('events-') && f.endsWith('.log'))
.map(f => f.replace('events-', '').replace('.log', ''))
.sort((a, b) => b.localeCompare(a)); // Newest first
.sort((a, b) => b.localeCompare(a));
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify(files));
@@ -487,11 +545,13 @@ export class DashboardService extends BaseProcess {
stats.pendingTasks = tdb.prepare(`
SELECT t.id, t.goal, t.status, t.complexity, t.updated_at,
(SELECT json_group_array(json_object('id', id, 'type', type, 'status', status)) FROM task_nodes WHERE task_id = t.id ORDER BY id) as nodes
(SELECT json_group_array(json_object('id', id, 'type', type, 'status', status, 'input', input)) FROM task_nodes WHERE task_id = t.id ORDER BY started_at ASC) as nodes,
(SELECT json_group_array(json_object('from', from_node, 'to', to_node)) FROM task_edges WHERE task_id = t.id) as edges
FROM tasks t WHERE t.status IN (?, ?) ORDER BY t.updated_at DESC
`).all('pending', 'running').map((t: any) => ({
...t,
nodes: JSON.parse(t.nodes || '[]')
nodes: JSON.parse(t.nodes || '[]'),
edges: JSON.parse(t.edges || '[]')
}));
tdb.close();
@@ -665,6 +725,44 @@ export class DashboardService extends BaseProcess {
<script>
let allLogs = [];
let showingAll = false;
let lastDashboardState = {};
function getGraphLayout(nodes, edges) {
if (!nodes || nodes.length === 0) return [];
const nodeMap = {};
nodes.forEach(n => nodeMap[n.id] = { ...n, incoming: 0, outgoing: [], level: 0 });
(edges || []).forEach(e => {
if (nodeMap[e.from] && nodeMap[e.to]) {
nodeMap[e.from].outgoing.push(e.to);
nodeMap[e.to].incoming++;
}
});
// Assign levels
let changed = true;
let iterations = 0;
while (changed && iterations < 100) {
changed = false;
iterations++;
nodes.forEach(n => {
const current = nodeMap[n.id];
current.outgoing.forEach(nextId => {
const next = nodeMap[nextId];
if (next.level <= current.level) {
next.level = current.level + 1;
changed = true;
}
});
});
}
const levels = [];
Object.values(nodeMap).forEach(n => {
if (!levels[n.level]) levels[n.level] = [];
levels[n.level].push(n);
});
return levels.filter(l => l && l.length > 0);
}
function fmtDate(d) {
const date = new Date(d);
@@ -695,6 +793,11 @@ export class DashboardService extends BaseProcess {
const lt = document.getElementById("lt");
const logsToRender = showingAll ? allLogs : allLogs.slice(0, 20);
// Check if logs actually changed before rendering
const currentLogsHash = JSON.stringify(logsToRender);
if (lastDashboardState.logsHash === currentLogsHash) return;
lastDashboardState.logsHash = currentLogsHash;
lt.innerHTML = logsToRender.map(l => {
const tc = l.type?.includes("failed") ? "error" : (l.type?.includes("completed") ? "success" : "warning");
const typeLabel = (l.type || "EVENT").split(".").pop();
@@ -727,83 +830,125 @@ export class DashboardService extends BaseProcess {
fetch(\`/api/stats\${date ? '?date=' + date : ''}\`)
.then(r => r.json())
.then(d => {
const total = Object.values(d.tasks).reduce((a, b) => a + b, 0);
document.getElementById("task-total").textContent = total;
document.getElementById("rag-count").textContent = d.rag;
document.getElementById("cron-count").textContent = d.cron;
document.getElementById("max-nodes").textContent = d.maxNodes || 0;
document.getElementById("time-first").textContent = d.timing.first;
document.getElementById("time-last").textContent = d.timing.last;
document.getElementById("time-avg").textContent = d.timing.avg;
// 1. Update Stats
const statsChanged =
lastDashboardState.rag !== d.rag ||
lastDashboardState.cron !== d.cron ||
lastDashboardState.maxNodes !== d.maxNodes ||
JSON.stringify(lastDashboardState.tasks) !== JSON.stringify(d.tasks) ||
JSON.stringify(lastDashboardState.timing) !== JSON.stringify(d.timing);
const modelsSection = document.getElementById("model-list");
modelsSection.innerHTML = Object.entries(d.models)
.filter(([k]) => ['small', 'medium', 'large'].includes(k))
.map(([k, v]) => \`<div class="model-pill"><span>\${k.toUpperCase()}:</span><b>\${v}</b></div>\`)
.join("");
document.getElementById("c1").innerHTML = d.charts.taskDonut;
document.getElementById("c2").innerHTML = d.charts.compBar;
if (statsChanged) {
const total = Object.values(d.tasks).reduce((a, b) => a + b, 0);
document.getElementById("task-total").textContent = total;
document.getElementById("rag-count").textContent = d.rag;
document.getElementById("cron-count").textContent = d.cron;
document.getElementById("max-nodes").textContent = d.maxNodes || 0;
document.getElementById("time-first").textContent = d.timing.first;
document.getElementById("time-last").textContent = d.timing.last;
document.getElementById("time-avg").textContent = d.timing.avg;
lastDashboardState.rag = d.rag;
lastDashboardState.cron = d.cron;
lastDashboardState.maxNodes = d.maxNodes;
lastDashboardState.tasks = d.tasks;
lastDashboardState.timing = d.timing;
}
// Active Queue
const qt = document.getElementById("qt");
const qs = document.getElementById("active-queue-section");
if (d.pendingTasks && d.pendingTasks.length > 0) {
qs.style.display = "block";
qt.innerHTML = d.pendingTasks.map(t => {
const isRunning = t.status === 'running';
const tc = isRunning ? "running" : "warning";
const indicator = isRunning ? '<div class="pulse"></div>' : '';
return \`<tr>
<td style="padding-left: 20px; color: var(--text-muted); white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;">\${fmtDate(t.updated_at)}</td>
<td style="white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;"><span class="tag \${tc}">\${indicator}\${t.status.toUpperCase()}</span></td>
<td style="white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;"><span class="tag complexity-\${t.complexity || 'unknown'}">\${(t.complexity || 'unknown').toUpperCase()}</span></td>
<td style="padding-top: 15px; border-bottom: none;">
<div style="font-weight: 500; word-break: break-word;">\${t.goal}</div>
</td>
<td style="padding-right: 20px; text-align: right; vertical-align: top; padding-top: 15px; border-bottom: none;">
<button onclick="failTask('\${t.id}')" style="cursor: pointer; font-size: 11px; padding: 2px 8px; border: 1px solid var(--border); border-radius: 4px; background: var(--bg); color: var(--error);">FAIL</button>
</td>
</tr>
<tr>
<td colspan="3" style="border-top: none;"></td>
<td colspan="2" style="padding-bottom: 20px; border-top: none;">
<div class="node-map">
\${(t.nodes || []).map(n => {
const activeIndicator = n.status === 'running' ? '<div class="pulse"></div>' : '';
const typeLabel = n.type.split('.').pop().toUpperCase();
return \`<div class="node-chip \${n.status}">\${activeIndicator}\${typeLabel}</div>\`;
}).join('<span style="color: var(--border); font-size: 10px;">➔</span>')}
</div>
</td>
</tr>\`;
}).join("");
} else {
qs.style.display = "none";
// 2. Update Models
const modelsHash = JSON.stringify(d.models);
if (lastDashboardState.modelsHash !== modelsHash) {
const modelsSection = document.getElementById("model-list");
modelsSection.innerHTML = Object.entries(d.models)
.filter(([k]) => ['small', 'medium', 'large'].includes(k))
.map(([k, v]) => \`<div class="model-pill"><span>\${k.toUpperCase()}:</span><b>\${v}</b></div>\`)
.join("");
lastDashboardState.modelsHash = modelsHash;
}
// 3. Update Charts
const chartsHash = JSON.stringify(d.charts);
if (lastDashboardState.chartsHash !== chartsHash) {
document.getElementById("c1").innerHTML = d.charts.taskDonut;
document.getElementById("c2").innerHTML = d.charts.compBar;
lastDashboardState.chartsHash = chartsHash;
}
// 4. Update Active Queue
const queueHash = JSON.stringify(d.pendingTasks);
if (lastDashboardState.queueHash !== queueHash) {
const qt = document.getElementById("qt");
const qs = document.getElementById("active-queue-section");
if (d.pendingTasks && d.pendingTasks.length > 0) {
qs.style.display = "block";
qt.innerHTML = d.pendingTasks.map(t => {
const isRunning = t.status === 'running';
const tc = isRunning ? "running" : "warning";
const indicator = isRunning ? '<div class="pulse"></div>' : '';
return \`<tr>
<td style="padding-left: 20px; color: var(--text-muted); white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;">\${fmtDate(t.updated_at)}</td>
<td style="white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;"><span class="tag \${tc}">\${indicator}\${t.status.toUpperCase()}</span></td>
<td style="white-space: nowrap; vertical-align: top; padding-top: 15px; border-bottom: none;"><span class="tag complexity-\${t.complexity || 'unknown'}">\${(t.complexity || 'unknown').toUpperCase()}</span></td>
<td style="padding-top: 15px; border-bottom: none;">
<div class="goal-text" data-title="\${t.goal.replace(/"/g, '&quot;')}">\${t.goal}</div>
</td>
<td style="padding-right: 20px; text-align: right; vertical-align: top; padding-top: 15px; border-bottom: none;">
<button onclick="failTask('\${t.id}')" style="cursor: pointer; font-size: 11px; padding: 2px 8px; border: 1px solid var(--border); border-radius: 4px; background: var(--bg); color: var(--error);">FAIL</button>
</td>
</tr>
<tr>
<td colspan="5" style="padding-left: 20px; padding-right: 20px; padding-bottom: 20px; border-top: none;">
<div class="node-map">
\${(() => {
const levels = getGraphLayout(t.nodes || [], t.edges || []);
return levels.map(level => \`
<div class="node-stage">
\${level.map(n => {
const activeIndicator = n.status === 'running' ? '<div class="pulse"></div>' : '';
const typeLabel = n.type.split('.').pop().toUpperCase();
let chipAttr = '';
if (n.type === 'skill' && n.input) {
try {
const input = typeof n.input === 'string' ? JSON.parse(n.input) : n.input;
const skillName = input.skillName || input.skill;
if (skillName) chipAttr = \` data-title="Skill: \${skillName}"\`;
} catch(e) {}
}
return \`<div class="node-chip \${n.status}"\${chipAttr}>\${activeIndicator}\${typeLabel}</div>\`;
}).join('')}
</div>
\`).join('<span style="color: var(--border); font-size: 14px; font-weight: 700;">➔</span>');
})()}
</div>
</td>
</tr>\`;
}).join("");
} else {
qs.style.display = "none";
}
lastDashboardState.queueHash = queueHash;
}
// 5. Update Logs
allLogs = d.logs;
renderLogs();
});
}
// Initial load
fetch("/api/log-files")
.then(r => r.json())
.then(files => {
const dateSelect = document.getElementById("log-date-select");
const today = new Date().toISOString().split('T')[0];
// Add today if not in files
if (!files.includes(today)) {
files.unshift(today);
}
dateSelect.innerHTML = files.map(f => \`<option value="\${f}" \${f === today ? 'selected' : ''}>\${f}</option>\`).join("");
// Initial load
fetch("/api/log-files")
.then(r => r.json())
.then(files => {
const dateSelect = document.getElementById("log-date-select");
const today = new Date().toISOString().split('T')[0];
if (!files.includes(today)) {
files.unshift(today);
}
dateSelect.innerHTML = files.map(f => \`<option value="\${f}" \${f === today ? 'selected' : ''}>\${f}</option>\`).join("");
updateDashboard();
// Auto-refresh every 5 seconds
setInterval(updateDashboard, 5000);
});
</script>

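The dashboard's `getGraphLayout` above is a simple longest-path layering: every node starts at level 0 and is repeatedly pushed one level past any predecessor until the assignment stabilizes (capped at 100 iterations to guard against cycles). A typed sketch of the same algorithm, outside the template literal:

```typescript
interface GraphNode { id: string; [key: string]: unknown }
interface GraphEdge { from: string; to: string }

// Group DAG nodes into sequential stages (levels) for rendering:
// a node's level is one past the deepest of its predecessors.
function getGraphLayout(nodes: GraphNode[], edges: GraphEdge[]): GraphNode[][] {
  if (!nodes || nodes.length === 0) return [];
  const nodeMap: Record<string, GraphNode & { outgoing: string[]; level: number }> = {};
  nodes.forEach(n => { nodeMap[n.id] = { ...n, outgoing: [], level: 0 }; });
  (edges || []).forEach(e => {
    if (nodeMap[e.from] && nodeMap[e.to]) nodeMap[e.from].outgoing.push(e.to);
  });
  // Relax levels until stable; the iteration cap guards against cycles.
  let changed = true;
  let iterations = 0;
  while (changed && iterations < 100) {
    changed = false;
    iterations++;
    nodes.forEach(n => {
      const current = nodeMap[n.id];
      current.outgoing.forEach(nextId => {
        const next = nodeMap[nextId];
        if (next.level <= current.level) {
          next.level = current.level + 1;
          changed = true;
        }
      });
    });
  }
  const levels: GraphNode[][] = [];
  Object.values(nodeMap).forEach(n => {
    if (!levels[n.level]) levels[n.level] = [];
    levels[n.level].push(n);
  });
  return levels.filter(l => l && l.length > 0);
}
```

For a diamond graph (a → b, a → c, b → d, c → d) this yields three stages: `[a]`, `[b, c]`, `[d]`, which the dashboard renders as `.node-stage` groups joined by arrows.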
View File

@@ -335,17 +335,22 @@ export class ToolHost extends BaseProcess {
const result = await tool(args);
const duration = Date.now() - startTime;
const exitCode = (result && typeof result === "object" && "exitCode" in result) ? (result as any).exitCode : 0;
const success = exitCode === 0;
// Log tool execution success
// Log tool execution completion with actual success status
this.emitEvent("event.tool.completed", {
toolName: name,
arguments: this.sanitizeArguments(args),
durationMs: duration,
success: true,
success,
exitCode,
taskId: (envelope.payload as Record<string, unknown>).taskId,
nodeId: (envelope.payload as Record<string, unknown>).nodeId,
});
ConsoleLogger.info(PROCESS_NAME, `Tool execution completed: ${name} (${duration}ms)`, envelope);
const logMethod = success ? ConsoleLogger.info : ConsoleLogger.warn;
logMethod.call(ConsoleLogger, PROCESS_NAME, `Tool execution finished: ${name} (${duration}ms, exitCode=${exitCode})`, envelope);
this.sendResponse(envelope, result);
} catch (err) {
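The new success determination in `ToolHost` above can be sketched as a small helper (the function name is illustrative; the ToolHost inlines this check): tool results that are objects carrying an `exitCode` report it, and anything else is treated as exit code 0, so `success` is simply `exitCode === 0`.

```typescript
// Extract an exit code from an arbitrary tool result: objects with an
// exitCode field report it; any other shape is treated as success (0).
function toolExitCode(result: unknown): number {
  return result !== null && typeof result === "object" && "exitCode" in result
    ? Number((result as { exitCode: unknown }).exitCode)
    : 0;
}
```

This is what lets a shell tool that ran but returned a non-zero status be logged as a completed-but-failed execution (`success: false`) instead of unconditionally reporting `success: true`.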