Redefine skills logic + add command to simplify skill adding

This commit is contained in:
larchanka
2026-03-23 18:14:30 +01:00
committed by Mikhail Larchanka
parent e948ba5ed1
commit 159e79fc67
19 changed files with 129 additions and 572 deletions

View File

@@ -15,8 +15,7 @@ AI-Agent/
│ ├── INSTRUCTIONS.md # Workflow for task execution
│ └── TASKS/ # Task specs (P1-01, P2-01, …)
├── skills/ # Dynamic Skills System
│ ├── CONFIG.md # Skill manifest and descriptions
│ └── [skill-name]/ # SKILL.md for each skill
│ └── [skill-name]/ # SKILL.md for each skill (first line has description)
├── _docs/
│ ├── ARCHITECTURE.md # Architectural patterns
│ ├── CAPABILITY GRAPH.md # DAG format and node types
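Per the updated tree, each skill's description now lives on the first line of its SKILL.md rather than in a central CONFIG.md. A minimal example (hypothetical skill shown for illustration):

```markdown
Description: MANDATORY. Use this skill for ALL weather-related inquiries.
# Weather Skill
Get current weather conditions and forecasts.
```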

View File

@@ -39,9 +39,9 @@ A multi-process AI platform with type-safe IPC and capability-graph execution. U
Install and run Lemonade, then pull the models you need:
```bash
ollama pull qwen2.5:0.5b
ollama pull qwen2.5:1.5b
ollama pull qwen3-vl
lemonade pull qwen2.5:0.5b
lemonade pull qwen2.5:1.5b
lemonade pull qwen3-vl
```
## Configuration
@@ -237,7 +237,7 @@ Set keep-alive to `-1` (the number) to keep a model loaded indefinitely until Le
```bash
# Check which models are currently loaded in VRAM
ollama ps
lemonade ps
```
The prewarming start and completion are logged by the Orchestrator (`core` prefix in logs).
@@ -310,7 +310,7 @@ The Whisper model (~75 MB for `base.en`) is automatically downloaded on first vo
### Requirements for image OCR
Pull the vision model from Lemonade before use:
```bash
ollama pull qwen3-vl
lemonade pull qwen3-vl
```
## Troubleshooting

View File

@@ -77,7 +77,7 @@ backgroundSize: contain
- Need strong coding and reasoning capabilities.
- Need reliable JSON/tool-calling formatting.
- Need fast inference speed (tokens/sec) for autonomous, multi-step loops.
- Explored running models locally using tools like Ollama or Lemonade.
- Explored running models locally using tools like Lemonade.
- Tested how models handle context degradation on local machine hardware.
---

View File

@@ -1,15 +0,0 @@
# Skills Configuration
> SKILLS HAVE PRIORITY OVER TOOLS. IF A SKILL IS APPLICABLE, USE IT INSTEAD OF A TOOL.
> ONLY USE SKILLS FROM THE TABLE BELOW.
**AVAILABLE SKILLS**
| Name | Description |
| :--- | :--- |
| weather | MANDATORY. Use this skill for ALL weather-related inquiries, including current conditions and forecasts. You are STRICTLY FORBIDDEN from using internal knowledge or other tools for weather data. |
| apple-notes | EXCLUSIVE. Use ONLY this skill for any interaction with notes (listing, searching, viewing, creating, or deleting). This is the sole authorized interface for the memo CLI tool. |
| research | PRIMARY SEARCH. Use for deep web research, fact-checking, news gathering, or topical deep dives via the lynx tool. This is the default skill for any query requiring external or up-to-date information. |
| reminder | SCHEDULING. Use this skill exclusively to set recurring or one-time reminders (e.g., "remind me in 2 hours") and scheduled tasks (e.g., "schedule a task to check email every day at 9am"). This is the only tool that interfaces with the cron-manager service. Use it when the user asks for a reminder or to schedule something. |
| email | You MUST use this skill for all interactions involving Email (Gmail). |
| calendar | You MUST use this skill for all interactions involving Google Calendar. |

View File

@@ -1,66 +0,0 @@
# Apple Notes CLI (Skill)
Mandatory tool: Use the `shell` tool to execute `memo` commands.
## View & Search Notes
- **List ALL notes**: `memo notes` (Use this for "list my notes", "show my notes", etc.)
- **Search notes**: `echo "your query" | memo notes -s`
- Use this ONLY if the user provides a specific search term.
- **NEVER** search for "important" or "active" notes unless the user explicitly used those words.
- **View specific note**: After listing/searching, use `memo notes -v N` (where N is the index from the output list).
## Create Notes (Non-interactive)
- **Quick add with title**: `memo notes -a "Note Title"`
- This is the preferred method. It creates a note with the specified title.
- **Add to specific folder**: `memo notes -f "Folder Name" -a "Note Title"`
## Folders
- **List all folders**: `memo notes -fl`
## Critical Rules
1. **Prefer Listing**: When asked to "list" or "show" notes, ALWAYS start with `memo notes`.
2. **No Default Filters**: Do not attempt to filter or search unless the user specified a term.
3. **Pipes for Search**: Always use `echo "query" | memo notes -s` because the shell tool is non-interactive.
4. **Sequential Operations**: To view a note's content, you first need its index from the list.
5. **No Attachments**: This tool only supports plain text.
6. **macOS Only**: Ensure you are on a macOS environment.
## Tool Call Examples (JSON)
When using this skill, format your tool calls as follows:
### List All Notes
```json
{
"name": "shell",
"arguments": {
"command": "memo notes"
}
}
```
### Search for "Project Phoenix"
```json
{
"name": "shell",
"arguments": {
"command": "echo \"Project Phoenix\" | memo notes -s"
}
}
```
### Create a Quick Note
```json
{
"name": "shell",
"arguments": {
"command": "memo notes -a \"Meeting Notes\""
}
}
```
## Example Commands
- `memo notes` (List everything - Default action)
- `echo "project X" | memo notes -s` (Search for project X)
- `memo notes -f "Inbox" -a "Buy milk"` (Create note)
- `memo notes -fl` (List folders)

View File

@@ -1,3 +1,5 @@
Description: You MUST use this skill for all interactions involving Google Calendar.
# Calendar Skill
Manage Google Calendar events using the `gog` CLI.

View File

@@ -1,3 +1,5 @@
Description: You MUST use this skill for all interactions involving Email (Gmail).
# Email Skill
Manage Gmail communications using the `gog` CLI.

View File

@@ -1,3 +1,5 @@
Description: SCHEDULING. Use this skill exclusively to set recurring or one-time reminders (e.g., "remind me in 2 hours") and scheduled tasks (e.g., "schedule a task to check email every day at 9am"). This is the only tool that interfaces with the cron-manager service. Use it when the user asks for a reminder or to schedule something.
# Reminder Skill
Set up one-time or recurring reminders for the user.

View File

@@ -1,3 +1,5 @@
Description: PRIMARY SEARCH. Use for deep web research, fact-checking, news gathering, or topical deep dives via the lynx tool. This is the default skill for any query requiring external or up-to-date information.
# Research Skill
Deep web research using the text-based browser `lynx` and DuckDuckGo HTML interface.

View File

@@ -1,3 +1,5 @@
Description: MANDATORY. Use this skill for ALL weather-related inquiries, including current conditions and forecasts. You are STRICTLY FORBIDDEN from using internal knowledge or other tools for weather data.
# Weather Skill
Get current weather conditions and forecasts.

View File

@@ -1,7 +1,6 @@
/**
* Integration test for conversation archiving flow (P6-06).
* Verifies: task history retrieval by conversation_id, summary insertion into RAG, SQLite persistence.
* Summarization step is mocked (no Ollama).
* Summarization step is mocked (no Lemonade).
*/
import { randomUUID } from "node:crypto";

View File

@@ -114,7 +114,10 @@ const HELP_TEXT = `Commands:
- Ask me to remind you: "Remind me in 5 minutes to check the oven"
- Recurring reminders: "Remind me every day at 9am to take vitamins"
- List reminders: /reminders
- Cancel a reminder: /cancel_reminder <id>`;
- Cancel a reminder: /cancel_reminder <id>
🛠 Skills:
- Add a new skill: /add_skill <URL_TO_SKILL_MD> (downloads to an underscore-prefixed folder)`;
/** chatId -> current conversation ID for session grouping */
const conversationIdByChat = new Map<number, string>();
@@ -402,25 +405,73 @@ function main(): void {
return;
}
// /task [goal] — if goal is provided, create task; else show usage
if (text.startsWith("/task")) {
const goal = text.slice(5).trim();
if (!goal) {
sendToUser(chatId, "Usage: /task <your goal>. Example: /task Summarize the benefits of TypeScript.");
// /add_skill [URL] — download a new skill
if (text.startsWith("/add_skill")) {
const url = text.slice(10).trim();
if (!url) {
sendToUser(chatId, "Usage: /add_skill {URL_TO_SKILL_MD}");
return;
}
const payload: TelegramTaskCreatePayload = {
chatId,
userId: from.id,
conversationId: getOrCreateConversationId(chatId),
messageId: msg.message_id ?? 0,
goal,
...(from.username !== undefined && from.username !== "" && { username: from.username }),
};
base.send(createEnvelope<TelegramTaskCreatePayload>("task.create", "core", payload));
if (!url.toLowerCase().endsWith("skill.md")) {
sendToUser(chatId, "❌ URL must end with SKILL.md");
return;
}
(async () => {
try {
const urlObj = new URL(url);
const pathParts = urlObj.pathname.split("/").filter(p => p !== "");
let folderName = "";
// if path is /foo/bar/SKILL.md, folderName is bar
if (pathParts.length >= 2) {
folderName = pathParts[pathParts.length - 2] ?? "";
}
if (!folderName || folderName === "." || folderName === "..") {
folderName = randomUUID();
}
if (!folderName.startsWith("_")) {
folderName = "_" + folderName;
}
const cfg = getConfig();
const skillsDir = resolve(process.cwd(), cfg.skills.skillsDir);
const newSkillDir = resolve(skillsDir, folderName);
// Create folder
await new Promise<void>((res, rej) =>
mkdir(newSkillDir, { recursive: true }, (err) => (err ? rej(err) : res())),
);
// Download file
const resp = await fetch(url);
if (!resp.ok) {
throw new Error(`Failed to fetch SKILL.md: HTTP ${resp.status}`);
}
if (!resp.body) {
throw new Error("Empty response body when fetching SKILL.md");
}
const targetPath = resolve(newSkillDir, "SKILL.md");
await pipeline(
resp.body as unknown as NodeJS.ReadableStream,
createWriteStream(targetPath)
);
await sendToUser(chatId, `✅ Skill added to folder: <code>${folderName}</code>\n\nNotes:\n- The skill is currently <b>disabled</b> (starts with <code>_</code>).\n- Rename the folder to remove the underscore to enable it.\n- Use <code>/help</code> to see available commands.`);
} catch (err) {
console.error("[telegram-adapter] /add_skill error:", err);
await sendToUser(chatId, `❌ Error adding skill: ${err instanceof Error ? err.message : String(err)}`);
}
})();
return;
}
// /task [goal] — if goal is provided, create task; else show usage
// Plain text: map to user goal and create task (same as /task <text>)
const taskPayload: TelegramTaskCreatePayload = {
chatId,
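The folder-name derivation in the `/add_skill` handler above can be sketched as a standalone helper (hypothetical name `deriveSkillFolder`; the handler inlines this logic): the second-to-last path segment becomes the folder name, falling back to a UUID, and an underscore prefix marks the skill as disabled until the user renames it.

```typescript
import { randomUUID } from "node:crypto";

// Sketch of /add_skill's folder naming: /foo/bar/SKILL.md → "_bar".
// The leading underscore keeps the downloaded skill disabled by default.
function deriveSkillFolder(url: string): string {
  const parts = new URL(url).pathname.split("/").filter((p) => p !== "");
  let name = parts.length >= 2 ? parts[parts.length - 2] ?? "" : "";
  if (!name || name === "." || name === "..") name = randomUUID();
  return name.startsWith("_") ? name : "_" + name;
}

console.log(deriveSkillFolder("https://example.com/skills/weather/SKILL.md")); // "_weather"
```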

View File

@@ -1,6 +1,6 @@
/**
* Planner Agent: converts user intent into a structured execution DAG.
* Listens for plan.create, uses Ollama Adapter + Model Router, validates DAG, emits response.
* Listens for plan.create, uses Lemonade Adapter + Model Router, validates DAG, emits response.
*/
import { randomUUID } from "node:crypto";

View File

@@ -1,6 +1,6 @@
/**
* Generator Service: handles node.execute for generate_text and summarize.
* Used by Executor when dispatching to "model-router"; calls Ollama via ModelRouter.
* Used by Executor when dispatching to "model-router"; calls Lemonade via ModelRouter.
* P6-04: summarize type uses summarizer prompt for memory extraction.
*/

View File

@@ -226,7 +226,7 @@ export class LemonadeAdapter {
/**
* Warmup model (loads it into memory).
* Lemonade might not have a specific warmup endpoint identical to Ollama,
* Lemonade might not have a specific warmup endpoint,
* but sending a small message often works.
*/
async warmup(model: string): Promise<void> {

View File

@@ -1,6 +1,6 @@
/**
* Model Router: maps abstract complexity levels to Ollama model names.
* Per _docs/TECH.md: small -> llama3:8b, medium -> mistral, large -> mixtral.
* Model Router: maps abstract complexity levels to Lemonade model names.
* Per _docs/TECH.md: small -> qwen2.5:0.5b, medium -> qwen2.5:1.5b, large -> qwen2.5:7b.
*/
import { getConfig } from "../shared/config.js";
@@ -22,7 +22,7 @@ export class ModelRouter {
}
/**
* Return the Ollama model name for the given complexity level.
* Return the Lemonade model name for the given complexity level.
*/
getModel(complexity: ComplexityLevel): string {
return this.config[complexity];
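The complexity-to-model mapping in the updated doc comment can be illustrated standalone (model names taken from the comment; the real `ModelRouter` reads them from config rather than hard-coding them):

```typescript
type ComplexityLevel = "small" | "medium" | "large";

// Defaults mirroring the doc comment; the actual router loads these from config.
const modelByComplexity: Record<ComplexityLevel, string> = {
  small: "qwen2.5:0.5b",
  medium: "qwen2.5:1.5b",
  large: "qwen2.5:7b",
};

function getModel(complexity: ComplexityLevel): string {
  return modelByComplexity[complexity];
}

console.log(getModel("medium")); // "qwen2.5:1.5b"
```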

View File

@@ -1,416 +0,0 @@
/**
* Ollama adapter: bridge to local Ollama instance for generate, chat, and streaming.
* Uses fetch; supports timeout and retry for network errors.
*/
import { readFile } from "node:fs/promises";
import { getConfig } from "../shared/config.js";
export interface GenerateOptions {
timeoutMs?: number;
keep_alive?: string | number;
options?: Record<string, unknown>;
}
export interface ChatOptions {
timeoutMs?: number;
keep_alive?: string | number;
tools?: any[];
options?: Record<string, unknown>;
}
export interface GenerateResult {
text: string;
prompt_eval_count?: number;
eval_count?: number;
done: boolean;
}
export interface ChatMessage {
role: "system" | "user" | "assistant" | "tool";
content: string;
tool_calls?: ToolCall[];
tool_call_id?: string;
}
export interface ToolCall {
id: string;
type: string;
function: {
name: string;
arguments: Record<string, any>;
};
}
export interface ChatResult {
message: {
role: string;
content: string;
tool_calls?: ToolCall[];
};
prompt_eval_count?: number;
eval_count?: number;
done: boolean;
}
export interface StreamChunk {
message?: { content: string; tool_calls?: ToolCall[] };
done?: boolean;
prompt_eval_count?: number;
eval_count?: number;
}
export interface EmbedResult {
embedding: number[];
prompt_eval_count?: number;
}
export interface OllamaAdapterOptions {
baseUrl?: string;
timeoutMs?: number;
retries?: number;
}
export class OllamaAdapter {
private readonly baseUrl: string;
private readonly timeoutMs: number;
private readonly retries: number;
private readonly numCtx: number;
constructor(options: OllamaAdapterOptions = {}) {
const c = getConfig().lemonade;
this.baseUrl = options.baseUrl ?? c.baseUrl;
this.timeoutMs = options.timeoutMs ?? c.timeoutMs;
this.retries = options.retries ?? c.retries;
this.numCtx = c.numCtx;
}
/**
* Generate completion for a single prompt. Returns full response (stream: false).
*/
async generate(
prompt: string,
model: string,
opts: GenerateOptions = {},
): Promise<GenerateResult> {
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}/api/generate`;
const body: Record<string, unknown> = {
model,
prompt,
stream: false,
options: {
num_ctx: this.numCtx,
...opts.options,
},
};
if (opts.keep_alive !== undefined) body.keep_alive = opts.keep_alive;
const res = await this.fetchWithRetry(url, body, timeoutMs);
const data = (await res.json()) as {
response?: string;
done?: boolean;
prompt_eval_count?: number;
eval_count?: number;
};
const result: GenerateResult = {
text: data.response ?? "",
done: data.done ?? true,
};
if (data.prompt_eval_count !== undefined) result.prompt_eval_count = data.prompt_eval_count;
if (data.eval_count !== undefined) result.eval_count = data.eval_count;
return result;
}
/**
* Generate with an image attachment.
*/
async generateWithImage(
prompt: string,
model: string,
imagePath: string,
opts: GenerateOptions = {},
): Promise<GenerateResult> {
const imageBytes = await readFile(imagePath);
const base64Image = imageBytes.toString("base64");
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}/api/generate`;
const body: Record<string, unknown> = {
model,
prompt,
images: [base64Image],
stream: false,
options: {
num_ctx: this.numCtx,
...opts.options,
},
};
if (opts.keep_alive !== undefined) body.keep_alive = opts.keep_alive;
const res = await this.fetchWithRetry(url, body, timeoutMs);
const data = (await res.json()) as {
response?: string;
done?: boolean;
prompt_eval_count?: number;
eval_count?: number;
};
const result: GenerateResult = {
text: data.response ?? "",
done: data.done ?? true,
};
if (data.prompt_eval_count !== undefined) result.prompt_eval_count = data.prompt_eval_count;
if (data.eval_count !== undefined) result.eval_count = data.eval_count;
return result;
}
/**
* Chat with messages. Returns full response (stream: false).
*/
async chat(
messages: ChatMessage[],
model: string,
opts: ChatOptions = {},
): Promise<ChatResult> {
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}`;
const body: Record<string, unknown> = {
model,
messages,
stream: false,
options: {
num_ctx: this.numCtx,
...opts.options,
},
};
if (opts.keep_alive !== undefined) body.keep_alive = opts.keep_alive;
if (opts.tools) body.tools = opts.tools;
const res = await this.fetchWithRetry(url, body, timeoutMs);
const data = (await res.json()) as {
message?: { role: string; content: string; tool_calls?: ToolCall[] };
done?: boolean;
prompt_eval_count?: number;
eval_count?: number;
};
const result: ChatResult = {
message: data.message ?? { role: "assistant", content: "" },
done: data.done ?? true,
};
if (data.prompt_eval_count !== undefined) result.prompt_eval_count = data.prompt_eval_count;
if (data.eval_count !== undefined) result.eval_count = data.eval_count;
return result;
}
/**
* Chat with an image attachment. Reads the image file at `imagePath`, encodes it
* as base64, and injects it into the last user message via Ollama's `images` field.
* Intended for vision/OCR models such as `glm-ocr:q8_0`.
*
* @param messages Conversation messages (same format as `chat()`).
* @param model Ollama model name supporting vision (must accept `images` field).
* @param imagePath Absolute local path to the image file (jpeg, png, webp, etc.).
* @param opts Optional chat options (timeout, keep_alive, etc.).
*/
async chatWithImage(
messages: ChatMessage[],
model: string,
imagePath: string,
opts: ChatOptions = {},
): Promise<ChatResult> {
// Read and encode the image
let imageBytes: Buffer;
try {
imageBytes = await readFile(imagePath);
} catch (err) {
throw new Error(
`OllamaAdapter.chatWithImage: cannot read image at "${imagePath}": ${err instanceof Error ? err.message : String(err)
}`,
);
}
const base64Image = imageBytes.toString("base64");
// Clone messages; inject images into the last user message
const messagesWithImage: Array<Record<string, unknown>> = messages.map(
(msg, idx) => {
const clone: Record<string, unknown> = { ...msg };
if (idx === messages.length - 1 && msg.role === "user") {
clone.images = [base64Image];
}
return clone;
},
);
// If no user message was found at the end, append one with just the image
const lastMsg = messages[messages.length - 1];
if (!lastMsg || lastMsg.role !== "user") {
messagesWithImage.push({ role: "user", content: "", images: [base64Image] });
}
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}`;
const body: Record<string, unknown> = {
model,
messages: messagesWithImage,
stream: false,
options: {
num_ctx: this.numCtx,
...opts.options,
},
};
if (opts.keep_alive !== undefined) body.keep_alive = opts.keep_alive;
const res = await this.fetchWithRetry(url, body, timeoutMs);
const data = (await res.json()) as {
message?: { role: string; content: string; tool_calls?: ToolCall[] };
done?: boolean;
prompt_eval_count?: number;
eval_count?: number;
};
const result: ChatResult = {
message: data.message ?? { role: "assistant", content: "" },
done: data.done ?? true,
};
if (data.prompt_eval_count !== undefined) result.prompt_eval_count = data.prompt_eval_count;
if (data.eval_count !== undefined) result.eval_count = data.eval_count;
return result;
}
/**
* Generate embedding for text. Uses POST /api/embed.
*/
async embed(input: string, model: string, opts: { timeoutMs?: number } = {}): Promise<EmbedResult> {
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}/api/embed`;
const body = { model, input };
const res = await this.fetchWithRetry(url, body, timeoutMs);
const data = (await res.json()) as {
embeddings?: number[][];
prompt_eval_count?: number;
};
const embedding = Array.isArray(data.embeddings) && data.embeddings[0] ? data.embeddings[0] : [];
const result: EmbedResult = { embedding };
if (data.prompt_eval_count !== undefined) result.prompt_eval_count = data.prompt_eval_count;
return result;
}
/**
* Warm up a model by sending a minimal prompt, ensuring it is loaded into memory.
* The keep_alive parameter controls how long the model stays in memory after the call.
*/
async warmup(model: string, keepAlive: string | number): Promise<void> {
const url = `${this.baseUrl}`;
const body = {
model,
messages: [{ role: "user", content: "hello" }],
stream: false,
keep_alive: keepAlive,
};
try {
await this.fetchWithRetry(url, body, this.timeoutMs);
} catch (err) {
throw new Error(
`OllamaAdapter.warmup failed for model "${model}": ${err instanceof Error ? err.message : String(err)
}`,
);
}
}
/**
* Stream chat response. Returns async iterator of chunks (NDJSON).
*/
async *streamChat(
messages: ChatMessage[],
model: string,
opts: ChatOptions = {},
): AsyncGenerator<StreamChunk> {
const timeoutMs = opts.timeoutMs ?? this.timeoutMs;
const url = `${this.baseUrl}`;
const body: Record<string, unknown> = { model, messages, stream: true };
if (opts.keep_alive !== undefined) body.keep_alive = opts.keep_alive;
if (opts.options !== undefined) body.options = opts.options;
const res = await this.fetchWithRetry(url, body, timeoutMs);
if (!res.body) return;
const reader = res.body.getReader();
const decoder = new TextDecoder();
let buffer = "";
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop() ?? "";
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed) continue;
try {
const chunk = JSON.parse(trimmed) as StreamChunk;
yield chunk;
} catch {
// skip malformed line
}
}
}
if (buffer.trim()) {
try {
yield JSON.parse(buffer.trim()) as StreamChunk;
} catch {
// skip
}
}
} finally {
reader.releaseLock();
}
}
private async fetchWithRetry(
url: string,
body: unknown,
timeoutMs: number,
): Promise<Response> {
let lastError: unknown;
for (let attempt = 0; attempt <= this.retries; attempt++) {
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), timeoutMs);
const res = await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
signal: controller.signal,
});
clearTimeout(timeout);
if (!res.ok) {
const text = await res.text();
throw new Error(`Ollama ${res.status}: ${text}`);
}
return res;
} catch (err) {
lastError = err;
const isRetryable =
err instanceof Error &&
(err.name === "AbortError" ||
err.message.includes("fetch") ||
err.message.includes("ECONNREFUSED") ||
err.message.includes("network") ||
err.message.includes("reset") ||
err.message.includes("hangup"));
if (attempt === this.retries || !isRetryable) {
if (err instanceof Error && (err as any).cause) {
const cause = (err as any).cause;
const causeMsg = cause instanceof Error ? cause.message : String(cause);
err.message += ` (Cause: ${causeMsg})`;
}
throw err;
}
// Wait before retry: 1s, 2s...
console.warn(`[ollama-adapter] Fetch failed: ${err.message}. Retrying in ${(attempt + 1)}s... (Attempt ${attempt + 1}/${this.retries})`);
await new Promise((resolve) => setTimeout(resolve, (attempt + 1) * 1000));
}
}
throw lastError;
}
}

View File

@@ -1,6 +1,6 @@
/**
* RAG Service: semantic memory via embeddings and similarity search.
* Uses Ollama for embeddings; SQLite for persistent storage.
* Uses Lemonade for embeddings; SQLite for persistent storage.
* P6-01: SQLite persistence. P7-01: sqlite-vss for scalable KNN with fallback to dot-product.
*/

View File

@@ -1,4 +1,4 @@
import { readFileSync, existsSync } from "node:fs";
import { readFileSync, existsSync, readdirSync } from "node:fs";
import { join, resolve } from "node:path";
import { getConfig } from "../shared/config.js";
import { TELEGRAM_HTML_FORMAT_INSTRUCTION } from "../agents/prompts/telegram-html.js";
@@ -19,17 +19,48 @@ export class SkillManager {
}
/**
* List all available skills from CONFIG.md.
* List all available skills by scanning the skills directory.
* Each skill is a subdirectory containing a SKILL.md file.
* The description is extracted from the first line of SKILL.md.
*/
public listSkills(): SkillInfo[] {
const configPath = join(this.skillsDir, "CONFIG.md");
if (!existsSync(configPath)) return [];
if (!existsSync(this.skillsDir)) return [];
try {
const content = readFileSync(configPath, "utf-8");
return this.parseConfig(content);
const entries = readdirSync(this.skillsDir, { withFileTypes: true });
const skills: SkillInfo[] = [];
for (const entry of entries) {
// Ignore hidden directories and those starting with underscore (disabled)
if (entry.isDirectory() && !entry.name.startsWith(".") && !entry.name.startsWith("_")) {
const skillMdPath = join(this.skillsDir, entry.name, "SKILL.md");
if (existsSync(skillMdPath)) {
try {
const content = readFileSync(skillMdPath, "utf-8");
const lines = content.split("\n").filter(l => l.trim() !== "");
const firstLine = lines[0]?.trim();
if (!firstLine) continue;
let description = firstLine;
if (firstLine.toLowerCase().startsWith("description:")) {
description = firstLine.substring("description:".length).trim();
}
if (description) {
skills.push({
name: entry.name,
description: description
});
}
} catch (err) {
console.error(`Failed to read SKILL.md for ${entry.name}:`, err);
}
}
}
}
return skills;
} catch (err) {
console.error("Failed to load skills config:", err);
console.error("Failed to list skills by scanning directory:", err);
return [];
}
}
@@ -52,41 +83,5 @@ export class SkillManager {
}
}
/**
* Simple markdown table/list parser for CONFIG.md.
*/
private parseConfig(content: string): SkillInfo[] {
const lines = content.split("\n");
const skills: SkillInfo[] = [];
for (const line of lines) {
// Handle table rows: | name | description |
if (line.includes("|")) {
const parts = line.split("|").map(p => p.trim()).filter(p => p !== "");
if (parts.length >= 2 && parts[0] && parts[1]) {
const name = parts[0];
const desc = parts[1];
if (name.toLowerCase() !== "name" && !name.startsWith("---")) {
skills.push({
name,
description: desc
});
}
}
}
// Handle list items: - name: description or names: description
else if (line.trim().startsWith("-") || line.trim().startsWith("*")) {
const clean = line.trim().substring(1).trim();
const colonIndex = clean.indexOf(":");
if (colonIndex !== -1) {
const name = clean.substring(0, colonIndex).trim();
const description = clean.substring(colonIndex + 1).trim();
if (name && description) {
skills.push({ name, description });
}
}
}
}
return skills;
}
}
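The first-line description convention that replaces `parseConfig` can be isolated as a small helper (hypothetical `extractDescription`, shown for illustration; `listSkills` inlines this logic): take the first non-empty line of SKILL.md, and strip an optional case-insensitive `Description:` prefix.

```typescript
// Sketch of the new description convention: the first non-empty line of
// SKILL.md is the skill description, with an optional "Description:" prefix.
function extractDescription(skillMd: string): string | null {
  const firstLine = skillMd
    .split("\n")
    .map((l) => l.trim())
    .find((l) => l !== "");
  if (!firstLine) return null;
  return firstLine.toLowerCase().startsWith("description:")
    ? firstLine.slice("description:".length).trim()
    : firstLine;
}

console.log(extractDescription("Description: MANDATORY. Weather skill.\n# Weather Skill"));
// "MANDATORY. Weather skill."
```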