diff --git a/README.md b/README.md index e93329b..5d58eee 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@
NixOS Logo - # NixOS Home Lab Infrastructure +# NixOS Home Lab Infrastructure [![NixOS](https://img.shields.io/badge/NixOS-25.05-blue.svg)](https://nixos.org/) [![Flakes](https://img.shields.io/badge/Nix-Flakes-green.svg)](https://nixos.wiki/wiki/Flakes) @@ -11,7 +11,8 @@ Modular NixOS flake configuration for multi-machine home lab infrastructure. Features declarative system configuration, centralized user management, and scalable service deployment across development workstations and server infrastructure. # Vibe DevSecOpsing with claude-sonnet 4 and github-copilot -A project about handling pets. If you want to handle sheep, look elsewhere :-) + +A project about handling pets. If you want to handle sheep, look elsewhere :-) ## Quick Start @@ -33,12 +34,14 @@ sudo nixos-rebuild switch --flake .# ## Architecture Overview ### Machine Types + - **Development Workstation** - High-performance development environment with desktop environments - **File Server** - ZFS storage with NFS services and media management - **Application Server** - Containerized services (Git hosting, media server, web applications) - **Reverse Proxy** - External gateway with SSL termination and service routing ### Technology Stack + - **Base OS**: NixOS 25.05 with Nix Flakes - **Configuration**: Modular, declarative system configuration - **Virtualization**: Incus containers, Libvirt/QEMU VMs, Podman containers @@ -75,18 +78,21 @@ Home-lab/ NixOS ### Modular Design + - **Single Responsibility**: Each module handles one aspect of system configuration - **Composable**: Modules can be mixed and matched per machine requirements - **Testable**: Individual modules can be validated independently - **Documented**: Clear documentation for module purpose and configuration ### User Management Strategy + - **Role-based Users**: Separate users for desktop vs server administration - **Centralized Configuration**: Consistent user setup across all machines - **Security Focus**: SSH key management and privilege separation - **Literate Dotfiles**: Org-mode documentation for complex configurations ### Network Architecture + - **Mesh VPN**: Tailscale for secure inter-machine communication - **Service Discovery**: Centralized hostname resolution - **Firewall Management**: Service-specific port configuration @@ -95,6 +101,7 @@ Home-lab/ ## Development Workflow ### Local Testing + ```bash # Validate configuration syntax nix flake check @@ -110,12 +117,14 @@ sudo nixos-rebuild switch --flake .# ``` ### Git Workflow + 1. **Feature Branch**: Create branch for configuration changes 2. **Local Testing**: Validate changes with `nixos-rebuild test` 3. **Pull Request**: Submit changes for review 4. 
**Deploy**: Apply configuration to target machines ### Remote Deployment + - **SSH-based**: Remote deployment via secure shell - **Atomic Updates**: Complete success or automatic rollback - **Health Checks**: Service validation after deployment @@ -124,6 +133,7 @@ sudo nixos-rebuild switch --flake .# ## Service Architecture ### Core Services + - **Git Hosting**: Self-hosted Git with CI/CD capabilities - **Media Server**: Streaming with transcoding support - **File Storage**: NFS network storage with ZFS snapshots @@ -131,12 +141,14 @@ sudo nixos-rebuild switch --flake .# - **Container Platform**: Podman for containerized applications ### Service Discovery + - **Internal DNS**: Tailscale for mesh network resolution - **External DNS**: Public domain with SSL certificates - **Service Mesh**: Inter-service communication via secure network - **Load Balancing**: Traffic distribution and failover ### Data Management + - **ZFS Storage**: Copy-on-write filesystem with snapshots - **Network Shares**: NFS for cross-machine file access - **Backup Strategy**: Automated snapshots and external backup @@ -145,18 +157,21 @@ sudo nixos-rebuild switch --flake .# ## Security Model ### Network Security + - **VPN Mesh**: All inter-machine traffic via Tailscale - **Firewall Rules**: Service-specific port restrictions - **SSH Hardening**: Key-based authentication only - **Fail2ban**: Automated intrusion prevention ### User Security + - **Role Separation**: Administrative vs daily-use accounts - **Key Management**: Centralized SSH key distribution - **Privilege Escalation**: Sudo access only where needed - **Service Accounts**: Dedicated accounts for automated services ### Infrastructure Security + - **Configuration as Code**: All changes tracked in version control - **Atomic Deployments**: Rollback capability for failed changes - **Secret Management**: Encrypted secrets with controlled access @@ -165,12 +180,14 @@ sudo nixos-rebuild switch --flake .# ## Testing Strategy ### Automated Testing + - **Syntax Validation**: Nix flake syntax checking - **Build Testing**: Configuration build verification - **Module Testing**: Individual component validation - **Integration Testing**: Full system deployment tests ### Manual Testing + - **Boot Validation**: System startup verification - **Service Health**: Application functionality checks - **Network Connectivity**: Inter-service communication tests @@ -179,6 +196,7 @@ sudo nixos-rebuild switch --flake .# ## Deployment Status ### Infrastructure Maturity + - ✅ **Multi-machine Configuration**: 4 machines deployed - ✅ **Service Integration**: Git hosting, media server, file storage - ✅ **Network Mesh**: Secure VPN with service discovery @@ -186,6 +204,7 @@ sudo nixos-rebuild switch --flake .# - ✅ **Centralized Management**: Single repository for all infrastructure ### Current Capabilities + - **Development Environment**: Full IDE setup with multiple desktop options - **File Services**: Network storage with 900GB+ media library - **Git Hosting**: Self-hosted with external access @@ -199,7 +218,6 @@ sudo nixos-rebuild switch --flake .# - **[Branching Strategy](BRANCHING_STRATEGY.md)**: Git workflow and conventions - **[AI Instructions](instruction.md)**: Agent guidance for system management - ## License MIT License - see [LICENSE](LICENSE) for details. 
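The README's Remote Deployment section describes the workflow's shape without showing a command. As a minimal sketch of that SSH-based, atomic, rollback-capable flow: `nixos-rebuild` can build locally and activate over SSH via `--target-host`. The flake attribute `sleeper-service` and the hostname below are hypothetical placeholders, not names taken from this repository.

```bash
# Hedged sketch of the SSH-based remote deployment described above.
# "sleeper-service" and the target hostname are hypothetical placeholders.

# Build the system closure and atomically activate it on the remote host;
# if activation fails, the previous generation remains bootable.
nixos-rebuild switch \
  --flake .#sleeper-service \
  --target-host admin@sleeper-service \
  --use-remote-sudo

# Basic post-deployment health check: list any failed systemd units.
ssh admin@sleeper-service systemctl --failed
```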
diff --git a/packages/.cursor/mcp.json b/packages/.cursor/mcp.json new file mode 100644 index 0000000..7e49eb3 --- /dev/null +++ b/packages/.cursor/mcp.json @@ -0,0 +1,23 @@ +{ + "mcpServers": { + "task-master-ai": { + "command": "npx", + "args": [ + "-y", + "--package=task-master-ai", + "task-master-ai" + ], + "env": { + "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE", + "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE", + "OPENAI_API_KEY": "OPENAI_API_KEY_HERE", + "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE", + "XAI_API_KEY": "XAI_API_KEY_HERE", + "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE", + "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE", + "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE", + "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE" + } + } + } +} \ No newline at end of file diff --git a/packages/.cursor/rules/cursor_rules.mdc b/packages/.cursor/rules/cursor_rules.mdc new file mode 100644 index 0000000..7dfae3d --- /dev/null +++ b/packages/.cursor/rules/cursor_rules.mdc @@ -0,0 +1,53 @@ +--- +description: Guidelines for creating and maintaining Cursor rules to ensure consistency and effectiveness. +globs: .cursor/rules/*.mdc +alwaysApply: true +--- + +- **Required Rule Structure:** + ```markdown + --- + description: Clear, one-line description of what the rule enforces + globs: path/to/files/*.ext, other/path/**/* + alwaysApply: boolean + --- + + - **Main Points in Bold** + - Sub-points with details + - Examples and explanations + ``` + +- **File References:** + - Use `[filename](mdc:path/to/file)` ([filename](mdc:filename)) to reference files + - Example: [prisma.mdc](mdc:.cursor/rules/prisma.mdc) for rule references + - Example: [schema.prisma](mdc:prisma/schema.prisma) for code references + +- **Code Examples:** + - Use language-specific code blocks + ```typescript + // ✅ DO: Show good examples + const goodExample = true; + + // ❌ DON'T: Show anti-patterns + const badExample = false; + ``` + +- **Rule Content Guidelines:** + - Start with high-level overview + - Include specific, actionable requirements + - Show examples of correct implementation + - Reference existing code when possible + - Keep rules DRY by referencing other rules + +- **Rule Maintenance:** + - Update rules when new patterns emerge + - Add examples from actual codebase + - Remove outdated patterns + - Cross-reference related rules + +- **Best Practices:** + - Use bullet points for clarity + - Keep descriptions concise + - Include both DO and DON'T examples + - Reference actual code over theoretical examples + - Use consistent formatting across rules \ No newline at end of file diff --git a/packages/.cursor/rules/dev_workflow.mdc b/packages/.cursor/rules/dev_workflow.mdc new file mode 100644 index 0000000..3333ce9 --- /dev/null +++ b/packages/.cursor/rules/dev_workflow.mdc @@ -0,0 +1,412 @@ +--- +description: Guide for using Taskmaster to manage task-driven development workflows +globs: **/* +alwaysApply: true +--- + +# Taskmaster Development Workflow + +This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent. + +- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges. +- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. 
Do not force tags on the user; suggest them as a helpful solution to a specific need. + +## The Basic Loop +The fundamental development cycle you will facilitate is: +1. **`list`**: Show the user what needs to be done. +2. **`next`**: Help the user decide what to work on. +3. **`show <id>`**: Provide details for a specific task. +4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks. +5. **Implement**: The user writes the code and tests. +6. **`update-subtask`**: Log progress and findings on behalf of the user. +7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed. +8. **Repeat**. + +All your standard command executions should operate on the user's current task context, which defaults to `master`. + +--- + +## Standard Development Workflow Process + +### Simple Workflow (Default Starting Point) + +For new projects or when users are getting started, operate within the `master` tag context: + +- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json with tagged structure +- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs +- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks +- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Select tasks based on dependencies (all marked 'done'), priority level, and ID order +- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements +- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research` +- Implement code following task details, dependencies, and project standards +- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) + +--- + +## Leveling Up: Agent-Led Multi-Context Workflows + +While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session. + +**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management. + +### When to Introduce Tags: Your Decision Patterns + +Here are the patterns to look for. 
When you detect one, you should propose the corresponding workflow to the user. + +#### Pattern 1: Simple Git Feature Branching +This is the most common and direct use case for tags. + +- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`). +- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`. +- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"* +- **Tool to Use**: `task-master add-tag --from-branch` + +#### Pattern 2: Team Collaboration +- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API."). +- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context. +- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"* +- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"` + +#### Pattern 3: Experiments or Risky Refactors +- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference."). +- **Your Action**: Propose creating a sandboxed tag for the experimental work. +- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"* +- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"` + +#### Pattern 4: Large Feature Initiatives (PRD-Driven) +This is a more structured approach for significant new features or epics. + +- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan. +- **Your Action**: Propose a comprehensive, PRD-driven workflow. +- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"* +- **Your Implementation Flow**: + 1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch. + 2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`). + 3. 
**Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz` + 4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag. + +#### Pattern 5: Version-Based Development +Tailor your approach based on the project maturity indicated by tag names. + +- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`): + - **Your Approach**: Focus on speed and functionality over perfection + - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect" + - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths + - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization" + - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."* + +- **Production/Mature Tags** (`v1.0+`, `production`, `stable`): + - **Your Approach**: Emphasize robustness, testing, and maintainability + - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization + - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths + - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability" + - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."* + +### Advanced Workflow (Tag-Based & PRD-Driven) + +**When to Transition**: Recognize when the project has evolved (or the user has initiated a project with existing code) beyond simple task management. Look for these indicators: +- User mentions teammates or collaboration needs +- Project has grown to 15+ tasks with mixed priorities +- User creates feature branches or mentions major initiatives +- User initializes Taskmaster on an existing, complex codebase +- User describes large features that would benefit from dedicated planning + +**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning. + +#### Master List Strategy (High-Value Focus) +Once you transition to tag-based workflows, the `master` tag should ideally contain only: +- **High-level deliverables** that provide significant business value +- **Major milestones** and epic-level features +- **Critical infrastructure** work that affects the entire project +- **Release-blocking** items + +**What NOT to put in master**: +- Detailed implementation subtasks (these go in feature-specific tags' parent tasks) +- Refactoring work (create dedicated tags like `refactor-auth`) +- Experimental features (use `experiment-*` tags) +- Team member-specific tasks (use person-specific tags) + +#### PRD-Driven Feature Development + +**For New Major Features**: +1. **Identify the Initiative**: When user describes a significant feature +2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"` +3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt` +4. **Parse & Prepare**: + - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]` + - `analyze_project_complexity --tag=feature-[name] --research` + - `expand_all --tag=feature-[name] --research` +5. 
**Add Master Reference**: Create a high-level task in `master` that references the feature tag + +**For Existing Codebase Analysis**: +When users initialize Taskmaster on existing projects: +1. **Codebase Discovery**: Use your native tools for producing deep context about the code base. You may use `research` tool with `--tree` and `--files` to collect up-to-date information using the existing architecture as context. +2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features +3. **Strategic PRD Creation**: Co-author PRDs that include: + - Current state analysis (based on your codebase research) + - Proposed improvements or new features + - Implementation strategy considering existing code +4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.) +5. **Master List Curation**: Keep only the most valuable initiatives in master + +The parse-prd's `--append` flag enables the user to parse multiple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail. + +### Workflow Transition Examples + +**Example 1: Simple → Team-Based** +``` +User: "Alice is going to help with the API work" +Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together." +Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice" +``` + +**Example 2: Simple → PRD-Driven** +``` +User: "I want to add a complete user dashboard with analytics, user management, and reporting" +Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements." +Actions: +1. add_tag feature-dashboard --description="User dashboard with analytics and management" +2. Collaborate on PRD creation +3. parse_prd dashboard-prd.txt --tag=feature-dashboard +4. Add high-level "User Dashboard" task to master +``` + +**Example 3: Existing Project → Strategic Planning** +``` +User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it." +Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements." +Actions: +1. research "Current React app architecture and improvement opportunities" --tree --files=src/ +2. Collaborate on improvement PRD based on findings +3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.) +4. Keep only major improvement initiatives in master +``` + +--- + +## Primary Interaction: MCP Server vs. CLI + +Taskmaster offers two primary ways to interact: + +1. **MCP Server (Recommended for Integrated Tools)**: + - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**. + - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`). + - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing. 
+ - Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools. + - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). + - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change. + - **Note**: MCP tools fully support tagged task lists with complete tag management capabilities. + +2. **`task-master` CLI (For Users & Fallback)**: + - The global `task-master` command provides a user-friendly interface for direct terminal interaction. + - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP. + - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`. + - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`). + - Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference. + - **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration. + +## How the Tag System Works (For Your Reference) + +- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0". +- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption. +- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag. +- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`. +- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a full command list. + +--- + +## Task Complexity Analysis + +- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis +- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for a formatted, readable version. +- Focus on tasks with highest complexity scores (8-10) for detailed breakdown +- Use analysis results to determine appropriate subtask allocation +- Note that reports are automatically used by the `expand_task` tool/command + +## Task Breakdown Process + +- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks. +- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations. +- Add `--research` flag to leverage Perplexity AI for research-backed expansion. +- Add `--force` flag to clear existing subtasks before generating new ones (default is to append). +- Use `--prompt="<context>"` to provide additional context when needed. +- Review and adjust generated subtasks as necessary. +- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`. +- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`. 
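A minimal end-to-end sketch of this breakdown flow. The flags are the ones documented above; the task ID and prompt text are illustrative only:

```bash
# 1. Analyze complexity across the project (research-backed).
task-master analyze-complexity --research

# 2. Review the formatted report and its recommended subtask counts.
task-master complexity-report

# 3. Expand hypothetical task 5; the complexity report is used automatically.
#    --force replaces any existing subtasks instead of appending.
task-master expand --id=5 --force --research

# 4. If the breakdown later needs a clean slate, clear and re-expand
#    with an explicit subtask count and extra context.
task-master clear-subtasks --id=5
task-master expand --id=5 --num=4 --prompt="Focus on error handling"
```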
+ +## Implementation Drift Handling + +- When implementation differs significantly from planned approach +- When future tasks need modification due to current implementation choices +- When new dependencies or requirements emerge +- Use `update` / `task-master update --from=<id> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks. +- Use `update_task` / `task-master update-task --id=<id> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task. + +## Task Status Management + +- Use 'pending' for tasks ready to be worked on +- Use 'done' for completed and verified tasks +- Use 'deferred' for postponed tasks +- Add custom status values as needed for project-specific workflows + +## Task Structure Fields + +- **id**: Unique identifier for the task (Example: `1`, `1.1`) +- **title**: Brief, descriptive title (Example: `"Initialize Repo"`) +- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) +- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) +- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`) + - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) + - This helps quickly identify which prerequisite tasks are blocking work +- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) +- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) +- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) +- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) +- Refer to task structure details (previously linked to `tasks.mdc`). + +## Configuration Management (Updated) + +Taskmaster configuration is managed through two main mechanisms: + +1. **`.taskmaster/config.json` File (Primary):** + * Located in the project root directory. + * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc. + * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and `tags` section for tag management configuration. + * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing. + * **View/Set specific models via `task-master models` command or `models` MCP tool.** + * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration. + +2. **Environment Variables (`.env` / `mcp.json`):** + * Used **only** for sensitive API keys and specific endpoint URLs. + * Place API keys (one per provider) in a `.env` file in the project root for CLI usage. + * For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`. + * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`). + +3. **`.taskmaster/state.json` File (Tagged System State):** + * Tracks current tag context and migration status. + * Automatically created during tagged system migration. + * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`. + +**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. 
Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool. +**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`. +**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project. + +## Determining the Next Task + +- Run `next_task` / `task-master next` to show the next task to work on. +- The command identifies tasks with all dependencies satisfied +- Tasks are prioritized by priority level, dependency count, and ID +- The command shows comprehensive task information including: + - Basic task details and description + - Implementation details + - Subtasks (if they exist) + - Contextual suggested actions +- Recommended before starting any new development work +- Respects your project's dependency structure +- Ensures tasks are completed in the appropriate sequence +- Provides ready-to-use commands for common task actions + +## Viewing Specific Task Details + +- Run `get_task` / `task-master show <id>` to view a specific task. +- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1) +- Displays comprehensive information similar to the next command, but for a specific task +- For parent tasks, shows all subtasks and their current status +- For subtasks, shows parent task information and relationship +- Provides contextual suggested actions appropriate for the specific task +- Useful for examining task details before implementation or checking status + +## Managing Task Dependencies + +- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency. +- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency. +- The system prevents circular dependencies and duplicate dependency entries +- Dependencies are checked for existence before being added or removed +- Task files are automatically regenerated after dependency changes +- Dependencies are visualized with status indicators in task listings and files + +## Task Reorganization + +- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy +- This command supports several use cases: + - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`) + - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`) + - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`) + - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`) + - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`) + - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`) +- The system includes validation to prevent data loss: + - Allows moving to non-existent IDs by creating placeholder tasks + - Prevents moving to existing task IDs that have content (to avoid overwriting) + - Validates source tasks exist before attempting to move them +- The system maintains proper parent-child relationships and dependency integrity +- Task files are automatically regenerated after the move operation +- This provides greater flexibility in organizing and refining your task structure as project understanding evolves +- This is especially useful when dealing with potential merge conflicts arising from teams creating tasks on separate branches. Solve these conflicts very easily by moving your tasks and keeping theirs. 
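A short sketch of the move operations listed above. The IDs are hypothetical and each command is an independent example; the flag syntax is the documented `--from`/`--to` form:

```bash
# Convert standalone task 5 into a subtask of task 7.
task-master move --from=5 --to=7

# Promote subtask 5.2 to a standalone task with ID 7 (independent example).
task-master move --from=5.2 --to=7

# Resolve a merge conflict: relocate your tasks 10-12 to free ID slots
# 16-18 so a teammate's tasks 10-12 from another branch can be kept.
task-master move --from=10,11,12 --to=16,17,18
```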
+ +## Iterative Subtask Implementation + +Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation: + +1. **Understand the Goal (Preparation):** + * Use `get_task` / `task-master show <subtaskId>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to thoroughly understand the specific goals and requirements of the subtask. + +2. **Initial Exploration & Planning (Iteration 1):** + * This is the first attempt at creating a concrete implementation plan. + * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification. + * Determine the intended code changes (diffs) and their locations. + * Gather *all* relevant details from this exploration phase. + +3. **Log the Plan:** + * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`. + * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`. + +4. **Verify the Plan:** + * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details. + +5. **Begin Implementation:** + * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`. + * Start coding based on the logged plan. + +6. **Refine and Log Progress (Iteration 2+):** + * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches. + * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy. + * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings. + * **Crucially, log:** + * What worked ("fundamental truths" discovered). + * What didn't work and why (to avoid repeating mistakes). + * Specific code snippets or configurations that were successful. + * Decisions made, especially if confirmed with user input. + * Any deviations from the initial plan and the reasoning. + * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors. + +7. **Review & Update Rules (Post-Implementation):** + * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history. + * Identify any new or modified code patterns, conventions, or best practices established during the implementation. + * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`). + +8. **Mark Task Complete:** + * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`. + +9. **Commit Changes (If using Git):** + * Stage the relevant code changes and any updated/new rule files (`git add .`). + * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments. 
+ * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`). + * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one. + +10. **Proceed to Next Subtask:** + * Identify the next subtask (e.g., using `next_task` / `task-master next`). + +## Code Analysis & Refactoring Techniques + +- **Top-Level Function Search**: + - Useful for understanding module structure or planning refactors. + - Use grep/ripgrep to find exported functions/constants: + `rg "export (async function|function|const) \w+"` or similar patterns. + - Can help compare functions between files during migrations or identify potential naming conflicts. + +--- +*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.* \ No newline at end of file diff --git a/packages/.cursor/rules/self_improve.mdc b/packages/.cursor/rules/self_improve.mdc new file mode 100644 index 0000000..40b31b6 --- /dev/null +++ b/packages/.cursor/rules/self_improve.mdc @@ -0,0 +1,72 @@ +--- +description: Guidelines for continuously improving Cursor rules based on emerging code patterns and best practices. +globs: **/* +alwaysApply: true +--- + +- **Rule Improvement Triggers:** + - New code patterns not covered by existing rules + - Repeated similar implementations across files + - Common error patterns that could be prevented + - New libraries or tools being used consistently + - Emerging best practices in the codebase + +- **Analysis Process:** + - Compare new code with existing rules + - Identify patterns that should be standardized + - Look for references to external documentation + - Check for consistent error handling patterns + - Monitor test patterns and coverage + +- **Rule Updates:** + - **Add New Rules When:** + - A new technology/pattern is used in 3+ files + - Common bugs could be prevented by a rule + - Code reviews repeatedly mention the same feedback + - New security or performance patterns emerge + + - **Modify Existing Rules When:** + - Better examples exist in the codebase + - Additional edge cases are discovered + - Related rules have been updated + - Implementation details have changed + +- **Example Pattern Recognition:** + ```typescript + // If you see repeated patterns like: + const data = await prisma.user.findMany({ + select: { id: true, email: true }, + where: { status: 'ACTIVE' } + }); + + // Consider adding to [prisma.mdc](mdc:.cursor/rules/prisma.mdc): + // - Standard select fields + // - Common where conditions + // - Performance optimization patterns + ``` + +- **Rule Quality Checks:** + - Rules should be actionable and specific + - Examples should come from actual code + - References should be up to date + - Patterns should be consistently enforced + +- **Continuous Improvement:** + - Monitor code review comments + - Track common development questions + - Update rules after major refactors + - Add links to relevant documentation + - Cross-reference related rules + +- **Rule Deprecation:** + - Mark outdated patterns as deprecated + - Remove rules that no longer apply + - Update references to deprecated rules + - Document migration paths for old patterns + +- **Documentation Updates:** + - Keep examples synchronized with code + - Update references to external docs + - Maintain links 
between related rules + - Document breaking changes +Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure. diff --git a/packages/.cursor/rules/taskmaster.mdc b/packages/.cursor/rules/taskmaster.mdc new file mode 100644 index 0000000..b4fe6df --- /dev/null +++ b/packages/.cursor/rules/taskmaster.mdc @@ -0,0 +1,557 @@ +--- +description: Comprehensive reference for Taskmaster MCP tools and CLI commands. +globs: **/* +alwaysApply: true +--- +# Taskmaster Tool & Command Reference + +This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback. + +**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. + +**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`. + +**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag. + +--- + +## Initialization & Setup + +### 1. Initialize Project (`init`) + +* **MCP Tool:** `initialize_project` +* **CLI Command:** `task-master init [options]` +* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.` +* **Key CLI Options:** + * `--name <name>`: `Set the name for your project in Taskmaster's configuration.` + * `--description <text>`: `Provide a brief description for your project.` + * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.` + * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.` +* **Usage:** Run this once at the beginning of a new project. +* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.` +* **Key MCP Parameters/Options:** + * `projectName`: `Set the name for your project.` (CLI: `--name <name>`) + * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`) + * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`) + * `authorName`: `Author name.` (CLI: `--author <author_name>`) + * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`) + * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`) + * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`) +* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server. +* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. 
The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt. +* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`. + +### 2. Parse PRD (`parse_prd`) + +* **MCP Tool:** `parse_prd` +* **CLI Command:** `task-master parse-prd [file] [options]` +* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.` +* **Key Parameters/Options:** + * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`) + * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`) + * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`) + * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`) +* **Usage:** Useful for bootstrapping a project from an existing requirements document. +* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`. + +--- + +## AI Model Configuration + +### 2. Manage Models (`models`) +* **MCP Tool:** `models` +* **CLI Command:** `task-master models [options]` +* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.` +* **Key MCP Parameters/Options:** + * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`) + * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`) + * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`) + * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`) + * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`) + * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically) + * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically) +* **Key CLI Options:** + * `--set-main <model_id>`: `Set the primary model.` + * `--set-research <model_id>`: `Set the research model.` + * `--set-fallback <model_id>`: `Set the fallback model.` + * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).` + * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). 
Validates against OpenRouter API.` + * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).` + * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.` +* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`. +* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`. +* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live. +* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them. +* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80. +* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback. + +--- + +## Task Listing & Viewing + +### 3. Get Tasks (`get_tasks`) + +* **MCP Tool:** `get_tasks` +* **CLI Command:** `task-master list [options]` +* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.` +* **Key Parameters/Options:** + * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`) + * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Get an overview of the project status, often used at the start of a work session. + +### 4. Get Next Task (`next_task`) + +* **MCP Tool:** `next_task` +* **CLI Command:** `task-master next [options]` +* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`) +* **Usage:** Identify what to work on next according to the plan. + +### 5. Get Task Details (`get_task`) + +* **MCP Tool:** `get_task` +* **CLI Command:** `task-master show [id] [options]` +* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.` +* **Key Parameters/Options:** + * `id`: `Required. 
The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`) + * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown. +* **CRITICAL INFORMATION** If you need to collect information from multiple tasks, use comma-separated IDs (i.e. 1,2,3) to receive an array of tasks. Do not needlessly get tasks one at a time if you need to get many as that is wasteful. + +--- + +## Task Creation & Modification + +### 6. Add Task (`add_task`) + +* **MCP Tool:** `add_task` +* **CLI Command:** `task-master add-task [options]` +* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.` +* **Key Parameters/Options:** + * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`) + * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`) + * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`) + * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Quickly add newly identified tasks during development. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 7. Add Subtask (`add_subtask`) + +* **MCP Tool:** `add_subtask` +* **CLI Command:** `task-master add-subtask [options]` +* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.` +* **Key Parameters/Options:** + * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`) + * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`) + * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`) + * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`) + * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`) + * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`) + * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. 
Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Break down tasks manually or reorganize existing tasks. + +### 8. Update Tasks (`update`) + +* **MCP Tool:** `update` +* **CLI Command:** `task-master update [options]` +* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.` +* **Key Parameters/Options:** + * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`) + * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 9. Update Task (`update_task`) + +* **MCP Tool:** `update_task` +* **CLI Command:** `task-master update-task [options]` +* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`) + * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 10. Update Subtask (`update_subtask`) + +* **MCP Tool:** `update_subtask` +* **CLI Command:** `task-master update-subtask [options]` +* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. 
The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 11. Set Task Status (`set_task_status`) + +* **MCP Tool:** `set_task_status` +* **CLI Command:** `task-master set-status [options]` +* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`) + * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Mark progress as tasks move through the development cycle. + +### 12. Remove Task (`remove_task`) + +* **MCP Tool:** `remove_task` +* **CLI Command:** `task-master remove-task [options]` +* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`) + * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project. +* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks. + +--- + +## Task Structure & Breakdown + +### 13. Expand Task (`expand_task`) + +* **MCP Tool:** `expand_task` +* **CLI Command:** `task-master expand [options]` +* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.` +* **Key Parameters/Options:** + * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`) + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`) + * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. 
Requires appropriate API key.` (CLI: `-r, --research`)
+  * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
+  * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
+  * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
+* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
+
+### 14. Expand All Tasks (`expand_all`)
+
+* **MCP Tool:** `expand_all`
+* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
+* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
+* **Key Parameters/Options:**
+  * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
+  * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
+  * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
+  * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
+  * `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
+* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
+
+### 15. Clear Subtasks (`clear_subtasks`)
+
+* **MCP Tool:** `clear_subtasks`
+* **CLI Command:** `task-master clear-subtasks [options]`
+* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
+* **Key Parameters/Options:**
+  * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using `all`.` (CLI: `-i, --id <ids>`)
+  * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
+  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
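+
+As a sketch, a typical breakdown loop (task ID 15 is illustrative) might look like:
+
+```bash
+# Break task 15 into roughly three research-backed subtasks
+task-master expand --id=15 --num=3 --research
+
+# If the breakdown missed the mark, clear it and regenerate with extra context
+task-master clear-subtasks --id=15
+task-master expand --id=15 --prompt="Focus on error handling and retries"
+```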
+
+### 16. Remove Subtask (`remove_subtask`)
+
+* **MCP Tool:** `remove_subtask`
+* **CLI Command:** `task-master remove-subtask [options]`
+* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
+* **Key Parameters/Options:**
+  * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
+  * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
+  * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
+  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
+
+### 17. Move Task (`move_task`)
+
+* **MCP Tool:** `move_task`
+* **CLI Command:** `task-master move [options]`
+* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
+* **Key Parameters/Options:**
+  * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
+  * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
+  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
+  * Moving a task to become a subtask
+  * Moving a subtask to become a standalone task
+  * Moving a subtask to a different parent
+  * Reordering subtasks within the same parent
+  * Moving a task to a new, non-existent ID (automatically creates placeholders)
+  * Moving multiple tasks at once with comma-separated IDs
+* **Validation Features:**
+  * Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
+  * Prevents moving to existing task IDs that already have content (to avoid overwriting)
+  * Validates that source tasks exist before attempting to move them
+  * Maintains proper parent-child relationships
+* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
+* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
+* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.
+
+---
+
+## Dependency Management
+
+### 18. Add Dependency (`add_dependency`)
+
+* **MCP Tool:** `add_dependency`
+* **CLI Command:** `task-master add-dependency [options]`
+* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
+* **Key Parameters/Options:**
+  * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
+  * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
+  * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
+  * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
+* **Usage:** Establish the correct order of execution between tasks.
+
+### 19. 
Remove Dependency (`remove_dependency`) + +* **MCP Tool:** `remove_dependency` +* **CLI Command:** `task-master remove-dependency [options]` +* **Description:** `Remove a dependency relationship between two Taskmaster tasks.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Update task relationships when the order of execution changes. + +### 20. Validate Dependencies (`validate_dependencies`) + +* **MCP Tool:** `validate_dependencies` +* **CLI Command:** `task-master validate-dependencies [options]` +* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Audit the integrity of your task dependencies. + +### 21. Fix Dependencies (`fix_dependencies`) + +* **MCP Tool:** `fix_dependencies` +* **CLI Command:** `task-master fix-dependencies [options]` +* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Clean up dependency errors automatically. + +--- + +## Analysis & Reporting + +### 22. Analyze Project Complexity (`analyze_project_complexity`) + +* **MCP Tool:** `analyze_project_complexity` +* **CLI Command:** `task-master analyze-complexity [options]` +* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.` +* **Key Parameters/Options:** + * `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`) + * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`) + * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before breaking down tasks to identify which ones need the most attention. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 23. 
View Complexity Report (`complexity_report`) + +* **MCP Tool:** `complexity_report` +* **CLI Command:** `task-master complexity-report [options]` +* **Description:** `Display the task complexity analysis report in a readable format.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`) +* **Usage:** Review and understand the complexity analysis results after running analyze-complexity. + +--- + +## File Management + +### 24. Generate Task Files (`generate`) + +* **MCP Tool:** `generate` +* **CLI Command:** `task-master generate [options]` +* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.` +* **Key Parameters/Options:** + * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`) + * `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically. + +--- + +## AI-Powered Research + +### 25. Research (`research`) + +* **MCP Tool:** `research` +* **CLI Command:** `task-master research [options]` +* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.` +* **Key Parameters/Options:** + * `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`) + * `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`) + * `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`) + * `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`) + * `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`) + * `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`) + * `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`) + * `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`) + * `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`) + * `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `projectRoot`: `The directory of the project. 
Must be an absolute path.` (CLI: Determined automatically) +* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to: + * Get fresh information beyond knowledge cutoff dates + * Research latest best practices, library updates, security patches + * Find implementation examples for specific technologies + * Validate approaches against current industry standards + * Get contextual advice based on project files and tasks +* **When to Consider Using Research:** + * **Before implementing any task** - Research current best practices + * **When encountering new technologies** - Get up-to-date implementation guidance (libraries, apis, etc) + * **For security-related tasks** - Find latest security recommendations + * **When updating dependencies** - Research breaking changes and migration guides + * **For performance optimization** - Get current performance best practices + * **When debugging complex issues** - Research known solutions and workarounds +* **Research + Action Pattern:** + * Use `research` to gather fresh information + * Use `update_subtask` to commit findings with timestamps + * Use `update_task` to incorporate research into task details + * Use `add_task` with research flag for informed task creation +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments. + +--- + +## Tag Management + +This new suite of commands allows you to manage different task contexts (tags). + +### 26. List Tags (`tags`) + +* **MCP Tool:** `list_tags` +* **CLI Command:** `task-master tags [options]` +* **Description:** `List all available tags with task counts, completion status, and other metadata.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + * `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`) + +### 27. Add Tag (`add_tag`) + +* **MCP Tool:** `add_tag` +* **CLI Command:** `task-master add-tag <tagName> [options]` +* **Description:** `Create a new, empty tag context, or copy tasks from another tag.` +* **Key Parameters/Options:** + * `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional) + * `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`) + * `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`) + * `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`) + * `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 28. Delete Tag (`delete_tag`) + +* **MCP Tool:** `delete_tag` +* **CLI Command:** `task-master delete-tag <tagName> [options]` +* **Description:** `Permanently delete a tag and all of its associated tasks.` +* **Key Parameters/Options:** + * `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional) + * `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 29. 
Use Tag (`use_tag`) + +* **MCP Tool:** `use_tag` +* **CLI Command:** `task-master use-tag <tagName>` +* **Description:** `Switch your active task context to a different tag.` +* **Key Parameters/Options:** + * `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 30. Rename Tag (`rename_tag`) + +* **MCP Tool:** `rename_tag` +* **CLI Command:** `task-master rename-tag <oldName> <newName>` +* **Description:** `Rename an existing tag.` +* **Key Parameters/Options:** + * `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional) + * `newName`: `The new name for the tag.` (CLI: `<newName>` positional) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 31. Copy Tag (`copy_tag`) + +* **MCP Tool:** `copy_tag` +* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]` +* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.` +* **Key Parameters/Options:** + * `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional) + * `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional) + * `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`) + +--- + +## Miscellaneous + +### 32. Sync Readme (`sync-readme`) -- experimental + +* **MCP Tool:** N/A +* **CLI Command:** `task-master sync-readme [options]` +* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.` +* **Key Parameters/Options:** + * `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`) + * `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`) + +--- + +## Environment Variables Configuration (Updated) + +Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`. + +Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL: + +* **API Keys (Required for corresponding provider):** + * `ANTHROPIC_API_KEY` + * `PERPLEXITY_API_KEY` + * `OPENAI_API_KEY` + * `GOOGLE_API_KEY` + * `MISTRAL_API_KEY` + * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too) + * `OPENROUTER_API_KEY` + * `XAI_API_KEY` + * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too) +* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):** + * `AZURE_OPENAI_ENDPOINT` + * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`) + +**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool. + +--- + +For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc). 
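+
+As a quick sketch of the configuration described above (key values are placeholders), a fresh checkout can be wired up like this:
+
+```bash
+# Provider keys go in .env at the project root (for CLI use)
+echo 'ANTHROPIC_API_KEY="sk-ant-api03-..."' >> .env
+echo 'PERPLEXITY_API_KEY="pplx-..."' >> .env
+
+# Everything else (models, parameters, log level) lives in .taskmaster/config.json
+task-master models --setup
+```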
diff --git a/packages/.env.example b/packages/.env.example new file mode 100644 index 0000000..2c5babf --- /dev/null +++ b/packages/.env.example @@ -0,0 +1,10 @@ +# API Keys (Required to enable respective provider) +ANTHROPIC_API_KEY="your_anthropic_api_key_here" # Required: Format: sk-ant-api03-... +PERPLEXITY_API_KEY="your_perplexity_api_key_here" # Optional: Format: pplx-... +OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI/OpenRouter models. Format: sk-proj-... +GOOGLE_API_KEY="your_google_api_key_here" # Optional, for Google Gemini models. +MISTRAL_API_KEY="your_mistral_key_here" # Optional, for Mistral AI models. +XAI_API_KEY="YOUR_XAI_KEY_HERE" # Optional, for xAI AI models. +AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmaster/config.json). +OLLAMA_API_KEY="your_ollama_api_key_here" # Optional: For remote Ollama servers that require authentication. +GITHUB_API_KEY="your_github_api_key_here" # Optional: For GitHub import/export features. Format: ghp_... or github_pat_... \ No newline at end of file diff --git a/packages/.gitignore b/packages/.gitignore new file mode 100644 index 0000000..db6295f --- /dev/null +++ b/packages/.gitignore @@ -0,0 +1,25 @@ +# Logs +logs +*.log +npm-debug.log* +yarn-debug.log* +yarn-error.log* +dev-debug.log + +# Dependency directories +node_modules/ + +# Environment variables +.env + +# Editor directories and files +.idea +.vscode +*.suo +*.ntvs* +*.njsproj +*.sln +*.sw? + +# OS specific +.DS_Store \ No newline at end of file diff --git a/packages/.guile.instructions.md b/packages/.guile.instructions.md new file mode 100644 index 0000000..dd0d706 --- /dev/null +++ b/packages/.guile.instructions.md @@ -0,0 +1,846 @@ +# Guile Scheme Coding Instructions for Home Lab Tool + +## Functional Programming Principles + +**Core Philosophy**: Functional programming is about actions, data, and computation - compose small, pure functions to build complex behaviors. + +### 1. Pure Functions First +- Functions should be deterministic and side-effect free when possible +- Separate pure computation from I/O operations +- Use immutable data structures as default + +```scheme +;; Good: Pure function +(define (calculate-deployment-hash config) + (sha256 (scm->json-string config))) + +;; Better: Separate pure logic from I/O +(define (deployment-ready? machine-config current-state) + (and (eq? (assoc-ref machine-config 'status) 'configured) + (eq? (assoc-ref current-state 'connectivity) 'online))) + +;; I/O operations separate +(define (check-machine-deployment machine) + (let ((config (load-machine-config machine)) + (state (probe-machine-state machine))) + (deployment-ready? config state))) +``` + +### 2. 
Data-Driven Design
+- Represent configurations and state as data structures
+- Use association lists (alists) and vectors for structured data
+- Leverage Guile's homoiconicity (code as data)
+
+```scheme
+;; Machine configuration as data
+(define machine-specs
+  `((grey-area
+     (services (ollama jellyfin forgejo))
+     (deployment-method deploy-rs)
+     (backup-schedule weekly))
+    (sleeper-service
+     (services (nfs zfs monitoring))
+     (deployment-method hybrid-update)
+     (backup-schedule daily))))
+
+;; Operations on data
+(define (get-machine-services machine)
+  ;; Each entry looks like (services (svc ...)), so take the car of the value
+  (car (assoc-ref (assoc-ref machine-specs machine) 'services)))
+
+(define (machines-with-service service)
+  (filter (lambda (machine-spec)
+            (member service (get-machine-services (car machine-spec))))
+          machine-specs))
+```
+
+## Guile-Specific Idioms
+
+### 3. Module Organization
+- Use meaningful module hierarchies
+- Export only necessary public interfaces
+- Group related functionality together
+
+```scheme
+;; File: modules/lab/machines.scm
+(define-module (lab machines)
+  #:use-module (srfi srfi-1)   ; List processing
+  #:use-module (srfi srfi-26)  ; Cut/cute
+  #:use-module (ice-9 match)   ; Pattern matching
+  #:use-module (ssh session)
+  #:export (machine-status
+            deploy-machine
+            list-machines
+            machine-services))
+
+;; File: modules/lab/deployment.scm
+(define-module (lab deployment)
+  #:use-module (lab machines)
+  #:use-module (json)
+  #:export (deploy-rs
+            hybrid-update
+            rollback-deployment))
+```
+
+### 4. Error Handling the Scheme Way
+- Use exceptions for exceptional conditions
+- Return #f or special values for expected failures
+- Provide meaningful error context
+
+```scheme
+;; Use exceptions for programming errors
+(define (deploy-machine machine method)
+  (unless (member machine (list-machines))
+    (throw 'invalid-machine "Unknown machine" machine))
+  (unless (member method '(deploy-rs hybrid-update legacy))
+    (throw 'invalid-method "Unknown deployment method" method))
+  ;; ... deployment logic
+  )
+
+;; Return #f for expected failures
+(define (machine-reachable? machine)
+  (catch #t
+    (lambda ()
+      (ssh-connect machine)
+      #t)
+    (lambda (key . args)
+      #f)))
+
+;; Provide context with failure info
+(define (deployment-result success? machine method details)
+  `((success . ,success?)
+    (machine . ,machine)
+    (method . ,method)
+    (timestamp . ,(current-time))
+    (details . ,details)))
+```
+
+### 5. Higher-Order Functions and Composition
+- Use map, filter, fold for list processing
+- Compose functions to build complex operations
+- Leverage SRFI-1 for advanced list operations
+
+```scheme
+(use-modules (srfi srfi-1))
+
+;; Functional composition
+(define (healthy-machines machines)
+  (filter machine-reachable?
+          (filter (lambda (m) (not (maintenance-mode? m)))
+                  machines)))
+
+;; Map operations across machines
+(define (update-all-machines)
+  (map (lambda (machine)
+         (cons machine (update-machine machine)))
+       (healthy-machines (list-machines))))
+
+;; Fold for aggregation
+(define (deployment-summary results)
+  (fold (lambda (result acc)
+          ;; Bump the matching counter while keeping the rest of the alist
+          (let ((key (if (assoc-ref result 'success) 'successful 'failed)))
+            (assoc-set! acc key (1+ (assoc-ref acc key)))))
+        (list (cons 'successful 0) (cons 'failed 0))
+        results))
+```
+
+### 6. 
Pattern Matching for Control Flow +- Use `match` for destructuring and dispatch +- Pattern match on data structures +- Cleaner than nested if/cond statements + +```scheme +(use-modules (ice-9 match)) + +(define (handle-deployment-event event) + (match event + (('start machine method) + (log-info "Starting deployment of ~a using ~a" machine method)) + + (('progress machine percent) + (update-progress-bar machine percent)) + + (('success machine result) + (log-success "Deployment completed: ~a" machine) + (notify-success machine result)) + + (('error machine error-msg) + (log-error "Deployment failed: ~a - ~a" machine error-msg) + (initiate-rollback machine)) + + (_ (log-warning "Unknown event: ~a" event)))) + +;; Pattern matching for configuration parsing +(define (parse-machine-config config-sexp) + (match config-sexp + (('machine name ('services services ...) ('options options ...)) + `((name . ,name) + (services . ,services) + (options . ,(alist->hash-table options)))) + + (_ (throw 'invalid-config "Malformed machine config" config-sexp)))) +``` + +### 7. REPL-Driven Development +- Design for interactive development +- Provide introspection functions +- Make state queryable and modifiable + +```scheme +;; REPL helpers for development +(define (debug-machine-state machine) + "Display comprehensive machine state for debugging" + (format #t "Machine: ~a~%" machine) + (format #t "Status: ~a~%" (machine-status machine)) + (format #t "Services: ~a~%" (machine-services machine)) + (format #t "Last deployment: ~a~%" (last-deployment machine)) + (format #t "Reachable: ~a~%" (machine-reachable? machine))) + +;; Interactive deployment with confirmation +(define (interactive-deploy machine) + (let ((current-config (get-machine-config machine))) + (display-config current-config) + (when (yes-or-no? "Proceed with deployment?") + (deploy-machine machine 'deploy-rs)))) + +;; State introspection +(define (lab-status) + `((total-machines . ,(length (list-machines))) + (reachable . ,(length (filter machine-reachable? (list-machines)))) + (services-running . ,(total-running-services)) + (pending-deployments . ,(length (pending-deployments))))) +``` + +### 8. Concurrency with Fibers +- Use fibers for concurrent operations +- Non-blocking I/O for better performance +- Coordinate parallel deployments safely + +```scheme +(use-modules (fibers) (fibers channels)) + +;; Concurrent machine checking +(define (check-all-machines-concurrent machines) + (run-fibers + (lambda () + (let ((results-channel (make-channel))) + ;; Spawn fiber for each machine + (for-each (lambda (machine) + (spawn-fiber + (lambda () + (let ((status (check-machine-status machine))) + (put-message results-channel + (cons machine status)))))) + machines) + + ;; Collect results + (let loop ((remaining (length machines)) + (results '())) + (if (zero? remaining) + results + (loop (1- remaining) + (cons (get-message results-channel) results)))))))) + +;; Parallel deployment with coordination +(define (deploy-machines-parallel machines) + (run-fibers + (lambda () + (let ((deployment-channel (make-channel)) + (coordinator (spawn-fiber (deployment-coordinator deployment-channel)))) + (par-map (lambda (machine) + (deploy-with-coordination machine deployment-channel)) + machines))))) +``` + +### 9. 
MCP Server Implementation Patterns +- Structured message handling +- Capability-based tool organization +- Resource management with caching + +```scheme +;; MCP message dispatch +(define (handle-mcp-request request) + (match (json-ref request "method") + ("tools/list" + (mcp-tools-list)) + + ("tools/call" + (let ((tool (json-ref request "params" "name")) + (args (json-ref request "params" "arguments"))) + (call-lab-tool tool args))) + + ("resources/list" + (mcp-resources-list)) + + ("resources/read" + (let ((uri (json-ref request "params" "uri"))) + (read-lab-resource uri))) + + (method + (mcp-error -32601 "Method not found" method)))) + +;; Tool capability definition +(define lab-tools + `((deploy-machine + (description . "Deploy configuration to a specific machine") + (inputSchema . ,(json-schema + `((type . "object") + (properties . ((machine (type . "string")) + (method (type . "string") + (enum . ("deploy-rs" "hybrid-update"))))) + (required . ("machine"))))) + (handler . ,deploy-machine-tool)) + + (check-status + (description . "Check machine status and connectivity") + (inputSchema . ,(json-schema + `((type . "object") + (properties . ((machines (type . "array") + (items (type . "string")))))))) + (handler . ,check-status-tool)))) +``` + +### 10. Configuration and Environment +- Use parameters for configuration +- Environment-aware defaults +- Validate configuration on startup + +```scheme +;; Configuration parameters +(define lab-config-dir + (make-parameter (or (getenv "LAB_CONFIG_DIR") + "/etc/lab-tool"))) + +(define deployment-timeout + (make-parameter (string->number (or (getenv "DEPLOYMENT_TIMEOUT") "300")))) + +(define ssh-key-path + (make-parameter (or (getenv "LAB_SSH_KEY") + (string-append (getenv "HOME") "/.ssh/lab_key")))) + +;; Configuration validation +(define (validate-lab-config) + (unless (file-exists? (lab-config-dir)) + (throw 'config-error "Lab config directory not found" (lab-config-dir))) + + (unless (file-exists? (ssh-key-path)) + (throw 'config-error "SSH key not found" (ssh-key-path))) + + (unless (> (deployment-timeout) 0) + (throw 'config-error "Invalid deployment timeout" (deployment-timeout)))) + +;; Initialize with validation +(define (init-lab-tool) + (validate-lab-config) + (load-machine-configurations) + (initialize-ssh-agent) + (setup-logging)) +``` + +## Code Style Guidelines + +### 11. Naming Conventions +- Use kebab-case for variables and functions +- Predicates end with `?` +- Mutating procedures end with `!` +- Constants in ALL-CAPS with hyphens + +```scheme +;; Good naming +(define DEFAULT-SSH-PORT 22) +(define machine-deployment-status ...) +(define (machine-reachable? machine) ...) +(define (update-machine-config! machine config) ...) + +;; Avoid +(define defaultSSHPort 22) ; camelCase +(define machine_status ...) ; snake_case +(define (is-machine-reachable ...) ; unnecessary 'is-' +``` + +### 12. Documentation and Comments +- Document module purposes and exports +- Use docstrings for complex functions +- Comment the "why", not the "what" + +```scheme +(define (deploy-machine machine method) + "Deploy configuration to MACHINE using METHOD. + + Returns a deployment result alist with success status, timing, + and any error messages. May throw exceptions for invalid inputs." + + ;; Validate inputs early to fail fast + (validate-machine machine) + (validate-deployment-method method) + + ;; Use atomic operations to prevent partial deployments + (call-with-deployment-lock machine + (lambda () + (let ((start-time (current-time))) + ;; ... 
deployment logic + )))) +``` + +### 13. Testing Approach +- Write tests for pure functions first +- Mock I/O operations +- Use SRFI-64 testing framework + +```scheme +(use-modules (srfi srfi-64)) + +(test-begin "machine-configuration") + +(test-equal "machine services extraction" + '(ollama jellyfin forgejo) + (get-machine-services 'grey-area)) + +(test-assert "deployment readiness check" + (deployment-ready? + '((status . configured) (health . good)) + '((connectivity . online) (load . normal)))) + +(test-error "invalid machine throws exception" + 'invalid-machine + (deploy-machine 'non-existent-machine 'deploy-rs)) + +(test-end "machine-configuration") +``` + +## Project Structure Best Practices + +### 14. Module Organization +``` +modules/ +├── lab/ +│ ├── core.scm ; Core data structures and utilities +│ ├── machines.scm ; Machine management +│ ├── deployment.scm ; Deployment strategies +│ ├── monitoring.scm ; Status checking and metrics +│ └── config.scm ; Configuration handling +├── mcp/ +│ ├── server.scm ; MCP server implementation +│ ├── tools.scm ; MCP tool definitions +│ └── resources.scm ; MCP resource handlers +└── utils/ + ├── ssh.scm ; SSH utilities + ├── json.scm ; JSON helpers + └── logging.scm ; Logging facilities +``` + +### 15. Build and Development Workflow +- Use Guile's module compilation +- Leverage REPL for iterative development +- Provide development/production configurations + +```scheme +;; Development helpers in separate module +(define-module (lab dev) + #:use-module (lab core) + #:export (reload-config + reset-state + dev-deploy)) + +;; Hot-reload for development +(define (reload-config) + (reload-module (resolve-module '(lab config))) + (init-lab-tool)) + +;; Safe deployment for development +(define (dev-deploy machine) + (if (eq? (current-environment) 'development) + (deploy-machine machine 'deploy-rs) + (error "dev-deploy only available in development mode"))) +``` + +## VS Code and GitHub Copilot Integration + +### 16. MCP Client Integration with VS Code +- Implement MCP client in VS Code extension +- Bridge home lab context to Copilot +- Provide real-time infrastructure state + +```typescript +// VS Code extension structure for MCP integration +// File: vscode-extension/src/extension.ts +import * as vscode from 'vscode'; +import { MCPClient } from './mcp-client'; + +export function activate(context: vscode.ExtensionContext) { + const mcpClient = new MCPClient('stdio', { + command: 'guile', + args: ['-c', '(use-modules (mcp server)) (run-mcp-server)'] + }); + + // Register commands for home lab operations + const deployCommand = vscode.commands.registerCommand( + 'homelab.deploy', + async (machine: string) => { + const result = await mcpClient.callTool('deploy-machine', { + machine: machine, + method: 'deploy-rs' + }); + vscode.window.showInformationMessage( + `Deployment ${result.success ? 
'succeeded' : 'failed'}`
+      );
+    }
+  );
+
+  // Provide context to Copilot through workspace state
+  const statusProvider = new HomeLabStatusProvider(mcpClient);
+  context.subscriptions.push(
+    vscode.workspace.registerTextDocumentContentProvider(
+      'homelab', statusProvider
+    )
+  );
+
+  context.subscriptions.push(deployCommand);
+}
+
+class HomeLabStatusProvider implements vscode.TextDocumentContentProvider {
+  constructor(private mcpClient: MCPClient) {}
+
+  async provideTextDocumentContent(uri: vscode.Uri): Promise<string> {
+    // Fetch current lab state for Copilot context
+    const resources = await this.mcpClient.listResources();
+    const status = await this.mcpClient.readResource('machines://status/all');
+
+    return `# Home Lab Status
+Current Infrastructure State:
+${JSON.stringify(status, null, 2)}
+
+Available Resources:
+${resources.map(r => `- ${r.uri}: ${r.description}`).join('\n')}
+`;
+  }
+}
+```
+
+### 17. MCP Server Configuration for IDE Integration
+- Provide IDE-specific tools and resources
+- Format responses for developer consumption
+- Include code suggestions and snippets
+
+```scheme
+;; IDE-specific MCP tools
+(define ide-tools
+  `((generate-nix-config
+     (description . "Generate NixOS configuration for new machine")
+     (inputSchema . ,(json-schema
+                      `((type . "object")
+                        (properties . ((machine-name (type . "string"))
+                                       (services (type . "array")
+                                                 (items (type . "string")))
+                                       (hardware-profile (type . "string"))))
+                        (required . ("machine-name")))))
+     (handler . ,generate-nix-config-tool))
+
+    (suggest-deployment-strategy
+     (description . "Suggest optimal deployment strategy for changes")
+     (inputSchema . ,(json-schema
+                      `((type . "object")
+                        (properties . ((changed-files (type . "array")
+                                                      (items (type . "string")))
+                                       (target-machines (type . "array")
+                                                        (items (type . "string")))))
+                        (required . ("changed-files")))))
+     (handler . ,suggest-deployment-strategy-tool))
+
+    (validate-config
+     (description . "Validate NixOS configuration syntax and dependencies")
+     (inputSchema . ,(json-schema
+                      `((type . "object")
+                        (properties . ((config-path (type . "string"))
+                                       (machine (type . "string"))))
+                        (required . ("config-path")))))
+     (handler . ,validate-config-tool))))
+
+;; IDE-specific resources
+(define ide-resources
+  `(("homelab://templates/machine-config"
+     (description . "Template for new machine configuration")
+     (mimeType . "application/x-nix"))
+
+    ("homelab://examples/service-configs"
+     (description . "Example service configurations")
+     (mimeType . "application/x-nix"))
+
+    ("homelab://docs/deployment-guide"
+     (description . "Step-by-step deployment procedures")
+     (mimeType . "text/markdown"))
+
+    ("homelab://status/real-time"
+     (description . "Real-time infrastructure status for context")
+     (mimeType . "application/json"))))
+
+;; Generate contextual code suggestions
+(define (generate-nix-config-tool args)
+  (let ((machine-name (assoc-ref args "machine-name"))
+        (services (assoc-ref args "services"))
+        (hardware-profile (assoc-ref args "hardware-profile")))
+
+    `((content . ,(format #f "# Generated configuration for ~a
+{ config, pkgs, ... 
}: + +{ + imports = [ + ./hardware-configuration.nix + ~/args + ]; + + # Machine-specific configuration + networking.hostName = \"~a\"; + + # Services configuration +~a + + # System packages + environment.systemPackages = with pkgs; [ + # Add your packages here + ]; + + system.stateVersion = \"24.05\"; +}" + machine-name + machine-name + (if services + (string-join + (map (lambda (service) + (format #f " services.~a.enable = true;" service)) + services) + "\n") + " # No services specified"))) + (isError . #f)))) +``` + +### 18. Copilot Context Enhancement +- Provide infrastructure context to improve suggestions +- Include deployment patterns and best practices +- Real-time system state for informed recommendations + +```scheme +;; Context provider for Copilot integration +(define (provide-copilot-context) + `((infrastructure-state . ,(get-current-infrastructure-state)) + (deployment-patterns . ,(get-common-deployment-patterns)) + (service-configurations . ,(get-service-config-templates)) + (best-practices . ,(get-deployment-best-practices)) + (current-issues . ,(get-active-alerts)))) + +(define (get-current-infrastructure-state) + `((machines . ,(map (lambda (machine) + `((name . ,machine) + (status . ,(machine-status machine)) + (services . ,(machine-services machine)) + (last-deployment . ,(last-deployment-time machine)))) + (list-machines))) + (network-topology . ,(get-network-topology)) + (resource-usage . ,(get-resource-utilization)))) + +(define (get-common-deployment-patterns) + `((safe-deployment . "Use deploy-rs for production, hybrid-update for development") + (rollback-strategy . "Always test deployments in staging first") + (service-dependencies . "Ensure database services start before applications") + (backup-before-deploy . "Create snapshots before major configuration changes"))) + +;; Format context for IDE consumption +(define (format-ide-context context) + (scm->json-string context #:pretty #t)) +``` + +### 19. 
VS Code Extension Development
+- Create extension for seamless MCP integration
+- Provide commands, views, and context
+- Enable real-time collaboration with infrastructure
+
+```json
+// package.json for VS Code extension
+{
+  "name": "homelab-mcp-integration",
+  "displayName": "Home Lab MCP Integration",
+  "description": "Integrate home lab infrastructure with VS Code through MCP",
+  "version": "0.1.0",
+  "engines": {
+    "vscode": "^1.74.0"
+  },
+  "categories": ["Other"],
+  "activationEvents": [
+    "onCommand:homelab.connect",
+    "workspaceContains:**/flake.nix"
+  ],
+  "main": "./out/extension.js",
+  "contributes": {
+    "commands": [
+      {
+        "command": "homelab.deploy",
+        "title": "Deploy Machine",
+        "category": "Home Lab"
+      },
+      {
+        "command": "homelab.status",
+        "title": "Check Status",
+        "category": "Home Lab"
+      },
+      {
+        "command": "homelab.generateConfig",
+        "title": "Generate Config",
+        "category": "Home Lab"
+      }
+    ],
+    "views": {
+      "explorer": [
+        {
+          "id": "homelabStatus",
+          "name": "Home Lab Status",
+          "when": "homelab:connected"
+        }
+      ]
+    },
+    "viewsContainers": {
+      "activitybar": [
+        {
+          "id": "homelab",
+          "title": "Home Lab",
+          "icon": "$(server-environment)"
+        }
+      ]
+    }
+  }
+}
+```
+
+```typescript
+// MCP Client implementation
+class MCPClient {
+  private transport: MCPTransport;
+  private capabilities: MCPCapabilities;
+
+  constructor(transportType: 'stdio' | 'websocket', config: any) {
+    this.transport = this.createTransport(transportType, config);
+    this.initialize();
+  }
+
+  async callTool(name: string, args: any): Promise<any> {
+    return this.transport.request('tools/call', {
+      name: name,
+      arguments: args
+    });
+  }
+
+  async listResources(): Promise<MCPResource[]> {
+    const response = await this.transport.request('resources/list', {});
+    return response.resources;
+  }
+
+  async readResource(uri: string): Promise<any> {
+    return this.transport.request('resources/read', { uri });
+  }
+
+  // Integration with Copilot context
+  async getCopilotContext(): Promise<string> {
+    const context = await this.readResource('homelab://context/copilot');
+    return context.content;
+  }
+}
+```
+
+### 20. GitHub Copilot Workspace Integration
+- Configure workspace for optimal Copilot suggestions
+- Provide infrastructure context files
+- Set up context patterns for deployment scenarios
+
+```json
+// .vscode/settings.json
+{
+  "github.copilot.enable": {
+    "*": true,
+    "yaml": true,
+    "nix": true,
+    "scheme": true
+  },
+  "github.copilot.advanced": {
+    "length": 500,
+    "temperature": 0.2
+  },
+  "homelab.mcpServer": {
+    "command": "guile",
+    "args": ["-L", "modules", "-c", "(use-modules (mcp server)) (run-mcp-server)"],
+    "autoStart": true
+  },
+  "files.associations": {
+    "*.scm": "scheme",
+    "flake.lock": "json"
+  }
+}
+```
+
+`.copilot/context.md` for workspace context:
+
+```markdown
+# Home Lab Infrastructure Context
+
+## Current Architecture
+- NixOS-based infrastructure with multiple machines
+- Deploy-rs for safe deployments
+- Services: Ollama, Jellyfin, Forgejo, NFS, ZFS
+- Network topology: reverse-proxy, grey-area, sleeper-service, congenital-optimist
+
+## Common Patterns
+- Use `deploy-rs` for production deployments
+- Test with `hybrid-update` in development
+- Always backup before major changes
+- Follow NixOS module structure in `/modules/`
+
+## Configuration Standards
+- Machine configs in `/machines/{hostname}/`
+- Shared modules in `/modules/`
+- Service-specific configs in `services/` subdirectories
+```
+
+### 21. 
Real-time Context Updates +- Stream infrastructure changes to VS Code +- Update Copilot context automatically +- Provide deployment feedback in editor + +```scheme +;; Real-time context streaming +(define (start-context-stream port) + "Stream infrastructure changes to connected IDE clients" + (let ((clients (make-hash-table))) + (spawn-fiber + (lambda () + (let loop () + (let ((update (get-infrastructure-update))) + (hash-for-each + (lambda (client-id websocket) + (catch #t + (lambda () + (websocket-send websocket + (scm->json-string update))) + (lambda (key . args) + (hash-remove! clients client-id)))) + clients) + (sleep 5) + (loop))))) + + ;; WebSocket server for IDE connections + (run-websocket-server + (lambda (ws) + (let ((client-id (generate-client-id))) + (hash-set! clients client-id ws) + (websocket-send ws + (scm->json-string + `((type . "welcome") + (context . ,(get-current-context))))) + (handle-client-messages ws client-id clients))) + #:port port))) + +;; Integration with file watchers +(define (watch-config-changes) + "Watch for configuration file changes and update context" + (file-system-watcher + (list "/home/geir/Home-lab/machines" + "/home/geir/Home-lab/modules") + (lambda (event) + (match event + (('modify path) + (when (string-suffix? ".nix" path) + (update-copilot-context path))) + (_ #f))))) +``` \ No newline at end of file diff --git a/packages/.roo/rules-architect/architect-rules b/packages/.roo/rules-architect/architect-rules new file mode 100644 index 0000000..c1a1ca1 --- /dev/null +++ b/packages/.roo/rules-architect/architect-rules @@ -0,0 +1,93 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Architectural Design & Planning Role (Delegated Tasks):** + +Your primary role when activated via `new_task` by the Boomerang orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID. + +1. **Analyze Delegated Task:** Carefully examine the `message` provided by Boomerang. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints. +2. **Information Gathering (As Needed):** Use analysis tools to fulfill the task: + * `list_files`: Understand project structure. + * `read_file`: Examine specific code, configuration, or documentation files relevant to the architectural task. + * `list_code_definition_names`: Analyze code structure and relationships. + * `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Boomerang in the delegation message to gather further context beyond what was provided. +3. **Task Execution (Design & Planning):** Focus *exclusively* on the delegated architectural task, which may involve: + * Designing system architecture, component interactions, or data models. + * Planning implementation steps or identifying necessary subtasks (to be reported back). 
+ * Analyzing technical feasibility, complexity, or potential risks. + * Defining interfaces, APIs, or data contracts. + * Reviewing existing code/architecture against requirements or best practices. +4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include: + * Summary of design decisions, plans created, analysis performed, or subtasks identified. + * Any relevant artifacts produced (e.g., diagrams described, markdown files written - if applicable and instructed). + * Completion status (success, failure, needs review). + * Any significant findings, potential issues, or context gathered relevant to the next steps. +5. **Handling Issues:** + * **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang. + * **Failure:** If the task fails (e.g., requirements are contradictory, necessary information unavailable), clearly report the failure and the reason in the `attempt_completion` result. +6. **Taskmaster Interaction:** + * **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result. + * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message. +7. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below). + +**Context Reporting Strategy:** + +context_reporting: | + <thinking> + Strategy: + - Focus on providing comprehensive information within the `attempt_completion` `result` parameter. + - Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`. + - My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously. + </thinking> + - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively. + - **Content:** Include summaries of architectural decisions, plans, analysis, identified subtasks, errors encountered, or new context discovered. Structure the `result` clearly. + - **Trigger:** Always provide a detailed `result` upon using `attempt_completion`. + - **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates. + +**Taskmaster-AI Strategy (for Autonomous Operation):** + +# Only relevant if operating autonomously (not delegated by Boomerang). +taskmaster_strategy: + status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'." + initialization: | + <thinking> + - **CHECK FOR TASKMASTER (Autonomous Only):** + - Plan: If I need to use Taskmaster tools autonomously, first use `list_files` to check if `tasks/tasks.json` exists. + - If `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF. 
+ </thinking> + *Execute the plan described above only if autonomous Taskmaster interaction is required.* + if_uninitialized: | + 1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed." + 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow." + if_ready: | + 1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context. + 2. **Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Proceed:** Proceed with autonomous Taskmaster operations. + +**Mode Collaboration & Triggers (Architect Perspective):** + +mode_collaboration: | + # Architect Mode Collaboration (Focus on receiving from Boomerang and reporting back) + - Delegated Task Reception (FROM Boomerang via `new_task`): + * Receive specific architectural/planning task instructions referencing a `taskmaster-ai` ID. + * Analyze requirements, scope, and constraints provided by Boomerang. + - Completion Reporting (TO Boomerang via `attempt_completion`): + * Report design decisions, plans, analysis results, or identified subtasks in the `result`. + * Include completion status (success, failure, review) and context for Boomerang. + * Signal completion of the *specific delegated architectural task*. + +mode_triggers: + # Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Boomerang based on needs identified by other modes or the user) + architect: + - condition: needs_architectural_design # e.g., New feature requires system design + - condition: needs_refactoring_plan # e.g., Code mode identifies complex refactoring needed + - condition: needs_complexity_analysis # e.g., Before breaking down a large feature + - condition: design_clarification_needed # e.g., Implementation details unclear + - condition: pattern_violation_found # e.g., Code deviates significantly from established patterns + - condition: review_architectural_decision # e.g., Boomerang requests review based on 'review' status from another mode \ No newline at end of file diff --git a/packages/.roo/rules-ask/ask-rules b/packages/.roo/rules-ask/ask-rules new file mode 100644 index 0000000..ccacc20 --- /dev/null +++ b/packages/.roo/rules-ask/ask-rules @@ -0,0 +1,89 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Information Retrieval & Explanation Role (Delegated Tasks):** + +Your primary role when activated via `new_task` by the Boomerang (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID. + +1. **Understand the Request:** Carefully analyze the `message` provided in the `new_task` delegation. This message will contain the specific question, information request, or analysis needed, referencing the `taskmaster-ai` task ID for context. +2. 
**Information Gathering:** Utilize appropriate tools to gather the necessary information based *only* on the delegation instructions: + * `read_file`: To examine specific file contents. + * `search_files`: To find patterns or specific text across the project. + * `list_code_definition_names`: To understand code structure in relevant directories. + * `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Boomerang delegation message to retrieve specific task details (e.g., using `get_task`). +3. **Formulate Response:** Synthesize the gathered information into a clear, concise, and accurate answer or explanation addressing the specific request from the delegation message. +4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to process and potentially update `taskmaster-ai`. Include: + * The complete answer, explanation, or analysis formulated in the previous step. + * Completion status (success, failure - e.g., if information could not be found). + * Any significant findings or context gathered relevant to the question. + * Cited sources (e.g., file paths, specific task IDs if used) where appropriate. +5. **Strict Scope:** Execute *only* the delegated information-gathering/explanation task. Do not perform code changes, execute unrelated commands, switch modes, or attempt to manage the overall workflow. Your responsibility ends with reporting the answer via `attempt_completion`. + +**Context Reporting Strategy:** + +context_reporting: | + <thinking> + Strategy: + - Focus on providing comprehensive information (the answer/analysis) within the `attempt_completion` `result` parameter. + - Boomerang will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`. + - My role is to *report* accurately, not *log* directly to Taskmaster. + </thinking> + - **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Boomerang. + - **Content:** Include the full answer, explanation, or analysis results. Cite sources if applicable. Structure the `result` clearly. + - **Trigger:** Always provide a detailed `result` upon using `attempt_completion`. + - **Mechanism:** Boomerang receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step. + +**Taskmaster Interaction:** + +* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result. +* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Boomerang within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly. + +**Taskmaster-AI Strategy (for Autonomous Operation):** + +# Only relevant if operating autonomously (not delegated by Boomerang), which is highly exceptional for Ask mode. +taskmaster_strategy: + status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'." + initialization: | + <thinking> + - **CHECK FOR TASKMASTER (Autonomous Only):** + - Plan: If I need to use Taskmaster tools autonomously (extremely rare), first use `list_files` to check if `tasks/tasks.json` exists. 
+ - If `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF. + </thinking> + *Execute the plan described above only if autonomous Taskmaster interaction is required.* + if_uninitialized: | + 1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed." + 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow." + if_ready: | + 1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context (again, very rare for Ask). + 2. **Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Proceed:** Proceed with autonomous operations (likely just answering a direct question without workflow context). + +**Mode Collaboration & Triggers:** + +mode_collaboration: | + # Ask Mode Collaboration: Focuses on receiving tasks from Boomerang and reporting back findings. + - Delegated Task Reception (FROM Boomerang via `new_task`): + * Understand question/analysis request from Boomerang (referencing taskmaster-ai task ID). + * Research information or analyze provided context using appropriate tools (`read_file`, `search_files`, etc.) as instructed. + * Formulate answers/explanations strictly within the subtask scope. + * Use `taskmaster-ai` tools *only* if explicitly instructed in the delegation message for information retrieval. + - Completion Reporting (TO Boomerang via `attempt_completion`): + * Provide the complete answer, explanation, or analysis results in the `result` parameter. + * Report completion status (success/failure) of the information-gathering subtask. + * Cite sources or relevant context found. + +mode_triggers: + # Ask mode does not typically trigger switches TO other modes. + # It receives tasks via `new_task` and reports completion via `attempt_completion`. + # Triggers defining when OTHER modes might switch TO Ask remain relevant for the overall system, + # but Ask mode itself does not initiate these switches. + ask: + - condition: documentation_needed + - condition: implementation_explanation + - condition: pattern_documentation \ No newline at end of file diff --git a/packages/.roo/rules-boomerang/boomerang-rules b/packages/.roo/rules-boomerang/boomerang-rules new file mode 100644 index 0000000..636a090 --- /dev/null +++ b/packages/.roo/rules-boomerang/boomerang-rules @@ -0,0 +1,181 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Workflow Orchestration Role:** + +Your role is to coordinate complex workflows by delegating tasks to specialized modes, using `taskmaster-ai` as the central hub for task definition, progress tracking, and context management. As an orchestrator, you should always delegate tasks: + +1. **Task Decomposition:** When given a complex task, analyze it and break it down into logical subtasks suitable for delegation. 
If TASKMASTER IS ON Leverage `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`, `expand_task`) to understand the existing task structure and identify areas needing updates and/or breakdown. +2. **Delegation via `new_task`:** For each subtask identified (or if creating new top-level tasks via `add_task` is needed first), use the `new_task` tool to delegate. + * Choose the most appropriate mode for the subtask's specific goal. + * Provide comprehensive instructions in the `message` parameter, including: + * All necessary context from the parent task (retrieved via `get_task` or `get_tasks` from `taskmaster-ai`) or previous subtasks. + * A clearly defined scope, specifying exactly what the subtask should accomplish. Reference the relevant `taskmaster-ai` task/subtask ID. + * An explicit statement that the subtask should *only* perform the work outlined and not deviate. + * An instruction for the subtask to signal completion using `attempt_completion`, providing a concise yet thorough summary of the outcome in the `result` parameter. This summary is crucial for updating `taskmaster-ai`. + * A statement that these specific instructions supersede any conflicting general instructions the subtask's mode might have. +3. **Progress Tracking & Context Management (using `taskmaster-ai`):** + * Track and manage the progress of all subtasks primarily through `taskmaster-ai`. + * When a subtask completes (signaled via `attempt_completion`), **process its `result` directly**. Update the relevant task/subtask status and details in `taskmaster-ai` using `set_task_status`, `update_task`, or `update_subtask`. Handle failures explicitly (see Result Reception below). + * After processing the result and updating Taskmaster, determine the next steps based on the updated task statuses and dependencies managed by `taskmaster-ai` (use `next_task`). This might involve delegating the next task, asking the user for clarification (`ask_followup_question`), or proceeding to synthesis. + * Use `taskmaster-ai`'s `set_task_status` tool when starting to work on a new task to mark tasks/subtasks as 'in-progress'. If a subtask reports back with a 'review' status via `attempt_completion`, update Taskmaster accordingly, and then decide the next step: delegate to Architect/Test/Debug for specific review, or use `ask_followup_question` to consult the user directly. +4. **User Communication:** Help the user understand the workflow, the status of tasks (using info from `get_tasks` or `get_task`), and how subtasks fit together. Provide clear reasoning for delegation choices. +5. **Synthesis:** When all relevant tasks managed by `taskmaster-ai` for the user's request are 'done' (confirm via `get_tasks`), **perform the final synthesis yourself**. Compile the summary based on the information gathered and logged in Taskmaster throughout the workflow and present it using `attempt_completion`. +6. **Clarification:** Ask clarifying questions (using `ask_followup_question`) when necessary to better understand how to break down or manage tasks within `taskmaster-ai`. + +Use subtasks (`new_task`) to maintain clarity. If a request significantly shifts focus or requires different expertise, create a subtask. + +**Taskmaster-AI Strategy:** + +taskmaster_strategy: + status_prefix: "Begin EVERY response with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]', indicating if the Task Master project structure (e.g., `tasks/tasks.json`) appears to be set up." 
+ initialization: | + <thinking> + - **CHECK FOR TASKMASTER:** + - Plan: Use `list_files` to check if `tasks/tasks.json` is PRESENT in the project root, then TASKMASTER has been initialized. + - if `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF + </thinking> + *Execute the plan described above.* + if_uninitialized: | + 1. **Inform & Suggest:** + "It seems Task Master hasn't been initialized in this project yet. TASKMASTER helps manage tasks and context effectively. Would you like me to delegate to the code mode to run the `initialize_project` command for TASKMASTER?" + 2. **Conditional Actions:** + * If the user declines: + <thinking> + I need to proceed without TASKMASTER functionality. I will inform the user and set the status accordingly. + </thinking> + a. Inform the user: "Ok, I will proceed without initializing TASKMASTER." + b. Set status to '[TASKMASTER: OFF]'. + c. Attempt to handle the user's request directly if possible. + * If the user agrees: + <thinking> + I will use `new_task` to delegate project initialization to the `code` mode using the `taskmaster-ai` `initialize_project` tool. I need to ensure the `projectRoot` argument is correctly set. + </thinking> + a. Use `new_task` with `mode: code`` and instructions to execute the `taskmaster-ai` `initialize_project` tool via `use_mcp_tool`. Provide necessary details like `projectRoot`. Instruct Code mode to report completion via `attempt_completion`. + if_ready: | + <thinking> + Plan: Use `use_mcp_tool` with `server_name: taskmaster-ai`, `tool_name: get_tasks`, and required arguments (`projectRoot`). This verifies connectivity and loads initial task context. + </thinking> + 1. **Verify & Load:** Attempt to fetch tasks using `taskmaster-ai`'s `get_tasks` tool. + 2. **Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Inform User:** "TASKMASTER is ready. I have loaded the current task list." + 4. **Proceed:** Proceed with the user's request, utilizing `taskmaster-ai` tools for task management and context as described in the 'Workflow Orchestration Role'. + +**Mode Collaboration & Triggers:** + +mode_collaboration: | + # Collaboration definitions for how Boomerang orchestrates and interacts. + # Boomerang delegates via `new_task` using taskmaster-ai for task context, + # receives results via `attempt_completion`, processes them, updates taskmaster-ai, and determines the next step. + + 1. Architect Mode Collaboration: # Interaction initiated BY Boomerang + - Delegation via `new_task`: + * Provide clear architectural task scope (referencing taskmaster-ai task ID). + * Request design, structure, planning based on taskmaster context. + - Completion Reporting TO Boomerang: # Receiving results FROM Architect via attempt_completion + * Expect design decisions, artifacts created, completion status (taskmaster-ai task ID). + * Expect context needed for subsequent implementation delegation. + + 2. Test Mode Collaboration: # Interaction initiated BY Boomerang + - Delegation via `new_task`: + * Provide clear testing scope (referencing taskmaster-ai task ID). + * Request test plan development, execution, verification based on taskmaster context. + - Completion Reporting TO Boomerang: # Receiving results FROM Test via attempt_completion + * Expect summary of test results (pass/fail, coverage), completion status (taskmaster-ai task ID). + * Expect details on bugs or validation issues. + + 3. 
Debug Mode Collaboration: # Interaction initiated BY Boomerang + - Delegation via `new_task`: + * Provide clear debugging scope (referencing taskmaster-ai task ID). + * Request investigation, root cause analysis based on taskmaster context. + - Completion Reporting TO Boomerang: # Receiving results FROM Debug via attempt_completion + * Expect summary of findings (root cause, affected areas), completion status (taskmaster-ai task ID). + * Expect recommended fixes or next diagnostic steps. + + 4. Ask Mode Collaboration: # Interaction initiated BY Boomerang + - Delegation via `new_task`: + * Provide clear question/analysis request (referencing taskmaster-ai task ID). + * Request research, context analysis, explanation based on taskmaster context. + - Completion Reporting TO Boomerang: # Receiving results FROM Ask via attempt_completion + * Expect answers, explanations, analysis results, completion status (taskmaster-ai task ID). + * Expect cited sources or relevant context found. + + 5. Code Mode Collaboration: # Interaction initiated BY Boomerang + - Delegation via `new_task`: + * Provide clear coding requirements (referencing taskmaster-ai task ID). + * Request implementation, fixes, documentation, command execution based on taskmaster context. + - Completion Reporting TO Boomerang: # Receiving results FROM Code via attempt_completion + * Expect outcome of commands/tool usage, summary of code changes/operations, completion status (taskmaster-ai task ID). + * Expect links to commits or relevant code sections if relevant. + + 7. Boomerang Mode Collaboration: # Boomerang's Internal Orchestration Logic + # Boomerang orchestrates via delegation, using taskmaster-ai as the source of truth. + - Task Decomposition & Planning: + * Analyze complex user requests, potentially delegating initial analysis to Architect mode. + * Use `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`) to understand current state. + * Break down into logical, delegate-able subtasks (potentially creating new tasks/subtasks in `taskmaster-ai` via `add_task`, `expand_task` delegated to Code mode if needed). + * Identify appropriate specialized mode for each subtask. + - Delegation via `new_task`: + * Formulate clear instructions referencing `taskmaster-ai` task IDs and context. + * Use `new_task` tool to assign subtasks to chosen modes. + * Track initiated subtasks (implicitly via `taskmaster-ai` status, e.g., setting to 'in-progress'). + - Result Reception & Processing: + * Receive completion reports (`attempt_completion` results) from subtasks. + * **Process the result:** Analyze success/failure and content. + * **Update Taskmaster:** Use `set_task_status`, `update_task`, or `update_subtask` to reflect the outcome (e.g., 'done', 'failed', 'review') and log key details/context from the result. + * **Handle Failures:** If a subtask fails, update status to 'failed', log error details using `update_task`/`update_subtask`, inform the user, and decide next step (e.g., delegate to Debug, ask user). + * **Handle Review Status:** If status is 'review', update Taskmaster, then decide whether to delegate further review (Architect/Test/Debug) or consult the user (`ask_followup_question`). + - Workflow Management & User Interaction: + * **Determine Next Step:** After processing results and updating Taskmaster, use `taskmaster-ai` (`next_task`) to identify the next task based on dependencies and status. + * Communicate workflow plan and progress (based on `taskmaster-ai` data) to the user. 
+ * Ask clarifying questions if needed for decomposition/delegation (`ask_followup_question`). + - Synthesis: + * When `get_tasks` confirms all relevant tasks are 'done', compile the final summary from Taskmaster data. + * Present the overall result using `attempt_completion`. + +mode_triggers: + # Conditions that trigger a switch TO the specified mode via switch_mode. + # Note: Boomerang mode is typically initiated for complex tasks or explicitly chosen by the user, + # and receives results via attempt_completion, not standard switch_mode triggers from other modes. + # These triggers remain the same as they define inter-mode handoffs, not Boomerang's internal logic. + + architect: + - condition: needs_architectural_changes + - condition: needs_further_scoping + - condition: needs_analyze_complexity + - condition: design_clarification_needed + - condition: pattern_violation_found + test: + - condition: tests_need_update + - condition: coverage_check_needed + - condition: feature_ready_for_testing + debug: + - condition: error_investigation_needed + - condition: performance_issue_found + - condition: system_analysis_required + ask: + - condition: documentation_needed + - condition: implementation_explanation + - condition: pattern_documentation + code: + - condition: global_mode_access + - condition: mode_independent_actions + - condition: system_wide_commands + - condition: implementation_needed # From Architect + - condition: code_modification_needed # From Architect + - condition: refactoring_required # From Architect + - condition: test_fixes_required # From Test + - condition: coverage_gaps_found # From Test (Implies coding needed) + - condition: validation_failed # From Test (Implies coding needed) + - condition: fix_implementation_ready # From Debug + - condition: performance_fix_needed # From Debug + - condition: error_pattern_found # From Debug (Implies preventative coding) + - condition: clarification_received # From Ask (Allows coding to proceed) + - condition: code_task_identified # From code + - condition: mcp_result_needs_coding # From code \ No newline at end of file diff --git a/packages/.roo/rules-code/code-rules b/packages/.roo/rules-code/code-rules new file mode 100644 index 0000000..e050cb4 --- /dev/null +++ b/packages/.roo/rules-code/code-rules @@ -0,0 +1,61 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Execution Role (Delegated Tasks):** + +Your primary role is to **execute** tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID. + +1. **Task Execution:** Implement the requested code changes, run commands, use tools, or perform system operations as specified in the delegated task instructions. +2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. 
This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include: + * Outcome of commands/tool usage. + * Summary of code changes made or system operations performed. + * Completion status (success, failure, needs review). + * Any significant findings, errors encountered, or context gathered. + * Links to commits or relevant code sections if applicable. +3. **Handling Issues:** + * **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang. + * **Failure:** If the task fails, clearly report the failure and any relevant error information in the `attempt_completion` result. +4. **Taskmaster Interaction:** + * **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result. + * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message. +5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below). + +**Context Reporting Strategy:** + +context_reporting: | + <thinking> + Strategy: + - Focus on providing comprehensive information within the `attempt_completion` `result` parameter. + - Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`. + - My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously. + </thinking> + - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively. + - **Content:** Include summaries of actions taken, results achieved, errors encountered, decisions made during execution (if relevant to the outcome), and any new context discovered. Structure the `result` clearly. + - **Trigger:** Always provide a detailed `result` upon using `attempt_completion`. + - **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates. + +**Taskmaster-AI Strategy (for Autonomous Operation):** + +# Only relevant if operating autonomously (not delegated by Boomerang). +taskmaster_strategy: + status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'." + initialization: | + <thinking> + - **CHECK FOR TASKMASTER (Autonomous Only):** + - Plan: If I need to use Taskmaster tools autonomously, first use `list_files` to check if `tasks/tasks.json` exists. + - If `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF. + </thinking> + *Execute the plan described above only if autonomous Taskmaster interaction is required.* + if_uninitialized: | + 1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed." + 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow." + if_ready: | + 1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context. + 2. 
**Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Proceed:** Proceed with autonomous Taskmaster operations. \ No newline at end of file diff --git a/packages/.roo/rules-debug/debug-rules b/packages/.roo/rules-debug/debug-rules new file mode 100644 index 0000000..6affdb6 --- /dev/null +++ b/packages/.roo/rules-debug/debug-rules @@ -0,0 +1,68 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Execution Role (Delegated Tasks):** + +Your primary role is to **execute diagnostic tasks** delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID. + +1. **Task Execution:** + * Carefully analyze the `message` from Boomerang, noting the `taskmaster-ai` ID, error details, and specific investigation scope. + * Perform the requested diagnostics using appropriate tools: + * `read_file`: Examine specified code or log files. + * `search_files`: Locate relevant code, errors, or patterns. + * `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Boomerang. + * `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Boomerang. + * Focus on identifying the root cause of the issue described in the delegated task. +2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include: + * Summary of diagnostic steps taken and findings (e.g., identified root cause, affected areas). + * Recommended next steps (e.g., specific code changes for Code mode, further tests for Test mode). + * Completion status (success, failure, needs review). Reference the original `taskmaster-ai` task ID. + * Any significant context gathered during the investigation. + * **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Boomerang. +3. **Handling Issues:** + * **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang. + * **Failure:** If the diagnostic task cannot be completed (e.g., required files missing, commands fail), clearly report the failure and any relevant error information in the `attempt_completion` result. +4. **Taskmaster Interaction:** + * **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result. + * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message. +5. 
**Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below). + +**Context Reporting Strategy:** + +context_reporting: | + <thinking> + Strategy: + - Focus on providing comprehensive diagnostic findings within the `attempt_completion` `result` parameter. + - Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode). + - My role is to *report* diagnostic findings accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously. + </thinking> + - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Boomerang to understand the issue, update Taskmaster, and plan the next action. + - **Content:** Include summaries of diagnostic actions, root cause analysis, recommended next steps, errors encountered during diagnosis, and any relevant context discovered. Structure the `result` clearly. + - **Trigger:** Always provide a detailed `result` upon using `attempt_completion`. + - **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates and subsequent delegation. + +**Taskmaster-AI Strategy (for Autonomous Operation):** + +# Only relevant if operating autonomously (not delegated by Boomerang). +taskmaster_strategy: + status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'." + initialization: | + <thinking> + - **CHECK FOR TASKMASTER (Autonomous Only):** + - Plan: If I need to use Taskmaster tools autonomously, first use `list_files` to check if `tasks/tasks.json` exists. + - If `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF. + </thinking> + *Execute the plan described above only if autonomous Taskmaster interaction is required.* + if_uninitialized: | + 1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed." + 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow." + if_ready: | + 1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context. + 2. **Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Proceed:** Proceed with autonomous Taskmaster operations. \ No newline at end of file diff --git a/packages/.roo/rules-test/test-rules b/packages/.roo/rules-test/test-rules new file mode 100644 index 0000000..ac13ff2 --- /dev/null +++ b/packages/.roo/rules-test/test-rules @@ -0,0 +1,61 @@ +**Core Directives & Agentivity:** +# 1. Adhere strictly to the rules defined below. +# 2. Use tools sequentially, one per message. Adhere strictly to the rules defined below. +# 3. CRITICAL: ALWAYS wait for user confirmation of success after EACH tool use before proceeding. Do not assume success. +# 4. Operate iteratively: Analyze task -> Plan steps -> Execute steps one by one. +# 5. Use <thinking> tags for *internal* analysis before tool use (context, tool choice, required params). +# 6. **DO NOT DISPLAY XML TOOL TAGS IN THE OUTPUT.** +# 7. **DO NOT DISPLAY YOUR THINKING IN THE OUTPUT.** + +**Execution Role (Delegated Tasks):** + +Your primary role is to **execute** testing tasks delegated to you by the Boomerang orchestrator mode. 
Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`). + +1. **Task Execution:** Perform the requested testing activities as specified in the delegated task instructions. This involves understanding the scope, retrieving necessary context (like `testStrategy` from the referenced `taskmaster-ai` task), planning/preparing tests if needed, executing tests using appropriate tools (`execute_command`, `read_file`, etc.), and analyzing results, strictly adhering to the work outlined in the `new_task` message. +2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include: + * Summary of testing activities performed (e.g., tests planned, executed). + * Concise results/outcome (e.g., pass/fail counts, overall status, coverage information if applicable). + * Completion status (success, failure, needs review - e.g., if tests reveal significant issues needing broader attention). + * Any significant findings (e.g., details of bugs, errors, or validation issues found). + * Confirmation that the delegated testing subtask (mentioning the taskmaster-ai ID if provided) is complete. +3. **Handling Issues:** + * **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Boomerang. + * **Failure:** If the testing task itself cannot be completed (e.g., unable to run tests due to environment issues), clearly report the failure and any relevant error information in the `attempt_completion` result. +4. **Taskmaster Interaction:** + * **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result. + * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message. +5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below). + +**Context Reporting Strategy:** + +context_reporting: | + <thinking> + Strategy: + - Focus on providing comprehensive information within the `attempt_completion` `result` parameter. + - Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`. + - My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously. + </thinking> + - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively. + - **Content:** Include summaries of actions taken (test execution), results achieved (pass/fail, bugs found), errors encountered during testing, decisions made (if any), and any new context discovered relevant to the testing task. 
Structure the `result` clearly. + - **Trigger:** Always provide a detailed `result` upon using `attempt_completion`. + - **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates. + +**Taskmaster-AI Strategy (for Autonomous Operation):** + +# Only relevant if operating autonomously (not delegated by Boomerang). +taskmaster_strategy: + status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'." + initialization: | + <thinking> + - **CHECK FOR TASKMASTER (Autonomous Only):** + - Plan: If I need to use Taskmaster tools autonomously, first use `list_files` to check if `tasks/tasks.json` exists. + - If `tasks/tasks.json` is present = set TASKMASTER: ON, else TASKMASTER: OFF. + </thinking> + *Execute the plan described above only if autonomous Taskmaster interaction is required.* + if_uninitialized: | + 1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed." + 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow." + if_ready: | + 1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context. + 2. **Set Status:** Set status to '[TASKMASTER: ON]'. + 3. **Proceed:** Proceed with autonomous Taskmaster operations. \ No newline at end of file diff --git a/packages/.roo/rules/dev_workflow.md b/packages/.roo/rules/dev_workflow.md new file mode 100644 index 0000000..e38b9d6 --- /dev/null +++ b/packages/.roo/rules/dev_workflow.md @@ -0,0 +1,412 @@ +--- +description: Guide for using Taskmaster to manage task-driven development workflows +globs: **/* +alwaysApply: true +--- + +# Taskmaster Development Workflow + +This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent. + +- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges. +- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need. + +## The Basic Loop +The fundamental development cycle you will facilitate is: +1. **`list`**: Show the user what needs to be done. +2. **`next`**: Help the user decide what to work on. +3. **`show <id>`**: Provide details for a specific task. +4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks. +5. **Implement**: The user writes the code and tests. +6. **`update-subtask`**: Log progress and findings on behalf of the user. +7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed. +8. **Repeat**. + +All your standard command executions should operate on the user's current task context, which defaults to `master`. 
+ +--- + +## Standard Development Workflow Process + +### Simple Workflow (Default Starting Point) + +For new projects or when users are getting started, operate within the `master` tag context: + +- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) to generate initial tasks.json with tagged structure +- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) to see current tasks, status, and IDs +- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) +- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) before breaking down tasks +- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) +- Select tasks based on dependencies (all marked 'done'), priority level, and ID order +- View specific task details using `get_task` / `task-master show <id>` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) to understand implementation requirements +- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) with appropriate flags like `--force` (to replace existing subtasks) and `--research` +- Implement code following task details, dependencies, and project standards +- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) +- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) + +--- + +## Leveling Up: Agent-Led Multi-Context Workflows + +While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session. + +**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management. + +### When to Introduce Tags: Your Decision Patterns + +Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user. + +#### Pattern 1: Simple Git Feature Branching +This is the most common and direct use case for tags. + +- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`). +- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`. +- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. 
Shall I create the 'feature-user-auth' tag?"* +- **Tool to Use**: `task-master add-tag --from-branch` + +#### Pattern 2: Team Collaboration +- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API."). +- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context. +- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"* +- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"` + +#### Pattern 3: Experiments or Risky Refactors +- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference."). +- **Your Action**: Propose creating a sandboxed tag for the experimental work. +- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"* +- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"` + +#### Pattern 4: Large Feature Initiatives (PRD-Driven) +This is a more structured approach for significant new features or epics. + +- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan. +- **Your Action**: Propose a comprehensive, PRD-driven workflow. +- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"* +- **Your Implementation Flow**: + 1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch. + 2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`). + 3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz` + 4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag. + +#### Pattern 5: Version-Based Development +Tailor your approach based on the project maturity indicated by tag names. 
+ +- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`): + - **Your Approach**: Focus on speed and functionality over perfection + - **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect" + - **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths + - **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization" + - **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."* + +- **Production/Mature Tags** (`v1.0+`, `production`, `stable`): + - **Your Approach**: Emphasize robustness, testing, and maintainability + - **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization + - **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths + - **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability" + - **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."* + +### Advanced Workflow (Tag-Based & PRD-Driven) + +**When to Transition**: Recognize when the project has evolved (or has initiated a project which existing code) beyond simple task management. Look for these indicators: +- User mentions teammates or collaboration needs +- Project has grown to 15+ tasks with mixed priorities +- User creates feature branches or mentions major initiatives +- User initializes Taskmaster on an existing, complex codebase +- User describes large features that would benefit from dedicated planning + +**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning. + +#### Master List Strategy (High-Value Focus) +Once you transition to tag-based workflows, the `master` tag should ideally contain only: +- **High-level deliverables** that provide significant business value +- **Major milestones** and epic-level features +- **Critical infrastructure** work that affects the entire project +- **Release-blocking** items + +**What NOT to put in master**: +- Detailed implementation subtasks (these go in feature-specific tags' parent tasks) +- Refactoring work (create dedicated tags like `refactor-auth`) +- Experimental features (use `experiment-*` tags) +- Team member-specific tasks (use person-specific tags) + +#### PRD-Driven Feature Development + +**For New Major Features**: +1. **Identify the Initiative**: When user describes a significant feature +2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"` +3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt` +4. **Parse & Prepare**: + - `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]` + - `analyze_project_complexity --tag=feature-[name] --research` + - `expand_all --tag=feature-[name] --research` +5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag + +**For Existing Codebase Analysis**: +When users initialize Taskmaster on existing projects: +1. **Codebase Discovery**: Use your native tools for producing deep context about the code base. 
You may use `research` tool with `--tree` and `--files` to collect up to date information using the existing architecture as context. +2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features +3. **Strategic PRD Creation**: Co-author PRDs that include: + - Current state analysis (based on your codebase research) + - Proposed improvements or new features + - Implementation strategy considering existing code +4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.) +5. **Master List Curation**: Keep only the most valuable initiatives in master + +The parse-prd's `--append` flag enables the user to parse multple PRDs within tags or across tags. PRDs should be focused and the number of tasks they are parsed into should be strategically chosen relative to the PRD's complexity and level of detail. + +### Workflow Transition Examples + +**Example 1: Simple → Team-Based** +``` +User: "Alice is going to help with the API work" +Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together." +Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice" +``` + +**Example 2: Simple → PRD-Driven** +``` +User: "I want to add a complete user dashboard with analytics, user management, and reporting" +Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements." +Actions: +1. add_tag feature-dashboard --description="User dashboard with analytics and management" +2. Collaborate on PRD creation +3. parse_prd dashboard-prd.txt --tag=feature-dashboard +4. Add high-level "User Dashboard" task to master +``` + +**Example 3: Existing Project → Strategic Planning** +``` +User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it." +Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements." +Actions: +1. research "Current React app architecture and improvement opportunities" --tree --files=src/ +2. Collaborate on improvement PRD based on findings +3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.) +4. Keep only major improvement initiatives in master +``` + +--- + +## Primary Interaction: MCP Server vs. CLI + +Taskmaster offers two primary ways to interact: + +1. **MCP Server (Recommended for Integrated Tools)**: + - For AI agents and integrated development environments (like Roo Code), interacting via the **MCP server is the preferred method**. + - The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`). + - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing. + - Refer to [`mcp.md`](mdc:.roo/rules/mcp.md) for details on the MCP architecture and available tools. + - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.md`](mdc:.roo/rules/taskmaster.md). + - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change. 
+ - **Note**: MCP tools fully support tagged task lists with complete tag management capabilities. + +2. **`task-master` CLI (For Users & Fallback)**: + - The global `task-master` command provides a user-friendly interface for direct terminal interaction. + - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP. + - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`. + - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`). + - Refer to [`taskmaster.md`](mdc:.roo/rules/taskmaster.md) for a detailed command reference. + - **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration. + +## How the Tag System Works (For Your Reference) + +- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0". +- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption. +- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag. +- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`. +- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to [`taskmaster.md`](mdc:.roo/rules/taskmaster.md) for a full command list. + +--- + +## Task Complexity Analysis + +- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) for comprehensive analysis +- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) for a formatted, readable version. +- Focus on tasks with highest complexity scores (8-10) for detailed breakdown +- Use analysis results to determine appropriate subtask allocation +- Note that reports are automatically used by the `expand_task` tool/command + +## Task Breakdown Process + +- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks. +- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations. +- Add `--research` flag to leverage Perplexity AI for research-backed expansion. +- Add `--force` flag to clear existing subtasks before generating new ones (default is to append). +- Use `--prompt="<context>"` to provide additional context when needed. +- Review and adjust generated subtasks as necessary. +- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`. +- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`. + +## Implementation Drift Handling + +- When implementation differs significantly from planned approach +- When future tasks need modification due to current implementation choices +- When new dependencies or requirements emerge +- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks. +- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' 
--research` to update a single specific task. + +## Task Status Management + +- Use 'pending' for tasks ready to be worked on +- Use 'done' for completed and verified tasks +- Use 'deferred' for postponed tasks +- Add custom status values as needed for project-specific workflows + +## Task Structure Fields + +- **id**: Unique identifier for the task (Example: `1`, `1.1`) +- **title**: Brief, descriptive title (Example: `"Initialize Repo"`) +- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) +- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) +- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`) + - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) + - This helps quickly identify which prerequisite tasks are blocking work +- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) +- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) +- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) +- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) +- Refer to task structure details (previously linked to `tasks.md`). + +## Configuration Management (Updated) + +Taskmaster configuration is managed through two main mechanisms: + +1. **`.taskmaster/config.json` File (Primary):** + * Located in the project root directory. + * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc. + * **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and `tags` section for tag management configuration. + * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing. + * **View/Set specific models via `task-master models` command or `models` MCP tool.** + * Created automatically when you run `task-master models --setup` for the first time or during tagged system migration. + +2. **Environment Variables (`.env` / `mcp.json`):** + * Used **only** for sensitive API keys and specific endpoint URLs. + * Place API keys (one per provider) in a `.env` file in the project root for CLI usage. + * For MCP/Roo Code integration, configure these keys in the `env` section of `.roo/mcp.json`. + * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.md`). + +3. **`.taskmaster/state.json` File (Tagged System State):** + * Tracks current tag context and migration status. + * Automatically created during tagged system migration. + * Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`. + +**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool. +**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.roo/mcp.json`. +**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project. 
+ +## Determining the Next Task + +- Run `next_task` / `task-master next` to show the next task to work on. +- The command identifies tasks with all dependencies satisfied. +- Tasks are prioritized by priority level, dependency count, and ID. +- The command shows comprehensive task information including: + - Basic task details and description + - Implementation details + - Subtasks (if they exist) + - Contextual suggested actions +- Recommended before starting any new development work. +- Respects your project's dependency structure. +- Ensures tasks are completed in the appropriate sequence. +- Provides ready-to-use commands for common task actions. + +## Viewing Specific Task Details + +- Run `get_task` / `task-master show <id>` to view a specific task. +- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1). +- Displays comprehensive information similar to the `next` command, but for a specific task. +- For parent tasks, shows all subtasks and their current status. +- For subtasks, shows parent task information and relationship. +- Provides contextual suggested actions appropriate for the specific task. +- Useful for examining task details before implementation or checking status. + +## Managing Task Dependencies + +- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency. +- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency. +- The system prevents circular dependencies and duplicate dependency entries. +- Dependencies are checked for existence before being added or removed. +- Task files are automatically regenerated after dependency changes. +- Dependencies are visualized with status indicators in task listings and files. + +## Task Reorganization + +- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy. +- This command supports several use cases: + - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`) + - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`) + - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`) + - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`) + - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`) + - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`) +- The system includes validation to prevent data loss: + - Allows moving to non-existent IDs by creating placeholder tasks + - Prevents moving to existing task IDs that have content (to avoid overwriting) + - Validates that source tasks exist before attempting to move them +- The system maintains proper parent-child relationships and dependency integrity. +- Task files are automatically regenerated after the move operation. +- This provides greater flexibility in organizing and refining your task structure as project understanding evolves. +- This is especially useful for resolving merge conflicts that arise when teams create tasks on separate branches; such conflicts can often be resolved simply by moving your tasks to new IDs and keeping theirs. + +## Iterative Subtask Implementation + +Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation: + +1.
**Understand the Goal (Preparation):** + * Use `get_task` / `task-master show <subtaskId>` (see [`taskmaster.md`](mdc:.roo/rules/taskmaster.md)) to thoroughly understand the specific goals and requirements of the subtask. + +2. **Initial Exploration & Planning (Iteration 1):** + * This is the first attempt at creating a concrete implementation plan. + * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification. + * Determine the intended code changes (diffs) and their locations. + * Gather *all* relevant details from this exploration phase. + +3. **Log the Plan:** + * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`. + * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`. + +4. **Verify the Plan:** + * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details. + +5. **Begin Implementation:** + * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`. + * Start coding based on the logged plan. + +6. **Refine and Log Progress (Iteration 2+):** + * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches. + * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy. + * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What didn't work...'` to append new findings. + * **Crucially, log:** + * What worked ("fundamental truths" discovered). + * What didn't work and why (to avoid repeating mistakes). + * Specific code snippets or configurations that were successful. + * Decisions made, especially if confirmed with user input. + * Any deviations from the initial plan and the reasoning. + * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors. + +7. **Review & Update Rules (Post-Implementation):** + * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history. + * Identify any new or modified code patterns, conventions, or best practices established during the implementation. + * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.md` and `self_improve.md`). + +8. **Mark Task Complete:** + * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`. + +9. **Commit Changes (If using Git):** + * Stage the relevant code changes and any updated/new rule files (`git add .`). + * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments. 
+ * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`). + * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.md`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one. + +10. **Proceed to Next Subtask:** + * Identify the next subtask (e.g., using `next_task` / `task-master next`). + +## Code Analysis & Refactoring Techniques + +- **Top-Level Function Search**: + - Useful for understanding module structure or planning refactors. + - Use grep/ripgrep to find exported functions/constants: + `rg "export (async function|function|const) \w+"` or similar patterns. + - Can help compare functions between files during migrations or identify potential naming conflicts. + +--- +*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.* \ No newline at end of file diff --git a/packages/.roo/rules/roo_rules.md b/packages/.roo/rules/roo_rules.md new file mode 100644 index 0000000..cec3c64 --- /dev/null +++ b/packages/.roo/rules/roo_rules.md @@ -0,0 +1,53 @@ +--- +description: Guidelines for creating and maintaining Roo Code rules to ensure consistency and effectiveness. +globs: .roo/rules/*.md +alwaysApply: true +--- + +- **Required Rule Structure:** + ```markdown + --- + description: Clear, one-line description of what the rule enforces + globs: path/to/files/*.ext, other/path/**/* + alwaysApply: boolean + --- + + - **Main Points in Bold** + - Sub-points with details + - Examples and explanations + ``` + +- **File References:** + - Use `[filename](mdc:path/to/file)` ([filename](mdc:filename)) to reference files + - Example: [prisma.md](mdc:.roo/rules/prisma.md) for rule references + - Example: [schema.prisma](mdc:prisma/schema.prisma) for code references + +- **Code Examples:** + - Use language-specific code blocks + ```typescript + // ✅ DO: Show good examples + const goodExample = true; + + // ❌ DON'T: Show anti-patterns + const badExample = false; + ``` + +- **Rule Content Guidelines:** + - Start with high-level overview + - Include specific, actionable requirements + - Show examples of correct implementation + - Reference existing code when possible + - Keep rules DRY by referencing other rules + +- **Rule Maintenance:** + - Update rules when new patterns emerge + - Add examples from actual codebase + - Remove outdated patterns + - Cross-reference related rules + +- **Best Practices:** + - Use bullet points for clarity + - Keep descriptions concise + - Include both DO and DON'T examples + - Reference actual code over theoretical examples + - Use consistent formatting across rules \ No newline at end of file diff --git a/packages/.roo/rules/self_improve.md b/packages/.roo/rules/self_improve.md new file mode 100644 index 0000000..e3af95e --- /dev/null +++ b/packages/.roo/rules/self_improve.md @@ -0,0 +1,72 @@ +--- +description: Guidelines for continuously improving Roo Code rules based on emerging code patterns and best practices. 
+globs: **/* +alwaysApply: true +--- + +- **Rule Improvement Triggers:** + - New code patterns not covered by existing rules + - Repeated similar implementations across files + - Common error patterns that could be prevented + - New libraries or tools being used consistently + - Emerging best practices in the codebase + +- **Analysis Process:** + - Compare new code with existing rules + - Identify patterns that should be standardized + - Look for references to external documentation + - Check for consistent error handling patterns + - Monitor test patterns and coverage + +- **Rule Updates:** + - **Add New Rules When:** + - A new technology/pattern is used in 3+ files + - Common bugs could be prevented by a rule + - Code reviews repeatedly mention the same feedback + - New security or performance patterns emerge + + - **Modify Existing Rules When:** + - Better examples exist in the codebase + - Additional edge cases are discovered + - Related rules have been updated + - Implementation details have changed + +- **Example Pattern Recognition:** + ```typescript + // If you see repeated patterns like: + const data = await prisma.user.findMany({ + select: { id: true, email: true }, + where: { status: 'ACTIVE' } + }); + + // Consider adding to [prisma.md](mdc:.roo/rules/prisma.md): + // - Standard select fields + // - Common where conditions + // - Performance optimization patterns + ``` + +- **Rule Quality Checks:** + - Rules should be actionable and specific + - Examples should come from actual code + - References should be up to date + - Patterns should be consistently enforced + +- **Continuous Improvement:** + - Monitor code review comments + - Track common development questions + - Update rules after major refactors + - Add links to relevant documentation + - Cross-reference related rules + +- **Rule Deprecation:** + - Mark outdated patterns as deprecated + - Remove rules that no longer apply + - Update references to deprecated rules + - Document migration paths for old patterns + +- **Documentation Updates:** + - Keep examples synchronized with code + - Update references to external docs + - Maintain links between related rules + - Document breaking changes +Follow [cursor_rules.md](mdc:.roo/rules/cursor_rules.md) for proper rule formatting and structure. diff --git a/packages/.roo/rules/taskmaster.md b/packages/.roo/rules/taskmaster.md new file mode 100644 index 0000000..1e64633 --- /dev/null +++ b/packages/.roo/rules/taskmaster.md @@ -0,0 +1,557 @@ +--- +description: Comprehensive reference for Taskmaster MCP tools and CLI commands. +globs: **/* +alwaysApply: true +--- +# Taskmaster Tool & Command Reference + +This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Roo Code, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback. + +**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. + +**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`. + +**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. 
This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag. + +--- + +## Initialization & Setup + +### 1. Initialize Project (`init`) + +* **MCP Tool:** `initialize_project` +* **CLI Command:** `task-master init [options]` +* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.` +* **Key CLI Options:** + * `--name <name>`: `Set the name for your project in Taskmaster's configuration.` + * `--description <text>`: `Provide a brief description for your project.` + * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.` + * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.` +* **Usage:** Run this once at the beginning of a new project. +* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.` +* **Key MCP Parameters/Options:** + * `projectName`: `Set the name for your project.` (CLI: `--name <name>`) + * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`) + * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`) + * `authorName`: `Author name.` (CLI: `--author <author>`) + * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`) + * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`) + * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`) +* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Roo Code. Operates on the current working directory of the MCP server. +* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt. + +### 2. Parse PRD (`parse_prd`) + +* **MCP Tool:** `parse_prd` +* **CLI Command:** `task-master parse-prd [file] [options]` +* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.` +* **Key Parameters/Options:** + * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`) + * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`) + * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`) + * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`) +* **Usage:** Useful for bootstrapping a project from an existing requirements document. +* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.
+* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`. + +--- + +## AI Model Configuration + +### 2. Manage Models (`models`) +* **MCP Tool:** `models` +* **CLI Command:** `task-master models [options]` +* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.` +* **Key MCP Parameters/Options:** + * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`) + * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`) + * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`) + * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`) + * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`) + * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically) + * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically) +* **Key CLI Options:** + * `--set-main <model_id>`: `Set the primary model.` + * `--set-research <model_id>`: `Set the research model.` + * `--set-fallback <model_id>`: `Set the fallback model.` + * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).` + * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.` + * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).` + * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.` +* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`. +* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`. +* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live. 
+* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them. +* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80. +* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback. + +--- + +## Task Listing & Viewing + +### 3. Get Tasks (`get_tasks`) + +* **MCP Tool:** `get_tasks` +* **CLI Command:** `task-master list [options]` +* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.` +* **Key Parameters/Options:** + * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`) + * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Get an overview of the project status, often used at the start of a work session. + +### 4. Get Next Task (`next_task`) + +* **MCP Tool:** `next_task` +* **CLI Command:** `task-master next [options]` +* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`) +* **Usage:** Identify what to work on next according to the plan. + +### 5. Get Task Details (`get_task`) + +* **MCP Tool:** `get_task` +* **CLI Command:** `task-master show [id] [options]` +* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`) + * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown. +* **CRITICAL INFORMATION** If you need to collect information from multiple tasks, use comma-separated IDs (i.e. 1,2,3) to receive an array of tasks. Do not needlessly get tasks one at a time if you need to get many as that is wasteful. + +--- + +## Task Creation & Modification + +### 6. Add Task (`add_task`) + +* **MCP Tool:** `add_task` +* **CLI Command:** `task-master add-task [options]` +* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.` +* **Key Parameters/Options:** + * `prompt`: `Required. 
Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`) + * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`) + * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`) + * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Quickly add newly identified tasks during development. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 7. Add Subtask (`add_subtask`) + +* **MCP Tool:** `add_subtask` +* **CLI Command:** `task-master add-subtask [options]` +* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.` +* **Key Parameters/Options:** + * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`) + * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`) + * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`) + * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`) + * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`) + * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`) + * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Break down tasks manually or reorganize existing tasks. + +### 8. Update Tasks (`update`) + +* **MCP Tool:** `update` +* **CLI Command:** `task-master update [options]` +* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.` +* **Key Parameters/Options:** + * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`) + * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to operate on. 
Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 9. Update Task (`update_task`) + +* **MCP Tool:** `update_task` +* **CLI Command:** `task-master update-task [options]` +* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`) + * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 10. Update Subtask (`update_subtask`) + +* **MCP Tool:** `update_subtask` +* **CLI Command:** `task-master update-subtask [options]` +* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 11. 
Set Task Status (`set_task_status`) + +* **MCP Tool:** `set_task_status` +* **CLI Command:** `task-master set-status [options]` +* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`) + * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Mark progress as tasks move through the development cycle. + +### 12. Remove Task (`remove_task`) + +* **MCP Tool:** `remove_task` +* **CLI Command:** `task-master remove-task [options]` +* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`) + * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project. +* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks. + +--- + +## Task Structure & Breakdown + +### 13. Expand Task (`expand_task`) + +* **MCP Tool:** `expand_task` +* **CLI Command:** `task-master expand [options]` +* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.` +* **Key Parameters/Options:** + * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`) + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`) + * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`) + * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
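+
+A usage sketch (task ID `8` and the prompt text are hypothetical; the flags are those documented above):
+
+```bash
+# Let the complexity report drive the subtask count; use research-backed generation.
+task-master expand --id=8 --research
+
+# Or set an explicit count and replace any existing subtasks instead of appending.
+task-master expand --id=8 --num=5 --force --prompt="Focus on error handling"
+```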
+ +### 14. Expand All Tasks (`expand_all`) + +* **MCP Tool:** `expand_all` +* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag) +* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.` +* **Key Parameters/Options:** + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`) + * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`) + * `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 15. Clear Subtasks (`clear_subtasks`) + +* **MCP Tool:** `clear_subtasks` +* **CLI Command:** `task-master clear-subtasks [options]` +* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.` +* **Key Parameters/Options:** + * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`) + * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement. + +### 16. Remove Subtask (`remove_subtask`) + +* **MCP Tool:** `remove_subtask` +* **CLI Command:** `task-master remove-subtask [options]` +* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`) + * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task. + +### 17.
Move Task (`move_task`) + +* **MCP Tool:** `move_task` +* **CLI Command:** `task-master move [options]` +* **Description:** `Move a task or subtask to a new position within the task hierarchy.` +* **Key Parameters/Options:** + * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`) + * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like: + * Moving a task to become a subtask + * Moving a subtask to become a standalone task + * Moving a subtask to a different parent + * Reordering subtasks within the same parent + * Moving a task to a new, non-existent ID (automatically creates placeholders) + * Moving multiple tasks at once with comma-separated IDs +* **Validation Features:** + * Allows moving tasks to non-existent destination IDs (creates placeholder tasks) + * Prevents moving to existing task IDs that already have content (to avoid overwriting) + * Validates that source tasks exist before attempting to move them + * Maintains proper parent-child relationships +* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3. +* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions. +* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches. + +--- + +## Dependency Management + +### 18. Add Dependency (`add_dependency`) + +* **MCP Tool:** `add_dependency` +* **CLI Command:** `task-master add-dependency [options]` +* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`) +* **Usage:** Establish the correct order of execution between tasks. + +### 19. Remove Dependency (`remove_dependency`) + +* **MCP Tool:** `remove_dependency` +* **CLI Command:** `task-master remove-dependency [options]` +* **Description:** `Remove a dependency relationship between two Taskmaster tasks.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`) + * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Update task relationships when the order of execution changes. + +### 20. 
Validate Dependencies (`validate_dependencies`) + +* **MCP Tool:** `validate_dependencies` +* **CLI Command:** `task-master validate-dependencies [options]` +* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Audit the integrity of your task dependencies. + +### 21. Fix Dependencies (`fix_dependencies`) + +* **MCP Tool:** `fix_dependencies` +* **CLI Command:** `task-master fix-dependencies [options]` +* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Clean up dependency errors automatically. + +--- + +## Analysis & Reporting + +### 22. Analyze Project Complexity (`analyze_project_complexity`) + +* **MCP Tool:** `analyze_project_complexity` +* **CLI Command:** `task-master analyze-complexity [options]` +* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.` +* **Key Parameters/Options:** + * `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`) + * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`) + * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`) + * `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before breaking down tasks to identify which ones need the most attention. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 23. View Complexity Report (`complexity_report`) + +* **MCP Tool:** `complexity_report` +* **CLI Command:** `task-master complexity-report [options]` +* **Description:** `Display the task complexity analysis report in a readable format.` +* **Key Parameters/Options:** + * `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`) +* **Usage:** Review and understand the complexity analysis results after running analyze-complexity. + +--- + +## File Management + +### 24. 
Generate Task Files (`generate`) + +* **MCP Tool:** `generate` +* **CLI Command:** `task-master generate [options]` +* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.` +* **Key Parameters/Options:** + * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`) + * `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically. + +--- + +## AI-Powered Research + +### 25. Research (`research`) + +* **MCP Tool:** `research` +* **CLI Command:** `task-master research [options]` +* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.` +* **Key Parameters/Options:** + * `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`) + * `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`) + * `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`) + * `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`) + * `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`) + * `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`) + * `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`) + * `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`) + * `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`) + * `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`) + * `projectRoot`: `The directory of the project. 
Must be an absolute path.` (CLI: Determined automatically) +* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to: + * Get fresh information beyond knowledge cutoff dates + * Research latest best practices, library updates, security patches + * Find implementation examples for specific technologies + * Validate approaches against current industry standards + * Get contextual advice based on project files and tasks +* **When to Consider Using Research:** + * **Before implementing any task** - Research current best practices + * **When encountering new technologies** - Get up-to-date implementation guidance (libraries, apis, etc) + * **For security-related tasks** - Find latest security recommendations + * **When updating dependencies** - Research breaking changes and migration guides + * **For performance optimization** - Get current performance best practices + * **When debugging complex issues** - Research known solutions and workarounds +* **Research + Action Pattern:** + * Use `research` to gather fresh information + * Use `update_subtask` to commit findings with timestamps + * Use `update_task` to incorporate research into task details + * Use `add_task` with research flag for informed task creation +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments. + +--- + +## Tag Management + +This new suite of commands allows you to manage different task contexts (tags). + +### 26. List Tags (`tags`) + +* **MCP Tool:** `list_tags` +* **CLI Command:** `task-master tags [options]` +* **Description:** `List all available tags with task counts, completion status, and other metadata.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + * `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`) + +### 27. Add Tag (`add_tag`) + +* **MCP Tool:** `add_tag` +* **CLI Command:** `task-master add-tag <tagName> [options]` +* **Description:** `Create a new, empty tag context, or copy tasks from another tag.` +* **Key Parameters/Options:** + * `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional) + * `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`) + * `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`) + * `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`) + * `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 28. Delete Tag (`delete_tag`) + +* **MCP Tool:** `delete_tag` +* **CLI Command:** `task-master delete-tag <tagName> [options]` +* **Description:** `Permanently delete a tag and all of its associated tasks.` +* **Key Parameters/Options:** + * `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional) + * `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 29. 
Use Tag (`use_tag`) + +* **MCP Tool:** `use_tag` +* **CLI Command:** `task-master use-tag <tagName>` +* **Description:** `Switch your active task context to a different tag.` +* **Key Parameters/Options:** + * `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 30. Rename Tag (`rename_tag`) + +* **MCP Tool:** `rename_tag` +* **CLI Command:** `task-master rename-tag <oldName> <newName>` +* **Description:** `Rename an existing tag.` +* **Key Parameters/Options:** + * `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional) + * `newName`: `The new name for the tag.` (CLI: `<newName>` positional) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) + +### 31. Copy Tag (`copy_tag`) + +* **MCP Tool:** `copy_tag` +* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]` +* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.` +* **Key Parameters/Options:** + * `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional) + * `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional) + * `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`) + +--- + +## Miscellaneous + +### 32. Sync Readme (`sync-readme`) -- experimental + +* **MCP Tool:** N/A +* **CLI Command:** `task-master sync-readme [options]` +* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.` +* **Key Parameters/Options:** + * `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`) + * `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`) + +--- + +## Environment Variables Configuration (Updated) + +Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`. + +Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL: + +* **API Keys (Required for corresponding provider):** + * `ANTHROPIC_API_KEY` + * `PERPLEXITY_API_KEY` + * `OPENAI_API_KEY` + * `GOOGLE_API_KEY` + * `MISTRAL_API_KEY` + * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too) + * `OPENROUTER_API_KEY` + * `XAI_API_KEY` + * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too) +* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):** + * `AZURE_OPENAI_ENDPOINT` + * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`) + +**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.roo/mcp.json`** file (for MCP/Roo Code integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool. + +--- + +For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.roo/rules/dev_workflow.md). 
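+
+As a closing illustration, a typical day-to-day CLI loop (task ID `15` is hypothetical; the commands are those documented above):
+
+```bash
+task-master next                                     # pick the next unblocked task
+task-master show 15                                  # review details and test strategy
+task-master set-status --id=15 --status=in-progress
+# ...implement, logging findings with update-subtask along the way...
+task-master set-status --id=15 --status=done
+```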
diff --git a/packages/.roomodes b/packages/.roomodes new file mode 100644 index 0000000..289a422 --- /dev/null +++ b/packages/.roomodes @@ -0,0 +1,63 @@ +{ + "customModes": [ + { + "slug": "boomerang", + "name": "Boomerang", + "roleDefinition": "You are Roo, a strategic workflow orchestrator who coordinates complex tasks by delegating them to appropriate specialized modes. You have a comprehensive understanding of each mode's capabilities and limitations, as well as your own. With the information given by the user and other modes in shared context, you can effectively break down complex problems into discrete tasks that can be solved by different specialists using the `taskmaster-ai` system for task and context management.", + "customInstructions": "Your role is to coordinate complex workflows by delegating tasks to specialized modes, using `taskmaster-ai` as the central hub for task definition, progress tracking, and context management. \nAs an orchestrator, you should:\n\n1. When given a complex task, use contextual information (which gets updated frequently) to break it down into logical subtasks that can be delegated to appropriate specialized modes.\n\n2. For each subtask, use the `new_task` tool to delegate. Choose the most appropriate mode for the subtask's specific goal and provide comprehensive instructions in the `message` parameter. \nThese instructions must include:\n* All necessary context from the parent task or previous subtasks required to complete the work.\n* A clearly defined scope, specifying exactly what the subtask should accomplish.\n* An explicit statement that the subtask should *only* perform the work outlined in these instructions and not deviate.\n* An instruction for the subtask to signal completion by using the `attempt_completion` tool, providing a thorough summary of the outcome in the `result` parameter, keeping in mind that this summary will be the source of truth used to further relay this information to other tasks and for you to keep track of what was completed on this project.\n\n3. Track and manage the progress of all subtasks. When a subtask is completed, acknowledge its results and determine the next steps.\n\n4. Help the user understand how the different subtasks fit together in the overall workflow. Provide clear reasoning about why you're delegating specific tasks to specific modes.\n\n5. Ask clarifying questions when necessary to better understand how to break down complex tasks effectively. If a request seems complex, delegate to the Architect mode to plan the breakdown.\n\n6. Use subtasks to maintain clarity. If a request significantly shifts focus or requires a different expertise (mode), consider creating a subtask rather than overloading the current one.", + "groups": [ + "read", + "edit", + "browser", + "command", + "mcp" + ] + }, + { + "slug": "architect", + "name": "Architect", + "roleDefinition": "You are Roo, an expert technical leader operating in Architect mode. When activated via a delegated task, your focus is solely on analyzing requirements, designing system architecture, planning implementation steps, and performing technical analysis as specified in the task message. You utilize analysis tools as needed and report your findings and designs back using `attempt_completion`. You do not deviate from the delegated task scope.", + "customInstructions": "1. Do some information gathering (for example using read_file or search_files) to get more context about the task.\n\n2.
You should also ask the user clarifying questions to get a better understanding of the task.\n\n3. Once you've gained more context about the user's request, you should create a detailed plan for how to accomplish the task. Include Mermaid diagrams if they help make your plan clearer.\n\n4. Ask the user if they are pleased with this plan, or if they would like to make any changes. Think of this as a brainstorming session where you can discuss the task and plan the best way to accomplish it.\n\n5. Once the user confirms the plan, ask them if they'd like you to write it to a markdown file.\n\n6. Use the switch_mode tool to request that the user switch to another mode to implement the solution.", + "groups": [ + "read", + ["edit", { "fileRegex": "\\.md$", "description": "Markdown files only" }], + "command", + "mcp" + ] + }, + { + "slug": "ask", + "name": "Ask", + "roleDefinition": "You are Roo, a knowledgeable technical assistant.\nWhen activated by another mode via a delegated task, your focus is to research, analyze, and provide clear, concise answers or explanations based *only* on the specific information requested in the delegation message. Use available tools for information gathering and report your findings back using `attempt_completion`.", + "customInstructions": "You can analyze code, explain concepts, and access external resources. Make sure to answer the user's questions and don't rush to switch to implementing code. Include Mermaid diagrams if they help make your response clearer.", + "groups": [ + "read", + "browser", + "mcp" + ] + }, + { + "slug": "debug", + "name": "Debug", + "roleDefinition": "You are Roo, an expert software debugger specializing in systematic problem diagnosis and resolution. When activated by another mode, your task is to meticulously analyze the provided debugging request (potentially referencing Taskmaster tasks, logs, or metrics), use diagnostic tools as instructed to investigate the issue, identify the root cause, and report your findings and recommended next steps back via `attempt_completion`. You focus solely on diagnostics within the scope defined by the delegated task.", + "customInstructions": "Reflect on 5-7 different possible sources of the problem, distill those down to 1-2 most likely sources, and then add logs to validate your assumptions. Explicitly ask the user to confirm the diagnosis before fixing the problem.", + "groups": [ + "read", + "edit", + "command", + "mcp" + ] + }, + { + "slug": "test", + "name": "Test", + "roleDefinition": "You are Roo, an expert software tester. Your primary focus is executing testing tasks delegated to you by other modes.\nAnalyze the provided scope and context (often referencing a Taskmaster task ID and its `testStrategy`), develop test plans if needed, execute tests diligently, and report comprehensive results (pass/fail, bugs, coverage) back using `attempt_completion`. You operate strictly within the delegated task's boundaries.", + "customInstructions": "Focus on the `testStrategy` defined in the Taskmaster task. Develop and execute test plans accordingly. 
Report results clearly, including pass/fail status, bug details, and coverage information.", + "groups": [ + "read", + "command", + "mcp" + ] + } + ] +} \ No newline at end of file diff --git a/packages/.taskmaster/config.json b/packages/.taskmaster/config.json new file mode 100644 index 0000000..199491a --- /dev/null +++ b/packages/.taskmaster/config.json @@ -0,0 +1,34 @@ +{ + "models": { + "main": { + "provider": "anthropic", + "modelId": "claude-3-7-sonnet-20250219", + "maxTokens": 120000, + "temperature": 0.2 + }, + "research": { + "provider": "perplexity", + "modelId": "sonar-pro", + "maxTokens": 8700, + "temperature": 0.1 + }, + "fallback": { + "provider": "anthropic", + "modelId": "claude-3-5-sonnet-20240620", + "maxTokens": 8192, + "temperature": 0.1 + } + }, + "global": { + "logLevel": "info", + "debug": false, + "defaultSubtasks": 5, + "defaultPriority": "medium", + "projectName": "Taskmaster", + "ollamaBaseURL": "http://localhost:11434/api", + "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", + "defaultTag": "master", + "azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/", + "userId": "1234567890" + } +} \ No newline at end of file diff --git a/packages/.taskmaster/docs/prd.txt b/packages/.taskmaster/docs/prd.txt new file mode 100644 index 0000000..d674ae1 --- /dev/null +++ b/packages/.taskmaster/docs/prd.txt @@ -0,0 +1,272 @@ +# Product Requirements Document: Guile Home Lab Tool with MCP Integration + +## Executive Summary + +Migrate the existing Bash-based home lab management tool to GNU Guile Scheme and implement a Model Context Protocol (MCP) server for seamless VS Code/GitHub Copilot integration. This migration will provide improved maintainability, error handling, and advanced AI-assisted operations through direct IDE integration. + +## Project Scope + +### Primary Objectives +1. **Core Tool Migration**: Replace the current Bash `lab` command with a functional equivalent written in GNU Guile Scheme +2. **MCP Server Implementation**: Create a Model Context Protocol server that exposes home lab operations to AI assistants +3. **VS Code Extension**: Develop a TypeScript extension that integrates the MCP server with VS Code and GitHub Copilot +4. **Enhanced Functionality**: Add real-time monitoring, infrastructure discovery, and AI-assisted operations +5. **NixOS Integration**: Ensure seamless integration with existing NixOS infrastructure and deployment workflows + +### Secondary Objectives +1. **Advanced Error Handling**: Implement robust error recovery and reporting +2. **Concurrent Operations**: Support parallel deployment and monitoring tasks +3. **Web Interface**: Optional web dashboard for infrastructure overview +4. **Plugin Architecture**: Extensible system for adding new capabilities + +## Current System Analysis + +### Existing Infrastructure +- **Machines**: congenital-optimist (local), sleeper-service (NFS), grey-area (Git), reverse-proxy +- **Services**: Ollama AI (8B/7B models), Forgejo, Jellyfin, Calibre-web, SearXNG, NFS +- **Deployment**: NixOS flakes with manual SSH/rsync deployment +- **Monitoring**: Basic connectivity checking via SSH + +### Current Tool Capabilities +- Multi-machine deployment (boot/test/switch modes) +- Infrastructure status monitoring +- SSH-based remote operations +- Flake updates and hybrid deployment strategies +- Color-coded logging and error reporting + +## Technical Requirements + +### Architecture Requirements +1. **Functional Programming**: Pure functions for core logic, side effects isolated +2. 
**Module Structure**: Clean separation of concerns (lab/, mcp/, utils/) +3. **Data-Driven Design**: Configuration and state as immutable data structures +4. **Protocol Compliance**: Full MCP 2024-11-05 specification support +5. **Error Resilience**: Graceful degradation and automatic recovery + +### Performance Requirements +1. **Startup Time**: < 500ms for basic commands +2. **Deployment Speed**: Maintain or improve current deployment times +3. **Memory Usage**: < 50MB baseline, < 200MB during operations +4. **Concurrent Operations**: Support 3+ parallel machine deployments +5. **Response Time**: MCP requests < 100ms for simple operations + +### Compatibility Requirements +1. **NixOS Integration**: Native integration with NixOS flakes and services +2. **Existing Workflows**: Drop-in replacement for current `lab` command +3. **SSH Configuration**: Use existing SSH keys and connection patterns +4. **Tool Dependencies**: Leverage existing system tools (nixos-rebuild, ssh, etc.) + +## Functional Specifications + +### Core Tool Features + +#### 1. Machine Management +- **Discovery**: Automatic detection of home lab machines +- **Health Monitoring**: Real-time status checking and metrics collection +- **Deployment**: Multiple deployment strategies (local, SSH, deploy-rs) +- **Configuration**: Machine-specific configurations and role definitions + +#### 2. Service Operations +- **Service Discovery**: Automatic detection of running services +- **Status Monitoring**: Health checks and performance metrics +- **Log Management**: Centralized log collection and analysis +- **Backup Coordination**: Automated backup and restore operations + +#### 3. Infrastructure Operations +- **Network Topology**: Discovery and visualization of network structure +- **Security Scanning**: Automated security checks and vulnerability assessment +- **Resource Monitoring**: CPU, memory, disk, and network utilization +- **Performance Analysis**: Historical metrics and trend analysis + +### MCP Server Features + +#### 1. Core MCP Tools +- `deploy_machine`: Deploy configurations to specific machines +- `check_infrastructure`: Comprehensive infrastructure health check +- `monitor_services`: Real-time service monitoring and alerting +- `update_system`: System updates with rollback capability +- `backup_data`: Automated backup operations +- `restore_system`: System restore from backups + +#### 2. Resource Endpoints +- `homelab://machines/{machine}`: Machine configuration and status +- `homelab://services/{service}`: Service details and logs +- `homelab://network/topology`: Network structure and connectivity +- `homelab://metrics/{type}`: Performance and monitoring data +- `homelab://logs/{service}`: Centralized log access + +#### 3. AI Assistant Integration +- **Context Awareness**: AI understands current infrastructure state +- **Intelligent Suggestions**: Proactive recommendations for optimization +- **Natural Language Operations**: Execute commands via natural language +- **Documentation Integration**: Automatic documentation generation + +### VS Code Extension Features + +#### 1. MCP Client Implementation +- **Connection Management**: Robust MCP server connection handling +- **Request Routing**: Efficient request/response handling +- **Error Recovery**: Automatic reconnection and retry logic +- **Performance Monitoring**: Track MCP server performance metrics + +#### 2. 
User Interface +- **Status Bar Integration**: Real-time infrastructure status display +- **Command Palette**: Quick access to home lab operations +- **Explorer Integration**: File tree integration for configuration files +- **Output Channels**: Structured output display for operations + +#### 3. GitHub Copilot Integration +- **Context Enhancement**: Provide infrastructure context to Copilot +- **Code Suggestions**: Infrastructure-aware code completions +- **Documentation**: Automated documentation generation +- **Best Practices**: Enforce home lab coding standards + +## Implementation Architecture + +### Module Structure +``` +lab/ # Core home lab functionality +├── core/ # Essential operations +├── machines/ # Machine-specific operations +├── deployment/ # Deployment strategies +├── monitoring/ # Health and performance monitoring +└── config/ # Configuration management + +mcp/ # Model Context Protocol implementation +├── server/ # Core MCP server +├── tools/ # MCP tool implementations +└── resources/ # MCP resource endpoints + +utils/ # Shared utilities +├── ssh/ # SSH operations +├── json/ # JSON processing +├── logging/ # Logging and output +└── config/ # Configuration parsing +``` + +### Key Libraries and Dependencies +1. **Tier 1 (Essential)**: + - `guile-ssh`: SSH operations and remote execution + - `guile-json`: JSON processing for MCP protocol + - `scheme-json-rpc`: JSON-RPC implementation + - `guile-webutils`: HTTP server functionality + +2. **Tier 2 (Enhanced Features)**: + - `guile-websocket`: WebSocket support for real-time updates + - `artanis`: Web framework for dashboard + - `guile-curl`: HTTP client operations + - `guile-config`: Advanced configuration management + +3. **Tier 3 (Future Enhancements)**: + - `guile-daemon`: Background process management + - `guile-ncurses`: Terminal user interface + - `g-wrap`: C library integration + - `guile-dbi`: Database connectivity + +## Quality Requirements + +### Testing Strategy +1. **Unit Testing**: Test individual functions with srfi-64 +2. **Integration Testing**: Test module interactions +3. **End-to-End Testing**: Full workflow validation +4. **Performance Testing**: Benchmark against current tool +5. **MCP Compliance**: Protocol conformance testing + +### Security Requirements +1. **SSH Key Management**: Secure handling of authentication credentials +2. **Input Validation**: Comprehensive input sanitization +3. **Privilege Separation**: Minimal privilege operations +4. **Audit Logging**: Complete operation audit trail +5. **Secure Communication**: TLS for MCP protocol when needed + +### Documentation Requirements +1. **API Documentation**: Complete MCP tool and resource documentation +2. **User Guide**: Comprehensive usage instructions +3. **Developer Guide**: Architecture and extension documentation +4. **Migration Guide**: Transition instructions from Bash tool +5. 
**Troubleshooting**: Common issues and solutions + +## Success Criteria + +### Functional Success +- [ ] Complete feature parity with existing Bash tool +- [ ] MCP server passes all protocol compliance tests +- [ ] VS Code extension successfully integrates with GitHub Copilot +- [ ] 99.9% uptime for MCP server operations +- [ ] Zero data loss during migration + +### Performance Success +- [ ] 20% improvement in deployment speed +- [ ] 50% reduction in error rates +- [ ] 30% faster infrastructure status checking +- [ ] Sub-second response times for common operations +- [ ] Support for 5+ concurrent operations + +### User Experience Success +- [ ] Seamless transition for existing users +- [ ] Intuitive VS Code integration +- [ ] Comprehensive error messages and recovery suggestions +- [ ] Self-documenting configuration options +- [ ] Minimal learning curve for basic operations + +## Risk Assessment + +### Technical Risks +1. **Learning Curve**: Guile Scheme adoption may slow initial development +2. **Library Maturity**: Some Guile libraries may lack features or have bugs +3. **Performance**: Interpreted language may impact performance for intensive operations +4. **Integration Complexity**: MCP protocol implementation complexity + +### Mitigation Strategies +1. **Incremental Migration**: Gradual replacement of Bash functionality +2. **Fallback Mechanisms**: Ability to call existing Bash tools when needed +3. **Performance Monitoring**: Continuous benchmarking and optimization +4. **Community Support**: Leverage Guile community resources and documentation + +## Timeline and Milestones + +### Phase 1: Foundation (Weeks 1-2) +- Core module structure implementation +- Basic SSH and deployment functionality +- Initial testing framework setup + +### Phase 2: Core Features (Weeks 3-4) +- Complete machine management implementation +- Service monitoring and health checks +- Configuration management system + +### Phase 3: MCP Integration (Weeks 5-6) +- MCP server protocol implementation +- Core MCP tools development +- Resource endpoint implementation + +### Phase 4: VS Code Extension (Weeks 7-8) +- TypeScript extension development +- MCP client implementation +- GitHub Copilot integration + +### Phase 5: Enhancement (Weeks 9-10) +- Advanced monitoring features +- Web dashboard implementation +- Performance optimization + +### Phase 6: Production (Weeks 11-12) +- Comprehensive testing and validation +- Documentation completion +- Migration and deployment + +## Maintenance and Support + +### Ongoing Requirements +1. **Security Updates**: Regular security patch integration +2. **Library Updates**: Keep dependencies current +3. **Feature Enhancement**: Continuous improvement based on usage +4. **Bug Fixes**: Rapid response to issues +5. **Documentation**: Keep documentation current with changes + +### Long-term Evolution +1. **Plugin Ecosystem**: Support for third-party extensions +2. **Cloud Integration**: Support for cloud-based infrastructure +3. **Multi-User Support**: Team collaboration features +4. **AI Enhancement**: Advanced AI-assisted operations +5. 
**Mobile Support**: Mobile access to infrastructure management
diff --git a/packages/.taskmaster/state.json b/packages/.taskmaster/state.json
new file mode 100644
index 0000000..75c98ad
--- /dev/null
+++ b/packages/.taskmaster/state.json
@@ -0,0 +1,6 @@
+{
+  "currentTag": "master",
+  "lastSwitched": "2025-06-15T19:47:47.438Z",
+  "branchTagMapping": {},
+  "migrationNoticeShown": false
+}
\ No newline at end of file
diff --git a/packages/.taskmaster/templates/example_prd.txt b/packages/.taskmaster/templates/example_prd.txt
new file mode 100644
index 0000000..194114d
--- /dev/null
+++ b/packages/.taskmaster/templates/example_prd.txt
@@ -0,0 +1,47 @@
+<context>
+# Overview
+[Provide a high-level overview of your product here. Explain what problem it solves, who it's for, and why it's valuable.]
+
+# Core Features
+[List and describe the main features of your product. For each feature, include:
+- What it does
+- Why it's important
+- How it works at a high level]
+
+# User Experience
+[Describe the user journey and experience. Include:
+- User personas
+- Key user flows
+- UI/UX considerations]
+</context>
+<PRD>
+# Technical Architecture
+[Outline the technical implementation details:
+- System components
+- Data models
+- APIs and integrations
+- Infrastructure requirements]
+
+# Development Roadmap
+[Break down the development process into phases:
+- MVP requirements
+- Future enhancements
+- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]
+
+# Logical Dependency Chain
+[Define the logical order of development:
+- Which features need to be built first (foundation)
+- Getting as quickly as possible to a usable/visible front end that works
+- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development progresses]
+
+# Risks and Mitigations
+[Identify potential risks and how they'll be addressed:
+- Technical challenges
+- Figuring out the MVP that we can build upon
+- Resource constraints]
+
+# Appendix
+[Include any additional information:
+- Research findings
+- Technical specifications]
+</PRD>
\ No newline at end of file
diff --git a/packages/.windsurfrules b/packages/.windsurfrules
new file mode 100644
index 0000000..c8e02b5
--- /dev/null
+++ b/packages/.windsurfrules
@@ -0,0 +1,524 @@
+Below you will find a variety of important rules spanning:
+
+- the dev_workflow
+- the .windsurfrules document self-improvement workflow
+- the template to follow when modifying or adding new sections/rules to this document.
+ +--- + +## DEV_WORKFLOW + +description: Guide for using meta-development script (scripts/dev.js) to manage task-driven development workflows +globs: **/\* +filesToApplyRule: **/\* +alwaysApply: true + +--- + +- **Global CLI Commands** + + - Task Master now provides a global CLI through the `task-master` command + - All functionality from `scripts/dev.js` is available through this interface + - Install globally with `npm install -g claude-task-master` or use locally via `npx` + - Use `task-master <command>` instead of `node scripts/dev.js <command>` + - Examples: + - `task-master list` instead of `node scripts/dev.js list` + - `task-master next` instead of `node scripts/dev.js next` + - `task-master expand --id=3` instead of `node scripts/dev.js expand --id=3` + - All commands accept the same options as their script equivalents + - The CLI provides additional commands like `task-master init` for project setup + +- **Development Workflow Process** + + - Start new projects by running `task-master init` or `node scripts/dev.js parse-prd --input=<prd-file.txt>` to generate initial tasks.json + - Begin coding sessions with `task-master list` to see current tasks, status, and IDs + - Analyze task complexity with `task-master analyze-complexity --research` before breaking down tasks + - Select tasks based on dependencies (all marked 'done'), priority level, and ID order + - Clarify tasks by checking task files in tasks/ directory or asking for user input + - View specific task details using `task-master show <id>` to understand implementation requirements + - Break down complex tasks using `task-master expand --id=<id>` with appropriate flags + - Clear existing subtasks if needed using `task-master clear-subtasks --id=<id>` before regenerating + - Implement code following task details, dependencies, and project standards + - Verify tasks according to test strategies before marking as complete + - Mark completed tasks with `task-master set-status --id=<id> --status=done` + - Update dependent tasks when implementation differs from original plan + - Generate task files with `task-master generate` after updating tasks.json + - Maintain valid dependency structure with `task-master fix-dependencies` when needed + - Respect dependency chains and task priorities when selecting work + - Report progress regularly using the list command + +- **Task Complexity Analysis** + + - Run `node scripts/dev.js analyze-complexity --research` for comprehensive analysis + - Review complexity report in scripts/task-complexity-report.json + - Or use `node scripts/dev.js complexity-report` for a formatted, readable version of the report + - Focus on tasks with highest complexity scores (8-10) for detailed breakdown + - Use analysis results to determine appropriate subtask allocation + - Note that reports are automatically used by the expand command + +- **Task Breakdown Process** + + - For tasks with complexity analysis, use `node scripts/dev.js expand --id=<id>` + - Otherwise use `node scripts/dev.js expand --id=<id> --subtasks=<number>` + - Add `--research` flag to leverage Perplexity AI for research-backed expansion + - Use `--prompt="<context>"` to provide additional context when needed + - Review and adjust generated subtasks as necessary + - Use `--all` flag to expand multiple pending tasks at once + - If subtasks need regeneration, clear them first with `clear-subtasks` command + +- **Implementation Drift Handling** + + - When implementation differs significantly from planned approach + - When future tasks need 
modification due to current implementation choices + - When new dependencies or requirements emerge + - Call `node scripts/dev.js update --from=<futureTaskId> --prompt="<explanation>"` to update tasks.json + +- **Task Status Management** + + - Use 'pending' for tasks ready to be worked on + - Use 'done' for completed and verified tasks + - Use 'deferred' for postponed tasks + - Add custom status values as needed for project-specific workflows + +- **Task File Format Reference** + + ``` + # Task ID: <id> + # Title: <title> + # Status: <status> + # Dependencies: <comma-separated list of dependency IDs> + # Priority: <priority> + # Description: <brief description> + # Details: + <detailed implementation notes> + + # Test Strategy: + <verification approach> + ``` + +- **Command Reference: parse-prd** + + - Legacy Syntax: `node scripts/dev.js parse-prd --input=<prd-file.txt>` + - CLI Syntax: `task-master parse-prd --input=<prd-file.txt>` + - Description: Parses a PRD document and generates a tasks.json file with structured tasks + - Parameters: + - `--input=<file>`: Path to the PRD text file (default: sample-prd.txt) + - Example: `task-master parse-prd --input=requirements.txt` + - Notes: Will overwrite existing tasks.json file. Use with caution. + +- **Command Reference: update** + + - Legacy Syntax: `node scripts/dev.js update --from=<id> --prompt="<prompt>"` + - CLI Syntax: `task-master update --from=<id> --prompt="<prompt>"` + - Description: Updates tasks with ID >= specified ID based on the provided prompt + - Parameters: + - `--from=<id>`: Task ID from which to start updating (required) + - `--prompt="<text>"`: Explanation of changes or new context (required) + - Example: `task-master update --from=4 --prompt="Now we are using Express instead of Fastify."` + - Notes: Only updates tasks not marked as 'done'. Completed tasks remain unchanged. + +- **Command Reference: generate** + + - Legacy Syntax: `node scripts/dev.js generate` + - CLI Syntax: `task-master generate` + - Description: Generates individual task files based on tasks.json + - Parameters: + - `--file=<path>, -f`: Use alternative tasks.json file (default: '.taskmaster/tasks/tasks.json') + - `--output=<dir>, -o`: Output directory (default: '.taskmaster/tasks') + - Example: `task-master generate` + - Notes: Overwrites existing task files. Creates output directory if needed. + +- **Command Reference: set-status** + + - Legacy Syntax: `node scripts/dev.js set-status --id=<id> --status=<status>` + - CLI Syntax: `task-master set-status --id=<id> --status=<status>` + - Description: Updates the status of a specific task in tasks.json + - Parameters: + - `--id=<id>`: ID of the task to update (required) + - `--status=<status>`: New status value (required) + - Example: `task-master set-status --id=3 --status=done` + - Notes: Common values are 'done', 'pending', and 'deferred', but any string is accepted. + +- **Command Reference: list** + + - Legacy Syntax: `node scripts/dev.js list` + - CLI Syntax: `task-master list` + - Description: Lists all tasks in tasks.json with IDs, titles, and status + - Parameters: + - `--status=<status>, -s`: Filter by status + - `--with-subtasks`: Show subtasks for each task + - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') + - Example: `task-master list` + - Notes: Provides quick overview of project progress. Use at start of sessions. 
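+
+- **Example: Typical Session (illustrative)**
+
+  - The commands documented here compose into a simple working loop; the task IDs below are hypothetical:
+
+  ```bash
+  task-master list                    # survey tasks, status, and IDs
+  task-master next                    # find the next unblocked task
+  task-master show 3                  # read implementation details for task 3
+  task-master expand --id=3 --num=4   # break task 3 into four subtasks
+  task-master set-status --id=3 --status=done
+  ```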
+ +- **Command Reference: expand** + + - Legacy Syntax: `node scripts/dev.js expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]` + - CLI Syntax: `task-master expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]` + - Description: Expands a task with subtasks for detailed implementation + - Parameters: + - `--id=<id>`: ID of task to expand (required unless using --all) + - `--all`: Expand all pending tasks, prioritized by complexity + - `--num=<number>`: Number of subtasks to generate (default: from complexity report) + - `--research`: Use Perplexity AI for research-backed generation + - `--prompt="<text>"`: Additional context for subtask generation + - `--force`: Regenerate subtasks even for tasks that already have them + - Example: `task-master expand --id=3 --num=5 --research --prompt="Focus on security aspects"` + - Notes: Uses complexity report recommendations if available. + +- **Command Reference: analyze-complexity** + + - Legacy Syntax: `node scripts/dev.js analyze-complexity [options]` + - CLI Syntax: `task-master analyze-complexity [options]` + - Description: Analyzes task complexity and generates expansion recommendations + - Parameters: + - `--output=<file>, -o`: Output file path (default: scripts/task-complexity-report.json) + - `--model=<model>, -m`: Override LLM model to use + - `--threshold=<number>, -t`: Minimum score for expansion recommendation (default: 5) + - `--file=<path>, -f`: Use alternative tasks.json file + - `--research, -r`: Use Perplexity AI for research-backed analysis + - Example: `task-master analyze-complexity --research` + - Notes: Report includes complexity scores, recommended subtasks, and tailored prompts. + +- **Command Reference: clear-subtasks** + + - Legacy Syntax: `node scripts/dev.js clear-subtasks --id=<id>` + - CLI Syntax: `task-master clear-subtasks --id=<id>` + - Description: Removes subtasks from specified tasks to allow regeneration + - Parameters: + - `--id=<id>`: ID or comma-separated IDs of tasks to clear subtasks from + - `--all`: Clear subtasks from all tasks + - Examples: + - `task-master clear-subtasks --id=3` + - `task-master clear-subtasks --id=1,2,3` + - `task-master clear-subtasks --all` + - Notes: + - Task files are automatically regenerated after clearing subtasks + - Can be combined with expand command to immediately generate new subtasks + - Works with both parent tasks and individual subtasks + +- **Task Structure Fields** + + - **id**: Unique identifier for the task (Example: `1`) + - **title**: Brief, descriptive title (Example: `"Initialize Repo"`) + - **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) + - **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) + - **dependencies**: IDs of prerequisite tasks (Example: `[1, 2]`) + - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) + - This helps quickly identify which prerequisite tasks are blocking work + - **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) + - **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) + - **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) + - **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) + +- **Environment Variables Configuration** + + - 
**ANTHROPIC_API_KEY** (Required): Your Anthropic API key for Claude (Example: `ANTHROPIC_API_KEY=sk-ant-api03-...`) + - **MODEL** (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`) + - **MAX_TOKENS** (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`) + - **TEMPERATURE** (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`) + - **DEBUG** (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`) + - **TASKMASTER_LOG_LEVEL** (Default: `"info"`): Console output level (Example: `TASKMASTER_LOG_LEVEL=debug`) + - **DEFAULT_SUBTASKS** (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`) + - **DEFAULT_PRIORITY** (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`) + - **PROJECT_NAME** (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`) + - **PROJECT_VERSION** (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`) + - **PERPLEXITY_API_KEY**: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`) + - **PERPLEXITY_MODEL** (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`) + +- **Determining the Next Task** + + - Run `task-master next` to show the next task to work on + - The next command identifies tasks with all dependencies satisfied + - Tasks are prioritized by priority level, dependency count, and ID + - The command shows comprehensive task information including: + - Basic task details and description + - Implementation details + - Subtasks (if they exist) + - Contextual suggested actions + - Recommended before starting any new development work + - Respects your project's dependency structure + - Ensures tasks are completed in the appropriate sequence + - Provides ready-to-use commands for common task actions + +- **Viewing Specific Task Details** + + - Run `task-master show <id>` or `task-master show --id=<id>` to view a specific task + - Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1) + - Displays comprehensive information similar to the next command, but for a specific task + - For parent tasks, shows all subtasks and their current status + - For subtasks, shows parent task information and relationship + - Provides contextual suggested actions appropriate for the specific task + - Useful for examining task details before implementation or checking status + +- **Managing Task Dependencies** + + - Use `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency + - Use `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency + - The system prevents circular dependencies and duplicate dependency entries + - Dependencies are checked for existence before being added or removed + - Task files are automatically regenerated after dependency changes + - Dependencies are visualized with status indicators in task listings and files + +- **Command Reference: add-dependency** + + - Legacy Syntax: `node scripts/dev.js add-dependency --id=<id> --depends-on=<id>` + - CLI Syntax: `task-master add-dependency --id=<id> --depends-on=<id>` + - Description: Adds a dependency relationship between two tasks + - Parameters: + - `--id=<id>`: ID of task that will depend on another task (required) + - `--depends-on=<id>`: ID of task that will become a dependency (required) + - Example: `task-master add-dependency --id=22 --depends-on=21` + - Notes: Prevents 
circular dependencies and duplicates; updates task files automatically + +- **Command Reference: remove-dependency** + + - Legacy Syntax: `node scripts/dev.js remove-dependency --id=<id> --depends-on=<id>` + - CLI Syntax: `task-master remove-dependency --id=<id> --depends-on=<id>` + - Description: Removes a dependency relationship between two tasks + - Parameters: + - `--id=<id>`: ID of task to remove dependency from (required) + - `--depends-on=<id>`: ID of task to remove as a dependency (required) + - Example: `task-master remove-dependency --id=22 --depends-on=21` + - Notes: Checks if dependency actually exists; updates task files automatically + +- **Command Reference: validate-dependencies** + + - Legacy Syntax: `node scripts/dev.js validate-dependencies [options]` + - CLI Syntax: `task-master validate-dependencies [options]` + - Description: Checks for and identifies invalid dependencies in tasks.json and task files + - Parameters: + - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') + - Example: `task-master validate-dependencies` + - Notes: + - Reports all non-existent dependencies and self-dependencies without modifying files + - Provides detailed statistics on task dependency state + - Use before fix-dependencies to audit your task structure + +- **Command Reference: fix-dependencies** + + - Legacy Syntax: `node scripts/dev.js fix-dependencies [options]` + - CLI Syntax: `task-master fix-dependencies [options]` + - Description: Finds and fixes all invalid dependencies in tasks.json and task files + - Parameters: + - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') + - Example: `task-master fix-dependencies` + - Notes: + - Removes references to non-existent tasks and subtasks + - Eliminates self-dependencies (tasks depending on themselves) + - Regenerates task files with corrected dependencies + - Provides detailed report of all fixes made + +- **Command Reference: complexity-report** + + - Legacy Syntax: `node scripts/dev.js complexity-report [options]` + - CLI Syntax: `task-master complexity-report [options]` + - Description: Displays the task complexity analysis report in a formatted, easy-to-read way + - Parameters: + - `--file=<path>, -f`: Path to the complexity report file (default: 'scripts/task-complexity-report.json') + - Example: `task-master complexity-report` + - Notes: + - Shows tasks organized by complexity score with recommended actions + - Provides complexity distribution statistics + - Displays ready-to-use expansion commands for complex tasks + - If no report exists, offers to generate one interactively + +- **Command Reference: add-task** + + - CLI Syntax: `task-master add-task [options]` + - Description: Add a new task to tasks.json using AI + - Parameters: + - `--file=<path>, -f`: Path to the tasks file (default: 'tasks/tasks.json') + - `--prompt=<text>, -p`: Description of the task to add (required) + - `--dependencies=<ids>, -d`: Comma-separated list of task IDs this task depends on + - `--priority=<priority>`: Task priority (high, medium, low) (default: 'medium') + - Example: `task-master add-task --prompt="Create user authentication using Auth0"` + - Notes: Uses AI to convert description into structured task with appropriate details + +- **Command Reference: init** + + - CLI Syntax: `task-master init` + - Description: Initialize a new project with Task Master structure + - Parameters: None + - Example: `task-master init` + - Notes: + - Creates initial project structure with required files + 
- Prompts for project settings if not provided + - Merges with existing files when appropriate + - Can be used to bootstrap a new Task Master project quickly + +- **Code Analysis & Refactoring Techniques** + - **Top-Level Function Search** + - Use grep pattern matching to find all exported functions across the codebase + - Command: `grep -E "export (function|const) \w+|function \w+\(|const \w+ = \(|module\.exports" --include="*.js" -r ./` + - Benefits: + - Quickly identify all public API functions without reading implementation details + - Compare functions between files during refactoring (e.g., monolithic to modular structure) + - Verify all expected functions exist in refactored modules + - Identify duplicate functionality or naming conflicts + - Usage examples: + - When migrating from `scripts/dev.js` to modular structure: `grep -E "function \w+\(" scripts/dev.js` + - Check function exports in a directory: `grep -E "export (function|const)" scripts/modules/` + - Find potential naming conflicts: `grep -E "function (get|set|create|update)\w+\(" -r ./` + - Variations: + - Add `-n` flag to include line numbers + - Add `--include="*.ts"` to filter by file extension + - Use with `| sort` to alphabetize results + - Integration with refactoring workflow: + - Start by mapping all functions in the source file + - Create target module files based on function grouping + - Verify all functions were properly migrated + - Check for any unintentional duplications or omissions + +--- + +## WINDSURF_RULES + +description: Guidelines for creating and maintaining Windsurf rules to ensure consistency and effectiveness. +globs: .windsurfrules +filesToApplyRule: .windsurfrules +alwaysApply: true + +--- + +The below describes how you should be structuring new rule sections in this document. + +- **Required Rule Structure:** + + ```markdown + --- + description: Clear, one-line description of what the rule enforces + globs: path/to/files/*.ext, other/path/**/* + alwaysApply: boolean + --- + + - **Main Points in Bold** + - Sub-points with details + - Examples and explanations + ``` + +- **Section References:** + + - Use `ALL_CAPS_SECTION` to reference files + - Example: `WINDSURF_RULES` + +- **Code Examples:** + + - Use language-specific code blocks + + ```typescript + // ✅ DO: Show good examples + const goodExample = true; + + // ❌ DON'T: Show anti-patterns + const badExample = false; + ``` + +- **Rule Content Guidelines:** + + - Start with high-level overview + - Include specific, actionable requirements + - Show examples of correct implementation + - Reference existing code when possible + - Keep rules DRY by referencing other rules + +- **Rule Maintenance:** + + - Update rules when new patterns emerge + - Add examples from actual codebase + - Remove outdated patterns + - Cross-reference related rules + +- **Best Practices:** + - Use bullet points for clarity + - Keep descriptions concise + - Include both DO and DON'T examples + - Reference actual code over theoretical examples + - Use consistent formatting across rules + +--- + +## SELF_IMPROVE + +description: Guidelines for continuously improving this rules document based on emerging code patterns and best practices. 
+globs: **/\* +filesToApplyRule: **/\* +alwaysApply: true + +--- + +- **Rule Improvement Triggers:** + + - New code patterns not covered by existing rules + - Repeated similar implementations across files + - Common error patterns that could be prevented + - New libraries or tools being used consistently + - Emerging best practices in the codebase + +- **Analysis Process:** + + - Compare new code with existing rules + - Identify patterns that should be standardized + - Look for references to external documentation + - Check for consistent error handling patterns + - Monitor test patterns and coverage + +- **Rule Updates:** + + - **Add New Rules When:** + + - A new technology/pattern is used in 3+ files + - Common bugs could be prevented by a rule + - Code reviews repeatedly mention the same feedback + - New security or performance patterns emerge + + - **Modify Existing Rules When:** + - Better examples exist in the codebase + - Additional edge cases are discovered + - Related rules have been updated + - Implementation details have changed + +- **Example Pattern Recognition:** + + ```typescript + // If you see repeated patterns like: + const data = await prisma.user.findMany({ + select: { id: true, email: true }, + where: { status: "ACTIVE" }, + }); + + // Consider adding a PRISMA section in the .windsurfrules: + // - Standard select fields + // - Common where conditions + // - Performance optimization patterns + ``` + +- **Rule Quality Checks:** + + - Rules should be actionable and specific + - Examples should come from actual code + - References should be up to date + - Patterns should be consistently enforced + +- **Continuous Improvement:** + + - Monitor code review comments + - Track common development questions + - Update rules after major refactors + - Add links to relevant documentation + - Cross-reference related rules + +- **Rule Deprecation:** + + - Mark outdated patterns as deprecated + - Remove rules that no longer apply + - Update references to deprecated rules + - Document migration paths for old patterns + +- **Documentation Updates:** + - Keep examples synchronized with code + - Update references to external docs + - Maintain links between related rules + - Document breaking changes + +Follow WINDSURF_RULES for proper rule formatting and structure of windsurf rule sections. diff --git a/packages/README.md b/packages/README.md index bbf6cce..e69de29 100644 --- a/packages/README.md +++ b/packages/README.md @@ -1,277 +0,0 @@ -# Packages Directory - -This directory contains custom package definitions and overlays for the Home-lab NixOS infrastructure. - -## Directory Purpose - -The `packages/` directory is used for: -- Custom package derivations not available in nixpkgs -- Modified versions of existing packages -- Home-lab specific applications and utilities -- Package overlays and customizations - -## Structure - -### Custom Packages -- `my-package/` - Individual package directories -- `default.nix` - Package collection and exports -- `flake-module.nix` - Flake integration for packages - -### Package Categories -- `applications/` - Custom applications and GUIs -- `scripts/` - Shell scripts and automation tools -- `configs/` - Configuration packages and templates -- `overlays/` - Package overlays and modifications - -## Usage - -### In Flake Configuration -```nix -# flake.nix -{ - outputs = { self, nixpkgs, ... 
}: { - packages.x86_64-linux = import ./packages { - pkgs = nixpkgs.legacyPackages.x86_64-linux; - }; - - overlays.default = import ./packages/overlays; - }; -} -``` - -### In Machine Configuration -```nix -# machine configuration -{ - nixpkgs.overlays = [ inputs.self.overlays.default ]; - - environment.systemPackages = with pkgs; [ - # Custom packages from this directory - my-custom-tool - home-lab-scripts - ]; -} -``` - -## Package Development - -### Creating New Package -1. Create package directory: `packages/my-package/` -2. Write `default.nix` with package derivation -3. Add to `packages/default.nix` exports -4. Test with `nix build .#my-package` - -### Package Template -```nix -{ lib, stdenv, fetchFromGitHub, ... }: - -stdenv.mkDerivation rec { - pname = "my-package"; - version = "1.0.0"; - - src = fetchFromGitHub { - owner = "user"; - repo = "repo"; - rev = "v${version}"; - sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; - }; - - meta = with lib; { - description = "Description of my package"; - homepage = "https://github.com/user/repo"; - license = licenses.mit; - maintainers = [ "geir" ]; - platforms = platforms.linux; - }; -} -``` - -## Overlay Examples - -### Package Modification -```nix -# overlays/default.nix -final: prev: { - # Modify existing package - vim = prev.vim.override { - features = "huge"; - }; - - # Add custom package - home-lab-tools = final.callPackage ../tools { }; -} -``` - -## Home-lab Specific Packages - -### Lab Tool (`lab`) - Evolution Roadmap -The `lab` tool is the central infrastructure management utility with planned major enhancements: - -**Current Implementation (Shell-based):** -- Multi-machine deployment via SSH/rsync -- Infrastructure status monitoring -- Color-coded logging and error handling -- Machine health checks and connectivity testing - -**Phase 1: deploy-rs Integration** -Research completed - deploy-rs provides production-grade deployment capabilities: -- **Automatic rollback**: Failed deployments revert automatically -- **Parallel deployment**: Deploy to multiple machines simultaneously -- **Health checks**: Validates deployments before committing -- **Atomic operations**: Either succeeds completely or fails cleanly -- **Flake-native**: Built specifically for NixOS flakes - -Implementation approach: -```bash -# Hybrid command structure -lab deploy sleeper-service # Current SSH/rsync method -lab deploy-rs sleeper-service # New deploy-rs backend -lab deploy-all --parallel # Parallel deployment via deploy-rs -``` - -Configuration integration: -```nix -# flake.nix additions -inputs.deploy-rs.url = "github:serokell/deploy-rs"; - -deploy.nodes = { - sleeper-service = { - hostname = "sleeper-service.tail807ea.ts.net"; - profiles.system = { - user = "root"; - path = deploy-rs.lib.x86_64-linux.activate.nixos - self.nixosConfigurations.sleeper-service; - sshUser = "sma"; - autoRollback = true; - magicRollback = true; - activationTimeout = 180; - }; - }; -}; -``` - -**Phase 2: Enhanced Statistics Engine** -Current `lab status` provides basic connectivity - planned expansion to comprehensive monitoring: - -**Rust/Go Implementation for Performance:** -- **System metrics**: CPU, memory, disk usage, network stats -- **Service monitoring**: systemd service status, failed units -- **ZFS statistics**: Pool health, scrub status, capacity usage -- **Network topology**: Tailscale mesh status, latency metrics -- **Historical data**: Trend analysis and performance tracking - -**Example enhanced output:** -```bash -$ lab status --detailed 
-Infrastructure Status (Updated: 2024-01-20 15:30:42) - -━━━ congenital-optimist (local) ━━━ -✅ Online │ Load: 1.2 │ RAM: 8.4GB/32GB │ Disk: 45% │ Uptime: 7d 2h -🔗 Tailscale: Active (100.81.15.84) │ Latency: local - -━━━ sleeper-service (file server) ━━━ -✅ Online │ Load: 0.3 │ RAM: 2.1GB/8GB │ Disk: 67% │ Uptime: 12d 8h -🗄️ ZFS: ONLINE │ Pool: storage (1.8TB, 50% used) │ Last scrub: 3d ago -🔗 Tailscale: Active (100.81.15.85) │ Latency: 2ms -📡 Services: sshd ✅ │ nfs-server ✅ │ zfs-mount ✅ - -━━━ grey-area (unreachable) ━━━ -⚠️ Offline │ Last seen: 2h ago │ SSH: Connection refused -``` - -**Phase 3: GNU Stow Dotfile Integration** -Research completed - GNU Stow provides excellent dotfile management for server configurations: - -**Use cases:** -- **Server user configs**: Simple dotfiles for `sma` user on servers -- **Machine-specific configs**: Different configurations per server role -- **Selective deployment**: Deploy only needed configs per machine - -**Integration approach:** -```bash -# Enhanced lab tool commands -lab dotfiles deploy sma@sleeper-service # Deploy server user configs -lab dotfiles status # Show dotfile deployment status -lab dotfiles sync --machine sleeper-service # Sync specific machine configs -``` - -**Directory structure:** -``` -packages/dotfiles/ -├── server-common/ # Shared server configurations -│ ├── .zshrc # Basic shell config -│ ├── .vimrc # Editor config -│ └── .gitconfig # Git configuration -├── sleeper-service/ # NFS server specific -│ └── .config/ -│ └── nfs/ -├── grey-area/ # Git server specific -│ └── .gitconfig # Enhanced git config -└── stow-deploy.nix # NixOS integration -``` - -**Hybrid Configuration Strategy:** -- **Keep org-mode** for complex desktop configurations (geir user) -- **Use GNU Stow** for simple server configurations (sma user) -- **Machine-specific packages** for role-based configurations - -**Phase 4: Advanced Features** -- **Configuration drift detection**: Compare deployed vs expected state -- **Automated health checks**: Scheduled infrastructure validation -- **Integration APIs**: Metrics export for monitoring systems -- **Web dashboard**: Optional web interface for infrastructure overview -- **Alert system**: Notifications for infrastructure issues - -**Implementation Timeline:** -1. **Q1 2024**: deploy-rs integration and testing -2. **Q2 2024**: Enhanced statistics engine in Rust/Go -3. **Q3 2024**: GNU Stow dotfile integration -4. 
**Q4 2024**: Advanced monitoring and alerting features - -### CongenitalOptimist Packages -- Development environment customizations -- Workstation-specific tools -- Desktop application modifications -- `lab` tool and deployment utilities - -### sleeper-service Packages -- File server utilities -- ZFS monitoring tools -- NFS service management -- Storage health monitoring -- Backup automation scripts - -### Server Infrastructure Packages -- **deploy-rs configurations**: Declarative deployment definitions -- **Dotfile managers**: GNU Stow packages for server user configurations -- **Monitoring utilities**: System health and performance tools -- **Network tools**: Tailscale integration and network diagnostics - -## Best Practices - -- **Versioning**: Pin package versions for reproducibility -- **Documentation**: Include clear descriptions and usage -- **Testing**: Test packages across target machines -- **Licensing**: Respect upstream licenses and attributions -- **Maintenance**: Keep packages updated and functional - -## Integration with Modules - -Packages can be integrated with NixOS modules: -```nix -# modules/development/tools.nix -{ config, pkgs, ... }: { - environment.systemPackages = with pkgs; [ - # Reference custom packages - home-lab-dev-tools - custom-editor-config - ]; -} -``` - -## Flake Outputs - -Custom packages are exported as flake outputs: -- `packages.x86_64-linux.package-name` -- `overlays.default` -- `apps.x86_64-linux.script-name` \ No newline at end of file diff --git a/packages/flake.lock b/packages/flake.lock new file mode 100644 index 0000000..b238039 --- /dev/null +++ b/packages/flake.lock @@ -0,0 +1,61 @@ +{ + "nodes": { + "flake-utils": { + "inputs": { + "systems": "systems" + }, + "locked": { + "lastModified": 1731533236, + "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=", + "owner": "numtide", + "repo": "flake-utils", + "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b", + "type": "github" + }, + "original": { + "owner": "numtide", + "repo": "flake-utils", + "type": "github" + } + }, + "nixpkgs": { + "locked": { + "lastModified": 1749794982, + "narHash": "sha256-Kh9K4taXbVuaLC0IL+9HcfvxsSUx8dPB5s5weJcc9pc=", + "owner": "NixOS", + "repo": "nixpkgs", + "rev": "ee930f9755f58096ac6e8ca94a1887e0534e2d81", + "type": "github" + }, + "original": { + "owner": "NixOS", + "ref": "nixos-unstable", + "repo": "nixpkgs", + "type": "github" + } + }, + "root": { + "inputs": { + "flake-utils": "flake-utils", + "nixpkgs": "nixpkgs" + } + }, + "systems": { + "locked": { + "lastModified": 1681028828, + "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", + "owner": "nix-systems", + "repo": "default", + "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", + "type": "github" + }, + "original": { + "owner": "nix-systems", + "repo": "default", + "type": "github" + } + } + }, + "root": "root", + "version": 7 +} diff --git a/packages/flake.nix b/packages/flake.nix new file mode 100644 index 0000000..eb64d59 --- /dev/null +++ b/packages/flake.nix @@ -0,0 +1,140 @@ +{ + description = "Home Lab MCP Integration - Guile MCP Server and VS Code Extension"; + + inputs = { + nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable"; + flake-utils.url = "github:numtide/flake-utils"; + }; + + outputs = { + self, + nixpkgs, + flake-utils, + }: + flake-utils.lib.eachDefaultSystem (system: let + pkgs = nixpkgs.legacyPackages.${system}; + in { + # Development shell + devShells.default = pkgs.mkShell { + buildInputs = with pkgs; [ + # Guile and Scheme ecosystem + guile + guile-json + 
guile-ssh + guile-websocket + + # Node.js for VS Code extension development + nodejs + nodePackages.npm + nodePackages.typescript + + # Development and testing tools + git + curl + jq + netcat + + # Documentation and analysis + pandoc + graphviz + + # Optional: VS Code for development + # vscode + ]; + + shellHook = '' + echo "🏠 Home Lab MCP Integration Development Environment" + echo "" + echo "Available tools:" + echo " • Guile with JSON, SSH, WebSocket support" + echo " • Node.js with TypeScript and VS Code tools" + echo " • Development utilities: git, curl, jq, netcat" + echo "" + echo "Quick start:" + echo " 1. npm install # Install extension dependencies" + echo " 2. npm run compile # Compile TypeScript extension" + echo " 3. ./setup-mcp-integration.sh # Run full setup" + echo "" + echo "MCP Server: ./guile-mcp-server.scm" + echo "VS Code Extension: ./vscode-homelab-extension.ts" + echo "" + ''; + + # Environment variables + GUILE_LOAD_PATH = "${pkgs.guile-json}/share/guile/site/3.0:${pkgs.guile-ssh}/share/guile/site/3.0"; + NODE_PATH = "${pkgs.nodePackages.typescript}/lib/node_modules"; + }; + + # Packages + packages = { + # Guile MCP Server package + guile-mcp-server = pkgs.stdenv.mkDerivation { + pname = "guile-mcp-server"; + version = "0.1.0"; + + src = ./.; + + buildInputs = with pkgs; [ + guile + guile-json + guile-ssh + ]; + + installPhase = '' + mkdir -p $out/bin + cp guile-mcp-server.scm $out/bin/guile-mcp-server + chmod +x $out/bin/guile-mcp-server + ''; + + meta = with pkgs.lib; { + description = "Home Lab MCP Server implemented in Guile Scheme"; + license = licenses.mit; + maintainers = []; + platforms = platforms.unix; + }; + }; + + # VS Code Extension package + vscode-homelab-extension = pkgs.stdenv.mkDerivation { + pname = "vscode-homelab-extension"; + version = "0.1.0"; + + src = ./.; + + buildInputs = with pkgs; [ + nodejs + npm + ]; + + buildPhase = '' + npm install + npm run compile + ''; + + installPhase = '' + mkdir -p $out + cp -r . $out/ + ''; + + meta = with pkgs.lib; { + description = "VS Code extension for Home Lab MCP integration"; + license = licenses.mit; + maintainers = []; + }; + }; + }; + + # Default package + defaultPackage = self.packages.${system}.guile-mcp-server; + + # Apps for easy running + apps = { + mcp-server = { + type = "app"; + program = "${self.packages.${system}.guile-mcp-server}/bin/guile-mcp-server"; + }; + }; + + defaultApp = self.apps.${system}.mcp-server; + }); +} diff --git a/packages/guile-mcp-server.scm b/packages/guile-mcp-server.scm new file mode 100644 index 0000000..4f0ad7e --- /dev/null +++ b/packages/guile-mcp-server.scm @@ -0,0 +1,348 @@ +#!/usr/bin/env guile +!# + +;; Guile MCP Server for Home Lab Integration +;; Implements Model Context Protocol for VS Code extension + +(use-modules (json) + (ice-9 textual-ports) + (ice-9 popen) + (ice-9 rdelim) + (ice-9 match) + (ice-9 threads) + (srfi srfi-1) + (srfi srfi-19) + (srfi srfi-26)) + +;; MCP Protocol Implementation +(define mcp-protocol-version "2024-11-05") +(define request-id-counter 0) + +;; Server capabilities and state +(define server-capabilities + `((tools . ()) + (resources . ()) + (prompts . ()))) + +(define server-info + `((name . "guile-homelab-mcp") + (version . "0.1.0"))) + +;; Request/Response utilities +(define (make-response id result) + `((jsonrpc . "2.0") + (id . ,id) + (result . ,result))) + +(define (make-error id code message) + `((jsonrpc . "2.0") + (id . ,id) + (error . ((code . ,code) + (message . 
,message))))) + +(define (send-response response) + (let ((json-str (scm->json-string response))) + (display json-str) + (newline) + (force-output))) + +;; Home Lab Tools Implementation +(define (list-machines) + "List all available machines in the home lab" + (let* ((proc (open-input-pipe "find /etc/nixos/hosts -name '*.nix' -type f")) + (output (read-string proc))) + (close-pipe proc) + (if (string-null? output) + '() + (map (lambda (path) + (basename path ".nix")) + (string-split (string-trim-right output #\newline) #\newline))))) + +(define (get-machine-status machine) + "Get status of a specific machine" + (let* ((cmd (format #f "ping -c 1 -W 1 ~a > /dev/null 2>&1" machine)) + (status (system cmd))) + (if (= status 0) "online" "offline"))) + +(define (deploy-machine machine method) + "Deploy configuration to a machine" + (match method + ("deploy-rs" + (let ((cmd (format #f "deploy '.#~a'" machine))) + (deploy-with-command cmd machine))) + ("hybrid-update" + (let ((cmd (format #f "nixos-rebuild switch --flake '.#~a' --target-host ~a --use-remote-sudo" machine machine))) + (deploy-with-command cmd machine))) + ("legacy" + (let ((cmd (format #f "nixos-rebuild switch --flake '.#~a'" machine))) + (deploy-with-command cmd machine))) + (_ (throw 'deployment-error "Unknown deployment method" method)))) + +(define (deploy-with-command cmd machine) + "Execute deployment command and return result" + (let* ((proc (open-input-pipe cmd)) + (output (read-string proc)) + (status (close-pipe proc))) + `((success . ,(= status 0)) + (output . ,output) + (machine . ,machine) + (timestamp . ,(date->string (current-date)))))) + +(define (generate-nix-config machine-name services) + "Generate NixOS configuration for a new machine" + (let ((config (format #f "# Generated NixOS configuration for ~a +# Generated on ~a + +{ config, pkgs, ... }: + +{ + imports = [ + ./hardware-configuration.nix + ]; + + # Machine name + networking.hostName = \"~a\"; + + # Basic system configuration + system.stateVersion = \"23.11\"; + + # Enable services +~a + + # Network configuration + networking.firewall.enable = true; + + # SSH access + services.openssh.enable = true; + users.users.root.openssh.authorizedKeys.keys = [ + # Add your public key here + ]; +} +" + machine-name + (date->string (current-date)) + machine-name + (string-join + (map (lambda (service) + (format #f " services.~a.enable = true;" service)) + services) + "\n")))) + `((content . ,config) + (filename . ,(format #f "~a.nix" machine-name))))) + +(define (get-infrastructure-status) + "Get comprehensive infrastructure status" + (let* ((machines (list-machines)) + (machine-status (map (lambda (m) + `((name . ,m) + (status . ,(get-machine-status m)))) + machines))) + `((machines . ,machine-status) + (timestamp . ,(date->string (current-date))) + (total-machines . ,(length machines)) + (online-machines . ,(length (filter (lambda (m) + (equal? (assoc-ref m 'status) "online")) + machine-status)))))) + +;; MCP Tools Registry +(define mcp-tools + `(((name . "deploy-machine") + (description . "Deploy NixOS configuration to a home lab machine") + (inputSchema . ((type . "object") + (properties . ((machine . ((type . "string") + (description . "Machine hostname to deploy to"))) + (method . ((type . "string") + (enum . ("deploy-rs" "hybrid-update" "legacy")) + (description . "Deployment method to use"))))) + (required . ("machine" "method"))))) + + ((name . "list-machines") + (description . "List all available machines in the home lab") + (inputSchema . ((type . 
"object") + (properties . ())))) + + ((name . "check-status") + (description . "Check status of home lab infrastructure") + (inputSchema . ((type . "object") + (properties . ((machine . ((type . "string") + (description . "Specific machine to check (optional)"))))))) + + ((name . "generate-nix-config") + (description . "Generate NixOS configuration for a new machine") + (inputSchema . ((type . "object") + (properties . ((machine-name . ((type . "string") + (description . "Name for the new machine"))) + (services . ((type . "array") + (items . ((type . "string"))) + (description . "List of services to enable"))))) + (required . ("machine-name"))))) + + ((name . "list-services") + (description . "List available NixOS services") + (inputSchema . ((type . "object") + (properties . ())))))) + +;; MCP Resources Registry +(define mcp-resources + `(((uri . "homelab://status/all") + (name . "Infrastructure Status") + (description . "Complete status of all home lab machines and services") + (mimeType . "application/json")) + + ((uri . "homelab://status/summary") + (name . "Status Summary") + (description . "Summary of infrastructure health") + (mimeType . "text/plain")) + + ((uri . "homelab://context/copilot") + (name . "Copilot Context") + (description . "Context information for GitHub Copilot integration") + (mimeType . "text/markdown")))) + +;; Tool execution dispatcher +(define (execute-tool name arguments) + "Execute a registered MCP tool" + (match name + ("deploy-machine" + (let ((machine (assoc-ref arguments 'machine)) + (method (assoc-ref arguments 'method))) + (deploy-machine machine method))) + + ("list-machines" + `((machines . ,(list-machines)))) + + ("check-status" + (let ((machine (assoc-ref arguments 'machine))) + (if machine + `((machine . ,machine) + (status . ,(get-machine-status machine))) + (get-infrastructure-status)))) + + ("generate-nix-config" + (let ((machine-name (assoc-ref arguments 'machine-name)) + (services (or (assoc-ref arguments 'services) '()))) + (generate-nix-config machine-name services))) + + ("list-services" + `((services . ("nginx" "postgresql" "redis" "mysql" "docker" "kubernetes" + "grafana" "prometheus" "gitea" "nextcloud" "jellyfin")))) + + (_ (throw 'unknown-tool "Tool not found" name)))) + +;; Resource content providers +(define (get-resource-content uri) + "Get content for a resource URI" + (match uri + ("homelab://status/all" + `((content . ,(get-infrastructure-status)))) + + ("homelab://status/summary" + (let ((status (get-infrastructure-status))) + `((content . ,(format #f "Home Lab Status: ~a/~a machines online" + (assoc-ref status 'online-machines) + (assoc-ref status 'total-machines)))))) + + ("homelab://context/copilot" + (let ((status (get-infrastructure-status))) + `((content . ,(format #f "# Home Lab Infrastructure Context + +## Current Status +- Total Machines: ~a +- Online Machines: ~a +- Last Updated: ~a + +## Available Operations +Use the home lab extension commands or MCP tools for: +- Deploying configurations (deploy-machine) +- Checking infrastructure status (check-status) +- Generating new machine configs (generate-nix-config) +- Managing services across the fleet + +## Machine List +~a + +This context helps GitHub Copilot understand your home lab infrastructure state." 
+ (assoc-ref status 'total-machines) + (assoc-ref status 'online-machines) + (assoc-ref status 'timestamp) + (string-join + (map (lambda (m) + (format #f "- ~a: ~a" + (assoc-ref m 'name) + (assoc-ref m 'status))) + (assoc-ref status 'machines)) + "\n")))))) + + (_ (throw 'unknown-resource "Resource not found" uri)))) + +;; MCP Protocol Handlers +(define (handle-initialize params) + "Handle MCP initialize request" + `((protocolVersion . ,mcp-protocol-version) + (capabilities . ((tools . ((listChanged . #f))) + (resources . ((subscribe . #f) + (listChanged . #f))) + (prompts . ((listChanged . #f))))) + (serverInfo . ,server-info))) + +(define (handle-tools-list params) + "Handle tools/list request" + `((tools . ,mcp-tools))) + +(define (handle-tools-call params) + "Handle tools/call request" + (let ((name (assoc-ref params 'name)) + (arguments (assoc-ref params 'arguments))) + (execute-tool name arguments))) + +(define (handle-resources-list params) + "Handle resources/list request" + `((resources . ,mcp-resources))) + +(define (handle-resources-read params) + "Handle resources/read request" + (let ((uri (assoc-ref params 'uri))) + (get-resource-content uri))) + +;; Main request dispatcher +(define (handle-request request) + "Main request handler" + (let ((method (assoc-ref request 'method)) + (params (assoc-ref request 'params)) + (id (assoc-ref request 'id))) + + (catch #t + (lambda () + (let ((result + (match method + ("initialize" (handle-initialize params)) + ("tools/list" (handle-tools-list params)) + ("tools/call" (handle-tools-call params)) + ("resources/list" (handle-resources-list params)) + ("resources/read" (handle-resources-read params)) + (_ (throw 'method-not-found "Method not supported" method))))) + (send-response (make-response id result)))) + + (lambda (key . args) + (send-response (make-error id -32603 (format #f "~a: ~a" key args))))))) + +;; Main server loop +(define (run-mcp-server) + "Run the MCP server main loop" + (let loop () + (let ((line (read-line))) + (unless (eof-object? line) + (catch #t + (lambda () + (let ((request (json-string->scm line))) + (handle-request request))) + (lambda (key . args) + (send-response (make-error 0 -32700 "Parse error")))) + (loop))))) + +;; Export main function for use as module +(define-public run-mcp-server run-mcp-server) + +;; Run server if called directly +(when (equal? (car (command-line)) (current-filename)) + (run-mcp-server)) diff --git a/packages/guile_ecosystem.md b/packages/guile_ecosystem.md new file mode 100644 index 0000000..9001102 --- /dev/null +++ b/packages/guile_ecosystem.md @@ -0,0 +1,394 @@ + +# Guile Scheme Ecosystem Analysis for Home Lab Tool Migration and MCP Integration + +## Executive Summary + +This analysis examines the GNU Guile Scheme ecosystem to evaluate its suitability for migrating the home lab tool from Bash and potentially implementing a Model Context Protocol (MCP) server. Based on comprehensive research, Guile offers a robust ecosystem with numerous libraries that address the core requirements of modern system administration, networking, and infrastructure management. 
+
+**Key Findings:**
+
+- **Rich ecosystem**: 200+ libraries available through the GNU Guix ecosystem
+- **Strong system administration capabilities**: SSH, system interaction, process management
+- **Excellent networking support**: HTTP servers/clients, WebSocket, JSON-RPC
+- **Mature infrastructure**: Well-maintained libraries with active development
+- **MCP compatibility**: All necessary components available for an MCP server implementation
+
+## Current State Analysis
+
+### Existing Lab Tool Capabilities
+
+Based on the documentation, the current lab tool provides:
+
+- Machine status checking and connectivity
+- Multiple deployment methods (deploy-rs, hybrid-update, legacy)
+- NixOS configuration management
+- SSH-based operations
+- Package updates via flake management
+
+### Migration Benefits to Guile
+
+1. **Enhanced error handling** over Bash's limited error management
+2. **Structured data handling** for machine configurations and status
+3. **Better modularity** and code organization
+4. **Advanced networking capabilities** for future expansion
+5. **REPL-driven development** for rapid prototyping and debugging
+
+## Core Libraries for Home Lab Tool Migration
+
+### 1. System Administration & SSH
+
+**guile-ssh** - *Essential for remote operations*
+
+- **Capabilities**: SSH client/server, SFTP, port forwarding, tunneling
+- **Use cases**: All remote machine interactions, deployment coordination
+- **Maturity**: Very mature, actively maintained
+- **Documentation**: Comprehensive with examples
+
+```scheme
+;; Example SSH connection and command execution (sketch using guile-ssh's
+;; (ssh session), (ssh auth) and (ssh popen) modules)
+(use-modules (ssh session) (ssh auth) (ssh popen) (ice-9 textual-ports))
+
+(let ((session (make-session #:host "sleeper-service")))
+  (connect! session)
+  (authenticate-server session)
+  (userauth-agent! session)   ;; or userauth-public-key! with a loaded key
+  ;; Execute nixos-rebuild or other commands
+  (let ((port (open-remote-input-pipe session "nixos-rebuild switch")))
+    (display (get-string-all port))))
+```
+
+### 2. JSON Data Handling
+
+**guile-json** - *For structured configuration and API communication*
+
+- **Capabilities**: JSON parsing/generation, RFC 7464 support, pretty printing
+- **Use cases**: Configuration management, API responses, deployment metadata
+- **Features**: JSON Text Sequences, record mapping, validation
+
+```scheme
+;; Machine configuration as JSON
+(define machine-config
+  `(("name" . "grey-area")
+    ("services" . #("ollama" "jellyfin" "forgejo"))
+    ("deployment" . (("method" . "deploy-rs") ("status" . "ready")))))
+
+(scm->json machine-config #:pretty #t)
+```
+
+### 3. HTTP Server/Client Operations
+
+**guile-webutils** & **guile-curl** - *For web-based interfaces and API calls*
+
+- **guile-webutils**: Session management, multipart messages, form handling
+- **guile-curl**: HTTP client operations, file transfers
+- **Use cases**: Web dashboard, API endpoints, remote service integration
+
+### 4. Process Management & System Interaction
+
+**guile-bash** - *Bridge between Scheme and shell operations*
+
+- **Capabilities**: Execute shell commands, capture output, dynamic variables
+- **Use cases**: Gradual migration, leveraging existing shell tools
+- **Integration**: Call existing scripts while building Scheme alternatives
+
+### 5. Configuration Management
+
+**guile-config** - *Declarative configuration handling*
+
+- **Capabilities**: Declarative config specs, file parsing, command-line args
+- **Use cases**: Tool configuration, machine definitions, deployment parameters
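+
+To make the configuration-management idea concrete, here is a minimal sketch in
+plain Guile with guile-json; the machine layout and the `machine->json` helper
+are illustrative inventions for this document, not part of any library:
+
+```scheme
+(use-modules (json))
+
+;; Declarative machine definitions as an association list
+(define machines
+  '(("sleeper-service" . ((role . "file-server") (services . #("nfs" "zfs"))))
+    ("grey-area"       . ((role . "apps") (services . #("jellyfin" "forgejo"))))))
+
+;; Serialize one machine definition for an API response or a config file
+(define (machine->json name)
+  (let ((config (assoc-ref machines name)))
+    (scm->json-string `((name . ,name) . ,config) #:pretty #t)))
+
+(display (machine->json "grey-area"))
+```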
+## MCP Server Implementation Libraries
+
+### 1. JSON-RPC Foundation
+
+**scheme-json-rpc** - *Core MCP protocol implementation*
+
+- **Capabilities**: JSON-RPC 2.0 specification compliance
+- **Transport**: Works over stdio, WebSocket, HTTP
+- **Use cases**: MCP message handling, method dispatch
+
+### 2. WebSocket Support
+
+**guile-websocket** - *Real-time communication*
+
+- **Capabilities**: RFC 6455 compliant WebSocket implementation
+- **Features**: Server and client support, binary/text messages
+- **Use cases**: MCP transport layer, real-time lab monitoring
+
+### 3. Web Server Infrastructure
+
+**artanis** - *Full-featured web application framework*
+
+- **Capabilities**: Routing, templating, database access, session management
+- **Use cases**: MCP HTTP transport, web dashboard, API endpoints
+
+```scheme
+;; MCP method dispatch sketch. The handle-* and mcp-error procedures are
+;; placeholders, not Artanis API. Note that string keys require match
+;; rather than case, since case compares with eqv?.
+(use-modules (ice-9 match))
+
+(define (mcp-dispatch request)
+  (let ((method (assoc-ref request "method")))
+    (match method
+      ("tools/list"     (handle-tools-list))
+      ("resources/list" (handle-resources-list))
+      ("tools/call"     (handle-tool-call request))
+      (_ (mcp-error "Unknown method")))))
+```
+
+## Enhanced Networking & Protocol Libraries
+
+### 1. Advanced HTTP/Network Operations
+
+**guile-curl** - *Comprehensive HTTP client*
+
+- Features: HTTPS, authentication, file uploads, progress callbacks
+- Use cases: API integrations, file transfers, service health checks
+
+**guile-dns** - *DNS operations*
+
+- Pure Guile DNS implementation
+- Use cases: Service discovery, network diagnostics
+
+### 2. Data Serialization
+
+**guile-cbor** - *Efficient binary serialization*
+
+- Alternative to JSON for performance-critical operations
+- Smaller payload sizes for resource monitoring
+
+**guile-yaml** / **guile-yamlpp** - *YAML processing*
+
+- Configuration file handling
+- Integration with existing YAML-based tools
+
+### 3. Database Integration
+
+**guile-sqlite3** - *Local data storage*
+
+- Deployment history, machine states, configuration versioning
+- Embedded database for tool state management
+
+**guile-redis** - *Caching and session storage*
+
+- Performance optimization for frequent operations
+- Distributed state management across lab machines
+
+## System Integration Libraries
+
+### 1. File System Operations
+
+**guile-filesystem** & **f.scm** - *Enhanced file handling*
+
+- Beyond basic Guile file operations
+- Path manipulation, directory traversal, file monitoring
+
+### 2. Process and Service Management
+
+**shepherd** - *Service management*
+
+- GNU Shepherd integration for service lifecycle management
+- Alternative to systemd interactions
+
+### 3. Cryptography and Security
+
+**guile-gcrypt** - *Cryptographic operations*
+
+- Key management, encryption/decryption, hashing
+- Secure configuration storage, deployment verification
+
+## Specialized Infrastructure Libraries
+
+### 1. Containerization Support
+
+**guile-docker** / Container operations
+
+- Docker/Podman integration for containerized services
+- Image management, container lifecycle
+
+### 2. Version Control Integration
+
+**guile-git** - *Git operations*
+
+- Flake updates, configuration versioning
+- Automated commit/push for deployment tracking
+
+### 3. Monitoring and Metrics
+
+**prometheus** (Guile implementation) - *Metrics collection*
+
+- Performance monitoring, deployment success rates
+- Integration with existing monitoring infrastructure
+
+## MCP Server Implementation Strategy
+
+### Core MCP Capabilities to Implement
+
+1. 
**Tools**: Home lab management operations + - `deploy-machine`: Deploy specific machine configurations + - `check-status`: Machine connectivity and health checks + - `update-flake`: Update package definitions + - `rollback-deployment`: Emergency rollback procedures + +2. **Resources**: Lab state and configuration access + - Machine configurations (read-only access to NixOS configs) + - Deployment history and logs + - Service status across all machines + - Network topology and connectivity maps + +3. **Prompts**: Common operational templates + - Deployment workflows + - Troubleshooting procedures + - Security audit checklists + +### Implementation Architecture + +```scheme +(use-modules (json) (web socket) (ssh session) (scheme json-rpc)) + +(define-mcp-server home-lab-mcp + #:tools `(("deploy-machine" + #:description "Deploy configuration to specified machine" + #:parameters ,(make-schema-object + `(("machine" #:type "string" #:required #t) + ("method" #:type "string" #:enum ("deploy-rs" "hybrid-update"))))) + + ("check-status" + #:description "Check machine connectivity and services" + #:parameters ,(make-schema-object + `(("machines" #:type "array" #:items "string"))))) + + #:resources `(("machines://config/{machine}" + #:description "NixOS configuration for machine") + ("machines://status/{machine}" + #:description "Current status and health metrics")) + + #:prompts `(("deployment-workflow" + #:description "Standard deployment procedure") + ("troubleshoot-machine" + #:description "Machine diagnostics checklist"))) +``` + +## Migration Strategy + +### Phase 1: Core Infrastructure (Weeks 1-2) + +1. Set up Guile development environment in NixOS +2. Implement basic SSH operations using guile-ssh +3. Port status checking functionality +4. Create JSON-based machine configuration format + +### Phase 2: Enhanced Features (Weeks 3-4) + +1. Implement deployment methods (deploy-rs integration) +2. Add error handling and logging +3. Create web interface for monitoring +4. Develop basic MCP server capabilities + +### Phase 3: Advanced Integration (Weeks 5-6) + +1. Full MCP server implementation +2. Web dashboard with real-time updates +3. Integration with existing monitoring tools +4. Documentation and testing + +### Phase 4: Production Deployment (Week 7) + +1. Gradual rollout with fallback to Bash tool +2. Performance optimization +3. User training and documentation +4. Monitoring and feedback collection + +## Guile vs. Alternative Languages + +### Advantages of Guile + +- **Homoiconicity**: Code as data enables powerful metaprogramming +- **REPL Development**: Interactive development and debugging +- **GNU Integration**: Seamless integration with GNU tools and philosophy +- **Extensibility**: Easy C library bindings for performance-critical code +- **Stability**: Mature language with stable API + +### Considerations + +- **Learning Curve**: Lisp syntax may be unfamiliar +- **Performance**: Generally slower than compiled languages for CPU-intensive tasks +- **Ecosystem Size**: Smaller than Python/JavaScript ecosystems +- **Tooling**: Fewer IDE integrations compared to mainstream languages + +## Recommended Libraries by Priority + +### Tier 1 (Essential) + +1. **guile-ssh** - Remote operations foundation +2. **guile-json** - Data interchange format +3. **scheme-json-rpc** - MCP protocol implementation +4. **guile-webutils** - Web application utilities + +### Tier 2 (Important) + +1. **guile-websocket** - Real-time communication +2. **artanis** - Web framework +3. 
**guile-curl** - HTTP client operations +4. **guile-config** - Configuration management + +### Tier 3 (Enhancement) + +1. **guile-git** - Version control integration +2. **guile-sqlite3** - Local data storage +3. **prometheus** - Metrics and monitoring +4. **guile-gcrypt** - Security operations + +## Security Considerations + +### Authentication and Authorization + +- **guile-ssh**: Public key authentication, agent support +- **guile-gcrypt**: Secure credential storage +- **MCP Security**: Implement capability-based access control + +### Network Security + +- **TLS Support**: Via guile-gnutls for encrypted communications +- **SSH Tunneling**: Secure communication channels +- **Input Validation**: JSON schema validation for all inputs + +### Deployment Security + +- **Signed Deployments**: Cryptographic verification of configurations +- **Audit Logging**: Comprehensive operation logging +- **Rollback Capability**: Quick recovery from failed deployments + +## Performance Considerations + +### Optimization Strategies + +1. **Compiled Modules**: Use `.go` files for performance-critical code +2. **Async Operations**: Leverage fibers for concurrent operations +3. **Caching**: Redis integration for frequently accessed data +4. **Native Extensions**: C bindings for system-level operations + +### Expected Performance + +- **SSH Operations**: Comparable to native SSH client +- **JSON Processing**: Adequate for configuration sizes (< 1MB) +- **Web Serving**: Suitable for low-traffic administrative interfaces +- **Startup Time**: Fast REPL startup, moderate for compiled applications + +## Conclusion + +The Guile ecosystem provides comprehensive support for implementing both a sophisticated home lab management tool and a Model Context Protocol server. The availability of mature libraries for SSH operations, JSON handling, web services, and system integration makes Guile an excellent choice for this migration. + +**Key Strengths:** + +- Rich library ecosystem specifically suited to system administration +- Excellent JSON-RPC and WebSocket support for MCP implementation +- Strong SSH and networking capabilities +- Active development community with good documentation + +**Recommended Approach:** + +1. Start with core SSH and JSON functionality +2. Gradually migrate features from Bash to Guile +3. Implement MCP server capabilities incrementally +4. Maintain backwards compatibility during transition + +The migration to Guile will provide significant benefits in code maintainability, error handling, and extensibility while enabling advanced features like MCP integration that would be difficult to implement in Bash. diff --git a/packages/guile_scripting_solution.md b/packages/guile_scripting_solution.md index 87538ef..295d292 100644 --- a/packages/guile_scripting_solution.md +++ b/packages/guile_scripting_solution.md @@ -8,23 +8,23 @@ GNU Guile is the official extension language for the GNU Project. It is an imple Key reasons for considering Guile: -* **Expressiveness and Power:** Scheme is a full-fledged programming language with features like first-class functions, macros, and a rich standard library. This allows for more elegant and maintainable solutions to complex problems. -* **Better Error Handling:** Guile provides robust error handling mechanisms (conditions and handlers) that are more sophisticated than Bash's `set -e` and trap. -* **Modularity:** Guile supports modules, making it easier to organize code into reusable components. 
-* **Data Manipulation:** Scheme excels at handling structured data, which can be beneficial for managing configurations or parsing output from commands. -* **Readability (for Lisp programmers):** While Lisp syntax can be initially unfamiliar, it can lead to very clear and concise code once learned. -* **Interoperability:** Guile can easily call external programs and libraries, and can be extended with C code if needed. +* **Expressiveness and Power:** Scheme is a full-fledged programming language with features like first-class functions, macros, and a rich standard library. This allows for more elegant and maintainable solutions to complex problems. +* **Better Error Handling:** Guile provides robust error handling mechanisms (conditions and handlers) that are more sophisticated than Bash's `set -e` and trap. +* **Modularity:** Guile supports modules, making it easier to organize code into reusable components. +* **Data Manipulation:** Scheme excels at handling structured data, which can be beneficial for managing configurations or parsing output from commands. +* **Readability (for Lisp programmers):** While Lisp syntax can be initially unfamiliar, it can lead to very clear and concise code once learned. +* **Interoperability:** Guile can easily call external programs and libraries, and can be extended with C code if needed. ## 2. Advantages over Bash for `home-lab-tools` Migrating `home-lab-tools` from Bash to Guile offers specific benefits: -* **Improved Logic Handling:** Complex conditional logic, loops, and function definitions are more naturally expressed in Guile. The current Bash script uses case statements and string comparisons extensively, which can become unwieldy. -* **Structured Data Management:** Machine definitions, deployment modes, and status information could be represented as Scheme data structures (lists, association lists, records), making them easier to manage and query. -* **Enhanced Error Reporting:** More descriptive error messages and better control over script termination in case of failures. -* **Code Reusability:** Functions for common tasks (e.g., SSHing to a machine, running `nixos-rebuild`) can be more cleanly defined and reused. -* **Easier Testing:** Guile's nature as a programming language makes it more amenable to unit testing individual functions or modules. -* **Future Extensibility:** Adding new commands, machines, or features will be simpler and less error-prone in a more structured language. +* **Improved Logic Handling:** Complex conditional logic, loops, and function definitions are more naturally expressed in Guile. The current Bash script uses case statements and string comparisons extensively, which can become unwieldy. +* **Structured Data Management:** Machine definitions, deployment modes, and status information could be represented as Scheme data structures (lists, association lists, records), making them easier to manage and query. +* **Enhanced Error Reporting:** More descriptive error messages and better control over script termination in case of failures. +* **Code Reusability:** Functions for common tasks (e.g., SSHing to a machine, running `nixos-rebuild`) can be more cleanly defined and reused. +* **Easier Testing:** Guile's nature as a programming language makes it more amenable to unit testing individual functions or modules. +* **Future Extensibility:** Adding new commands, machines, or features will be simpler and less error-prone in a more structured language. ## 3. 
Setting up Guile

@@ -46,28 +46,28 @@ The `!#` at the end is a Guile-specific convention that allows the script to be

 ## 4. Basic Guile Scripting Concepts

-* **S-expressions:** Code is written using S-expressions (Symbolic Expressions), which are lists enclosed in parentheses, e.g., `(function arg1 arg2)`.
-* **Definitions:** `(define variable value)` and `(define (function-name arg1 arg2) ...body...)`.
-* **Procedures (Functions):** Core of Guile programming.
-* **Control Flow:** `(if condition then-expr else-expr)`, `(cond (test1 expr1) (test2 expr2) ... (else else-expr))`, `(case ...)`
-* **Modules:** `(use-modules (ice-9 popen))` for using libraries.
+* **S-expressions:** Code is written using S-expressions (Symbolic Expressions), which are lists enclosed in parentheses, e.g., `(function arg1 arg2)`.
+* **Definitions:** `(define variable value)` and `(define (function-name arg1 arg2) ...body...)`.
+* **Procedures (Functions):** The core of Guile programming.
+* **Control Flow:** `(if condition then-expr else-expr)`, `(cond (test1 expr1) (test2 expr2) ... (else else-expr))`, `(case ...)`
+* **Modules:** `(use-modules (ice-9 popen))` for using libraries.

 ## 5. Interacting with the System

 Guile provides modules for system interaction:

-* **(ice-9 popen):** For running external commands and capturing their output (similar to backticks or `$(...)` in Bash).
-  * `open-pipe* command mode`: Opens a pipe to a command.
-  * `get-string-all port`: Reads all output from a port.
-* **(ice-9 rdelim):** For reading lines from ports.
-* **(ice-9 filesys):** For file system operations (checking existence, deleting, etc.).
-  * `file-exists? path`
-  * `delete-file path`
-* **(srfi srfi-1):** List processing utilities.
-* **(srfi srfi-26):** `cut` for partial application, useful for creating specialized functions.
-* **Environment Variables:** `(getenv "VAR_NAME")`, `(setenv "VAR_NAME" "value")`.
+* **(ice-9 popen):** For running external commands and capturing their output (similar to backticks or `$(...)` in Bash).
+  * `open-pipe* command mode`: Opens a pipe to a command.
+  * `get-string-all port`: Reads all output from a port.
+* **(ice-9 rdelim):** For reading lines from ports.
+* **Core file operations:** Procedures such as `file-exists?` and `delete-file` are built into Guile's default environment; no module import is needed.
+  * `file-exists? path`
+  * `delete-file path`
+* **(srfi srfi-1):** List processing utilities.
+* **(srfi srfi-26):** `cut` for partial application, useful for creating specialized functions.
+* **Environment Variables:** `(getenv "VAR_NAME")`, `(setenv "VAR_NAME" "value")`.

-**Example: Running a command**
+**Example: Running a command**

 ```scheme
 (use-modules (ice-9 popen))

@@ -87,8 +87,8 @@ Guile provides modules for system interaction:

 Guile uses a condition system for error handling.

-* `catch`: Allows you to catch specific types of errors.
-* `throw`: Raises an error.
+* `catch`: Allows you to catch specific types of errors.
+* `throw`: Raises an error.

 ```scheme
 (use-modules (ice-9 exceptions))

@@ -113,11 +113,11 @@ For `home-lab-tools`, this means we can provide more specific feedback when a de

 Guile's module system allows splitting the code into logical units. For `home-lab-tools`, we could have modules for:

-* `lab-config`: Machine definitions, paths.
-* `lab-deploy`: Functions related to deploying configurations.
-* `lab-ssh`: SSH interaction utilities.
-* `lab-status`: Functions for checking machine status.
-* `lab-utils`: General helper functions, logging.
+* `lab-config`: Machine definitions, paths.
+* `lab-deploy`: Functions related to deploying configurations. +* `lab-ssh`: SSH interaction utilities. +* `lab-status`: Functions for checking machine status. +* `lab-utils`: General helper functions, logging. **Example module structure:** @@ -275,20 +275,21 @@ For more interactive command-line tools, Guile Scheme can be used to create Text **Key Features:** -* **Windowing:** Create and manage multiple windows on the terminal. -* **Input Handling:** Process keyboard input, including special keys. -* **Text Attributes:** Control colors, bolding, underlining, and other text styles. -* **Forms, Panels, Menus:** Higher-level components for building complex interfaces. +* **Windowing:** Create and manage multiple windows on the terminal. +* **Input Handling:** Process keyboard input, including special keys. +* **Text Attributes:** Control colors, bolding, underlining, and other text styles. +* **Forms, Panels, Menus:** Higher-level components for building complex interfaces. **Getting Started with Guile-Ncurses:** -1. **Installation:** `guile-ncurses` would typically be installed via your system's package manager or built from source. If you are using NixOS, you would look for a Nix package for `guile-ncurses`. +1. **Installation:** `guile-ncurses` would typically be installed via your system's package manager or built from source. If you are using NixOS, you would look for a Nix package for `guile-ncurses`. + ```nix # Example: Adding guile-ncurses to a Nix shell (package name might vary) nix-shell -p guile guile-ncurses ``` -2. **Using in Code:** +2. **Using in Code:** You would use the `(ncurses curses)` module (and others like `(ncurses form)`, `(ncurses menu)`, `(ncurses panel)`) in your Guile script. ```scheme @@ -312,8 +313,8 @@ For more interactive command-line tools, Guile Scheme can be used to create Text **Resources:** -* **Guile-Ncurses Project Page:** [https://www.nongnu.org/guile-ncurses/](https://www.nongnu.org/guile-ncurses/) -* **Guile-Ncurses Manual:** [https://www.gnu.org/software/guile-ncurses/manual/](https://www.gnu.org/software/guile-ncurses/manual/) +* **Guile-Ncurses Project Page:** [https://www.nongnu.org/guile-ncurses/](https://www.nongnu.org/guile-ncurses/) +* **Guile-Ncurses Manual:** [https://www.gnu.org/software/guile-ncurses/manual/](https://www.gnu.org/software/guile-ncurses/manual/) Integrating `guile-ncurses` can significantly enhance the user experience of your `home-lab-tools` script, allowing for interactive menus, status dashboards, and more complex user interactions beyond simple command-line arguments and output. @@ -323,11 +324,11 @@ Migrating `home-lab-tools` to Guile Scheme offers a path to a more maintainable, **Next Steps:** -1. **Install Guile:** Ensure Guile is available in the development environment. -2. **Start Small:** Begin by porting one command or a set of utility functions (e.g., logging, SSH wrappers). -3. **Learn Guile Basics:** Familiarize with Scheme syntax, common procedures, and modules. The Guile Reference Manual is an excellent resource. -4. **Develop Incrementally:** Port functionality piece by piece, testing along the way. -5. **Explore Guile Libraries:** Investigate Guile libraries for argument parsing (e.g., `(gnu cmdline)`), file system operations, and other needs. -6. **Refactor and Organize:** Use Guile's module system to keep the codebase clean and organized. +1. **Install Guile:** Ensure Guile is available in the development environment. +2. 
**Start Small:** Begin by porting one command or a set of utility functions (e.g., logging, SSH wrappers).
+3. **Learn Guile Basics:** Familiarize yourself with Scheme syntax, common procedures, and modules. The Guile Reference Manual is an excellent resource.
+4. **Develop Incrementally:** Port functionality piece by piece, testing along the way.
+5. **Explore Guile Libraries:** Investigate Guile libraries for argument parsing (e.g., `(ice-9 getopt-long)`), file system operations, and other needs.
+6. **Refactor and Organize:** Use Guile's module system to keep the codebase clean and organized.

 This transition will require an initial investment in learning and development but promises a more powerful and sustainable tool for managing the home lab infrastructure.
diff --git a/packages/home-lab-tool.scm b/packages/home-lab-tool.scm
old mode 100644
new mode 100755
index e69de29..fc4920e
--- a/packages/home-lab-tool.scm
+++ b/packages/home-lab-tool.scm
@@ -0,0 +1,74 @@
+#!/usr/bin/env guile
+!#
+
+;; Home Lab Tool - Guile Scheme Implementation (Minimal Version)
+;; Main entry point for the lab command-line tool
+
+(use-modules (ice-9 match)
+             (ice-9 format))
+
+;; Simple logging
+(define (log-info msg . args)
+  (apply format #t (string-append "[lab] " msg "~%") args))
+
+(define (log-error msg . args)
+  (apply format (current-error-port) (string-append "[ERROR] " msg "~%") args))
+
+;; Configuration
+(define machines '("congenital-optimist" "sleeper-service" "grey-area" "reverse-proxy"))
+
+;; Main command dispatcher
+(define (dispatch-command command args)
+  (match command
+    ("status"
+     (log-info "Infrastructure status:")
+     (for-each (lambda (machine)
+                 (format #t "  ~a: Online~%" machine))
+               machines))
+
+    ("deploy"
+     (if (null? args)
+         (log-error "deploy command requires machine name")
+         (let ((machine (car args)))
+           (if (member machine machines)
+               (log-info "Deploying to ~a..." machine)
+               (log-error "Unknown machine: ~a" machine)))))
+
+    ("mcp"
+     (if (null? args)
+         (log-error "mcp command requires: start, stop, or status")
+         (match (car args)
+           ("status" (log-info "MCP server: Development mode"))
+           (_ (log-error "MCP command not implemented: ~a" (car args))))))
+
+    (_ (log-error "Unknown command: ~a" command))))
+
+;; Show help
+(define (show-help)
+  (format #t "Home Lab Tool (Guile) v0.1.0
+
+Usage: lab [COMMAND] [ARGS...]
+
+Commands:
+  status          Show infrastructure status
+  deploy MACHINE  Deploy to machine
+  mcp status      Show MCP server status
+  help            Show this help
+
+Machines: ~a
+" (string-join machines ", ")))
+
+;; Main entry point
+(define (main args)
+  (if (< (length args) 2)
+      (show-help)
+      (let ((command (cadr args))
+            (command-args (cddr args)))
+        (if (string=? command "help")
+            (show-help)
+            (dispatch-command command command-args)))))
+
+;; Execute main if this script is run directly; match on the script name
+;; rather than a brittle exact "./..." path comparison.
+(when (string-suffix? "home-lab-tool.scm" (car (command-line)))
+  (main (command-line)))
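+
+;; Usage sketch (assuming the script is executable and run in place):
+;;   $ ./home-lab-tool.scm status
+;;   $ ./home-lab-tool.scm deploy sleeper-service
+;;   $ ./home-lab-tool.scm mcp status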
diff --git a/packages/lab/core.scm b/packages/lab/core.scm
new file mode 100644
index 0000000..e21a7af
--- /dev/null
+++ b/packages/lab/core.scm
@@ -0,0 +1,252 @@
+;; lab/core.scm - Core home lab operations
+
+;; Note: multiple-value binding uses let-values from SRFI-11; there is no
+;; (ice-9 call-with-values) module in Guile.
+(define-module (lab core)
+  #:use-module (ice-9 format)
+  #:use-module (ice-9 popen)
+  #:use-module (ice-9 rdelim)
+  #:use-module (ice-9 textual-ports)
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-19)
+  #:use-module (utils logging)
+  #:use-module (utils config)
+  #:use-module (utils ssh)
+  #:export (get-infrastructure-status
+            check-system-health
+            check-critical-services   ; also used by (lab deployment)
+            update-flake
+            validate-environment
+            execute-nixos-rebuild))
+
+;; Get comprehensive infrastructure status
+(define (get-infrastructure-status . args)
+  "Get status of all machines or specific machine if provided"
+  (let ((machines (if (null? args)
+                      (get-all-machines)
+                      (list (car args)))))
+
+    (log-info "Checking infrastructure status...")
+
+    (map (lambda (machine-name)
+           ;; Timing uses get-internal-real-time; srfi-19's current-time
+           ;; returns a time object that cannot be subtracted with -.
+           (let ((start-time (get-internal-real-time)))
+             (log-debug "Checking ~a..." machine-name)
+
+             (let* ((ssh-config (get-ssh-config machine-name))
+                    (is-local (and ssh-config (assoc-ref ssh-config 'is-local)))
+                    (connection-status (test-ssh-connection machine-name))
+                    (services-status (if connection-status
+                                         (get-machine-services-status machine-name)
+                                         '()))
+                    (system-info (if connection-status
+                                     (get-machine-system-info machine-name)
+                                     #f))
+                    (elapsed (/ (- (get-internal-real-time) start-time)
+                                internal-time-units-per-second)))
+
+               `((machine . ,machine-name)
+                 (type . ,(if is-local 'local 'remote))
+                 (connection . ,(if connection-status 'online 'offline))
+                 (services . ,services-status)
+                 (system . ,system-info)
+                 (check-time . ,elapsed)))))
+         machines)))
+
+;; Get services status for a machine
+(define (get-machine-services-status machine-name)
+  "Check status of services on a machine"
+  (let ((machine-config (get-machine-config machine-name)))
+    (if machine-config
+        (let ((services (assoc-ref machine-config 'services)))
+          (if services
+              (map (lambda (service)
+                     (let-values (((success output)
+                                   (run-remote-command machine-name
+                                                       "systemctl is-active"
+                                                       (symbol->string service))))
+                       `(,service . ,(if success
+                                         (string-trim-right output)
+                                         "unknown"))))
+                   services)
+              '()))
+        '())))
+
+;; Get basic system information from a machine
+(define (get-machine-system-info machine-name)
+  "Get basic system information from a machine"
+  (let ((info-commands
+         '(("uptime" "uptime -p")
+           ("load" "cat /proc/loadavg | cut -d' ' -f1-3")
+           ("memory" "free -h | grep Mem | awk '{print $3\"/\"$2}'")
+           ("disk" "df -h / | tail -1 | awk '{print $5}'")
+           ("kernel" "uname -r"))))
+
+    (fold (lambda (cmd-pair acc)
+            (let ((key (car cmd-pair))
+                  (command (cadr cmd-pair)))
+              (let-values (((success output)
+                            (run-remote-command machine-name command)))
+                (if success
+                    (assoc-set! acc (string->symbol key) (string-trim-right output))
+                    acc))))
+          '()
+          info-commands)))
+
+;; Check system health with comprehensive tests
+(define (check-system-health machine-name)
+  "Perform comprehensive health check on a machine"
+  (log-info "Performing health check on ~a..." machine-name)
+
+  ;; The list must hold procedures, not quoted symbols, so it is
+  ;; quasiquoted with each procedure unquoted.
+  (let ((health-checks
+         `(("connectivity" . ,test-ssh-connection)
+           ("disk-space" . ,check-disk-space)
+           ("system-load" . ,check-system-load)
+           ("critical-services" . ,check-critical-services)
+           ("network" . ,check-network-connectivity))))
+
+    (map (lambda (check-pair)
+           (let ((check-name (car check-pair))
+                 (check-proc (cdr check-pair)))
+             (log-debug "Running ~a check..." check-name)
+             (catch #t
+               (lambda ()
+                 (let ((result (check-proc machine-name)))
+                   `(,check-name . ((status . ,(if result 'pass 'fail))
+                                    (result . ,result)))))
+               (lambda (key . args)
+                 (log-warn "Health check ~a failed: ~a" check-name key)
+                 `(,check-name . ((status . error)
+                                  (error . ,key)))))))
+         health-checks)))
+
+;; Individual health check functions
+(define (check-disk-space machine-name)
+  "Check if disk space is below critical threshold"
+  (let-values (((success output)
+                (run-remote-command machine-name "df / | tail -1 | awk '{print $5}' | sed 's/%//'")))
+    (if success
+        (let ((usage (string->number (string-trim-right output))))
+          (< usage 90)) ; Pass if usage < 90%
+        #f)))
+
+(define (check-system-load machine-name)
+  "Check if system load is reasonable"
+  (let-values (((success output)
+                (run-remote-command machine-name "cat /proc/loadavg | cut -d' ' -f1")))
+    (if success
+        (let ((load (string->number (string-trim-right output))))
+          (< load 5.0)) ; Pass if load < 5.0
+        #f)))
+
+(define (check-critical-services machine-name)
+  "Check that critical services are running"
+  (let ((critical-services '("sshd")))
+    (every (lambda (service)
+             (let-values (((success output)
+                           (run-remote-command machine-name "systemctl is-active" service)))
+               (and success (string=? (string-trim-right output) "active"))))
+           critical-services)))
+
+(define (check-network-connectivity machine-name)
+  "Check basic network connectivity"
+  (let-values (((success output)
+                (run-remote-command machine-name "ping -c 1 -W 5 8.8.8.8 > /dev/null 2>&1; echo $?")))
+    (and success (string=? (string-trim-right output) "0"))))
+
+;; Update flake inputs
+(define (update-flake options)
+  "Update flake inputs in the home lab repository"
+  ;; assoc-ref is used directly here; the option-ref helper lives in the
+  ;; deployment and machines modules, which this module must not depend on.
+  (let ((homelab-root (get-homelab-root))
+        (dry-run (assoc-ref options 'dry-run)))
+
+    (log-info "Updating flake inputs...")
+
+    (if dry-run
+        (begin
+          (log-info "DRY RUN: Would execute: nix flake update")
+          #t)
+        (let* ((update-cmd (format #f "cd ~a && nix flake update" homelab-root))
+               (port (open-pipe* OPEN_READ "/bin/sh" "-c" update-cmd))
+               (output (get-string-all port))
+               (status (close-pipe port)))
+
+          (if (zero? status)
+              (begin
+                (log-success "Flake inputs updated successfully")
+                (log-debug "Update output: ~a" output)
+                #t)
+              (begin
+                (log-error "Flake update failed (exit: ~a)" status)
+                (log-error "Error output: ~a" output)
+                #f))))))
+
+;; Validate home lab environment
+(define (validate-environment)
+  "Validate that the home lab environment is properly configured"
+  (log-info "Validating home lab environment...")
+
+  (let ((checks
+         `(("homelab-root" . ,(lambda () (file-exists? (get-homelab-root))))
+           ("flake-file" . ,(lambda () (file-exists? (string-append (get-homelab-root) "/flake.nix"))))
+           ("ssh-config" . ,(lambda () (file-exists? (string-append (getenv "HOME") "/.ssh/config"))))
+           ("nix-command" . ,(lambda () (zero? (system "which nix > /dev/null 2>&1"))))
+           ("machines-config" . ,(lambda () (not (null? (get-all-machines))))))))
+
+    (let ((results (map (lambda (check-pair)
+                          (let ((check-name (car check-pair))
+                                (check-proc (cdr check-pair)))
+                            (let ((result (check-proc)))
+                              (if result
+                                  (log-success "✓ ~a" check-name)
+                                  (log-error "✗ ~a" check-name))
+                              `(,check-name . ,result))))
+                        checks)))
+
+      (let ((failures (filter (lambda (result) (not (cdr result))) results)))
+        (if (null? failures)
+            (begin
+              (log-success "Environment validation passed")
+              #t)
+            (begin
+              (log-error "Environment validation failed: ~a" (map car failures))
+              #f))))))
+
+;; Execute nixos-rebuild with proper error handling
+(define (execute-nixos-rebuild machine-name mode options)
+  "Execute nixos-rebuild on a machine with comprehensive error handling"
+  (let ((homelab-root (get-homelab-root))
+        (dry-run (assoc-ref options 'dry-run))
+        (ssh-config (get-ssh-config machine-name)))
+
+    (if (not ssh-config)
+        (begin
+          (log-error "No SSH configuration for machine: ~a" machine-name)
+          #f)
+        (let* ((is-local (assoc-ref ssh-config 'is-local))
+               (flake-ref (format #f "~a#~a" homelab-root machine-name))
+               (rebuild-cmd (if is-local
+                                (format #f "sudo nixos-rebuild ~a --flake ~a" mode flake-ref)
+                                (format #f "nixos-rebuild ~a --flake ~a --target-host ~a --use-remote-sudo"
+                                        mode flake-ref (assoc-ref ssh-config 'hostname)))))
+
+          (log-info "Executing nixos-rebuild for ~a (mode: ~a)" machine-name mode)
+          (log-debug "Command: ~a" rebuild-cmd)
+
+          (if dry-run
+              (begin
+                (log-info "DRY RUN: Would execute: ~a" rebuild-cmd)
+                #t)
+              (with-spinner
+               (format #f "Rebuilding ~a" machine-name)
+               (lambda ()
+                 (let* ((port (open-pipe* OPEN_READ "/bin/sh" "-c" rebuild-cmd))
+                        (output (get-string-all port))
+                        (status (close-pipe port)))
+
+                   (if (zero? status)
+                       (begin
+                         (log-success "nixos-rebuild completed successfully for ~a" machine-name)
+                         (log-debug "Build output: ~a" output)
+                         #t)
+                       (begin
+                         (log-error "nixos-rebuild failed for ~a (exit: ~a)" machine-name status)
+                         (log-error "Error output: ~a" output)
+                         #f)))))))))) 
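+
+;; REPL usage sketch for this module (machine names are examples from this
+;; repository's configuration):
+;;   (use-modules (lab core))
+;;   (validate-environment)                   ; sanity-check the setup
+;;   (get-infrastructure-status "grey-area")  ; one machine
+;;   (check-system-health "sleeper-service")  ; run all health checks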
diff --git a/packages/lab/deployment.scm b/packages/lab/deployment.scm
new file mode 100644
index 0000000..9332cb5
--- /dev/null
+++ b/packages/lab/deployment.scm
@@ -0,0 +1,329 @@
+;; lab/deployment.scm - Deployment operations for Home Lab Tool
+
+(define-module (lab deployment)
+  #:use-module (ice-9 format)
+  #:use-module (ice-9 match)
+  #:use-module (ice-9 popen)
+  #:use-module (ice-9 textual-ports)
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-19)
+  #:use-module (utils logging)
+  #:use-module (utils config)
+  #:use-module (utils ssh)
+  #:use-module (lab core)
+  #:export (deploy-machine
+            update-all-machines
+            hybrid-update
+            validate-deployment
+            rollback-deployment
+            deployment-status
+            option-ref))
+
+;; Helper function for option handling
+(define (option-ref options key default)
+  "Get option value with default fallback"
+  (let ((value (assoc-ref options key)))
+    (if value value default)))
+
+;; Deploy configuration to a specific machine
+(define (deploy-machine machine-name mode options)
+  "Deploy NixOS configuration to a specific machine"
+  (let ((valid-modes '("boot" "test" "switch"))
+        (dry-run (option-ref options 'dry-run #f)))
+
+    ;; Validate inputs
+    (if (not (validate-machine-name machine-name))
+        #f
+        (if (not (member mode valid-modes))
+            (begin
+              (log-error "Invalid deployment mode: ~a" mode)
+              (log-error "Valid modes: ~a" (string-join valid-modes ", "))
+              #f)
+
+            ;; Proceed with deployment
+            (begin
+              (log-info "Starting deployment to ~a (mode: ~a)" machine-name mode)
+
+              ;; Pre-deployment validation
+              (if (not (validate-deployment-prerequisites machine-name))
+                  (begin
+                    (log-error "Pre-deployment validation failed")
+                    #f)
+
+                  ;; Execute deployment
+                  (let ((deployment-result
+                         (execute-deployment machine-name mode options)))
+
+                    ;; Post-deployment validation
+                    (if deployment-result
+                        (begin
+                          (log-info "Deployment completed, validating...")
+                          (if (validate-post-deployment machine-name mode)
+                              (begin
+                                (log-success "Deployment successful and validated ✓")
+                                #t)
+                              (begin
+                                (log-warn "Deployment completed but validation failed")
+                                ;; Don't fail completely - deployment might still be functional
+                                #t)))
+                        (begin
+                          (log-error "Deployment failed")
+                          #f)))))))))
+
+;; Execute the actual deployment
+(define (execute-deployment machine-name mode options)
+  "Execute the deployment based on machine type and configuration"
+  (let ((ssh-config (get-ssh-config machine-name))
+        (machine-config (get-machine-config machine-name)))
+
+    (if (not ssh-config)
+        (begin
+          (log-error "No SSH configuration found for ~a" machine-name)
+          #f)
+
+        (let ((deployment-method (assoc-ref machine-config 'deployment-method))
+              (is-local (assoc-ref ssh-config 'is-local)))
+
+          (log-debug "Using deployment method: ~a" (or deployment-method 'nixos-rebuild))
+
+          (match (or deployment-method 'nixos-rebuild)
+            ('nixos-rebuild
+             (execute-nixos-rebuild machine-name mode options))
+
+            ('deploy-rs
+             (execute-deploy-rs machine-name mode options))
+
+            ('hybrid
+             (execute-hybrid-deployment machine-name mode options))
+
+            (method
+             (log-error "Unknown deployment method: ~a" method)
+             #f))))))
+
+;; Execute deploy-rs deployment
+(define (execute-deploy-rs machine-name mode options)
+  "Deploy using deploy-rs for atomic deployments"
+  (let ((homelab-root (get-homelab-root))
+        (dry-run (option-ref options 'dry-run #f)))
+
+    (log-info "Deploying ~a using deploy-rs..." machine-name)
+
+    (if dry-run
+        (begin
+          (log-info "DRY RUN: Would execute deploy-rs for ~a" machine-name)
+          #t)
+
+        (let* ((deploy-cmd (format #f "cd ~a && deploy '.#~a' --magic-rollback --auto-rollback"
+                                   homelab-root machine-name))
+               ;; Timing via get-internal-real-time; srfi-19 time objects
+               ;; cannot be subtracted with -.
+               (start-time (get-internal-real-time)))
+
+          (log-debug "Deploy command: ~a" deploy-cmd)
+
+          (with-spinner
+           (format #f "Deploying ~a with deploy-rs" machine-name)
+           (lambda ()
+             (let* ((port (open-pipe* OPEN_READ "/bin/sh" "-c" deploy-cmd))
+                    (output (get-string-all port))
+                    (status (close-pipe port))
+                    (elapsed (exact->inexact
+                              (/ (- (get-internal-real-time) start-time)
+                                 internal-time-units-per-second))))
+
+               (if (zero? status)
+                   (begin
+                     (log-success "deploy-rs completed in ~as" elapsed)
+                     (log-debug "Deploy output: ~a" output)
+                     #t)
+                   (begin
+                     (log-error "deploy-rs failed (exit: ~a)" status)
+                     (log-error "Error output: ~a" output)
+                     #f)))))))))
+
+;; Execute hybrid deployment (flake update + deploy)
+(define (execute-hybrid-deployment machine-name mode options)
+  "Execute hybrid deployment: update flake then deploy"
+  (log-info "Starting hybrid deployment for ~a..." machine-name)
+
+  ;; First update flake
+  (if (update-flake options)
+      ;; Then deploy
+      (execute-nixos-rebuild machine-name mode options)
+      (begin
+        (log-error "Flake update failed, aborting deployment")
+        #f)))
+
+;; Validate deployment prerequisites
+(define (validate-deployment-prerequisites machine-name)
+  "Validate that deployment prerequisites are met"
+  (log-debug "Validating deployment prerequisites for ~a..." machine-name)
+
+  (let ((checks
+         `(("machine-config" . ,(lambda () (get-machine-config machine-name)))
+           ("ssh-connectivity" . ,(lambda () (test-ssh-connection machine-name)))
+           ("flake-exists" . ,(lambda () (file-exists? (string-append (get-homelab-root) "/flake.nix"))))
+           ("machine-flake-config" . ,(lambda () (validate-machine-flake-config machine-name))))))
+    (let ((results (map (lambda (check-pair)
+                          (let ((check-name (car check-pair))
+                                (check-proc (cdr check-pair)))
+                            (let ((result (check-proc)))
+                              (if result
+                                  (log-debug "✓ Prerequisite: ~a" check-name)
+                                  (log-error "✗ Prerequisite failed: ~a" check-name))
+                              result)))
+                        checks)))
+
+      (every identity results))))
+
+;; Validate machine has flake configuration
+(define (validate-machine-flake-config machine-name)
+  "Check that machine has a configuration in the flake"
+  (let ((machine-dir (string-append (get-homelab-root) "/machines/" machine-name)))
+    (and (file-exists? machine-dir)
+         (file-exists? (string-append machine-dir "/configuration.nix")))))
+
+;; Validate post-deployment state
+(define (validate-post-deployment machine-name mode)
+  "Validate system state after deployment"
+  (log-debug "Validating post-deployment state for ~a..." machine-name)
+
+  ;; Wait a moment for services to stabilize
+  (sleep 3)
+
+  (let ((checks
+         `(("ssh-connectivity" . ,(lambda () (test-ssh-connection machine-name)))
+           ("system-responsive" . ,(lambda () (check-system-responsive machine-name)))
+           ("critical-services" . ,(lambda () (check-critical-services machine-name))))))
+
+    (let ((results (map (lambda (check-pair)
+                          (let ((check-name (car check-pair))
+                                (check-proc (cdr check-pair)))
+                            (catch #t
+                              (lambda ()
+                                (let ((result (check-proc)))
+                                  (if result
+                                      (log-debug "✓ Post-deployment: ~a" check-name)
+                                      (log-warn "✗ Post-deployment: ~a" check-name))
+                                  result))
+                              (lambda (key . args)
+                                (log-warn "Post-deployment check ~a failed: ~a" check-name key)
+                                #f))))
+                        checks)))
+
+      (every identity results))))
+
+;; Check if system is responsive after deployment
+(define (check-system-responsive machine-name)
+  "Check if system is responsive after deployment"
+  (let-values (((success output)
+                (run-remote-command machine-name "echo 'system-check' && uptime")))
+    (and success (string-contains output "system-check"))))
+
+;; Update all machines
+(define (update-all-machines mode options)
+  "Update all configured machines"
+  (let ((machines (get-all-machines))
+        (dry-run (option-ref options 'dry-run #f)))
+
+    (log-info "Starting update of all machines (mode: ~a)..." mode)
+
+    (if dry-run
+        (begin
+          (log-info "DRY RUN: Would update machines: ~a" (string-join machines ", "))
+          #t)
+
+        (let ((results
+               (map (lambda (machine-name)
+                      (log-info "Updating ~a..." machine-name)
+                      (let ((result (deploy-machine machine-name mode options)))
+                        (if result
+                            (log-success "✓ ~a updated successfully" machine-name)
+                            (log-error "✗ ~a update failed" machine-name))
+                        `(,machine-name . ,result)))
+                    machines)))
+
+          (let ((successful (filter cdr results))
+                (failed (filter (lambda (r) (not (cdr r))) results)))
+
+            (log-info "Update summary:")
+            (log-info "  Successful: ~a/~a" (length successful) (length results))
+            (when (not (null? failed))
+              (log-warn "  Failed: ~a" (map car failed)))
+
+            ;; Return success if all succeeded
+            (= (length successful) (length results)))))))
+
+;; Hybrid update: flake update + selective deployment
+(define (hybrid-update target options)
+  "Perform hybrid update: flake update followed by deployment"
+  (log-info "Starting hybrid update for target: ~a" target)
+
+  ;; First update flake
+  (if (update-flake options)
+
+      ;; Then deploy based on target
+      (match target
+        ("all"
+         (update-all-machines "boot" options))
+
+        (machine-name
+         (if (validate-machine-name machine-name)
+             (deploy-machine machine-name "boot" options)
+             #f)))
+
+      (begin
+        (log-error "Flake update failed, aborting hybrid update")
+        #f)))
+
+;; Get deployment status
+(define (deployment-status . machine-name)
+  "Get current deployment status for machines"
+  (let ((machines (if (null? machine-name)
+                      (get-all-machines)
+                      machine-name)))
+
+    (map (lambda (machine)
+           (let ((last-deployment (get-last-deployment-info machine))
+                 (current-generation (get-current-generation machine)))
+             `((machine . ,machine)
+               (last-deployment . ,last-deployment)
+               (current-generation . ,current-generation)
+               (status . ,(get-deployment-health machine)))))
+         machines)))
+
+;; Get last deployment information
+(define (get-last-deployment-info machine-name)
+  "Get information about the last deployment"
+  (let-values (((success output)
+                (run-remote-command machine-name
+                                    "ls -la /nix/var/nix/profiles/system* | tail -1")))
+    (if success
+        (string-trim-right output)
+        "unknown")))
+
+;; Get current system generation
+(define (get-current-generation machine-name)
+  "Get current NixOS generation information"
+  (let-values (((success output)
+                (run-remote-command machine-name "nixos-version")))
+    (if success
+        (string-trim-right output)
+        "unknown")))
+
+;; Get deployment health status
+(define (get-deployment-health machine-name)
+  "Check if deployment is healthy"
+  (if (test-ssh-connection machine-name)
+      'healthy
+      'unhealthy))
+
+;; Rollback deployment (placeholder for future implementation)
+(define (rollback-deployment machine-name . generation)
+  "Rollback to previous generation (deploy-rs feature)"
+  (log-warn "Rollback functionality not yet implemented")
+  (log-info "For manual rollback on ~a:" machine-name)
+  (log-info "  1. SSH to machine")
+  (log-info "  2. Run: sudo nixos-rebuild switch --rollback")
+  #f)
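+
+;; Usage sketch (options is a plain alist, as option-ref expects):
+;;   (use-modules (lab deployment))
+;;   (deploy-machine "grey-area" "test" '((dry-run . #t)))  ; rehearse first
+;;   (deploy-machine "grey-area" "switch" '())              ; then apply
+;;   (update-all-machines "boot" '())                       ; fleet-wide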
diff --git a/packages/lab/machines.scm b/packages/lab/machines.scm
new file mode 100644
index 0000000..6d45f86
--- /dev/null
+++ b/packages/lab/machines.scm
@@ -0,0 +1,258 @@
+;; lab/machines.scm - Machine-specific operations
+
+(define-module (lab machines)
+  #:use-module (ice-9 format)
+  #:use-module (ice-9 match)
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-19)
+  #:use-module (utils logging)
+  #:use-module (utils config)
+  #:use-module (utils ssh)
+  #:use-module (lab core)
+  #:export (show-infrastructure-status
+            get-machine-details
+            discover-machines
+            validate-machine-health
+            get-machine-metrics
+            option-ref))
+
+;; Helper function for option handling
+(define (option-ref options key default)
+  "Get option value with default fallback"
+  (let ((value (assoc-ref options key)))
+    (if value value default)))
+
+;; Display infrastructure status in a human-readable format
+(define (show-infrastructure-status machine-name options)
+  "Display comprehensive infrastructure status"
+  (let ((verbose (option-ref options 'verbose #f))
+        ;; Only pass the machine name through when one was given; the
+        ;; underlying procedure is variadic and #f would be treated as a name.
+        (status-data (if machine-name
+                         (get-infrastructure-status machine-name)
+                         (get-infrastructure-status))))
+
+    (log-info "Home-lab infrastructure status:")
+    (newline)
+
+    (for-each
+     (lambda (machine-status)
+       (display-machine-status machine-status verbose))
+     status-data)
+
+    ;; Summary statistics
+    (let ((total-machines (length status-data))
+          (online-machines (length (filter
+                                    (lambda (status)
+                                      (eq? (assoc-ref status 'connection) 'online))
+                                    status-data))))
+      (newline)
+      (if (= online-machines total-machines)
+          (log-success "All ~a machines online ✓" total-machines)
+          (log-warn "~a/~a machines online" online-machines total-machines)))))
+
+;; Display status for a single machine
+(define (display-machine-status machine-status verbose)
+  "Display formatted status for a single machine"
+  (let* ((machine-name (assoc-ref machine-status 'machine))
+         (machine-type (assoc-ref machine-status 'type))
+         (connection (assoc-ref machine-status 'connection))
+         (services (assoc-ref machine-status 'services))
+         (system-info (assoc-ref machine-status 'system))
+         (check-time (assoc-ref machine-status 'check-time)))
+
+    ;; Machine header with connection status
+    (let ((status-symbol (if (eq? connection 'online) "✅" "❌"))
+          (type-label (if (eq? machine-type 'local) "(local)" "(remote)")))
+      (format #t "━━━ ~a ~a ~a ━━━~%" machine-name type-label status-symbol))
+
+    ;; Connection details
+    (if (eq? connection 'online)
+        (begin
+          (when system-info
+            (let ((uptime (assoc-ref system-info 'uptime))
+                  (load (assoc-ref system-info 'load))
+                  (memory (assoc-ref system-info 'memory))
+                  (disk (assoc-ref system-info 'disk)))
+              (when uptime (format #t "⏱️  Uptime: ~a~%" uptime))
+              (when load (format #t "📊 Load: ~a~%" load))
+              (when memory (format #t "🧠 Memory: ~a~%" memory))
+              (when disk (format #t "💾 Disk: ~a~%" disk))))
+
+          ;; Services status
+          (when (not (null? services))
+            (format #t "🔧 Services: ")
+            (for-each (lambda (service-status)
+                        (let ((service-name (symbol->string (car service-status)))
+                              (service-state (cdr service-status)))
+                          (let ((status-icon (cond
+                                              ((string=? service-state "active") "✅")
+                                              ((string=? service-state "inactive") "❌")
+                                              ((string=? service-state "failed") "💥")
+                                              (else "❓"))))
+                            (format #t "~a ~a  " service-name status-icon))))
+                      services)
+            (newline))
+
+          (format #t "⚡ Response: ~ams~%" (inexact->exact (round (* check-time 1000)))))
+        (format #t "⚠️  Status: Offline~%"))
+
+    ;; Verbose information
+    (when verbose
+      (let ((ssh-config (get-ssh-config machine-name)))
+        (when ssh-config
+          (format #t "🔗 SSH: ~a~%" (assoc-ref ssh-config 'hostname))
+          (let ((ssh-alias (assoc-ref ssh-config 'ssh-alias)))
+            (when ssh-alias
+              (format #t "🏷️  Alias: ~a~%" ssh-alias))))))
+
+    (newline)))
+
+;; Get detailed information about a specific machine
+(define (get-machine-details machine-name)
+  "Get comprehensive details about a specific machine"
+  (let ((machine-config (get-machine-config machine-name)))
+    (if (not machine-config)
+        (begin
+          (log-error "Machine ~a not found in configuration" machine-name)
+          #f)
+        (let* ((ssh-config (get-ssh-config machine-name))
+               (health-status (check-system-health machine-name))
+               (current-status (car (get-infrastructure-status machine-name))))
+
+          `((name . ,machine-name)
+            (config . ,machine-config)
+            (ssh . ,ssh-config)
+            (status . ,current-status)
+            (health . ,health-status)
+            (last-updated . ,(current-date)))))))
+
+;; Discover machines on the network
+(define (discover-machines)
+  "Discover available machines on the network"
+  (log-info "Discovering machines on the network...")
+
+  (let ((configured-machines (get-all-machines)))
+    (log-debug "Configured machines: ~a" configured-machines)
+
+    ;; Test connectivity to each configured machine
+    (let ((discovery-results
+           (map (lambda (machine-name)
+                  (log-debug "Testing connectivity to ~a..." machine-name)
+                  (let ((reachable (test-ssh-connection machine-name))
+                        (ssh-config (get-ssh-config machine-name)))
+                    `((machine . ,machine-name)
+                      (configured . #t)
+                      (reachable . ,reachable)
+                      (type . ,(if (and ssh-config (assoc-ref ssh-config 'is-local))
+                                   'local 'remote))
+                      (hostname . ,(if ssh-config
+                                       (assoc-ref ssh-config 'hostname)
+                                       "unknown")))))
+                configured-machines)))
+
+      ;; TODO: Add network scanning for unconfigured machines
+      ;; This could use nmap or similar tools to discover machines
+
+      (log-info "Discovery completed")
+      discovery-results)))
+
+;; Validate health of a machine with detailed checks
+(define (validate-machine-health machine-name . detailed)
+  "Perform comprehensive health validation on a machine"
+  (let ((run-detailed (if (null? detailed) #f (car detailed))))
+    (log-info "Validating health of ~a..." machine-name)
+
+    (let ((basic-health (check-system-health machine-name)))
+      (if run-detailed
+          ;; Extended health checks for detailed mode; the list is
+          ;; quasiquoted so each entry holds the procedure itself, not a
+          ;; quoted symbol.
+          (let ((extended-checks
+                 `(("filesystem" . ,check-filesystem-health)
+                   ("network-services" . ,check-network-services)
+                   ("system-logs" . ,check-system-logs)
+                   ("performance" . ,check-performance-metrics))))
+
+            (let ((extended-results
+                   (map (lambda (check-pair)
+                          (let ((check-name (car check-pair))
+                                (check-proc (cdr check-pair)))
+                            (log-debug "Running extended check: ~a" check-name)
+                            (catch #t
+                              (lambda ()
+                                `(,check-name . ,(check-proc machine-name)))
+                              (lambda (key . args)
+                                (log-warn "Extended check ~a failed: ~a" check-name key)
+                                `(,check-name . (error . ,key))))))
+                        extended-checks)))
+
+              `((basic . ,basic-health)
+                (extended . ,extended-results)
+                (timestamp . ,(current-date)))))
+
+          ;; Just basic health checks
+          `((basic . ,basic-health)
+            (timestamp . ,(current-date)))))))
+;; Extended health check functions
+(define (check-filesystem-health machine-name)
+  "Check filesystem health and disk usage"
+  (receive (success output)
+      (run-remote-command machine-name "df -h && echo '---' && mount | grep -E '^/' | head -5")
+    (if success
+        `((status . pass)
+          (details . ,(string-trim-right output)))
+        `((status . fail)
+          (error . "Could not retrieve filesystem information")))))
+
+(define (check-network-services machine-name)
+  "Check network service connectivity"
+  (let ((services-to-test '(("ssh" "22") ("http" "80") ("https" "443"))))
+    (map (lambda (service-pair)
+           (let ((service-name (car service-pair))
+                 (port (cadr service-pair)))
+             (receive (success output)
+                 (run-remote-command machine-name
+                                     (format #f "netstat -ln | grep ':~a ' > /dev/null 2>&1; echo $?" port))
+               `(,service-name . ,(if (and success (string=? (string-trim-right output) "0"))
+                                      'listening 'not-listening)))))
+         services-to-test)))
+
+(define (check-system-logs machine-name)
+  "Check system logs for recent errors"
+  (receive (success output)
+      (run-remote-command machine-name
+                          "journalctl --since='1 hour ago' --priority=err --no-pager | wc -l")
+    (if success
+        (let ((error-count (string->number (string-trim-right output))))
+          `((status . ,(if (< error-count 10) 'good 'concerning))
+            (error-count . ,error-count)))
+        `((status . unknown)
+          (error . "Could not check system logs")))))
+
+(define (check-performance-metrics machine-name)
+  "Get basic performance metrics"
+  (let ((metrics-commands
+         '(("cpu-usage" "top -bn1 | grep 'Cpu(s)' | awk '{print $2}' | sed 's/%us,//'")
+           ("memory-usage" "free | grep Mem | awk '{printf \"%.1f\", ($3/$2) * 100.0}'")
+           ("io-wait" "iostat 1 2 | tail -1 | awk '{print $4}'"))))
+
+    (map (lambda (metric-pair)
+           (let ((metric-name (car metric-pair))
+                 (command (cadr metric-pair)))
+             (receive (success output)
+                 (run-remote-command machine-name command)
+               `(,(string->symbol metric-name) .
+                 ,(if success (string-trim-right output) "unknown")))))
+         metrics-commands)))
+
+;; Get machine metrics for monitoring
+(define (get-machine-metrics machine-name . time-range)
+  "Get machine metrics for monitoring and analysis"
+  (let ((range (if (null? time-range) "1h" (car time-range))))
+    (log-debug "Collecting metrics for ~a (range: ~a)" machine-name range)
+
+    (let ((current-time (current-date))
+          (performance (check-performance-metrics machine-name))
+          (health (validate-machine-health machine-name)))
+
+      `((machine . ,machine-name)
+        (timestamp . ,current-time)
+        (performance . ,performance)
+        (health . ,health)
+        (range . ,range)))))
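A minimal usage sketch for the metrics entry point above (assuming the `(lab ...)` and `(utils ...)` modules are on Guile's load path; the machine name comes from the default configuration in `utils/config.scm`):

```scheme
(use-modules (lab machines))

;; Collect roughly an hour of metrics for the file server and pull out
;; the CPU figure gathered by check-performance-metrics.
(let ((metrics (get-machine-metrics "sleeper-service" "1h")))
  (assoc-ref (assoc-ref metrics 'performance) 'cpu-usage))
```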
diff --git a/packages/lab/monitoring.scm b/packages/lab/monitoring.scm
new file mode 100644
index 0000000..11fc27e
--- /dev/null
+++ b/packages/lab/monitoring.scm
@@ -0,0 +1,337 @@
+;; lab/monitoring.scm - Infrastructure monitoring and health checks
+
+(define-module (lab monitoring)
+  #:use-module (ice-9 format)
+  #:use-module (ice-9 match)
+  #:use-module (ice-9 popen)
+  #:use-module (ice-9 textual-ports)
+  #:use-module (srfi srfi-8)   ; receive, for the two-value command results
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-19)
+  #:use-module (utils logging)
+  #:use-module (utils config)
+  #:use-module (utils ssh)
+  #:use-module (lab core)
+  #:use-module (lab machines)
+  #:export (monitor-infrastructure
+            start-monitoring
+            stop-monitoring
+            get-monitoring-status
+            collect-metrics
+            generate-monitoring-report))
+
+;; Monitor infrastructure with optional service filtering
+(define (monitor-infrastructure service options)
+  "Monitor infrastructure, optionally filtering by service"
+  (let ((verbose (option-ref options 'verbose #f))
+        (machines (get-all-machines)))
+
+    (log-info "Starting infrastructure monitoring...")
+
+    (if service
+        (monitor-specific-service service machines verbose)
+        (monitor-all-services machines verbose))))
+
+;; Monitor a specific service across all machines
+(define (monitor-specific-service service machines verbose)
+  "Monitor a specific service across all configured machines"
+  (log-info "Monitoring service: ~a" service)
+
+  (let ((service-symbol (string->symbol service)))
+    (for-each
+     (lambda (machine-name)
+       (let ((machine-config (get-machine-config machine-name)))
+         (when machine-config
+           (let ((machine-services (assoc-ref machine-config 'services)))
+             (when (and machine-services (member service-symbol machine-services))
+               (monitor-service-on-machine machine-name service verbose))))))
+     machines)))
+
+;; Monitor all services across all machines
+(define (monitor-all-services machines verbose)
+  "Monitor all services across all machines"
+  (log-info "Monitoring all services across ~a machines" (length machines))
+
+  (let ((monitoring-results
+         (map (lambda (machine-name)
+                (log-debug "Monitoring ~a..." machine-name)
+                (monitor-machine-services machine-name verbose))
+              machines)))
+
+    (display-monitoring-summary monitoring-results)))
+
+;; Monitor services on a specific machine
+(define (monitor-machine-services machine-name verbose)
+  "Monitor all services on a specific machine"
+  (let ((machine-config (get-machine-config machine-name))
+        (connection-status (test-ssh-connection machine-name)))
+
+    (if (not connection-status)
+        (begin
+          (log-warn "Cannot connect to ~a, skipping monitoring" machine-name)
+          `((machine . ,machine-name)
+            (status . offline)
+            (services . ())))
+
+        (let ((services (if machine-config
+                            (assoc-ref machine-config 'services)
+                            '())))
+          (if (null? services)
+              (begin
+                (log-debug "No services configured for ~a" machine-name)
+                `((machine . ,machine-name)
+                  (status . online)
+                  (services . ())))
+
+              (let ((service-statuses
+                     (map (lambda (service)
+                            (monitor-service-on-machine machine-name
+                                                        (symbol->string service)
+                                                        verbose))
+                          services)))
+                `((machine . ,machine-name)
+                  (status . online)
+                  (services . ,service-statuses))))))))
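+;; Per-machine results look like this (illustrative):
+;;
+;;   ((machine . "grey-area")
+;;    (status . online)
+;;    (services . (((service . "forgejo") (machine . "grey-area")
+;;                  (checks . ...) (timestamp . ...)) ...)))
+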
+;; Monitor a specific service on a specific machine
+(define (monitor-service-on-machine machine-name service verbose)
+  "Monitor a specific service on a specific machine"
+  (log-debug "Checking ~a service on ~a..." service machine-name)
+
+  (let ((service-checks
+         `(("status" . ,(lambda () (check-service-status machine-name service)))
+           ("health" . ,(lambda () (check-service-health machine-name service)))
+           ("logs" . ,(lambda () (check-service-logs machine-name service))))))
+
+    (let ((results
+           (map (lambda (check-pair)
+                  (let ((check-name (car check-pair))
+                        (check-proc (cdr check-pair)))
+                    (catch #t
+                      (lambda ()
+                        `(,check-name . ,(check-proc)))
+                      (lambda (key . args)
+                        (log-warn "Service check ~a failed for ~a: ~a"
+                                  check-name service key)
+                        `(,check-name . (error . ,key))))))
+                service-checks)))
+
+      (when verbose
+        (display-service-details machine-name service results))
+
+      `((service . ,service)
+        (machine . ,machine-name)
+        (checks . ,results)
+        (timestamp . ,(current-date))))))
+
+;; Check service status using systemctl
+(define (check-service-status machine-name service)
+  "Check if a service is active using systemctl"
+  (receive (success output)
+      (run-remote-command machine-name "systemctl is-active" service)
+    (if success
+        (let ((status (string-trim-right output)))
+          `((active . ,(string=? status "active"))
+            (status . ,status)))
+        `((active . #f)
+          (status . "unknown")
+          (error . "command-failed")))))
+
+;; Check service health with additional metrics
+(define (check-service-health machine-name service)
+  "Perform health checks for a service"
+  (let ((health-commands
+         (get-service-health-commands service)))
+
+    (if (null? health-commands)
+        `((healthy . unknown)
+          (reason . "no-health-checks-defined"))
+
+        (let ((health-results
+               (map (lambda (cmd-pair)
+                      (let ((check-name (car cmd-pair))
+                            (command (cdr cmd-pair)))
+                        (receive (success output)
+                            (run-remote-command machine-name command)
+                          `(,check-name . ((success . ,success)
+                                           (output . ,(if success
+                                                          (string-trim-right output)
+                                                          output)))))))
+                    health-commands)))
+
+          (let ((all-healthy (every (lambda (result)
+                                      (assoc-ref (cdr result) 'success))
+                                    health-results)))
+            `((healthy . ,all-healthy)
+              (checks . ,health-results)))))))
+
+;; Get service-specific health check commands
+(define (get-service-health-commands service)
+  "Get health check commands for specific services"
+  (match service
+    ("ollama"
+     '(("api-check" . "curl -f http://localhost:11434/api/tags > /dev/null 2>&1; echo $?")
+       ("process-check" . "pgrep ollama > /dev/null; echo $?")))
+
+    ("forgejo"
+     '(("web-check" . "curl -f http://localhost:3000 > /dev/null 2>&1; echo $?")
+       ("process-check" . "pgrep forgejo > /dev/null; echo $?")))
+
+    ("jellyfin"
+     '(("web-check" . "curl -f http://localhost:8096/health > /dev/null 2>&1; echo $?")
+       ("process-check" . "pgrep jellyfin > /dev/null; echo $?")))
+
+    ("nfs-server"
+     '(("service-check" . "showmount -e localhost > /dev/null 2>&1; echo $?")
+       ("exports-check" . "test -f /etc/exports; echo $?")))
+
+    ("nginx"
+     '(("config-check" . "nginx -t 2>/dev/null; echo $?")
+       ("web-check" . "curl -f http://localhost > /dev/null 2>&1; echo $?")))
+
+    ("sshd"
+     '(("port-check" . "ss -tuln | grep ':22 ' > /dev/null; echo $?")))
+
+    (_ '())))
+
+;; Check service logs for errors
+(define (check-service-logs machine-name service)
+  "Check recent service logs for errors"
+  (receive (success output)
+      (run-remote-command machine-name
+                          (format #f "journalctl -u ~a --since='10 minutes ago' --priority=err --no-pager | wc -l" service))
+    (if success
+        (let ((error-count (string->number (string-trim-right output))))
+          `((recent-errors . ,error-count)
+            (status . ,(if (< error-count 5) 'good 'concerning))))
+        `((recent-errors . unknown)
+          (status . error)
+          (reason .
"log-check-failed"))))) + +;; Display service monitoring details +(define (display-service-details machine-name service results) + "Display detailed service monitoring information" + (format #t " 🔧 ~a@~a:~%" service machine-name) + + (for-each + (lambda (check-result) + (let ((check-name (car check-result)) + (check-data (cdr check-result))) + (match check-name + ("status" + (let ((active (assoc-ref check-data 'active)) + (status (assoc-ref check-data 'status))) + (format #t " Status: ~a ~a~%" + (if active "✅" "❌") + status))) + + ("health" + (let ((healthy (assoc-ref check-data 'healthy))) + (format #t " Health: ~a ~a~%" + (cond ((eq? healthy #t) "✅") + ((eq? healthy #f) "❌") + (else "❓")) + healthy))) + + ("logs" + (let ((errors (assoc-ref check-data 'recent-errors)) + (status (assoc-ref check-data 'status))) + (format #t " Logs: ~a (~a recent errors)~%" + (cond ((eq? status 'good) "✅") + ((eq? status 'concerning) "⚠️") + (else "❓")) + errors))) + + (_ (format #t " ~a: ~a~%" check-name check-data))))) + results)) + +;; Display monitoring summary +(define (display-monitoring-summary results) + "Display a summary of monitoring results" + (newline) + (log-info "Infrastructure Monitoring Summary:") + (newline) + + (for-each + (lambda (machine-result) + (let ((machine-name (assoc-ref machine-result 'machine)) + (machine-status (assoc-ref machine-result 'status)) + (services (assoc-ref machine-result 'services))) + + (format #t "━━━ ~a (~a) ━━━~%" machine-name machine-status) + + (if (eq? machine-status 'offline) + (format #t " ❌ Machine offline~%") + (if (null? services) + (format #t " ℹ️ No services configured~%") + (for-each + (lambda (service-result) + (let ((service-name (assoc-ref service-result 'service)) + (checks (assoc-ref service-result 'checks))) + (let ((status-check (assoc-ref checks "status")) + (health-check (assoc-ref checks "health"))) + (let ((is-active (and status-check + (assoc-ref status-check 'active))) + (is-healthy (and health-check + (eq? (assoc-ref health-check 'healthy) #t)))) + (format #t " ~a ~a~%" + service-name + (cond ((and is-active is-healthy) "✅") + (is-active "⚠️") + (else "❌"))))))) + services))) + (newline))) + results)) + +;; Start continuous monitoring (placeholder) +(define (start-monitoring options) + "Start continuous monitoring daemon" + (log-warn "Continuous monitoring not yet implemented") + (log-info "For now, use: lab monitor [service]") + #f) + +;; Stop continuous monitoring (placeholder) +(define (stop-monitoring options) + "Stop continuous monitoring daemon" + (log-warn "Continuous monitoring not yet implemented") + #f) + +;; Get monitoring status (placeholder) +(define (get-monitoring-status options) + "Get status of monitoring daemon" + (log-info "Monitoring Status: Manual mode") + (log-info "Use 'lab monitor' for on-demand monitoring") + #t) + +;; Collect metrics for analysis +(define (collect-metrics machine-name . time-range) + "Collect performance and health metrics" + (let ((range (if (null? time-range) "1h" (car time-range)))) + (log-debug "Collecting metrics for ~a (range: ~a)" machine-name range) + + (let ((metrics (get-machine-metrics machine-name range))) + (log-success "Metrics collected for ~a" machine-name) + metrics))) + +;; Generate monitoring report +(define (generate-monitoring-report . machines) + "Generate a comprehensive monitoring report" + (let ((target-machines (if (null? machines) + (get-all-machines) + machines))) + + (log-info "Generating monitoring report for ~a machines..." 
+ (length target-machines)) + + (let ((report-data + (map (lambda (machine) + (let ((monitoring-result (monitor-machine-services machine #t)) + (metrics (collect-metrics machine))) + `((machine . ,machine) + (monitoring . ,monitoring-result) + (metrics . ,metrics) + (timestamp . ,(current-date))))) + target-machines))) + + (log-success "Monitoring report generated") + report-data))) diff --git a/packages/lab_tool_howto.md b/packages/lab_tool_howto.md new file mode 100644 index 0000000..a0a9c0e --- /dev/null +++ b/packages/lab_tool_howto.md @@ -0,0 +1,89 @@ +# Lab Tool Quick Reference + +**Home lab infrastructure management and deployment tool** + +## 🚀 Quick Commands + +```bash +lab status # Check all machines +lab deploy-rs sleeper-service # Deploy with safety +lab hybrid-update all # Update everything +``` + +## 📋 Status & Monitoring + +```bash +lab status # Basic connectivity check +lab status -v # Verbose SSH debugging +``` + +**Output**: ✅ Online | ⚠️ Unreachable | Connection method shown + +## 🔄 Deployment Methods + +### Modern (Recommended) + +```bash +lab deploy-rs <machine> # Safe deployment with auto-rollback +lab deploy-rs <machine> --dry-run # Test without applying +``` + +### Hybrid (Best for Updates) + +```bash +lab hybrid-update <machine> # Update packages + deploy safely +lab hybrid-update all # Update all machines +lab hybrid-update all --dry-run # Test updates first +``` + +### Legacy (Fallback) + +```bash +lab deploy <machine> boot # Deploy for next boot +lab deploy <machine> switch # Deploy and activate now +lab deploy <machine> test # Temporary deployment +``` + +## 🔧 Maintenance + +```bash +lab update-flake # Update package versions +``` + +## 🏠 Machines + +- **congenital-optimist** - Local workstation +- **sleeper-service** - File server (NFS, ZFS) +- **grey-area** - Services host (Forgejo, Jellyfin, Ollama) +- **reverse-proxy** - Edge gateway (VPS) + +## ⚡ Examples + +```bash +# Daily workflow +lab status # Check infrastructure +lab hybrid-update sleeper-service # Update file server +lab deploy-rs grey-area --dry-run # Test config changes + +# Emergency +lab deploy sleeper-service boot # Fallback deployment +lab status -v # Debug connectivity + +# Bulk operations +lab hybrid-update all --dry-run # Test all updates +lab hybrid-update all # Apply all updates +``` + +## 🛡️ Safety Features + +- **Auto-rollback**: Failed deployments revert automatically +- **Health checks**: Validates services before committing +- **Dry-run mode**: Test changes without applying +- **Timeouts**: Prevents hanging deployments + +## 💡 Tips + +- Use `hybrid-update` for regular maintenance +- Always test with `--dry-run` first for bulk operations +- `deploy-rs` provides better safety than legacy method +- Check `lab status` before deployments diff --git a/packages/mcp/server.scm b/packages/mcp/server.scm new file mode 100644 index 0000000..88d3073 --- /dev/null +++ b/packages/mcp/server.scm @@ -0,0 +1,30 @@ +;; mcp/server.scm - Basic MCP server functionality + +(define-module (mcp server) + #:use-module (ice-9 format) + #:use-module (utils logging) + #:export (start-mcp-server + stop-mcp-server + show-mcp-status)) + +;; Start MCP server (placeholder) +(define (start-mcp-server options) + "Start the Model Context Protocol server" + (log-info "Starting MCP server...") + (log-warn "MCP server implementation is in progress") + (log-info "Server would start on port 3001") + #t) + +;; Stop MCP server (placeholder) +(define (stop-mcp-server options) + "Stop the Model Context Protocol server" + 
(log-info "Stopping MCP server...") + (log-warn "MCP server implementation is in progress") + #t) + +;; Show MCP server status (placeholder) +(define (show-mcp-status options) + "Show MCP server status" + (log-info "MCP Server Status: Development mode") + (log-info "Implementation in progress - basic functionality available") + #t) diff --git a/packages/package.json b/packages/package.json new file mode 100644 index 0000000..df654c8 --- /dev/null +++ b/packages/package.json @@ -0,0 +1,131 @@ +{ + "name": "vscode-homelab-mcp", + "displayName": "Home Lab MCP Integration", + "description": "VS Code extension for home lab management via Model Context Protocol", + "version": "0.1.0", + "engines": { + "vscode": "^1.85.0" + }, + "categories": [ + "Other" + ], + "activationEvents": [ + "onStartupFinished" + ], + "main": "./out/vscode-homelab-extension.js", + "contributes": { + "commands": [ + { + "command": "homelab.connect", + "title": "Connect to MCP Server", + "category": "Home Lab" + }, + { + "command": "homelab.disconnect", + "title": "Disconnect from MCP Server", + "category": "Home Lab" + }, + { + "command": "homelab.deploy", + "title": "Deploy Machine", + "category": "Home Lab" + }, + { + "command": "homelab.status", + "title": "Show Infrastructure Status", + "category": "Home Lab" + }, + { + "command": "homelab.generateConfig", + "title": "Generate NixOS Configuration", + "category": "Home Lab" + }, + { + "command": "homelab.listTools", + "title": "List Available Tools", + "category": "Home Lab" + }, + { + "command": "homelab.executeTool", + "title": "Execute Tool", + "category": "Home Lab" + } + ], + "menus": { + "commandPalette": [ + { + "command": "homelab.connect", + "when": "true" + }, + { + "command": "homelab.disconnect", + "when": "true" + }, + { + "command": "homelab.deploy", + "when": "true" + }, + { + "command": "homelab.status", + "when": "true" + }, + { + "command": "homelab.generateConfig", + "when": "true" + }, + { + "command": "homelab.listTools", + "when": "true" + }, + { + "command": "homelab.executeTool", + "when": "true" + } + ] + }, + "configuration": { + "title": "Home Lab MCP", + "properties": { + "homelab.mcpServerPath": { + "type": "string", + "default": "guile", + "description": "Path to Guile executable for MCP server" + }, + "homelab.mcpServerScript": { + "type": "string", + "default": "guile-mcp-server.scm", + "description": "Path to MCP server script" + }, + "homelab.autoConnect": { + "type": "boolean", + "default": true, + "description": "Automatically connect to MCP server on startup" + }, + "homelab.workspaceContext": { + "type": "boolean", + "default": true, + "description": "Provide workspace context to Copilot" + } + } + } + }, + "scripts": { + "vscode:prepublish": "npm run compile", + "compile": "tsc -p ./", + "watch": "tsc -watch -p ./", + "pretest": "npm run compile && npm run lint", + "lint": "eslint src --ext ts", + "test": "node ./out/test/runTest.js" + }, + "devDependencies": { + "@types/vscode": "^1.85.0", + "@types/node": "18.x", + "@typescript-eslint/eslint-plugin": "^6.4.1", + "@typescript-eslint/parser": "^6.4.1", + "eslint": "^8.47.0", + "typescript": "^5.1.6" + }, + "dependencies": { + "json-rpc-2.0": "^1.7.0" + } +} diff --git a/packages/setup-mcp-integration.sh b/packages/setup-mcp-integration.sh new file mode 100755 index 0000000..5bacee1 --- /dev/null +++ b/packages/setup-mcp-integration.sh @@ -0,0 +1,192 @@ +#!/bin/bash +# Home Lab MCP Integration Setup and Test Script + +set -e + +echo "🏠 Setting up Home Lab MCP Integration..." 
+ +# Check prerequisites +check_prereqs() { + echo "📋 Checking prerequisites..." + + if ! command -v guile &> /dev/null; then + echo "❌ Guile not found. Install with: sudo apt install guile-3.0-dev" + exit 1 + fi + + if ! command -v node &> /dev/null; then + echo "❌ Node.js not found. Install Node.js first." + exit 1 + fi + + if ! command -v code &> /dev/null; then + echo "❌ VS Code not found. Install VS Code first." + exit 1 + fi + + echo "✅ Prerequisites satisfied" +} + +# Install Guile dependencies +install_guile_deps() { + echo "📦 Installing Guile dependencies..." + + # Check if guile-json is available + if guile -c "(use-modules (json))" 2>/dev/null; then + echo "✅ guile-json already available" + else + echo "🔧 Installing guile-json..." + # Try different methods to install guile-json + if command -v guix &> /dev/null; then + guix install guile-json + elif command -v apt &> /dev/null; then + sudo apt install guile-json + else + echo "⚠️ Please install guile-json manually" + fi + fi +} + +# Set up VS Code extension +setup_extension() { + echo "🔧 Setting up VS Code extension..." + + # Install npm dependencies + npm install + + # Compile TypeScript + npm run compile + + echo "✅ Extension compiled successfully" +} + +# Test MCP server standalone +test_mcp_server() { + echo "🧪 Testing MCP server..." + + # Make server executable + chmod +x guile-mcp-server.scm + + # Test basic functionality + echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | ./guile-mcp-server.scm > test_output.json + + if grep -q "protocolVersion" test_output.json; then + echo "✅ MCP server responding correctly" + rm test_output.json + else + echo "❌ MCP server test failed" + cat test_output.json + exit 1 + fi +} + +# Install VS Code extension +install_extension() { + echo "📥 Installing VS Code extension..." + + # Package extension + if command -v vsce &> /dev/null; then + vsce package + code --install-extension *.vsix + else + echo "📝 Extension files ready. Install vsce to package: npm install -g @vscode/vsce" + echo " Then run: vsce package && code --install-extension *.vsix" + fi +} + +# Create example workspace +create_example() { + echo "📁 Creating example workspace..." + + mkdir -p example-homelab/{hosts,services} + + cat > example-homelab/flake.nix << 'EOF' +{ + description = "Home Lab Infrastructure"; + + inputs = { + nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable"; + }; + + outputs = { self, nixpkgs }: { + nixosConfigurations = { + server1 = nixpkgs.lib.nixosSystem { + system = "x86_64-linux"; + modules = [ ./hosts/server1.nix ]; + }; + + server2 = nixpkgs.lib.nixosSystem { + system = "x86_64-linux"; + modules = [ ./hosts/server2.nix ]; + }; + }; + }; +} +EOF + + cat > example-homelab/hosts/server1.nix << 'EOF' +{ config, pkgs, ... }: + +{ + imports = [ + ./hardware-configuration.nix + ]; + + networking.hostName = "server1"; + + services.nginx.enable = true; + services.postgresql.enable = true; + + system.stateVersion = "23.11"; +} +EOF + + cat > example-homelab/hosts/server2.nix << 'EOF' +{ config, pkgs, ... 
}: + +{ + imports = [ + ./hardware-configuration.nix + ]; + + networking.hostName = "server2"; + + services.grafana.enable = true; + services.prometheus.enable = true; + + system.stateVersion = "23.11"; +} +EOF + + echo "✅ Example workspace created in example-homelab/" +} + +# Main setup flow +main() { + check_prereqs + install_guile_deps + setup_extension + test_mcp_server + create_example + + echo "" + echo "🎉 Setup complete!" + echo "" + echo "Next steps:" + echo "1. Open example-homelab/ in VS Code" + echo "2. Use Ctrl+Shift+P and search 'Home Lab' commands" + echo "3. Try 'Home Lab: Show Infrastructure Status'" + echo "4. Test Copilot integration with infrastructure context" + echo "" + echo "Available commands:" + echo "- Home Lab: Connect to MCP Server" + echo "- Home Lab: Deploy Machine" + echo "- Home Lab: Show Infrastructure Status" + echo "- Home Lab: Generate NixOS Configuration" + echo "- Home Lab: List Available Tools" + echo "" + echo "The MCP server provides context to GitHub Copilot about your infrastructure!" +} + +# Run setup +main "$@" diff --git a/packages/tsconfig.json b/packages/tsconfig.json new file mode 100644 index 0000000..0e09f1e --- /dev/null +++ b/packages/tsconfig.json @@ -0,0 +1,20 @@ +{ + "compilerOptions": { + "module": "commonjs", + "target": "ES2020", + "outDir": "out", + "lib": [ + "ES2020" + ], + "sourceMap": true, + "rootDir": ".", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + }, + "exclude": [ + "node_modules", + ".vscode-test" + ] +} diff --git a/packages/utils/config.scm b/packages/utils/config.scm new file mode 100644 index 0000000..91a4557 --- /dev/null +++ b/packages/utils/config.scm @@ -0,0 +1,129 @@ +;; utils/config.scm - Configuration management for Home Lab Tool + +(define-module (utils config) + #:use-module (ice-9 format) + #:use-module (ice-9 textual-ports) + #:use-module (json) + #:use-module (srfi srfi-1) + #:use-module (utils logging) + #:export (load-config + get-config-value + machine-configs + get-machine-config + get-all-machines + validate-machine-name + get-homelab-root + get-ssh-config)) + +;; Default configuration +(define default-config + `((homelab-root . "/home/geir/Home-lab") + (machines . ((congenital-optimist + (type . local) + (hostname . "localhost") + (services . (workstation development))) + (sleeper-service + (type . remote) + (hostname . "sleeper-service.tail807ea.ts.net") + (ssh-alias . "admin-sleeper") + (services . (nfs zfs storage))) + (grey-area + (type . remote) + (hostname . "grey-area.tail807ea.ts.net") + (ssh-alias . "admin-grey") + (services . (ollama forgejo git))) + (reverse-proxy + (type . remote) + (hostname . "reverse-proxy.tail807ea.ts.net") + (ssh-alias . "admin-reverse") + (services . (nginx proxy ssl))))) + (deployment . ((default-mode . "boot") + (timeout . 300) + (retry-count . 3))) + (monitoring . ((interval . 30) + (timeout . 10))) + (mcp . ((port . 3001) + (host . "localhost") + (log-level . "info"))))) + +;; Current loaded configuration +(define current-config default-config) + +;; Load configuration from file or use defaults +(define (load-config . args) + (let ((config-file (if (null? args) + (string-append (getenv "HOME") "/.config/homelab/config.json") + (car args)))) + (if (file-exists? config-file) + (begin + (log-debug "Loading configuration from ~a" config-file) + (catch #t + (lambda () + (let ((json-data (call-with-input-file config-file get-string-all))) + (set! 
current-config (json-string->scm json-data)) + (log-info "Configuration loaded successfully"))) + (lambda (key . args) + (log-warn "Failed to load config file, using defaults: ~a" key) + (set! current-config default-config)))) + (begin + (log-debug "No config file found, using defaults") + (set! current-config default-config))) + current-config)) + +;; Get a configuration value by path +(define (get-config-value path . default) + (let ((result (fold (lambda (key acc) + (if (and acc (list? acc)) + (assoc-ref acc key) + #f)) + current-config path))) + (if result + result + (if (null? default) #f (car default))))) + +;; Get machine configurations +(define (machine-configs) + (get-config-value '(machines))) + +;; Get configuration for a specific machine +(define (get-machine-config machine-name) + (let ((machine-symbol (if (symbol? machine-name) + machine-name + (string->symbol machine-name)))) + (assoc-ref (machine-configs) machine-symbol))) + +;; Get list of all machine names +(define (get-all-machines) + (map (lambda (machine-entry) + (symbol->string (car machine-entry))) + (machine-configs))) + +;; Validate that a machine name exists +(define (validate-machine-name machine-name) + (let ((machines (get-all-machines))) + (if (member machine-name machines) + #t + (begin + (log-error "Unknown machine: ~a" machine-name) + (log-error "Available machines: ~a" (string-join machines ", ")) + #f)))) + +;; Get home lab root directory +(define (get-homelab-root) + (get-config-value '(homelab-root) "/home/geir/Home-lab")) + +;; Get SSH configuration for a machine +(define (get-ssh-config machine-name) + (let ((machine-config (get-machine-config machine-name))) + (if machine-config + (let ((type (assoc-ref machine-config 'type)) + (hostname (assoc-ref machine-config 'hostname)) + (ssh-alias (assoc-ref machine-config 'ssh-alias))) + `((type . ,type) + (hostname . ,hostname) + (ssh-alias . ,ssh-alias) + (is-local . ,(eq? type 'local)))) + #f))) + +;; Initialize configuration on module load +(load-config) diff --git a/packages/utils/json.scm b/packages/utils/json.scm new file mode 100644 index 0000000..52a86e3 --- /dev/null +++ b/packages/utils/json.scm @@ -0,0 +1,141 @@ +;; utils/json.scm - JSON processing utilities for Home Lab Tool + +(define-module (utils json) + #:use-module (json) + #:use-module (ice-9 textual-ports) + #:use-module (ice-9 format) + #:use-module (srfi srfi-1) + #:use-module (utils logging) + #:export (read-json-file + write-json-file + json-pretty-print + scm->json-string + json-string->scm-safe + validate-json-schema + merge-json-objects)) + +;; Read JSON from file with error handling +(define (read-json-file filename) + (catch #t + (lambda () + (log-debug "Reading JSON file: ~a" filename) + (call-with-input-file filename + (lambda (port) + (json->scm port)))) + (lambda (key . args) + (log-error "Failed to read JSON file ~a: ~a ~a" filename key args) + #f))) + +;; Write Scheme object to JSON file +(define (write-json-file filename obj . options) + (let ((pretty (if (null? options) #t (car options)))) + (catch #t + (lambda () + (log-debug "Writing JSON file: ~a" filename) + (call-with-output-file filename + (lambda (port) + (if pretty + (scm->json obj port #:pretty #t) + (scm->json obj port)))) + #t) + (lambda (key . 
args)
+        (log-error "Failed to write JSON file ~a: ~a ~a" filename key args)
+        #f))))
+
+;; Pretty print JSON to current output port
+(define (json-pretty-print obj)
+  (scm->json obj (current-output-port) #:pretty #t)
+  (newline))
+
+;; Convert Scheme object to JSON string
+(define (scm->json-string obj . options)
+  (let ((pretty (if (null? options) #f (car options))))
+    (catch #t
+      (lambda ()
+        (call-with-output-string
+         (lambda (port)
+           (if pretty
+               (scm->json obj port #:pretty #t)
+               (scm->json obj port)))))
+      (lambda (key . args)
+        (log-error "Failed to convert to JSON: ~a ~a" key args)
+        #f))))
+
+;; Safely convert JSON string to Scheme with error handling
+(define (json-string->scm-safe json-str)
+  (catch #t
+    (lambda ()
+      (json-string->scm json-str))
+    (lambda (key . args)
+      (log-error "Failed to parse JSON string: ~a ~a" key args)
+      #f)))
+
+;; Map a value to a JSON-ish type tag; plain Guile has no type-of
+;; procedure, so validate-json-schema dispatches on predicates instead
+(define (json-type-of value)
+  (cond ((string? value) 'string)
+        ((boolean? value) 'boolean)
+        ((number? value) 'number)
+        ((null? value) 'null)
+        ((vector? value) 'array)
+        ((pair? value) 'object)
+        (else 'unknown)))
+
+;; Basic JSON schema validation
+(define (validate-json-schema obj schema)
+  "Validate JSON object against a simple schema.
+   Schema format: ((required-keys ...) (optional-keys ...) (types ...))"
+  (let ((required-keys (car schema))
+        (optional-keys (if (> (length schema) 1) (cadr schema) '()))
+        (type-specs (if (> (length schema) 2) (caddr schema) '())))
+
+    ;; Check required keys
+    (let ((missing-keys (filter (lambda (key)
+                                  (not (assoc-ref obj key)))
+                                required-keys)))
+      (if (not (null? missing-keys))
+          (begin
+            (log-error "Missing required keys: ~a" missing-keys)
+            #f)
+          (begin
+            ;; Check types if specified
+            (let ((type-errors (filter-map
+                                (lambda (type-spec)
+                                  (let ((key (car type-spec))
+                                        (expected-type (cadr type-spec)))
+                                    (let ((value (assoc-ref obj key)))
+                                      (if (and value (not (eq? (json-type-of value) expected-type)))
+                                          (format #f "Key ~a: expected ~a, got ~a"
+                                                  key expected-type (json-type-of value))
+                                          #f))))
+                                type-specs)))
+              (if (not (null? type-errors))
+                  (begin
+                    (log-error "Type validation errors: ~a" type-errors)
+                    #f)
+                  #t)))))))
+
+;; Merge two JSON objects (association lists)
+(define (merge-json-objects obj1 obj2)
+  "Merge two JSON objects, with obj2 values taking precedence"
+  (let ((merged (copy-tree obj1)))
+    (for-each (lambda (pair)
+                (let ((key (car pair))
+                      (value (cdr pair)))
+                  (set! merged (assoc-set! merged key value))))
+              obj2)
+    merged))
+
+;; Convert nested alist to flat key paths for easier access
+(define (flatten-json-paths obj . prefix)
+  "Convert nested object to flat list of (path . value) pairs"
+  (let ((current-prefix (if (null? prefix) '() (car prefix))))
+    (fold (lambda (pair acc)
+            (let ((key (car pair))
+                  (value (cdr pair)))
+              (let ((new-path (append current-prefix (list key))))
+                (if (and (list? value) (not (null? value)) (pair? (car value)))
+                    ;; Nested object - recurse
+                    (append (flatten-json-paths value new-path) acc)
+                    ;; Leaf value
+                    (cons (cons new-path value) acc)))))
+          '()
+          obj)))
+
+;; Get nested value using path list
+(define (json-path-ref obj path)
+  "Get value from nested object using list of keys as path"
+  (fold (lambda (key acc)
+          (if (and acc (list? acc))
+              (assoc-ref acc key)
+              #f))
+        obj path))
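A quick sketch of how these helpers combine (assumes guile-json is installed; the file path is illustrative):

```scheme
(use-modules (utils json))

;; Merge defaults with an override, persist, and read back.
(let* ((defaults '(("port" . 3001) ("host" . "localhost")))
       (override '(("port" . 3002)))
       (merged   (merge-json-objects defaults override)))
  (write-json-file "/tmp/mcp-config.json" merged) ; pretty-printed by default
  (read-json-file "/tmp/mcp-config.json"))
```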
diff --git a/packages/utils/logging.scm b/packages/utils/logging.scm
new file mode 100644
index 0000000..e45281b
--- /dev/null
+++ b/packages/utils/logging.scm
@@ -0,0 +1,91 @@
+;; utils/logging.scm - Logging utilities for Home Lab Tool
+
+(define-module (utils logging)
+  #:use-module (ice-9 format)
+  #:use-module (ice-9 popen)
+  #:use-module (srfi srfi-19)
+  #:export (log-debug
+            log-info
+            log-warn
+            log-error
+            log-success
+            set-log-level!
+            with-spinner))
+
+;; ANSI color codes
+(define color-codes
+  '((reset . "\x1b[0m")
+    (bold . "\x1b[1m")
+    (red . "\x1b[31m")
+    (green . "\x1b[32m")
+    (yellow . "\x1b[33m")
+    (blue . "\x1b[34m")
+    (magenta . "\x1b[35m")
+    (cyan . "\x1b[36m")))
+
+;; Current log level
+(define current-log-level 'info)
+
+;; Log levels with numeric values for comparison
+(define log-levels
+  '((debug . 0)
+    (info . 1)
+    (warn . 2)
+    (error . 3)))
+
+;; Get color code by name
+(define (get-color name)
+  (assoc-ref color-codes name))
+
+;; Set the current log level
+(define (set-log-level! level)
+  (set! current-log-level level))
+
+;; Check if a message should be logged at current level
+(define (should-log? level)
+  (<= (assoc-ref log-levels current-log-level)
+      (assoc-ref log-levels level)))
+
+;; Format timestamp for log messages
+(define (format-timestamp)
+  (date->string (current-date) "~H:~M:~S"))
+
+;; Core logging function with color support
+(define (log-with-color level color prefix message . args)
+  (when (should-log? level)
+    (let ((timestamp (format-timestamp))
+          (formatted-msg (apply format #f message args))
+          (color-start (get-color color))
+          (color-end (get-color 'reset)))
+      (format (current-error-port) "~a~a[lab]~a ~a ~a~%"
+              color-start prefix color-end timestamp formatted-msg))))
+
+;; Specific logging functions
+(define (log-debug message . args)
+  (apply log-with-color 'debug 'cyan "DEBUG" message args))
+
+(define (log-info message . args)
+  (apply log-with-color 'info 'blue "INFO " message args))
+
+(define (log-warn message . args)
+  (apply log-with-color 'warn 'yellow "WARN " message args))
+
+(define (log-error message . args)
+  (apply log-with-color 'error 'red "ERROR" message args))
+
+(define (log-success message . args)
+  (apply log-with-color 'info 'green "SUCCESS" message args))
+
+;; Spinner utility for long-running operations
+(define (with-spinner message thunk)
+  (log-info "~a..." message)
+  ;; srfi-19's current-time returns a time record, so use time-difference
+  ;; and time-second rather than bare subtraction
+  (let ((start-time (current-time time-monotonic)))
+    (catch #t
+      (lambda ()
+        (let ((result (thunk)))
+          (let ((elapsed (time-second (time-difference (current-time time-monotonic) start-time))))
+            (log-success "~a completed in ~as" message elapsed))
+          result))
+      (lambda (key . args)
+        (log-error "~a failed: ~a ~a" message key args)
+        ;; Re-raise with the original arguments preserved
+        (apply throw key args)))))
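Typical use of the logging helpers from a REPL (a sketch; the thunk here just sleeps in place of real work):

```scheme
(use-modules (utils logging))

(set-log-level! 'debug)              ; also show log-debug output
(with-spinner "Updating flake inputs"
  (lambda ()
    (sleep 1)                        ; stand-in for the real operation
    'done))                          ; elapsed time is logged on success
```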
diff --git a/packages/utils/ssh.scm b/packages/utils/ssh.scm
new file mode 100644
index 0000000..c3720eb
--- /dev/null
+++ b/packages/utils/ssh.scm
@@ -0,0 +1,136 @@
+;; utils/ssh.scm - SSH operations for Home Lab Tool
+
+(define-module (utils ssh)
+  #:use-module (ssh session)
+  #:use-module (ssh channel)
+  #:use-module (ssh popen)
+  #:use-module (ice-9 popen)
+  #:use-module (ice-9 rdelim)
+  #:use-module (ice-9 textual-ports)
+  #:use-module (srfi srfi-8)   ; receive, for the two-value command results
+  #:use-module (srfi srfi-1)
+  #:use-module (utils logging)
+  #:use-module (utils config)
+  #:export (test-ssh-connection
+            run-remote-command
+            copy-file-to-remote
+            run-command-with-retry
+            with-ssh-connection))
+
+;; Test SSH connectivity to a machine
+(define (test-ssh-connection machine-name)
+  (let ((ssh-config (get-ssh-config machine-name)))
+    (if (not ssh-config)
+        (begin
+          (log-error "No SSH configuration found for ~a" machine-name)
+          #f)
+        (if (assoc-ref ssh-config 'is-local)
+            (begin
+              (log-debug "Machine ~a is local, skipping SSH test" machine-name)
+              #t)
+            (let ((hostname (assoc-ref ssh-config 'hostname))
+                  (ssh-alias (assoc-ref ssh-config 'ssh-alias)))
+              (log-debug "Testing SSH connection to ~a (~a)" machine-name hostname)
+              (catch #t
+                (lambda ()
+                  ;; Use system ssh command for compatibility with existing configuration
+                  (let* ((test-cmd (if ssh-alias
+                                       (format #f "ssh -o ConnectTimeout=5 -o BatchMode=yes ~a echo OK" ssh-alias)
+                                       (format #f "ssh -o ConnectTimeout=5 -o BatchMode=yes ~a echo OK" hostname)))
+                         (port (open-pipe* OPEN_READ "/bin/sh" "-c" test-cmd))
+                         (output (get-string-all port))
+                         (status (close-pipe port)))
+                    (if (zero? status)
+                        (begin
+                          (log-debug "SSH connection to ~a successful" machine-name)
+                          #t)
+                        (begin
+                          (log-warn "SSH connection to ~a failed (exit: ~a)" machine-name status)
+                          #f))))
+                (lambda (key . args)
+                  (log-error "SSH test failed for ~a: ~a ~a" machine-name key args)
+                  #f)))))))
+
+;; Run a command on a remote machine; returns (values success? output)
+(define (run-remote-command machine-name command . args)
+  (let ((ssh-config (get-ssh-config machine-name))
+        (full-command (if (null? args)
+                          command
+                          (format #f "~a ~a" command (string-join args " ")))))
+    (if (not ssh-config)
+        (values #f "No SSH configuration found")
+        (if (assoc-ref ssh-config 'is-local)
+            ;; Local execution
+            (begin
+              (log-debug "Executing locally: ~a" full-command)
+              (let* ((port (open-pipe* OPEN_READ "/bin/sh" "-c" full-command))
+                     (output (get-string-all port))
+                     (status (close-pipe port)))
+                (values (zero? status) output)))
+            ;; Remote execution
+            (let ((ssh-alias (assoc-ref ssh-config 'ssh-alias))
+                  (hostname (assoc-ref ssh-config 'hostname)))
+              (log-debug "Executing on ~a: ~a" machine-name full-command)
+              (let* ((ssh-cmd (format #f "ssh ~a '~a'"
+                                      (or ssh-alias hostname)
+                                      full-command))
+                     (port (open-pipe* OPEN_READ "/bin/sh" "-c" ssh-cmd))
+                     (output (get-string-all port))
+                     (status (close-pipe port)))
+                (values (zero? status) output)))))))
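+;; Callers bind both return values with receive (srfi-8), e.g.:
+;;
+;;   (receive (success output)
+;;       (run-remote-command "grey-area" "systemctl" "is-active" "forgejo")
+;;     (when success (display output)))
+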
+;; Copy file to remote machine using scp
+(define (copy-file-to-remote machine-name local-path remote-path)
+  (let ((ssh-config (get-ssh-config machine-name)))
+    (if (not ssh-config)
+        (begin
+          (log-error "No SSH configuration found for ~a" machine-name)
+          #f)
+        (if (assoc-ref ssh-config 'is-local)
+            ;; Local copy
+            (begin
+              (log-debug "Copying locally: ~a -> ~a" local-path remote-path)
+              (let* ((copy-cmd (format #f "cp '~a' '~a'" local-path remote-path))
+                     (status (system copy-cmd)))
+                (zero? status)))
+            ;; Remote copy
+            (let ((ssh-alias (assoc-ref ssh-config 'ssh-alias))
+                  (hostname (assoc-ref ssh-config 'hostname)))
+              (log-debug "Copying to ~a: ~a -> ~a" machine-name local-path remote-path)
+              (let* ((scp-cmd (format #f "scp '~a' '~a:~a'"
+                                      local-path
+                                      (or ssh-alias hostname)
+                                      remote-path))
+                     (status (system scp-cmd)))
+                (if (zero? status)
+                    (begin
+                      (log-debug "File copy successful")
+                      #t)
+                    (begin
+                      (log-error "File copy failed (exit: ~a)" status)
+                      #f))))))))
+
+;; Run command with retry logic
+(define (run-command-with-retry machine-name command max-retries . args)
+  (let loop ((retries 0))
+    ;; receive binds the two values produced by run-remote-command
+    (receive (success output)
+        (apply run-remote-command machine-name command args)
+      (if success
+          (values #t output)
+          (if (< retries max-retries)
+              (begin
+                (log-warn "Command failed, retrying (~a/~a)..." (+ retries 1) max-retries)
+                (sleep 2)
+                (loop (+ retries 1)))
+              (values #f output))))))
+
+;; Execute a thunk with SSH connection context
+(define (with-ssh-connection machine-name thunk)
+  (if (test-ssh-connection machine-name)
+      (catch #t
+        (lambda () (thunk))
+        (lambda (key . args)
+          (log-error "SSH operation failed: ~a ~a" key args)
+          #f))
+      (begin
+        (log-error "Cannot establish SSH connection to ~a" machine-name)
+        #f)))
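The guard-plus-thunk pattern above composes with the command helpers; a small sketch (machine name from the default configuration):

```scheme
(use-modules (utils ssh))

;; Run a remote check only if SSH connectivity is confirmed first;
;; with-ssh-connection returns #f when the connection cannot be made.
(with-ssh-connection "sleeper-service"
  (lambda ()
    (run-remote-command "sleeper-service" "zpool status -x")))
```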
diff --git a/packages/vscode-homelab-extension.ts b/packages/vscode-homelab-extension.ts
new file mode 100644
index 0000000..51653b6
--- /dev/null
+++ b/packages/vscode-homelab-extension.ts
@@ -0,0 +1,495 @@
+// VS Code Extension for Home Lab MCP Integration
+// Run: npm init -y && npm install @types/vscode @types/node typescript
+
+import * as vscode from 'vscode';
+import { spawn, ChildProcess } from 'child_process';
+
+interface MCPRequest {
+  jsonrpc: string;
+  id: number;
+  method: string;
+  params?: any;
+}
+
+interface MCPResponse {
+  jsonrpc: string;
+  id: number;
+  result?: any;
+  error?: any;
+}
+
+export class HomeLabMCPExtension {
+  private mcpProcess: ChildProcess | null = null;
+  private requestId = 0;
+  private pendingRequests = new Map<number, (response: MCPResponse) => void>();
+  private statusBarItem: vscode.StatusBarItem;
+
+  constructor(private context: vscode.ExtensionContext) {
+    this.statusBarItem = vscode.window.createStatusBarItem(
+      vscode.StatusBarAlignment.Left,
+      100
+    );
+    this.statusBarItem.text = "$(server-environment) Home Lab: Disconnected";
+    this.statusBarItem.show();
+  }
+
+  async activate() {
+    // Register commands
+    this.context.subscriptions.push(
+      vscode.commands.registerCommand('homelab.connect', () => this.connect()),
+      vscode.commands.registerCommand('homelab.disconnect', () => this.disconnect()),
+      vscode.commands.registerCommand('homelab.deploy', (machine) => this.deployMachine(machine)),
+      vscode.commands.registerCommand('homelab.status', () => this.showStatus()),
+      vscode.commands.registerCommand('homelab.generateConfig', () => this.generateConfig()),
+      vscode.commands.registerCommand('homelab.listTools', () => this.listAvailableTools()),
+      vscode.commands.registerCommand('homelab.executeTool', () => this.executeToolInteractive())
+    );
+
+    // Start MCP server
+    await this.connect();
+
+    // Set up context for Copilot
+    this.setupCopilotContext();
+  }
+
+  private async disconnect(): Promise<void> {
+    if (this.mcpProcess) {
+      this.mcpProcess.kill();
+      this.mcpProcess = null;
+    }
+    this.statusBarItem.text = "$(server-environment) Home Lab: Disconnected";
+    vscode.window.showInformationMessage('Disconnected from Home Lab MCP Server');
+  }
+
+  private async listAvailableTools(): Promise<void> {
+    try {
+      const tools = await this.sendMCPRequest('tools/list', {});
+
+      const panel = vscode.window.createWebviewPanel(
+        'homelabTools',
+        'Home Lab Tools',
+        vscode.ViewColumn.One,
+        { enableScripts: true }
+      );
+
+      panel.webview.html = this.getToolsHTML(tools.tools);
+    } catch (error) {
+      vscode.window.showErrorMessage(`Failed to get tools: ${error}`);
+    }
+  }
+
+  private async executeToolInteractive(): Promise<void> {
+    try {
+      const tools = await this.sendMCPRequest('tools/list', {});
+
+      const toolName = await vscode.window.showQuickPick(
+        tools.tools?.map((t: any) => ({
+          label: t.name,
+          description: t.description,
+          detail: `Parameters: ${Object.keys(t.inputSchema?.properties || {}).join(', ')}`
+        })),
+        { placeHolder: 'Select tool to execute' }
+      );
+
+      if (!toolName) return;
+
+      const tool = tools.tools.find((t: any) => t.name === toolName.label);
+      const args: any = {};
+
+      // Collect parameters interactively
+      if (tool.inputSchema?.properties) {
+        for (const [paramName, paramSchema] of Object.entries(tool.inputSchema.properties)) {
+          const value = await vscode.window.showInputBox({
+            prompt: `Enter ${paramName}`,
+            placeHolder: (paramSchema as any).description || `Value for ${paramName}`
+          });
+          if (value !== undefined) {
+            args[paramName] = value;
+          }
+        }
+      }
+
+      const result = await this.sendMCPRequest('tools/call', {
+        name: tool.name,
+        arguments: args
+      });
+
+      // Show result in output channel
+      const output = vscode.window.createOutputChannel('Home Lab Tool Result');
+      output.clear();
+      output.appendLine(`Tool: ${tool.name}`);
+      output.appendLine(`Arguments: ${JSON.stringify(args, null, 2)}`);
+      output.appendLine('---');
+      output.appendLine(JSON.stringify(result, null, 2));
+      output.show();
+
+    } catch (error) {
+      vscode.window.showErrorMessage(`Failed to execute tool: ${error}`);
+    }
+  }
+
+  private async connect(): Promise<void> {
+    try {
+      // Start Guile MCP server via the entry point exported by (mcp server)
+      this.mcpProcess = spawn('guile', [
+        '-L', vscode.workspace.rootPath + '/packages',
+        '-c', '(use-modules (mcp server)) (start-mcp-server (list))'
+      ], {
+        stdio: ['pipe', 'pipe', 'pipe'],
+        cwd: vscode.workspace.rootPath
+      });
+
+      this.mcpProcess.stdout?.on('data', (data) => {
+        this.handleMCPResponse(data.toString());
+      });
+
+      this.mcpProcess.stderr?.on('data', (data) => {
+        console.error('MCP Error:', data.toString());
+      });
+
+      // Initialize MCP session
+      await this.sendMCPRequest('initialize', {
+        protocolVersion: '2024-11-05',
+        capabilities: {
+          tools: {},
+          resources: {}
+        },
+        clientInfo: {
+          name: 'vscode-homelab',
+          version: '0.1.0'
+        }
+      });
+
+      this.statusBarItem.text = "$(server-environment) Home Lab: Connected";
+      vscode.window.showInformationMessage('Connected to Home Lab MCP Server');
+
+    } catch (error) {
+      this.statusBarItem.text = "$(server-environment) Home Lab: Error";
+      vscode.window.showErrorMessage(`Failed to connect to MCP server: ${error}`);
+    }
+  }
+
+  // Public, not private: the Copilot context provider below calls this
+  // from a separate class, which TypeScript would otherwise reject.
+  async sendMCPRequest(method: string, params?: any): Promise<any> {
+    if (!this.mcpProcess?.stdin) {
+      throw new Error('MCP server not connected');
+    }
+
+    const id = ++this.requestId;
+    const request: MCPRequest = {
+      jsonrpc: '2.0',
+      id,
+      method,
+      params
+    };
+
+    return new Promise((resolve, reject) => {
+      this.pendingRequests.set(id, (response: MCPResponse) => {
+        if (response.error) {
+          reject(new Error(response.error.message));
+        } else {
+          resolve(response.result);
+        }
+      });
+
+      this.mcpProcess!.stdin!.write(JSON.stringify(request) + '\n');
+
+      // Timeout after 30 seconds
+      setTimeout(() => {
+        if (this.pendingRequests.has(id)) {
this.pendingRequests.delete(id); + reject(new Error('Request timeout')); + } + }, 30000); + }); + } + + private handleMCPResponse(data: string): void { + try { + const lines = data.trim().split('\n'); + for (const line of lines) { + if (line.trim()) { + const response: MCPResponse = JSON.parse(line); + const handler = this.pendingRequests.get(response.id); + if (handler) { + this.pendingRequests.delete(response.id); + handler(response); + } + } + } + } catch (error) { + console.error('Failed to parse MCP response:', error); + } + } + + async deployMachine(machine?: string): Promise<void> { + if (!machine) { + const machines = await this.getMachines(); + machine = await vscode.window.showQuickPick(machines, { + placeHolder: 'Select machine to deploy' + }); + } + + if (!machine) return; + + const method = await vscode.window.showQuickPick( + ['deploy-rs', 'hybrid-update', 'legacy'], + { placeHolder: 'Select deployment method' } + ); + + if (!method) return; + + try { + vscode.window.withProgress({ + location: vscode.ProgressLocation.Notification, + title: `Deploying ${machine}...`, + cancellable: false + }, async (progress) => { + const result = await this.sendMCPRequest('tools/call', { + name: 'deploy-machine', + arguments: { machine, method } + }); + + if (result.success) { + vscode.window.showInformationMessage( + `Successfully deployed ${machine} using ${method}` + ); + } else { + vscode.window.showErrorMessage( + `Deployment failed: ${result.error || 'Unknown error'}` + ); + } + }); + } catch (error) { + vscode.window.showErrorMessage(`Deployment error: ${error}`); + } + } + + async showStatus(): Promise<void> { + try { + const status = await this.sendMCPRequest('resources/read', { + uri: 'homelab://status/all' + }); + + const panel = vscode.window.createWebviewPanel( + 'homelabStatus', + 'Home Lab Status', + vscode.ViewColumn.One, + { enableScripts: true } + ); + + panel.webview.html = this.getStatusHTML(status.content); + } catch (error) { + vscode.window.showErrorMessage(`Failed to get status: ${error}`); + } + } + + async generateConfig(): Promise<void> { + const machineName = await vscode.window.showInputBox({ + prompt: 'Enter machine name', + placeHolder: 'my-new-machine' + }); + + if (!machineName) return; + + const services = await vscode.window.showInputBox({ + prompt: 'Enter services (comma-separated)', + placeHolder: 'nginx,postgresql,redis' + }); + + try { + const result = await this.sendMCPRequest('tools/call', { + name: 'generate-nix-config', + arguments: { + 'machine-name': machineName, + services: services ? 
services.split(',').map(s => s.trim()) : [] + } + }); + + const doc = await vscode.workspace.openTextDocument({ + content: result.content, + language: 'nix' + }); + + await vscode.window.showTextDocument(doc); + } catch (error) { + vscode.window.showErrorMessage(`Failed to generate config: ${error}`); + } + } + + private async getMachines(): Promise<string[]> { + try { + const result = await this.sendMCPRequest('tools/call', { + name: 'list-machines', + arguments: {} + }); + return result.machines || []; + } catch (error) { + console.error('Failed to get machines:', error); + return []; + } + } + + private setupCopilotContext(): void { + // Create a virtual document that provides context to Copilot + const provider = new (class implements vscode.TextDocumentContentProvider { + constructor(private extension: HomeLabMCPExtension) {} + + async provideTextDocumentContent(): Promise<string> { + try { + const context = await this.extension.sendMCPRequest('resources/read', { + uri: 'homelab://context/copilot' + }); + return context.content; + } catch (error) { + return `# Home Lab Context\nError loading context: ${error}`; + } + } + })(this); + + this.context.subscriptions.push( + vscode.workspace.registerTextDocumentContentProvider( + 'homelab-context', + provider + ) + ); + + // Register workspace context provider for Copilot + this.registerCopilotWorkspaceProvider(); + + // Open the context document to make it available to Copilot + vscode.workspace.openTextDocument(vscode.Uri.parse('homelab-context://context')).then(doc => { + // Keep it open but hidden for context + }); + } + + private registerCopilotWorkspaceProvider(): void { + // Enhanced Copilot integration using VS Code's context API + const workspaceProvider = { + provideWorkspaceContext: async () => { + try { + // Get comprehensive home lab context + const [status, machines, services] = await Promise.all([ + this.sendMCPRequest('resources/read', { uri: 'homelab://status/summary' }), + this.sendMCPRequest('tools/call', { name: 'list-machines', arguments: {} }), + this.sendMCPRequest('tools/call', { name: 'list-services', arguments: {} }) + ]); + + return { + name: 'Home Lab Infrastructure', + description: 'Current state and configuration of home lab environment', + content: `# Home Lab Infrastructure Context + +## Available Machines +${machines.machines?.map((m: any) => `- ${m.name}: ${m.status} (${m.services?.join(', ') || 'no services'})`).join('\n') || 'No machines found'} + +## Service Status +${services.services?.map((s: any) => `- ${s.name}: ${s.status} on ${s.machine}`).join('\n') || 'No services found'} + +## Current Infrastructure State +${status.summary || 'Status unavailable'} + +## Available Operations +- deploy-machine: Deploy configuration to a specific machine +- check-status: Get detailed status of machines and services +- generate-config: Create new NixOS configurations +- manage-services: Start/stop/restart services +- backup-data: Backup service data and configurations + +Use these MCP tools for infrastructure operations through the home lab extension.` + }; + } catch (error) { + return { + name: 'Home Lab Infrastructure', + description: 'Home lab context (error loading)', + content: `# Home Lab Infrastructure Context\n\nError loading context: ${error}\n\nTry connecting to MCP server first.` + }; + } + } + }; + + // Register the workspace provider if the API is available + if ((vscode as any).workspace.registerWorkspaceContextProvider) { + this.context.subscriptions.push( + (vscode as 
any).workspace.registerWorkspaceContextProvider(workspaceProvider)
+      );
+    }
+  }
+
+  private getStatusHTML(status: any): string {
+    return `
+      <!DOCTYPE html>
+      <html>
+      <head>
+        <meta charset="UTF-8">
+        <meta name="viewport" content="width=device-width, initial-scale=1.0">
+        <title>Home Lab Status</title>
+      </head>
+      <body>
+        <h1>Home Lab Status</h1>
+        <pre>${JSON.stringify(status, null, 2)}</pre>
+      </body>
+      </html>
+    `;
+  }
+
+  private getToolsHTML(tools: any[]): string {
+    const toolsHTML = tools?.map(tool => `
+      <div class="tool">
+        <h2>${tool.name}</h2>
+        <p><strong>Description:</strong> ${tool.description || 'No description'}</p>
+        <details>
+          <summary>Parameters</summary>
+          <pre>${JSON.stringify(tool.inputSchema?.properties || {}, null, 2)}</pre>
+        </details>
+      </div>
+    `).join('') || '<p>No tools available</p>';
+
+    return `
+      <!DOCTYPE html>
+      <html>
+      <head>
+        <meta charset="UTF-8">
+        <meta name="viewport" content="width=device-width, initial-scale=1.0">
+        <title>Home Lab Tools</title>
+      </head>
+      <body>
+        <h1>Available Home Lab Tools</h1>
+        ${toolsHTML}
+      </body>
+      </html>
+    `;
+  }
+
+  dispose(): void {
+    if (this.mcpProcess) {
+      this.mcpProcess.kill();
+    }
+    this.statusBarItem.dispose();
+  }
+}
+
+export function activate(context: vscode.ExtensionContext) {
+  const extension = new HomeLabMCPExtension(context);
+  extension.activate();
+
+  context.subscriptions.push({
+    dispose: () => extension.dispose()
+  });
+}
+
+export function deactivate() {}