feat: Add little-rascal laptop config and lab-tool auto-update system
## New Machine: little-rascal

- Add Lenovo Yoga Slim 7 14ARE05 configuration (AMD Ryzen 7 4700U)
- Niri desktop with CLI login (greetd + tuigreet)
- zram swap configuration (25% of RAM with zstd)
- AMD-optimized hardware support and power management
- Based on congenital-optimist structure with laptop-specific additions

## Lab Tool Auto-Update System

- Implement Guile Scheme auto-update module (lab/auto-update.scm)
- Add health checks, logging, and safety features
- Integrate with existing deployment and machine management
- Update main CLI with auto-update and auto-update-status commands
- Create NixOS service module for automated updates
- Document complete implementation in simple-auto-update-plan.md

## MCP Integration

- Configure Task Master AI and Context7 MCP servers
- Set up local Ollama integration for AI processing
- Add proper environment configuration for existing models

## Infrastructure Updates

- Add little-rascal to flake.nix with deploy-rs support
- Fix common user configuration issues
- Create missing emacs.nix module
- Update package integrations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
parent 5e1061382c
commit 6eac143f57

19 changed files with 1287 additions and 559 deletions
@ -1,114 +0,0 @@

# Packaging Claude Task Master AI for NixOS

This document outlines suggestions for packaging the "Claude Task Master AI" Node.js application as a Nix package. The typical installation method for this tool is `npm install -g task-master-ai`.

## 1. Creating the Nix Package

Nixpkgs provides helpers for packaging Node.js applications. The primary function for this is `buildNpmPackage`.

A Nix expression for this package can be created in your packages directory, for example at `packages/claude-task-master-ai.nix`.

### Key Steps:

1. **Find the Source**: Determine the source of the `task-master-ai` package. This is usually the npm registry. You'll need the package name and version.
2. **Nix Expression (`default.nix` or `claude-task-master.nix`)**: Create a Nix expression file.
3. **Use `buildNpmPackage`**: This function handles the download, build, and installation of npm packages.
4. **`npmDepsHash`**: You'll need to calculate a hash of the npm dependencies. This ensures reproducibility.
5. **Binaries**: Ensure that the executables provided by `task-master-ai` are correctly placed in the output's `bin` directory. `buildNpmPackage` usually handles this if the application's `package.json` specifies `bin` entries.

### Example Nix Expression:

```nix
{ lib, buildNpmPackage, fetchFromGitHub, nodejs }: # Add other dependencies if needed

buildNpmPackage rec {
  pname = "task-master-ai";
  version = "INSERT_PACKAGE_VERSION_HERE"; # Replace with the actual version

  src = fetchFromGitHub { # Or fetchurl if directly from npm/tarball
    owner = "eyaltoledano"; # Replace if this is not the correct source
    repo = "claude-task-master"; # Replace if this is not the correct source
    rev = "v${version}"; # Or a specific commit/tag
    hash = "INSERT_SRC_HASH_HERE"; # lib.fakeSha256 for the initial fetch, then replace
  };

  # If fetching directly from an npm tarball:
  # src = fetchurl {
  #   url = "https://registry.npmjs.org/task-master-ai/-/task-master-ai-${version}.tgz";
  #   sha256 = "INSERT_TARBALL_HASH_HERE"; # lib.fakeSha256 for the initial fetch, then replace
  # };

  npmDepsHash = "INSERT_NPMDEPSHASH_HERE"; # Calculate this after the first build attempt

  # buildInputs = [ nodejs ]; # buildNpmPackage usually brings in nodejs

  meta = with lib; {
    description = "Claude Task Master AI tool";
    homepage = "https://github.com/eyaltoledano/claude-task-master"; # Or the actual homepage
    license = licenses.mit; # Check and replace with the actual license
    maintainers = [ maintainers.yourGithubUsername ]; # Your username
    platforms = platforms.all;
  };
}
```

### Obtaining `npmDepsHash`:

1. Initially, set `npmDepsHash = lib.fakeSha256;` or a placeholder like `"sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";`.
2. Attempt to build the package (e.g., `nix-build -A task-master-ai`).
3. The build will fail, but it will output the expected hash. Copy this hash into your Nix expression.

Alternatively, you can use `prefetch-npm-deps` if you have a `package-lock.json`:

```sh
# In a directory with package.json and package-lock.json for task-master-ai
nix-shell -p nodePackages.prefetch-npm-deps --run "prefetch-npm-deps package-lock.json"
```

Since `task-master-ai` is installed globally, you might need to fetch its source first to get the `package-lock.json`.
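One way to do that, as a sketch, assuming the upstream GitHub repository ships a `package-lock.json` at its root:

```sh
# Clone the upstream source to obtain its lockfile (assumes the repo
# ships a package-lock.json at its root), then prefetch the dep hash.
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
nix-shell -p nodePackages.prefetch-npm-deps --run "prefetch-npm-deps package-lock.json"
```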
### Global Installation Aspect:

`buildNpmPackage` typically installs binaries specified in the `package.json`'s `bin` field into `$out/bin/`. This makes them available when the package is installed in a Nix profile. If `task-master-ai` is made available this way, VS Code can invoke it using `npx` as shown in the MCP server configuration, or directly if it is added to the PATH.
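As a minimal sketch of that installation step, assuming the expression above lives at `packages/claude-task-master-ai.nix` as suggested:

```nix
# configuration.nix (sketch) - the relative path is an assumption;
# point callPackage at wherever the expression actually lives.
{ pkgs, ... }:

{
  environment.systemPackages = [
    (pkgs.callPackage ./packages/claude-task-master-ai.nix { })
  ];
}
```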
## 2. Integrating with VS Code as an MCP Server

Instead of running `task-master-ai` as a system-wide NixOS service, it can be integrated directly into VS Code (or other compatible editors) as an MCP (Model Context Protocol) server. This allows your editor to communicate with the AI for task management capabilities.

The Nix package created in the previous step ensures that `task-master-ai` is available in your environment, typically invokable via `npx task-master-ai` or directly if the Nix package adds it to your PATH.

### VS Code `settings.json` Configuration:

You can configure VS Code to use `task-master-ai` as an MCP server by adding the following to your `settings.json` file:

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "MODEL": "claude-3-7-sonnet-20250219",
        "PERPLEXITY_MODEL": "sonar-pro",
        "MAX_TOKENS": 64000,
        "TEMPERATURE": 0.2,
        "DEFAULT_SUBTASKS": 5,
        "DEFAULT_PRIORITY": "medium"
      }
    }
  }
}
```

**Key Points for MCP Configuration:**

* **`command` and `args`**: These specify how to run `task-master-ai`. Using `npx -y --package=task-master-ai task-master-ai` ensures that `npx` fetches and runs the specified version of `task-master-ai`. If your Nix package puts `task-master-ai` directly on the PATH, you can simplify the command to just `task-master-ai` and drop the `npx`-specific `args` (see the sketch after this list).
* **`env`**: This section is crucial. You **must** replace the placeholder API keys (`YOUR_ANTHROPIC_API_KEY_HERE`, `YOUR_PERPLEXITY_API_KEY_HERE`) with your actual keys.
* You can customize other environment variables such as `MODEL` and `MAX_TOKENS` according to your needs and the capabilities of `task-master-ai`.
* Ensure the Nix package for `task-master-ai` (and `nodejs`/`npx`) is installed and accessible in the environment where VS Code runs.
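A minimal sketch of the simplified configuration, assuming the Nix package places `task-master-ai` on VS Code's PATH:

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "task-master-ai",
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE"
      }
    }
  }
}
```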
## 3. Finding Package Information

* **NPM Registry**: Search for `task-master-ai` on [npmjs.com](https://www.npmjs.com/) to find its exact version, dependencies, and potentially its source repository. The roadmap indicates the source is `https://github.com/eyaltoledano/claude-task-master.git`.
* **GitHub Repository**: The roadmap points to `https://github.com/eyaltoledano/claude-task-master.git`. This is likely the best source for `package.json` and understanding how the tool works.

This guide provides a starting point. You'll need to adapt the examples based on the specifics of the `task-master-ai` tool.
@ -100,89 +100,428 @@ in
}
```

### 2. Guile Scheme Auto-Update Module

Create the core auto-update functionality in `lab/auto-update.scm`:
```scheme
;; lab/auto-update.scm - Auto-update system implementation

(define-module (lab auto-update)
  #:use-module (ice-9 format)
  #:use-module (ice-9 popen)
  #:use-module (ice-9 textual-ports)
  #:use-module (ice-9 getopt-long) ; option-ref
  #:use-module (srfi srfi-1)
  #:use-module (srfi srfi-19)      ; Date/time
  #:use-module (utils logging)
  #:use-module (utils config)
  #:use-module (lab deployment)
  #:use-module (lab machines)
  #:export (auto-update-system
            schedule-auto-update
            check-update-health
            auto-update-status))

;; Pure function: Generate update log entry
(define (format-update-log-entry timestamp operation status details)
  "Pure function to format update log entry"
  (format #f "~a: ~a - ~a (~a)" timestamp operation status details))

;; Pure function: Check if system is healthy for updates
(define (system-health-check-pure)
  "Pure function returning health check criteria"
  '((disk-space-threshold . 90)
    (required-services . ("systemd"))
    (min-uptime-minutes . 30)))

;; Impure function: Check actual system health
(define (check-update-health)
  "Check if system is ready for updates (impure - checks actual system)"
  (log-info "Checking system health before update...")

  (let* ((health-checks (system-health-check-pure))
         (disk-threshold (assoc-ref health-checks 'disk-space-threshold))
         (disk-usage (get-disk-usage))
         (system-running (system-is-running?))
         (uptime-ok (check-minimum-uptime)))

    (log-debug "Disk usage: ~a%" disk-usage)
    (log-debug "System running: ~a" system-running)
    (log-debug "Uptime check: ~a" uptime-ok)

    (cond
     ((> disk-usage disk-threshold)
      (log-error "Disk usage too high: ~a% (threshold: ~a%)" disk-usage disk-threshold)
      #f)
     ((not system-running)
      (log-error "System not in running state")
      #f)
     ((not uptime-ok)
      (log-error "System uptime too low for safe update")
      #f)
     (else
      (log-success "System health check passed")
      #t))))

;; Impure function: Main auto-update routine
(define (auto-update-system . args)
  "Perform automatic system update (impure - modifies system)"
  (let* ((options (if (null? args) '() (car args)))
         (auto-reboot (option-ref options 'auto-reboot #t))
         (dry-run (option-ref options 'dry-run #f))
         (machine-name (get-hostname)))

    (log-info "Starting auto-update for machine: ~a" machine-name)
    (write-update-log "auto-update" "started" machine-name)

    (if (not (check-update-health))
        (begin
          (log-error "System health check failed - aborting update")
          (write-update-log "auto-update" "aborted" "health check failed")
          #f)
        (begin
          ;; Update flake inputs
          (log-info "Updating flake inputs...")
          (let ((flake-result (update-flake options)))
            (if flake-result
                (begin
                  (log-success "Flake update completed")
                  (write-update-log "flake-update" "success" "")

                  ;; Deploy configuration
                  (log-info "Deploying updated configuration...")
                  (let ((deploy-result (deploy-machine machine-name "switch" options)))
                    (if deploy-result
                        (begin
                          (log-success "Configuration deployment completed")
                          (write-update-log "deployment" "success" "switch mode")

                          ;; Schedule reboot if enabled
                          (if (and auto-reboot (not dry-run))
                              (begin
                                (log-info "Scheduling system reboot in 2 minutes...")
                                (write-update-log "reboot" "scheduled" "2 minutes")
                                (system "shutdown -r +2 'Auto-update completed - rebooting'")
                                #t)
                              (begin
                                (log-info "Auto-reboot disabled - update complete")
                                (write-update-log "auto-update" "completed" "no reboot")
                                #t)))
                        (begin
                          (log-error "Configuration deployment failed")
                          (write-update-log "deployment" "failed" "switch mode")
                          #f))))
                (begin
                  (log-error "Flake update failed")
                  (write-update-log "flake-update" "failed" "")
                  #f)))))))

;; Scheduling itself is delegated to the systemd timer (see the service
;; module below); this stub keeps the exported name valid.
(define (schedule-auto-update)
  "Scheduling is handled by the lab-auto-update systemd timer"
  (log-info "Scheduling is managed by the lab-auto-update systemd timer"))

;; Helper functions for system checks and logging
(define (get-disk-usage)
  "Get root filesystem disk usage percentage"
  (let* ((cmd "df / | tail -1 | awk '{print $5}' | sed 's/%//'")
         (port (open-pipe* OPEN_READ "/bin/sh" "-c" cmd))
         (output (string-trim-both (get-string-all port)))
         (status (close-pipe port)))
    (if (zero? status)
        (or (string->number output) 95) ; treat parse failure as unhealthy
        95)))

(define (system-is-running?)
  "Check if system is in running state"
  (let* ((cmd "systemctl is-system-running --quiet")
         (status (system cmd)))
    (zero? status)))

(define (check-minimum-uptime)
  "Check that system uptime meets the minimum threshold for safe updates"
  (let* ((min-minutes (assoc-ref (system-health-check-pure) 'min-uptime-minutes))
         (cmd "awk '{print $1}' /proc/uptime")
         (port (open-pipe* OPEN_READ "/bin/sh" "-c" cmd))
         (output (string-trim-both (get-string-all port)))
         (status (close-pipe port))
         (uptime-seconds (and (zero? status) (string->number output))))
    (and uptime-seconds
         (>= uptime-seconds (* min-minutes 60)))))

(define (get-hostname)
  "Get current system hostname"
  (let* ((cmd "hostname")
         (port (open-pipe* OPEN_READ "/bin/sh" "-c" cmd))
         (output (string-trim-both (get-string-all port)))
         (status (close-pipe port)))
    (if (zero? status) output "unknown")))

(define (write-update-log operation status details)
  "Write update operation to log file"
  (let* ((timestamp (date->string (current-date) "~Y-~m-~d ~H:~M:~S"))
         (log-entry (format-update-log-entry timestamp operation status details))
         (log-file "/var/log/lab-auto-update.log"))
    (catch #t
      (lambda ()
        ;; Open in append mode; call-with-output-file would truncate the log
        (let ((port (open-file log-file "a")))
          (format port "~a\n" log-entry)
          (close-port port)))
      (lambda (key . args)
        (log-error "Failed to write update log: ~a" args)))))

(define (auto-update-status)
  "Display auto-update service status and recent logs"
  (log-info "Checking auto-update status...")

  (let ((log-file "/var/log/lab-auto-update.log"))
    (if (file-exists? log-file)
        (begin
          (format #t "Recent auto-update activity:\n")
          (let* ((cmd (format #f "tail -10 ~a" log-file))
                 (port (open-pipe* OPEN_READ "/bin/sh" "-c" cmd))
                 (output (get-string-all port))
                 (status (close-pipe port)))
            (if (zero? status)
                (display output)
                (log-error "Failed to read update log"))))
        (log-info "No auto-update log found"))

    ;; Check systemd timer status
    (format #t "\nSystemd timer status:\n")
    (let* ((cmd "systemctl status lab-auto-update.timer --no-pager")
           (port (open-pipe* OPEN_READ "/bin/sh" "-c" cmd))
           (output (get-string-all port)))
      (close-pipe port)
      (display output))))
```
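For a quick manual test before wiring up the CLI, the module can be exercised from a Guile REPL; a minimal sketch, assuming the repository root is on the Guile load path so `(lab auto-update)` and its dependencies resolve:

```scheme
;; Sketch: exercise the module by hand (load-path assumption as above).
(use-modules (lab auto-update))

(check-update-health)                       ; health checks only, no changes
(auto-update-system '((auto-reboot . #f)))  ; full update, but skip the reboot
```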
### 3. CLI Integration

Update `main.scm` to include the auto-update commands:

```scheme
;; Add to the use-modules section:
(lab auto-update)

;; Add to the help text:
;;   auto-update          Perform automatic system update with health checks
;;   auto-update-status   Show auto-update service status and logs

;; Add command handlers:
(define (cmd-auto-update)
  "Perform automatic system update"
  (log-info "Starting automatic system update...")
  (let ((result (auto-update-system '((auto-reboot . #t)))))
    (if result
        (log-success "Automatic update completed successfully")
        (log-error "Automatic update failed"))))

(define (cmd-auto-update-status)
  "Show auto-update status and logs"
  (auto-update-status))

;; Add to the command dispatcher:
('auto-update
 (cmd-auto-update))

('auto-update-status
 (cmd-auto-update-status))
```
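For context, the two clauses above slot into a `match`-based dispatcher shaped roughly like this; a sketch only, since the actual `main.scm` dispatcher may differ:

```scheme
;; Sketch of the surrounding dispatcher (hypothetical shape).
(use-modules (ice-9 match))

(define (dispatch-command command args)
  (match command
    ('auto-update        (cmd-auto-update))
    ('auto-update-status (cmd-auto-update-status))
    (_ (format #t "Unknown command: ~a\n" command))))
```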
### 4. Updated NixOS Service Module

Enhanced service module at `modules/services/lab-auto-update.nix`:

```nix
# modules/services/lab-auto-update.nix - NixOS service for automatic lab updates

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.lab-auto-update;

  # Get the lab tool from our packages
  labTool = pkgs.callPackage ../../packages/lab-tools.nix {};

  # Auto-update script that uses the Guile lab tool
  # (writeShellScript supplies the shebang itself)
  autoUpdateScript = pkgs.writeShellScript "lab-auto-update" ''
    set -euo pipefail

    LOG_FILE="/var/log/lab-auto-update.log"
    LOCK_FILE="/var/run/lab-auto-update.lock"

    # Ensure we don't run multiple instances
    if [ -f "$LOCK_FILE" ]; then
      echo "$(date): Auto-update already running (lock file exists)" >> "$LOG_FILE"
      exit 1
    fi

    # Create lock file
    echo $$ > "$LOCK_FILE"

    # Cleanup function
    cleanup() {
      rm -f "$LOCK_FILE"
    }
    trap cleanup EXIT

    echo "$(date): Starting lab auto-update" >> "$LOG_FILE"

    # Change to the lab directory
    cd "${cfg.flakePath}"

    # Run the Guile lab tool auto-update command
    if ${labTool}/bin/lab auto-update 2>&1 | tee -a "$LOG_FILE"; then
      echo "$(date): Auto-update completed successfully" >> "$LOG_FILE"
    else
      echo "$(date): Auto-update failed" >> "$LOG_FILE"
      exit 1
    fi
  '';

in
{
  options.services.lab-auto-update = {
    enable = mkEnableOption "Lab auto-update service";

    schedule = mkOption {
      type = types.str;
      default = "02:00";
      description = "Time to run updates (HH:MM format)";
    };

    randomizedDelay = mkOption {
      type = types.str;
      default = "30m";
      description = "Maximum random delay before starting update";
    };

    flakePath = mkOption {
      type = types.str;
      default = "/home/geir/Projects/home-lab";
      description = "Path to the home lab flake directory";
    };

    persistent = mkOption {
      type = types.bool;
      default = true;
      description = "Whether the timer should be persistent across reboots";
    };

    logRetentionDays = mkOption {
      type = types.int;
      default = 30;
      description = "Number of days to retain auto-update logs";
    };
  };

  config = mkIf cfg.enable {
    # Systemd service for the auto-update
    systemd.services.lab-auto-update = {
      description = "Home Lab Auto-Update Service";
      after = [ "network-online.target" ];
      wants = [ "network-online.target" ];

      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
        ExecStart = "${autoUpdateScript}";

        # Security settings
        PrivateTmp = true;
        ProtectSystem = false; # We need to modify the system
        ProtectHome = true;
        NoNewPrivileges = false; # We need privileges for nixos-rebuild

        # Resource limits
        MemoryMax = "2G";
        CPUQuota = "50%";

        # Timeout settings
        TimeoutStartSec = "30m";
        TimeoutStopSec = "5m";
      };

      # Environment variables for the service
      environment = {
        PATH = lib.makeBinPath (with pkgs; [
          nix nixos-rebuild git openssh rsync gawk gnused
          coreutils util-linux systemd
        ]);
        NIX_PATH = "nixpkgs=${pkgs.path}";
      };
    };

    # Systemd timer for scheduling
    systemd.timers.lab-auto-update = {
      description = "Home Lab Auto-Update Timer";
      wantedBy = [ "timers.target" ];

      timerConfig = {
        OnCalendar = "*-*-* ${cfg.schedule}:00";
        Persistent = cfg.persistent;
        RandomizedDelaySec = cfg.randomizedDelay;
        AccuracySec = "1min";
      };
    };

    # Log rotation for auto-update logs
    services.logrotate.settings.lab-auto-update = {
      files = "/var/log/lab-auto-update.log";
      frequency = "daily";
      rotate = cfg.logRetentionDays;
      compress = true;
      delaycompress = true;
      missingok = true;
      notifempty = true;
      create = "644 root root";
    };

    # Ensure log directory exists with proper permissions
    systemd.tmpfiles.rules = [
      "d /var/log 0755 root root -"
      "f /var/log/lab-auto-update.log 0644 root root -"
    ];
  };
}
```
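Wiring the module into a machine configuration could then look like this; a sketch, with the relative import path an assumption to adjust for the actual machine directory layout:

```nix
# Sketch: enable the service on a machine (import path is an assumption).
{
  imports = [ ../../modules/services/lab-auto-update.nix ];

  services.lab-auto-update.enable = true;
}
```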
## Guile Scheme Implementation Advantages

The Guile Scheme implementation provides several benefits over the original Python approach:
### 🎯 **K.I.S.S Principles Alignment**
- **Modular**: Follows the existing lab-tool module structure
- **Functional**: Pure functions for logic; impure functions clearly marked
- **Small**: Each function has a single responsibility
- **Simple**: Leverages existing deployment and configuration infrastructure

### 🔧 **Integration Benefits**
- **Seamless Integration**: Uses the existing `lab deployment` and `lab machines` modules
- **Consistent CLI**: Follows the same command pattern as other lab commands
- **Shared Configuration**: Uses the same configuration system and logging
- **Robust Error Handling**: Leverages Guile's runtime checks and exception handling (`catch`)

### 🛡️ **Enhanced Safety Features**
- **Health Checks**: Pre-update validation (disk space, system state, uptime)
- **Comprehensive Logging**: All operations logged with timestamps
- **Lock File Protection**: Prevents concurrent update attempts
- **Graceful Error Handling**: Proper cleanup and rollback on failures

### 📊 **Observability**
- **Status Commands**: `lab auto-update-status` for monitoring
- **Structured Logs**: Easy to parse and analyze
- **Systemd Integration**: Native systemd service and timer management
- **Log Rotation**: Automatic log management with configurable retention
### 🚀 **Usage Examples**

```bash
# Manual testing
lab auto-update          # Run update with health checks
lab auto-update-status   # Check logs and service status

# Service management
systemctl status lab-auto-update.timer
systemctl list-timers lab-auto-update
journalctl -u lab-auto-update.service
```

Configuration (in a machine's `configuration.nix`):

```nix
services.lab-auto-update = {
  enable = true;
  schedule = "02:00";        # 2 AM daily
  randomizedDelay = "30m";   # Up to 30 min random delay
  flakePath = "/home/geir/Projects/home-lab";
  logRetentionDays = 30;
};
```

This implementation provides a robust, well-integrated auto-update system that maintains the functional programming principles and modular architecture of the existing lab-tool infrastructure.
## Deployment Strategy

### Per-Machine Setup