Compare commits


No commits in common. "9f7c2640b52790dcaa52ef5b48434fe327b74a5a" and "bc9869cb67875a32a5e5c1b0e7161e187a84389a" have entirely different histories.

9 changed files with 88 additions and 470 deletions

View file

@@ -1,153 +0,0 @@
# Deploy-rs Integration Summary
## Overview
Successfully integrated deploy-rs into the Home Lab infrastructure as a modern, production-ready deployment method alongside the existing shell script approach.
## Completed Tasks ✅
### Task 1: Add deploy-rs input to flake.nix ✅
- Added `deploy-rs.url = "github:serokell/deploy-rs"` to flake inputs
- Exposed deploy-rs in outputs function parameters
- Validated with `nix flake check`
### Task 2: Create basic deploy-rs configuration ✅
- Configured all 4 machines in `deploy.nodes` section
- Used Tailscale hostnames for reliable connectivity
- Set up proper SSH users and activation paths
### Task 3: Add deploy-rs health checks ✅
- Configured activation timeouts: 180s (local), 240s (VPS)
- Set confirm timeouts: 30s for all machines
- Enabled autoRollback and magicRollback for safety
### Task 4: Test deploy-rs on sleeper-service ✅
**Status**: Successfully completed on June 15, 2025
**Results**:
- ✅ Dry-run deployment successful
- ✅ Actual deployment successful
- ✅ Service management (transmission.service restart)
- ✅ Automatic health checks passed
- ✅ Magic rollback protection enabled
- ✅ New NixOS generation created (192)
- ✅ Tailscale connectivity working perfectly
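These results can also be spot-checked by hand on the target host; a minimal sketch using standard NixOS and systemd tooling (not part of the lab tool itself):
```bash
# Confirm the new system generation is active on sleeper-service
ssh sma@sleeper-service.tail807ea.ts.net \
  'readlink /nix/var/nix/profiles/system'        # e.g. .../system-192-link after this deployment

# Confirm the restarted service came back up
ssh sma@sleeper-service.tail807ea.ts.net \
  'systemctl is-active transmission.service'
```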
### Task 5: Integrate deploy-rs with lab tool ✅
**Status**: Successfully completed on June 15, 2025
**New Commands Added**:
- `lab deploy-rs <machine> [--dry-run]` - Modern deployment with automatic rollback
- `lab update-flake` - Update package versions and validate configuration
- `lab hybrid-update [target] [--dry-run]` - Combined flake update + deploy-rs deployment
**Features**:
- Hybrid approach combining package updates with deployment safety
- Maintains existing legacy deployment commands for compatibility
- Comprehensive help documentation with examples
- Error handling and validation
## Deployment Methods Comparison
| Feature | Legacy (SSH + rsync) | Deploy-rs | Hybrid Update |
|---------|---------------------|-----------|---------------|
| **Speed** | Moderate | Fast | Fast |
| **Safety** | Manual rollback | Automatic rollback | Automatic rollback |
| **Package Updates** | Manual | No | Automatic |
| **Health Checks** | None | Automatic | Automatic |
| **Parallel Deployment** | No | Yes | Yes |
| **Learning Curve** | Low | Medium | Medium |
## Usage Examples
### Basic Deploy-rs Usage
```bash
# Deploy with automatic rollback protection
lab deploy-rs sleeper-service
# Test deployment without applying
lab deploy-rs sleeper-service --dry-run
```
### Hybrid Update Usage (Recommended)
```bash
# Update packages and deploy to specific machine
lab hybrid-update sleeper-service
# Update all machines with latest packages
lab hybrid-update all --dry-run # Test first
lab hybrid-update all # Apply updates
# Just update flake inputs
lab update-flake
```
### Legacy Usage (Still Available)
```bash
# Traditional deployment method
lab deploy sleeper-service boot
lab update boot
```
## Technical Implementation
### Deploy-rs Configuration
```nix
deploy.nodes = {
  sleeper-service = {
    hostname = "sleeper-service.tail807ea.ts.net";
    profiles.system = {
      user = "root";
      path = deploy-rs.lib.x86_64-linux.activate.nixos
        self.nixosConfigurations.sleeper-service;
      sshUser = "sma";
      sudo = "sudo -u";
      autoRollback = true;
      magicRollback = true;
      activationTimeout = 180;
      confirmTimeout = 30;
    };
  };
  # ... other machines
};
```
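With a node declared this way, deploy-rs can also be driven directly from the flake; these are the same invocations the lab tool wraps:
```bash
# Deploy the sleeper-service node with rollback protection
nix run github:serokell/deploy-rs -- ".#sleeper-service"

# Validate activation without committing to it
nix run github:serokell/deploy-rs -- ".#sleeper-service" --dry-activate
```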
### Lab Tool Integration
The lab tool now provides three deployment approaches:
1. **Legacy**: Reliable SSH + rsync method (existing workflow)
2. **Modern**: Direct deploy-rs usage with safety features
3. **Hybrid**: Automated package updates + deploy-rs deployment
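Condensed into lab-tool commands, the three approaches look like this (same examples as above, one line each):
```bash
lab deploy sleeper-service boot    # 1. Legacy: SSH + rsync, activate on next boot
lab deploy-rs sleeper-service      # 2. Modern: deploy-rs with automatic rollback
lab hybrid-update sleeper-service  # 3. Hybrid: flake update, then deploy-rs deployment
```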
## Follow-up Tasks
### Completed Tasks ✅
- ✅ **Task 6**: Test deploy-rs on all machines (grey-area, reverse-proxy, congenital-optimist) - **COMPLETED**
**Results:**
- **grey-area**: ✅ Deploy-rs deployment successful (both dry-run and actual)
- **reverse-proxy**: ✅ Deploy-rs deployment successful (dry-run completed)
- **congenital-optimist**: ✅ Deploy-rs deployment successful (both dry-run and actual)
- **Infrastructure improvements**: Added `sma` user to local machine, created shared shell aliases module
- **User management**: Resolved shell alias conflicts with user-specific aliases
### Remaining Tasks
- **Task 7**: Add deploy-rs status monitoring to lab tool
- **Task 8**: Create deployment workflow documentation
- **Task 9**: Optimize deploy-rs for home lab network
- **Task 10**: Implement emergency rollback procedures
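For Task 10, a rough sketch of what a manual emergency rollback could look like when deploy-rs's automatic rollback has not fired. These are standard NixOS commands rather than lab-tool features, and the generation number is illustrative:
```bash
# On the affected machine: inspect system generations, then revert to the previous one
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
sudo nixos-rebuild switch --rollback

# Or pin an explicitly known-good generation (191 is an example value)
sudo nix-env --switch-generation 191 --profile /nix/var/nix/profiles/system
sudo /nix/var/nix/profiles/system/bin/switch-to-configuration switch
```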
### Recommendations
1. Use **hybrid-update** for regular maintenance (combines updates + safety)
2. Use **deploy-rs** for quick configuration changes
3. Keep **legacy deploy** as fallback method
4. Test **parallel deployment** to multiple machines
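For recommendation 4, deploy-rs accepts a bare flake reference, which targets every node in `deploy.nodes` in a single run; a sketch of how such a test could start:
```bash
# Dry-run against all configured nodes first
nix run github:serokell/deploy-rs -- "." --dry-activate

# Then deploy to all machines in one invocation
nix run github:serokell/deploy-rs -- "."
```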
## Benefits Achieved
- ✅ **Automatic Rollback**: Failed deployments revert automatically
- ✅ **Health Checks**: Validates deployment success before committing
- ✅ **Package Updates**: Streamlined update process with safety
- ✅ **Parallel Deployment**: Can deploy to multiple machines simultaneously
- ✅ **Generation Management**: Proper NixOS generation tracking
- ✅ **Network Resilience**: Robust SSH connection handling
The deploy-rs integration successfully modernizes the Home Lab deployment infrastructure while maintaining compatibility with existing workflows.

flake.lock (generated)

@@ -54,11 +54,11 @@
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1749794982,
"narHash": "sha256-Kh9K4taXbVuaLC0IL+9HcfvxsSUx8dPB5s5weJcc9pc=",
"lastModified": 1748929857,
"narHash": "sha256-lcZQ8RhsmhsK8u7LIFsJhsLh/pzR9yZ8yqpTzyGdj+Q=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "ee930f9755f58096ac6e8ca94a1887e0534e2d81",
"rev": "c2a03962b8e24e669fb37b7df10e7c79531ff1a4",
"type": "github"
},
"original": {
@@ -70,11 +70,11 @@
},
"nixpkgs_2": {
"locked": {
"lastModified": 1749727998,
"narHash": "sha256-mHv/yeUbmL91/TvV95p+mBVahm9mdQMJoqaTVTALaFw=",
"lastModified": 1749024892,
"narHash": "sha256-OGcDEz60TXQC+gVz5sdtgGJdKVYr6rwdzQKuZAJQpCA=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "fd487183437963a59ba763c0cc4f27e3447dd6dd",
"rev": "8f1b52b04f2cb6e5ead50bd28d76528a2f0380ef",
"type": "github"
},
"original": {

View file

@@ -35,7 +35,6 @@
# User configuration
../../modules/users/geir.nix
../../modules/users/sma.nix
# Virtualization configuration
../../modules/virtualization/incus.nix

View file

@@ -5,9 +5,6 @@
pkgs,
...
}: {
imports = [
./shell-aliases.nix
];
# Common user settings
users = {
# Use mutable users for flexibility
@@ -29,6 +26,28 @@
eval "$(direnv hook zsh)"
'';
# Common aliases for all users
shellAliases = {
# Modern CLI tool replacements (basic ones moved to base.nix)
"ll" = "eza -l --color=auto --group-directories-first";
"la" = "eza -la --color=auto --group-directories-first";
"tree" = "eza --tree";
# Git shortcuts (basic ones moved to base.nix)
# System shortcuts (some moved to base.nix)
"top" = "btop";
# Network
"ping" = "ping -c 5";
"myip" = "curl -s ifconfig.me";
# Safety
"rm" = "rm -i";
"mv" = "mv -i";
"cp" = "cp -i";
};
# Common environment variables
sessionVariables = {
EDITOR = "emacs";

View file

@@ -132,19 +132,28 @@ in {
programs.zsh = {
enable = true;
# Shell aliases (user-specific only, common ones in shell-aliases.nix)
# Shell aliases
shellAliases = {
# Development workflow - geir specific
# Development workflow
"home-lab" = "z /home/geir/Home-lab";
"configs" = "z /home/geir/Home-lab/user_configs/geir";
"emacs-config" = "emacs /home/geir/Home-lab/user_configs/geir/emacs.org";
# Flake-specific rebuilds (geir has access to local flake directory)
"rebuild-local" = "sudo nixos-rebuild switch --flake /home/geir/Home-lab";
"rebuild-local-test" = "sudo nixos-rebuild test --flake /home/geir/Home-lab";
# Quick system management
"rebuild-test" = "sudo nixos-rebuild test --flake /home/geir/Home-lab";
"rebuild" = "sudo nixos-rebuild switch --flake /home/geir/Home-lab";
"collect" = "sudo nix-collect-garbage --d";
"optimise" = "sudo nix-store --optimise";
# Git shortcuts for multi-remote workflow
"git-status-all" = "git status && echo '--- Checking origin ---' && git log origin/main..HEAD --oneline && echo '--- Checking github ---' && git log github/main..HEAD --oneline";
# Container shortcuts
"pdm" = "podman";
"pdc" = "podman-compose";
# Media shortcuts
"youtube-dl" = "yt-dlp";
};
# History configuration

View file

@@ -1,63 +0,0 @@
# Shared Shell Aliases Module
# Common shell aliases for all users in the Home Lab infrastructure
{
config,
pkgs,
...
}: {
programs.zsh = {
# Common aliases for all users
shellAliases = {
# === File System Navigation & Management ===
"ll" = "eza -l --color=auto --group-directories-first";
"la" = "eza -la --color=auto --group-directories-first";
"tree" = "eza --tree";
# Safety first
"rm" = "rm -i";
"mv" = "mv -i";
"cp" = "cp -i";
# === System Management ===
"top" = "btop";
"disk-usage" = "df -h";
"mem-usage" = "free -h";
"processes" = "ps aux | head -20";
# === NixOS Management ===
"rebuild" = "sudo nixos-rebuild switch";
"rebuild-test" = "sudo nixos-rebuild test";
"rebuild-boot" = "sudo nixos-rebuild boot";
"collect" = "sudo nix-collect-garbage -d";
"optimise" = "sudo nix-store --optimise";
# === Git Shortcuts ===
"gs" = "git status";
"ga" = "git add";
"gc" = "git commit";
"gp" = "git push";
"gl" = "git log --oneline";
"gd" = "git diff";
# === Container Management ===
"pdm" = "podman";
"pdc" = "podman-compose";
"pods" = "podman ps -a";
"images" = "podman images";
"logs" = "podman logs";
# === Network Utilities ===
"ping" = "ping -c 5";
"myip" = "curl -s ifconfig.me";
"ports" = "ss -tulpn";
"connections" = "ss -tuln";
# === Media & Downloads ===
"youtube-dl" = "yt-dlp";
# === Security & Auditing ===
"audit-users" = "cat /etc/passwd | grep -E '/bin/(bash|zsh|fish)'";
"audit-sudo" = "cat /etc/sudoers.d/*";
};
};
}

View file

@@ -76,12 +76,33 @@
autosuggestions.enable = true;
syntaxHighlighting.enable = true;
# Admin-specific aliases (common ones in shell-aliases.nix)
# Admin-focused aliases
shellAliases = {
# Flake management from remote deployments (sma uses temp directory)
"rebuild-remote" = "cd /tmp/home-lab-config && sudo nixos-rebuild switch --flake .";
"rebuild-remote-test" = "cd /tmp/home-lab-config && sudo nixos-rebuild test --flake .";
"rebuild-remote-boot" = "cd /tmp/home-lab-config && sudo nixos-rebuild boot --flake .";
# System management (use current system configuration)
"rebuild" = "sudo nixos-rebuild switch";
"rebuild-test" = "sudo nixos-rebuild test";
"rebuild-boot" = "sudo nixos-rebuild boot";
"rebuild-flake" = "cd /tmp/home-lab-config && sudo nixos-rebuild switch --flake .";
"rebuild-flake-test" = "cd /tmp/home-lab-config && sudo nixos-rebuild test --flake .";
"rebuild-flake-boot" = "cd /tmp/home-lab-config && sudo nixos-rebuild boot --flake .";
# Container management
"pods" = "podman ps -a";
"images" = "podman images";
"logs" = "podman logs";
# System monitoring
"disk-usage" = "df -h";
"mem-usage" = "free -h";
"processes" = "ps aux | head -20";
# Network
"ports" = "ss -tulpn";
"connections" = "ss -tuln";
# Security
"audit-users" = "cat /etc/passwd | grep -E '/bin/(bash|zsh|fish)'";
"audit-sudo" = "cat /etc/sudoers.d/*";
};
interactiveShellInit = ''
# Emacs-style keybindings

View file

@@ -96,102 +96,13 @@ writeShellScriptBin "lab" ''
success "Successfully deployed $machine"
}
# Deploy with deploy-rs function
deploy_rs_machine() {
local machine="$1"
local dry_run="''${2:-false}"
log "Using deploy-rs for $machine deployment"
cd "$HOMELAB_ROOT"
if [[ "$dry_run" == "true" ]]; then
log "Running dry-run deployment..."
if ! nix run github:serokell/deploy-rs -- ".#$machine" --dry-activate; then
error "Deploy-rs dry-run failed for $machine"
return 1
fi
success "Deploy-rs dry-run completed for $machine"
else
log "Running actual deployment..."
if ! nix run github:serokell/deploy-rs -- ".#$machine"; then
error "Deploy-rs deployment failed for $machine"
return 1
fi
success "Deploy-rs deployment completed for $machine"
fi
}
# Update flake inputs function
update_flake() {
log "Updating flake inputs..."
cd "$HOMELAB_ROOT"
if ! nix flake update; then
error "Failed to update flake inputs"
return 1
fi
log "Checking updated flake configuration..."
if ! nix flake check; then
error "Flake check failed after update"
return 1
fi
success "Flake inputs updated successfully"
# Show what changed
log "Flake lock changes:"
git diff --no-index /dev/null flake.lock | grep "+" | head -10 || true
}
# Hybrid update: flake update + deploy-rs deployment
hybrid_update() {
local target="''${1:-all}"
local dry_run="''${2:-false}"
log "Starting hybrid update process (target: $target, dry-run: $dry_run)"
# Step 1: Update flake inputs
if ! update_flake; then
error "Failed to update flake - aborting hybrid update"
return 1
fi
# Step 2: Deploy with deploy-rs
if [[ "$target" == "all" ]]; then
local machines=("sleeper-service" "grey-area" "reverse-proxy" "congenital-optimist")
local failed_machines=()
for machine in "''${machines[@]}"; do
log "Deploying updated configuration to $machine..."
if deploy_rs_machine "$machine" "$dry_run"; then
success " $machine updated successfully"
else
error " Failed to update $machine"
failed_machines+=("$machine")
fi
echo ""
done
if [[ ''${#failed_machines[@]} -eq 0 ]]; then
success "All machines updated successfully with hybrid approach!"
else
error "Failed to update: ''${failed_machines[*]}"
return 1
fi
else
deploy_rs_machine "$target" "$dry_run"
fi
}
# Update all machines function (legacy method)
# Update all machines function
update_all_machines() {
local mode="''${1:-boot}" # boot, test, or switch
local machines=("congenital-optimist" "sleeper-service" "grey-area" "reverse-proxy")
local failed_machines=()
log "Starting update of all machines (mode: $mode) - using legacy method"
log "Starting update of all machines (mode: $mode)"
for machine in "''${machines[@]}"; do
log "Updating $machine..."
@@ -216,31 +127,22 @@ writeShellScriptBin "lab" ''
show_status() {
log "Home-lab infrastructure status:"
# Check if -v (verbose) flag is passed for deploy-rs details
local verbose=0
local show_deploy_rs=0
for arg in "$@"; do
case "$arg" in
"-v"|"--verbose") verbose=1 ;;
"--deploy-rs") show_deploy_rs=1 ;;
"-vd"|"--verbose-deploy-rs") verbose=1; show_deploy_rs=1 ;;
esac
done
# Check congenital-optimist (local)
if /run/current-system/sw/bin/systemctl is-active --quiet tailscaled; then
success " congenital-optimist: Online (local)"
if [[ $show_deploy_rs -eq 1 ]]; then
show_machine_deploy_info "congenital-optimist" "local"
fi
else
warn " congenital-optimist: Tailscale inactive"
fi
# Check if -v (verbose) flag is passed
local verbose=0
if [[ "''${1:-}" == "-v" ]]; then
verbose=1
fi
# Check remote machines
for machine in sleeper-service grey-area reverse-proxy; do
local ssh_user="sma" # Using sma as the admin user for remote machines
local connection_type=""
# Test SSH connectivity with debug info if in verbose mode
if [[ $verbose -eq 1 ]]; then
@@ -262,10 +164,8 @@ writeShellScriptBin "lab" ''
# Use the specific Tailscale hostname for reverse-proxy
if ${openssh}/bin/ssh -o ConnectTimeout=5 -o BatchMode=yes "$ssh_user@reverse-proxy.tail807ea.ts.net" "echo OK" >/dev/null 2>&1; then
success " $machine: Online (Tailscale)"
connection_type="reverse-proxy.tail807ea.ts.net"
elif ${openssh}/bin/ssh -o ConnectTimeout=2 -o BatchMode=yes "$ssh_user@$machine" "echo OK" >/dev/null 2>&1; then
success " $machine: Online (LAN)"
connection_type="$machine"
else
warn " $machine: Unreachable"
if [[ $verbose -eq 1 ]]; then
@@ -277,70 +177,14 @@ writeShellScriptBin "lab" ''
else
if ${openssh}/bin/ssh -o ConnectTimeout=2 -o BatchMode=yes "$ssh_user@$machine" "echo OK" >/dev/null 2>&1; then
success " $machine: Online (LAN)"
connection_type="$machine"
# Try with Tailscale hostname as fallback
elif ${openssh}/bin/ssh -o ConnectTimeout=3 -o BatchMode=yes "$ssh_user@$machine.tailnet" "echo OK" >/dev/null 2>&1; then
success " $machine: Online (Tailscale)"
connection_type="$machine.tailnet"
else
warn " $machine: Unreachable"
fi
fi
# Show deploy-rs information if requested and machine is reachable
if [[ $show_deploy_rs -eq 1 && -n "$connection_type" ]]; then
show_machine_deploy_info "$machine" "$connection_type"
fi
done
if [[ $show_deploy_rs -eq 1 ]]; then
echo ""
log "💡 Use 'lab status --deploy-rs' to see deployment details"
log "💡 Use 'lab status -vd' for verbose deploy-rs information"
fi
}
# Show deploy-rs deployment information for a machine
show_machine_deploy_info() {
local machine="$1"
local connection="$2"
if [[ "$connection" == "local" ]]; then
# Local machine - get info directly
local current_gen=$(readlink /nix/var/nix/profiles/system | sed 's/.*system-\([0-9]*\)-link/\1/')
local system_closure=$(readlink -f /run/current-system)
local build_date=$(stat -c %y "$system_closure" 2>/dev/null | cut -d' ' -f1 2>/dev/null || echo "unknown")
echo " 📦 Generation: $current_gen"
echo " 📅 Build Date: $build_date"
echo " 📍 Store Path: $system_closure"
else
# Remote machine - get info via SSH
local ssh_user="sma"
local ssh_host="$connection"
local remote_info=$(${openssh}/bin/ssh -o ConnectTimeout=3 -o BatchMode=yes "$ssh_user@$ssh_host" "
current_gen=\$(readlink /nix/var/nix/profiles/system 2>/dev/null | sed 's/.*system-\([0-9]*\)-link/\1/' 2>/dev/null || echo 'unknown')
system_closure=\$(readlink -f /run/current-system 2>/dev/null || echo 'unknown')
build_date=\$(stat -c %y \$system_closure 2>/dev/null | cut -d' ' -f1 2>/dev/null || echo 'unknown')
uptime=\$(uptime -s 2>/dev/null || echo 'unknown')
echo \"gen:\$current_gen|path:\$system_closure|date:\$build_date|uptime:\$uptime\"
" 2>/dev/null)
if [[ -n "$remote_info" ]]; then
local gen=$(echo "$remote_info" | cut -d'|' -f1 | cut -d':' -f2)
local path=$(echo "$remote_info" | cut -d'|' -f2 | cut -d':' -f2)
local date=$(echo "$remote_info" | cut -d'|' -f3 | cut -d':' -f2)
local uptime=$(echo "$remote_info" | cut -d'|' -f4 | cut -d':' -f2)
echo " 📦 Generation: $gen"
echo " 📅 Build Date: $date"
echo " Boot Time: $uptime"
echo " 📍 Store Path: $(basename "$path")"
else
echo " Unable to retrieve deployment info"
fi
fi
}
# Main command handling
@@ -364,41 +208,8 @@ writeShellScriptBin "lab" ''
deploy_machine "$machine" "$mode"
;;
"deploy-rs")
if [[ $# -lt 2 ]]; then
error "Usage: lab deploy-rs <machine> [--dry-run]"
error "Machines: congenital-optimist, sleeper-service, grey-area, reverse-proxy"
exit 1
fi
machine="$2"
dry_run="false"
if [[ "''${3:-}" == "--dry-run" ]]; then
dry_run="true"
fi
deploy_rs_machine "$machine" "$dry_run"
;;
"update-flake")
update_flake
;;
"hybrid-update")
target="''${2:-all}"
dry_run="false"
if [[ "''${3:-}" == "--dry-run" ]]; then
dry_run="true"
fi
hybrid_update "$target" "$dry_run"
;;
"status")
shift # Remove "status" from arguments
show_status "$@" # Pass all remaining arguments to show_status
show_status
;;
"update")
@@ -418,53 +229,28 @@ writeShellScriptBin "lab" ''
echo "Usage: lab <command> [options]"
echo ""
echo "Available commands:"
echo " deploy <machine> [mode] - Deploy configuration to a machine (legacy method)"
echo " Machines: congenital-optimist, sleeper-service, grey-area, reverse-proxy"
echo " Modes: boot (default), test, switch"
echo " deploy-rs <machine> [opts] - Deploy using deploy-rs (modern method)"
echo " Options: --dry-run"
echo " update [mode] - Update all machines (legacy method)"
echo " Modes: boot (default), test, switch"
echo " update-flake - Update flake inputs and check configuration"
echo " hybrid-update [target] [opts] - Update flake + deploy with deploy-rs"
echo " Target: machine name or 'all' (default)"
echo " Options: --dry-run"
echo " status [options] - Check infrastructure connectivity"
echo " Options: -v (verbose), --deploy-rs (show deployment info)"
echo " -vd (verbose + deploy-rs info)"
echo ""
echo "Deployment Methods:"
echo " Legacy (SSH + rsync): Reliable, tested, slower"
echo " Deploy-rs: Modern, automatic rollback, parallel deployment"
echo " Hybrid: Combines flake updates with deploy-rs safety"
echo " deploy <machine> [mode] - Deploy configuration to a machine"
echo " Machines: congenital-optimist, sleeper-service, grey-area, reverse-proxy"
echo " Modes: boot (default), test, switch"
echo " update [mode] - Update all machines"
echo " Modes: boot (default), test, switch"
echo " status - Check infrastructure connectivity"
echo ""
echo "Ollama AI Tools (when available):"
echo " ollama-cli <command> - Manage Ollama service and models"
echo " monitor-ollama [opts] - Monitor Ollama service health"
echo " ollama-cli <command> - Manage Ollama service and models"
echo " monitor-ollama [opts] - Monitor Ollama service health"
echo ""
echo "Examples:"
echo " # Legacy deployment"
echo " lab deploy congenital-optimist boot # Deploy workstation for next boot"
echo " lab deploy sleeper-service boot # Deploy and set for next boot"
echo " lab deploy grey-area switch # Deploy and switch immediately"
echo " lab update boot # Update all machines for next boot"
echo ""
echo " # Modern deploy-rs deployment"
echo " lab deploy-rs sleeper-service # Deploy with automatic rollback"
echo " lab deploy-rs grey-area --dry-run # Test deployment without applying"
echo ""
echo " # Hybrid approach (recommended for updates)"
echo " lab hybrid-update sleeper-service # Update flake + deploy specific machine"
echo " lab hybrid-update all --dry-run # Test update all machines"
echo " lab update-flake # Just update flake inputs"
echo ""
echo " # Status and monitoring"
echo " lab update switch # Update all machines immediately"
echo " lab status # Check all machines"
echo " lab status --deploy-rs # Show deployment details"
echo " lab status -vd # Verbose with deploy-rs info"
echo ""
echo " # Ollama AI tools"
echo " ollama-cli status # Check Ollama service status"
echo " ollama-cli models # List installed AI models"
echo " ollama-cli pull llama3.3:8b # Install a new model"
echo " monitor-ollama --test-inference # Full Ollama health check"
;;
esac