trying to get fileserver to use flake config

Geir Okkenhaug Jerstad 2025-06-05 17:35:45 +02:00
parent c392df4a93
commit 10a4f8df56
3 changed files with 190 additions and 15 deletions

@@ -0,0 +1,80 @@
# ZFS Setup for sleeper-service
## Overview
sleeper-service now uses ZFS to provide stronger data integrity, snapshots, and efficient storage management for its file server role.
## ZFS Pool Structure
### `filepool` - Main ZFS Pool
This pool contains all system and storage datasets:
```
filepool/root # Root filesystem (/)
filepool/nix # Nix store (/nix)
filepool/var # Variable data (/var)
filepool/storage # NFS export storage (/mnt/storage)
```
## Storage Layout
### System Datasets
- **filepool/root**: System root filesystem with snapshots for rollback
- **filepool/nix**: Nix store, can be excluded from frequent snapshots
- **filepool/var**: System logs and variable data
### Storage Dataset
- **filepool/storage**: Primary NFS export point containing:
- `media/` - Media files shared via NFS
- `downloads/` - Download directory (for Transmission when re-enabled)
- `backups/` - Backup storage
- `shares/` - General file shares
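
These can be plain directories inside the dataset; if independent snapshots or quotas per share are wanted later, they could instead be created as child datasets. A sketch of that variant (the quota value is only an example):
```bash
# Optional alternative: one dataset per share, so each can be
# snapshotted and given a quota independently.
zfs create filepool/storage/downloads
zfs create filepool/storage/backups
zfs create filepool/storage/shares
zfs set quota=200G filepool/storage/downloads   # example quota
```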
## ZFS Features Enabled
### Automatic Services
- **Auto-scrub**: Weekly integrity checks of all data
- **TRIM**: SSD optimization for supported drives
- **Snapshots**: Automatic snapshots for data protection (to be configured)
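
Since snapshot automation is still to be configured, here is a minimal sketch of what it could look like using the stock NixOS `services.zfs.autoSnapshot` module (the retention counts are placeholder values):
```nix
{
  # Placeholder retention policy: how many snapshots of each interval to keep.
  services.zfs.autoSnapshot = {
    enable = true;
    frequent = 4;   # 15-minute snapshots
    hourly = 24;
    daily = 7;
    weekly = 4;
    monthly = 12;
  };
}
```
Datasets that should be skipped (for example the Nix store, as noted above) can opt out with `zfs set com.sun:auto-snapshot=false filepool/nix`.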
### Benefits for File Server
1. **Data Integrity**: Checksumming protects against bit rot
2. **Snapshots**: Point-in-time recovery for user data
3. **Compression**: Efficient storage usage
4. **Send/Receive**: Efficient backup to other ZFS systems
5. **Share Management**: Native NFS sharing support
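
To make the send/receive benefit concrete, here is a sketch of replicating the storage dataset to another ZFS machine; `backup-host` and `backuppool` are hypothetical names, not part of this setup:
```bash
# Initial full replication of the storage dataset (names are placeholders).
zfs snapshot filepool/storage@backup-initial
zfs send filepool/storage@backup-initial | \
  ssh backup-host zfs receive -u backuppool/sleeper-service

# Subsequent runs send only the delta between two snapshots.
zfs snapshot filepool/storage@backup-next
zfs send -i @backup-initial filepool/storage@backup-next | \
  ssh backup-host zfs receive backuppool/sleeper-service
```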
## Deployment Notes
### Before First Boot
The actual ZFS pool creation needs to be done during installation:
```bash
# Example pool creation (adjust device names)
zpool create -f filepool /dev/sda

zfs create filepool/root
zfs create filepool/nix
zfs create filepool/var
zfs create filepool/storage

# Set mount points
# Note: when NixOS mounts these datasets via fileSystems entries, they
# are often created with mountpoint=legacy instead; match whichever
# convention the hardware configuration uses.
zfs set mountpoint=/ filepool/root
zfs set mountpoint=/nix filepool/nix
zfs set mountpoint=/var filepool/var
zfs set mountpoint=/mnt/storage filepool/storage

# Enable compression for storage dataset
zfs set compression=lz4 filepool/storage
```
### Network Storage Integration
The `/mnt/storage` ZFS dataset is exported via NFS to the home lab network (10.0.0.0/24), replacing the previous "files.home" server functionality.
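
A minimal sketch of the corresponding export using the stock NixOS `services.nfs.server` module; the export option string is illustrative and should be tuned to taste:
```nix
{
  services.nfs.server = {
    enable = true;
    # Export the ZFS-backed storage tree to the home lab subnet.
    exports = ''
      /mnt/storage 10.0.0.0/24(rw,no_subtree_check,no_root_squash)
    '';
  };
}
```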
## Migration from Existing Setup
When deploying to the physical server:
1. Backup existing data from current file server
2. Create ZFS pool on target drives
3. Restore data to `/mnt/storage`
4. Update client machines to mount from new IP (10.0.0.8)
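
Steps 1 and 3 could be a straight rsync between the old and new machines; the hostname and source path below are hypothetical placeholders:
```bash
# Run on the new sleeper-service host once the pool is mounted.
# "files.home" and "/export/storage/" stand in for the old server's details.
rsync -aHAX --info=progress2 files.home:/export/storage/ /mnt/storage/

# Take a snapshot once the copy is verified, as a rollback point.
zfs snapshot filepool/storage@post-migration
```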
## Culture Reference
Like the GSV *Sleeper Service*, this configuration operates quietly in the background, providing reliable storage services with the redundancy and self-healing capabilities that ZFS brings to the table.

```diff
@@ -13,30 +13,31 @@
   boot.kernelModules = [ "kvm-intel" ];
   boot.extraModulePackages = [ ];
 
-  # ZFS Configuration for file server
+  # Enable ZFS support for storage pool only
   boot.supportedFilesystems = [ "zfs" ];
   boot.initrd.supportedFilesystems = [ "zfs" ];
+
+  # ZFS Configuration - only for storage pool
+  boot.zfs.extraPools = [ "storage" ];
   services.zfs.autoScrub.enable = true;
   services.zfs.trim.enable = true;
 
+  # OS remains on ext4
   fileSystems."/" =
-    { device = "filepool/root";
-      fsType = "zfs";
-    };
-
-  fileSystems."/nix" =
-    { device = "filepool/nix";
-      fsType = "zfs";
-    };
-
-  fileSystems."/var" =
-    { device = "filepool/var";
-      fsType = "zfs";
+    { device = "/dev/disk/by-uuid/e7fc0e32-b9e5-4080-859e-fe9dea60823d";
+      fsType = "ext4";
     };
 
+  # ZFS storage pool mounted for NFS exports
   fileSystems."/mnt/storage" =
-    { device = "filepool/storage";
+    { device = "storage";
       fsType = "zfs";
     };
 
   fileSystems."/boot" =
-    { device = "/dev/disk/by-uuid/ABCD-1234";
+    { device = "/dev/disk/by-uuid/2C7A-9F08";
       fsType = "vfat";
       options = [ "fmask=0022" "dmask=0022" ];
     };
 
   swapDevices = [ ];
```
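
Worth noting: `boot.zfs.extraPools` makes NixOS import the `storage` pool by name at boot, and the NixOS ZFS module additionally requires a unique `networking.hostId` before any pool will import. A sketch with a placeholder value:
```nix
{
  # ZFS on NixOS refuses to start without a stable 8-hex-digit host ID.
  # "8425e349" is a placeholder; generate one with:
  #   head -c4 /dev/urandom | od -A none -t x4
  networking.hostId = "8425e349";
}
```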

@@ -0,0 +1,94 @@

```bash
#!/usr/bin/env bash
# ZFS Setup Script for sleeper-service
# This script configures the existing ZFS storage pool for NFS exports
set -euo pipefail

# The zfs/zpool, mkdir, and chown calls below all require root privileges.
if [[ $EUID -ne 0 ]]; then
    echo "ERROR: this script must be run as root." >&2
    exit 1
fi
echo "=== ZFS Setup for sleeper-service ==="
echo "This script will configure the existing 'storage' pool for NFS exports"
echo "OS will remain on ext4 - only storage pool will be used for media/NFS"
echo ""
echo "Current ZFS pool status:"
zpool status storage
echo ""
echo "Current datasets:"
zfs list
echo ""
echo "The existing storage/media dataset with 903GB of data will be preserved"
echo "We'll set up proper mount points for NFS exports"
echo ""
read -p "Are you sure you want to proceed? (yes/no): " confirm
if [[ "$confirm" != "yes" ]]; then
echo "Aborted."
exit 1
fi
echo ""
echo "=== Step 1: Verifying ZFS tools ==="
if ! command -v zpool &> /dev/null; then
echo "ERROR: ZFS tools not found!"
exit 1
fi
echo ""
echo "=== Step 2: Checking existing pool ==="
if ! zpool status storage &> /dev/null; then
echo "ERROR: Storage pool not found!"
exit 1
fi
echo "Storage pool found. GUID: $(zpool get -H -o value guid storage)"
echo ""
echo "=== Step 3: Setting up storage mount points ==="
# Create mount point directory
echo "Creating /mnt/storage directory..."
mkdir -p /mnt/storage
# Set proper mount point for storage pool
echo "Setting mount point for storage pool..."
zfs set mountpoint=/mnt/storage storage
# Ensure media dataset has proper mountpoint
echo "Setting mount point for media dataset..."
zfs set mountpoint=/mnt/storage/media storage/media
# Create additional directories if needed
echo "Creating additional storage directories..."
mkdir -p /mnt/storage/{downloads,backups,shares}
# Set proper ownership for sma user
echo "Setting ownership for sma user..."
chown sma:users /mnt/storage/{media,downloads,backups,shares}
echo ""
echo "=== Step 4: Summary ==="
echo "ZFS storage setup complete!"
echo ""
echo "Storage pool: $(zpool get -H -o value guid storage)"
echo "Mount point: /mnt/storage"
echo "Media data: /mnt/storage/media (preserved)"
echo "Additional directories: downloads, backups, shares"
echo ""
echo "The existing 903GB of media data has been preserved."
echo "NFS exports can now use /mnt/storage/* paths."
echo ""
echo "Next: Deploy NixOS configuration to enable ZFS on boot"
echo ""
echo "=== ZFS Setup Complete! ==="
echo "Pool status:"
zpool status storage
echo ""
echo "Datasets:"
zfs list
echo ""
echo "You can now deploy the new NixOS configuration that uses ZFS."
echo "Note: The system will need to be rebooted after the deployment."
echo ""
echo "Next steps:"
echo "1. Copy the new Home-lab configuration to the server"
echo "2. Run: sudo nixos-rebuild boot --flake .#sleeper-service"
echo "3. Reboot the system to activate ZFS support"