
Network Configuration Modules

This directory contains networking configurations for all machines in the Home Lab.

Structure

  • common.nix - Shared networking settings used by all machines (see the sketch after this list)

    • nftables firewall enabled
    • SSH access with secure defaults
    • Tailscale VPN for remote access
    • Basic firewall rules (SSH port 22)
  • network-<machine-name>.nix - Machine-specific networking configurations

    • Imports common.nix for shared settings
    • Overrides or extends it with machine-specific requirements
    • Defines the hostname, hostId, and any additional firewall ports
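
As a rough illustration, the shared settings in common.nix might look something like the sketch below; the option names follow current NixOS modules, and the exact values used in this repository may differ.

# modules/network/common.nix (sketch)
{ config, pkgs, ... }:
{
  # nftables-backed firewall, allowing only SSH by default
  networking.nftables.enable = true;
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [ 22 ];

  # SSH access with secure defaults
  services.openssh.enable = true;
  services.openssh.settings.PasswordAuthentication = false;
  services.openssh.settings.PermitRootLogin = "no";

  # Tailscale VPN for remote access
  services.tailscale.enable = true;
}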

Current Machines

network-congenital-optimist.nix

  • AMD Threadripper workstation
  • ZFS hostId configuration (8425e349)
  • Ready for additional service ports as needed

network-sleeper-service.nix

  • Intel Xeon file server
  • Headless server configuration
  • Static IP (10.0.0.8) on systemd-networkd
  • Ready for additional file-sharing service ports (e.g. NFS)
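
A hypothetical rendition of the sleeper-service module is sketched below; the address and DNS entries reflect this machine's static IP (10.0.0.8) and Pi-hole resolver (10.0.0.14), while the interface name, gateway, and hostId are placeholders rather than values taken from the actual file.

# modules/network/network-sleeper-service.nix (sketch)
{ config, ... }:
{
  imports = [ ./common.nix ];

  networking.hostName = "sleeper-service";
  networking.hostId = "00000000";      # placeholder; a fixed hostId is required for ZFS
  networking.useDHCP = false;
  systemd.network.enable = true;

  systemd.network.networks."10-lan" = {
    matchConfig.Name = "eno1";         # interface name is an assumption
    address = [ "10.0.0.8/24" ];
    gateway = [ "10.0.0.1" ];          # assumed router address
    dns = [ "10.0.0.14" "8.8.8.8" ];   # Pi-hole first, public resolver as fallback
  };

  # File-sharing ports (e.g. NFS) can be opened here as services are enabled:
  # networking.firewall.allowedTCPPorts = [ 2049 ];
}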

Usage

Each machine configuration imports its specific network module:

# In machines/<machine-name>/configuration.nix
imports = [
  ../../modules/network/network-<machine-name>.nix
  # ... other imports
];
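
For context, each machine's configuration.nix is in turn exposed as a flake output, roughly along the lines below; the flake layout and the deployment command are assumptions about this repository rather than excerpts from it.

# flake.nix (sketch)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";   # channel is an assumption

  outputs = { self, nixpkgs, ... }: {
    nixosConfigurations.sleeper-service = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./machines/sleeper-service/configuration.nix ];
    };
  };
}

The machine can then be built and activated with nixos-rebuild switch --flake .#sleeper-service.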

Adding New Machines

  1. Create network-<new-machine>.nix in this directory
  2. Import ./common.nix for shared settings
  3. Add machine-specific configuration (hostname, hostId, ports)
  4. Import the new file in the machine's configuration.nix
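
Steps 1–3 produce a file shaped roughly like the template below; every value shown is a placeholder to replace for the new machine.

# modules/network/network-<new-machine>.nix (template sketch)
{ config, ... }:
{
  imports = [ ./common.nix ];

  networking.hostName = "<new-machine>";            # placeholder hostname
  networking.hostId = "00000000";                   # placeholder; only needed if the machine uses ZFS
  networking.firewall.allowedTCPPorts = [ 8080 ];   # example machine-specific port
}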

Future Refactoring

The common.nix file can be extended with more shared networking patterns as they emerge across machines. Configuration that ends up repeated in several machine modules should be moved into common.nix to reduce duplication.