- Fix provider configuration from 'openai' to 'ollama' in .taskmaster/config.json (sketched below)
- Remove conflicting MCP configurations (.cursor/mcp.json, packages/.cursor/mcp.json)
- Standardize on single .vscode/mcp.json configuration for VS Code
- Update environment variables for proper Ollama integration
- Add .env.taskmaster for easy environment setup
- Verify AI functionality: task creation, expansion, and research working
- All models (qwen2.5-coder:7b, deepseek-r1:7b, llama3.1:8b) operational
- Cost: $0 (using the local Ollama server at grey-area:11434)
Resolves configuration conflicts and enables full AI-powered task management
with local models instead of external API dependencies.
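For reference, a minimal sketch of the corrected provider block in .taskmaster/config.json; the key names follow Task Master's general config shape but are not guaranteed verbatim:

```json
{
  "models": {
    "main":     { "provider": "ollama", "modelId": "qwen2.5-coder:7b" },
    "research": { "provider": "ollama", "modelId": "deepseek-r1:7b" },
    "fallback": { "provider": "ollama", "modelId": "llama3.1:8b" }
  }
}
```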
- Optimize Ollama service configuration for maximum CPU performance (see the module sketch after this list)
- Increase OLLAMA_NUM_PARALLEL from 2 to 4 workers
- Increase OLLAMA_CONTEXT_LENGTH from 4096 to 8192 tokens
- Add OLLAMA_KV_CACHE_TYPE=q8_0 for memory efficiency
- Set OLLAMA_LLM_LIBRARY=cpu_avx2 for optimal CPU performance
- Configure OpenMP threading with 8 threads and core binding
- Add comprehensive systemd resource limits and CPU quotas
- Remove incompatible NUMA policy setting
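A sketch of how these settings look as a NixOS module, assuming the stock services.ollama module; the systemd limit values are illustrative, not the exact deployed numbers:

```nix
{ config, ... }:
{
  services.ollama = {
    enable = true;
    host = "0.0.0.0"; # serves grey-area:11434
    environmentVariables = {
      OLLAMA_NUM_PARALLEL = "4";       # up from 2
      OLLAMA_CONTEXT_LENGTH = "8192";  # up from 4096
      OLLAMA_KV_CACHE_TYPE = "q8_0";   # quantized KV cache, saves memory
      OLLAMA_LLM_LIBRARY = "cpu_avx2"; # pin the AVX2 CPU backend
      OMP_NUM_THREADS = "8";           # OpenMP threading
      OMP_PROC_BIND = "true";          # bind threads to cores
    };
  };

  # Resource limits and CPU quota via systemd (values illustrative):
  systemd.services.ollama.serviceConfig = {
    CPUQuota = "1600%";  # 16 of the 24 cores
    MemoryMax = "24G";
    LimitNOFILE = 65536;
  };
}
```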
- Upgrade TaskMaster AI model ecosystem
- Main model: qwen3:4b → qwen2.5-coder:7b (specialized coding model)
- Research model: deepseek-r1:1.5b → deepseek-r1:7b (enhanced reasoning)
- Fallback model: gemma3:4b-it-qat → llama3.1:8b (reliable general purpose)
- Create comprehensive optimization and management scripts
- Add ollama-optimize.sh for system optimization and benchmarking (helper sketched after this list)
- Add update-taskmaster-models.sh for TaskMaster configuration management
- Include model installation, performance testing, and system info functions
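To illustrate the benchmarking side, here is a minimal helper in the spirit of ollama-optimize.sh; the endpoint, default model, and jq dependency are assumptions, and Ollama reports eval_duration in nanoseconds:

```bash
#!/usr/bin/env bash
# Rough tokens/sec measurement against the local Ollama server.
set -euo pipefail

OLLAMA_URL="${OLLAMA_URL:-http://grey-area:11434}"
MODEL="${1:-qwen2.5-coder:7b}"

resp=$(curl -s "$OLLAMA_URL/api/generate" -d "{
  \"model\": \"$MODEL\",
  \"prompt\": \"Write a haiku about NixOS.\",
  \"stream\": false
}")

# eval_count tokens generated over eval_duration nanoseconds
echo "$resp" | jq -r \
  '"\(.model): \(.eval_count / .eval_duration * 1e9 | floor) tokens/sec"'
```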
- Update TaskMaster AI configuration (settings sketched below)
- Configure optimized models with grey-area:11434 endpoint
- Set performance parameters for 8192 context window
- Add connection timeout and retry settings
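The endpoint and connection settings, sketched with illustrative field names:

```json
{
  "global": {
    "ollamaBaseURL": "http://grey-area:11434/api",
    "contextWindow": 8192,
    "requestTimeoutMs": 120000,
    "maxRetries": 3
  }
}
```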
- Fix flake configuration issues
- Remove nested packages attribute in packages/default.nix (see the sketch after this list)
- Fix package references in modules/users/geir.nix
- Clean up obsolete package files
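The gist of the packages/default.nix fix, which returns the package set directly instead of wrapping it in another packages attribute; the entries shown are hypothetical placeholders:

```nix
# Before: the file evaluated to { packages = { ... }; }, adding a second
# `packages` level on top of the flake's own packages output.
# After: return the attribute set of packages directly.
{ pkgs }:
{
  # hypothetical entries standing in for this repo's actual packages
  ollama-optimize = pkgs.callPackage ./ollama-optimize.nix { };
  update-taskmaster-models = pkgs.callPackage ./update-taskmaster-models.nix { };
}
```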
- Add comprehensive documentation
- Document complete optimization process and results
- Include performance benchmarking results
- Provide deployment instructions and troubleshooting guide
Successfully deployed via deploy-rs, with an estimated 3-4x performance improvement.
All optimizations tested and verified on grey-area server (24-core Xeon, 31GB RAM).
- Updated .cursor/mcp.json to use the local Nix-built Task Master binary (configuration sketched below)
- Configured Task Master to use local Ollama models via OpenAI-compatible API
- Set up three models: qwen3:4b (main), deepseek-r1:1.5b (research), gemma3:4b-it-qat (fallback)
- Created comprehensive integration status documentation
- Task Master successfully running as an MCP server with 23+ available tools
- Ready for VS Code/Cursor AI chat integration
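A sketch of the resulting .cursor/mcp.json; the mcpServers layout is Cursor's standard MCP format, but the binary path and environment variable names here are assumptions:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "/run/current-system/sw/bin/task-master-ai",
      "env": {
        "OLLAMA_BASE_URL": "http://grey-area:11434/v1",
        "MAIN_MODEL": "qwen3:4b",
        "RESEARCH_MODEL": "deepseek-r1:1.5b",
        "FALLBACK_MODEL": "gemma3:4b-it-qat"
      }
    }
  }
}
```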