Live Infrastructure

Climacs Homelab

Self-hosted development platform β€” Git, Container Registry, Kubernetes, PostgreSQL, NFS storage

Last Updated: 2026-03-02 02:00 CET Β· Data Freshness: LIVE
8 VMs & LXCs · 3 Docker Services · 2 K8s Nodes · 3.6T NAS Storage · 64G Proxmox RAM

πŸ”„ Deploy Pipeline

πŸ’»
Mac Mini
git push
β†’
πŸ“¦
Gitea
.140:3001
β†’
πŸ”§
Build Host
.60 docker build
β†’
πŸ—„οΈ
Registry
.140:5000
β†’
☸️
K8s Cluster
.55 + .202
β†’
🌐
App Live
:32046
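The stages above can be sketched as shell commands. Everything here is illustrative: the image name, tag, and the `nas` placeholder hostname are hypothetical (the doc abbreviates the registry host as .140), and the build/deploy commands are shown commented out.

```shell
#!/bin/sh
# Deploy pipeline sketch — hypothetical names throughout.
REGISTRY="nas:5000"            # stands in for the .140:5000 registry
APP="climacs-app"              # hypothetical image/deployment name
TAG="v1"                       # hypothetical tag
IMAGE="$REGISTRY/$APP:$TAG"
echo "$IMAGE"

# 1. Developer (Mac Mini): push source to Gitea
#      git push gitea main
# 2. Build host (.60): build and push the image to the registry
#      docker build -t "$IMAGE" . && docker push "$IMAGE"
# 3. Cluster (.55 + .202): roll the Deployment to the new image,
#    exposed via NodePort :32046
#      kubectl set image deployment/"$APP" "$APP"="$IMAGE"
```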

πŸ–₯️ Infrastructure

πŸ—οΈ

Proxmox Hypervisor

.50 Β· 64GB RAM Β· 1TB SSD

Virtual Machines

cka-cp-v2 (VM 310)
.55 Β· K8s Control Plane Β· kubeadm v1.32
LIVE
cka-worker-v2 (VM 311)
.202 Β· K8s Worker Β· Cilium CNI
LIVE
web-climacs-01 (VM 600)
.60 Β· Docker Compose Β· Build Host
LIVE
minikube-coolsvc (VM 700)
.75 Β· Minikube test env
IDLE
macs-vm (VM 500)
.81 Β· General Ubuntu VM
IDLE

LXC Containers

pg-db-01 (LXC 800)
.70 Β· PostgreSQL 16 Β· copilot_db
LIVE
arr (LXC 120)
.80 Β· Media automation
LIVE
πŸ’Ύ

UGREEN NAS

.140 Β· DXP2800 Β· UGOS 1.13.1 (Debian 12)

Docker Services

Gitea Git Server
:3001 (web) Β· :2222 (SSH)
LIVE
Docker Registry v2
:5000 Β· Container image storage
LIVE
Registry UI
:8080 Β· Browse images in browser
LIVE

NFS Shares (5 active)

/volume1/git
Gitea data + repos
MOUNTED
/volume1/registry
Container image layers
MOUNTED
/volume1/backups
PG dumps, VM backups
MOUNTED
/volume1/nfs-appdata
App persistent data
MOUNTED
/volume1/dev-share
Human-accessible via SMB
MOUNTED

Storage

Total Capacity
3.6 TB (HDD + SSD)
620GB FREE

☸️ Kubernetes Cluster

πŸŽ›οΈ

Control Plane

cka-cp-v2 Β· .55
kubeadm v1.32.11
LIVE
containerd
trusts .140:5000 (HTTP)
OK
βš™οΈ

Worker Node

cka-worker-v2 Β· .202
Cilium CNI
LIVE
containerd
trusts .140:5000 (HTTP)
OK
NodePort
App access :32046
LIVE
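The "trusts .140:5000 (HTTP)" entries on both nodes correspond to a containerd registry host config that permits plain-HTTP pulls. A sketch, assuming containerd's certs.d layout (the registry address is abbreviated in this doc, so a placeholder is used):

```toml
# /etc/containerd/certs.d/<registry-host>:5000/hosts.toml
# Requires config_path = "/etc/containerd/certs.d" under
# [plugins."io.containerd.grpc.v1.cri".registry] in /etc/containerd/config.toml.
server = "http://<registry-host>:5000"

[host."http://<registry-host>:5000"]
  capabilities = ["pull", "resolve"]
```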
πŸ—ƒοΈ

PostgreSQL

pg-db-01 Β· .70
PostgreSQL 16
DB: copilot_db
LIVE
Backup to NAS
pg_dump cron
TODO
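The TODO above amounts to a nightly pg_dump piped to the NAS backups share. A sketch with hypothetical paths and a commented-out dump command (the mount point and DB user are assumptions, not taken from this doc):

```shell
#!/bin/sh
# Nightly dump of copilot_db to the NAS backups export (hypothetical mount path).
BACKUP_DIR="/mnt/nfs-backups"          # e.g. where /volume1/backups is mounted
STAMP=$(date +%Y%m%d)
OUTFILE="$BACKUP_DIR/copilot_db-$STAMP.sql.gz"
echo "$OUTFILE"

# On pg-db-01 (or from a host that can reach it):
#   pg_dump -h <pg-host> -U <db-user> copilot_db | gzip > "$OUTFILE"
# crontab entry for a 02:30 nightly run:
#   30 2 * * * /usr/local/bin/pg-backup.sh
```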

πŸ›‘οΈ Application Stack β€” BFF Guardrails

πŸ”

Backend-for-Frontend (FastAPI / Python)

web-ui/bff/main.py + guardrails.py Β· Uvicorn ASGI Β· runs in Docker / Kubernetes

The BFF sits between the React frontend and AWS API Gateway. The browser never touches AWS directly β€” the API key is injected server-side only. Every request passes through a layered guardrail stack before being forwarded.

⏱️
Rate Limiting
slowapi Β· RATE_LIMIT=10/minute
Per-IP rate limiter on all /api/<endpoint> POST routes. Returns HTTP 429 when exceeded. Limit is configurable via .env without code changes.
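Rate limiting here is slowapi's job; the underlying idea can be sketched with a stdlib-only fixed-window counter (illustrative only, not the actual slowapi implementation):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Per-key (e.g. per-IP) fixed-window limiter: N requests per window."""

    def __init__(self, limit=10, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (key, window_index) -> hit count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        self.counts[bucket] += 1
        # False means the caller should answer HTTP 429.
        return self.counts[bucket] <= self.limit
```

In the real BFF, slowapi attaches this behavior declaratively via a `@limiter.limit("10/minute")` decorator on each route; the sketch just shows what "10/minute per IP" means mechanically.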
πŸ”‘
Secret Pattern Detection
guardrails.py Β· 6 regex patterns
Scans every request body before forwarding to AWS:
β€’ AWS Access Keys (AKIA…)
β€’ aws_access_key_id / aws_secret_access_key
β€’ RSA / PEM private keys
β€’ Slack bot tokens (xoxb-)
β€’ GitHub PATs (ghp_…)
Returns HTTP 400 if any match found.
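A minimal sketch of such a scan. The patterns below are illustrative reconstructions of the categories listed above; the actual regexes in guardrails.py may differ:

```python
import re

# Illustrative patterns only — guardrails.py's production list may differ.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"aws_(access_key_id|secret_access_key)", re.I),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),     # PEM private key
    re.compile(r"xoxb-[0-9A-Za-z-]+"),                         # Slack bot token
    re.compile(r"ghp_[0-9A-Za-z]{36}"),                        # GitHub PAT
]

def contains_secret(body: str) -> bool:
    """True if any pattern matches; the BFF would then answer HTTP 400."""
    return any(p.search(body) for p in SECRET_PATTERNS)
```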
πŸ“
Payload Size Cap
MAX_REQUEST_BYTES=51200 (50 KB)
Body size checked before secret scan or AWS forwarding. Returns HTTP 413 for oversized payloads. Prevents prompt-stuffing and runaway Lambda costs.
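The check itself is simple but the ordering matters: it runs on the raw bytes before the (more expensive) secret scan. A sketch:

```python
MAX_REQUEST_BYTES = 51_200  # 50 KB, mirroring the env var above

def size_check(raw_body: bytes):
    """Return 413 for oversized bodies, else None (request may proceed).

    Measures bytes, not characters — multi-byte UTF-8 counts at wire size.
    """
    if len(raw_body) > MAX_REQUEST_BYTES:
        return 413
    return None
```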
🚧
Endpoint Allowlist
ALLOWED_ENDPOINTS = {triage, explain, runbook-snippet}
Only 3 paths are routable to AWS. Any other path returns HTTP 404 before touching the network. Prevents endpoint-probing attacks.
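Sketched as a set-membership check (the helper name is hypothetical; the set contents come from the card above):

```python
ALLOWED_ENDPOINTS = {"triage", "explain", "runbook-snippet"}

def route_status(endpoint: str) -> int:
    """404 for anything off the allowlist, before any upstream call is made."""
    return 200 if endpoint in ALLOWED_ENDPOINTS else 404
```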
πŸ”‡
No-Prompt Logging Policy
logger.info(metadata only)
Logs emit only conv_id[:8], endpoint, status_code, and latency_ms. Full prompts and responses are never written to stdout; they are persisted only in PostgreSQL history, encrypted at rest.
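A sketch of a metadata-only log line (the function name and exact field format are assumptions; the fields are the four listed above):

```python
import logging

logger = logging.getLogger("bff")

def log_request(conv_id: str, endpoint: str,
                status_code: int, latency_ms: float) -> str:
    """Emit only request metadata — never the prompt or response body."""
    line = (f"conv={conv_id[:8]} endpoint={endpoint} "
            f"status={status_code} latency_ms={latency_ms:.0f}")
    logger.info(line)
    return line
```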
πŸ”§
LLM Response Normalization
_cleanup_response() Β· per-endpoint
Handles real-world LLM inconsistencies from Lambda:
β€’ Strips markdown code fences (```json)
β€’ Re-parses broken JSON payloads
β€’ Converts "key": "value" dict responses to readable text
β€’ Filters JSON noise ({, } lines) from steps lists
β€’ Handles escaped \n and \" in runbook markdown
πŸ—„οΈ
PostgreSQL Conversation History
database.py Β· SQLite (dev) / PostgreSQL (prod)
Every request saved with: conv_id, endpoint, model_id, confidence, latency_ms, status_code. Supports project-based organization, archiving, soft-delete, and full-text search. Survives container restarts.
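A schema sketch using the dev-mode SQLite backend. Column names follow the fields listed above; the table name and archive flag are assumptions — the real database.py may differ:

```python
import sqlite3

# In-memory SQLite stands in for the dev database; prod uses PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE conversation_history (
        conv_id     TEXT,
        endpoint    TEXT,
        model_id    TEXT,
        confidence  REAL,
        latency_ms  REAL,
        status_code INTEGER,
        archived    INTEGER DEFAULT 0   -- soft-delete / archive flag
    )
""")
conn.execute(
    "INSERT INTO conversation_history "
    "(conv_id, endpoint, status_code, latency_ms) VALUES (?, ?, ?, ?)",
    ("abc123", "triage", 200, 412.0),
)
row = conn.execute(
    "SELECT endpoint, status_code FROM conversation_history WHERE conv_id = ?",
    ("abc123",),
).fetchone()
```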
⏰
Request Timeout + Error Budget
REQUEST_TIMEOUT_SECONDS=30
HTTPX async client enforces 30s hard timeout to AWS. Timeouts and network errors return structured HTTP 504 / 502 and are also persisted to conversation history for debugging.
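The error mapping can be sketched without httpx (the real client is an async httpx call with timeout=30; the helper name here is hypothetical): timeouts become 504, other transport failures 502.

```python
REQUEST_TIMEOUT_SECONDS = 30  # hard ceiling on the upstream AWS call

def upstream_error_status(exc: Exception) -> int:
    """Map transport failures to the structured codes the BFF returns."""
    if isinstance(exc, TimeoutError):
        return 504  # gateway timeout: AWS didn't answer within 30s
    if isinstance(exc, (ConnectionError, OSError)):
        return 502  # bad gateway: network-level failure reaching AWS
    raise exc       # anything else is a bug — let it surface
```

TimeoutError is checked before OSError because it is a subclass; both outcomes are also persisted to conversation history for debugging, per the card above.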
Request Flow Through Guardrails
Browser POST β†’ Rate Limit βœ“ β†’ Size ≀ 50KB βœ“ β†’ Secret Scan βœ“ β†’ Inject x-api-key β†’ AWS API GW β†’ Normalize JSON β†’ Save to PG β†’ Response βœ…

πŸ“‹ Remaining Tasks

πŸ”’

Security

Rotate NAS password
P0
Tighten file perms (777β†’770)
HIGH
NFS host-specific exports
HIGH
πŸ”§

Operational

SSH trust .60 β†’ K8s
READY
PG backup cron to NAS
READY
Pi-hole DNS
LOW
πŸš€

Pipeline

CI/CD (Gitea Actions)
PLANNED
NFS CSI for K8s PVCs
LOW
Proxmox backups β†’ NAS
LOW