# Infrastructure Dependency Graph

This document maps out the dependencies between all infrastructure components and services, providing a clear order for building out the personal infrastructure.

## Infrastructure Overview

### Machines (Hosts)

- **lapy**: Laptop (Ansible control node)
- **vipy**: Main VPS (207.154.226.192) - hosts most services
- **watchtower**: Monitoring VPS (206.189.63.167) - hosts Uptime Kuma and ntfy
- **spacey**: Headscale VPS (165.232.73.4) - hosts Headscale coordination server
- **nodito**: Proxmox server (192.168.1.139) - home infrastructure
- **memos-box**: Separate box for memos (192.168.1.149)
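
As a concrete starting point, the host list above can be captured in an Ansible inventory. This is a hypothetical sketch — the group names and `inventory.ini` layout are assumptions, not the repository's actual file:

```ini
# Hypothetical inventory.ini — group names are illustrative
[vps]
vipy ansible_host=207.154.226.192
watchtower ansible_host=206.189.63.167
spacey ansible_host=165.232.73.4

[home]
nodito ansible_host=192.168.1.139
memos-box ansible_host=192.168.1.149
```
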
---

## Dependency Layers

### Layer 0: Prerequisites (No Dependencies)

These must exist before anything else can be deployed.

#### On lapy (Laptop - Ansible Control Node)

- Python venv with Ansible
- SSH keys configured
- Domain name configured (`root_domain` in `infra_vars.yml`)

**Commands:**

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
ansible-galaxy collection install -r ansible/requirements.yml
```

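The `root_domain` variable referenced above lives in `infra_vars.yml`. A minimal hypothetical fragment (the domain is a placeholder, and any other keys in the real file are omitted):

```yaml
# Hypothetical infra_vars.yml fragment — example.com is a placeholder
root_domain: example.com
```
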
---

### Layer 1: Basic Machine Setup (Depends on: Layer 0)

Initial machine provisioning and security hardening.

#### All VPSs (vipy, watchtower, spacey)

**Playbooks (in order):**

1. `infra/01_user_and_access_setup_playbook.yml` - Create user, set up SSH
2. `infra/02_firewall_and_fail2ban_playbook.yml` - Firewall, fail2ban, auditd

**Dependencies:**

- SSH access as root
- SSH key pair

#### Nodito (Proxmox Server)

**Playbooks (in order):**

1. `infra/nodito/30_proxmox_bootstrap_playbook.yml` - SSH keys, user creation, security
2. `infra/nodito/31_proxmox_community_repos_playbook.yml` - Switch to community repos
3. `infra/nodito/32_zfs_pool_setup_playbook.yml` - ZFS storage pool (optional)
4. `infra/nodito/33_proxmox_debian_cloud_template.yml` - Cloud template (optional)

**Dependencies:**

- Root password access initially
- Disk IDs identified for ZFS (if using ZFS)

#### Memos-box

**Playbooks:**

1. `infra/01_user_and_access_setup_playbook.yml`
2. `infra/02_firewall_and_fail2ban_playbook.yml`
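
The Layer 1 playbooks can be run host by host. A hypothetical invocation sketch — the inventory path and the `--limit` target are assumptions about how the repository is driven:

```bash
# Preview the Layer 1 runs for one VPS; drop the `echo` to execute for real.
# inventory.ini and the host alias "vipy" are assumptions.
for playbook in \
    infra/01_user_and_access_setup_playbook.yml \
    infra/02_firewall_and_fail2ban_playbook.yml; do
  echo "ansible-playbook -i inventory.ini $playbook --limit vipy"
done
```
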

---

### Layer 2: General Infrastructure Tools (Depends on: Layer 1)

Common utilities needed across multiple services.

#### On All Machines (as needed per service requirements)

**Playbooks:**

- `infra/900_install_rsync.yml` - For backup operations
- `infra/910_docker_playbook.yml` - For Docker-based services
- `infra/920_join_headscale_mesh.yml` - Join machines to the VPN mesh (requires Layer 5 - Headscale)

**Dependencies:**

- Layer 1 complete (user and firewall setup)

**Notes:**

- rsync needed on: vipy, watchtower, lapy (for backups)
- Docker needed on: vipy, watchtower (for containerized services)

---

### Layer 3: Reverse Proxy (Depends on: Layer 2)

Caddy provides HTTPS termination and reverse proxying for all web services.

#### On vipy, watchtower, spacey, memos-box

**Playbook:**

- `services/caddy_playbook.yml`

**Dependencies:**

- Layer 1 complete (firewall configured to allow ports 80/443)
- No other services required

**Critical Note:**

- Caddy is deployed to vipy, watchtower, spacey, and memos-box
- Each service deployed configures its own Caddy reverse proxy automatically
- All subsequent web services depend on Caddy being installed first
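
The per-service Caddy configuration the playbooks generate is conceptually just a reverse-proxy block. A hypothetical Caddyfile fragment — the domain and upstream port are placeholders, not the playbooks' actual values:

```
# Hypothetical Caddyfile entry for one service
status.example.com {
    reverse_proxy localhost:3001
}
```
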

---

### Layer 4: Core Monitoring & Notifications (Depends on: Layer 3)

These services provide monitoring and alerting for all other infrastructure.

#### 4A: ntfy (Notification Service)

**Host:** watchtower

**Playbook:** `services/ntfy/deploy_ntfy_playbook.yml`

**Dependencies:**

- Caddy on watchtower (Layer 3)
- DNS record for the ntfy subdomain
- `NTFY_USER` and `NTFY_PASSWORD` environment variables

**Used By:**

- Uptime Kuma (for notifications)
- ntfy-emergency-app
- Any service needing push notifications
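
Publishing to ntfy is a plain HTTP POST, which is what makes it easy for other services to use. A hypothetical example — the server URL, topic name, and credentials are placeholders:

```bash
# Hypothetical ntfy publish — ntfy.example.com and the topic name are assumptions
NTFY_URL="https://ntfy.example.com"
NTFY_TOPIC="alerts"
# Uncomment to actually send (requires the server and credentials):
# curl -u "$NTFY_USER:$NTFY_PASSWORD" -d "backup finished" "$NTFY_URL/$NTFY_TOPIC"
echo "would publish to $NTFY_URL/$NTFY_TOPIC"
```
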

#### 4B: Uptime Kuma (Monitoring Platform)

**Host:** watchtower

**Playbook:** `services/uptime_kuma/deploy_uptime_kuma_playbook.yml`

**Dependencies:**

- Caddy on watchtower (Layer 3)
- Docker on watchtower (Layer 2)
- DNS record for the Uptime Kuma subdomain

**Used By:**

- All infrastructure monitoring (disk alerts, healthchecks, CPU temperature)
- Service availability monitoring

**Backup:** `services/uptime_kuma/setup_backup_uptime_kuma_to_lapy.yml`

- Requires rsync on watchtower and lapy

---

### Layer 5: VPN Infrastructure (Depends on: Layer 3)

Headscale provides secure mesh networking between all machines.

#### Headscale (VPN Coordination Server)

**Host:** spacey

**Playbook:** `services/headscale/deploy_headscale_playbook.yml`

**Dependencies:**

- Caddy on spacey (Layer 3)
- DNS record for the Headscale subdomain

**Enables:**

- Secure communication between all machines
- MagicDNS for hostname resolution
- Joining machines via `infra/920_join_headscale_mesh.yml`

**Backup:** `services/headscale/setup_backup_headscale_to_lapy.yml`

- Requires rsync on spacey and lapy
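
The enrollment flow behind the 920 playbook follows the standard Headscale pattern. A hypothetical sketch — the URL, user name, and key lifetime are assumptions, and the actual commands live in the playbook:

```bash
# Hypothetical mesh-join flow; the actual steps are automated by the 920 playbook.
HEADSCALE_URL="https://headscale.example.com"   # placeholder subdomain
# On spacey (coordination server):
#   headscale users create mesh
#   headscale preauthkeys create --user mesh --expiration 1h
# On the joining machine (client side):
#   tailscale up --login-server "$HEADSCALE_URL" --authkey <key-from-above>
echo "login server: $HEADSCALE_URL"
```
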

---

### Layer 6: Infrastructure Monitoring (Depends on: Layer 4)

Automated monitoring scripts that report to Uptime Kuma.

#### On All Machines

**Playbooks:**

- `infra/410_disk_usage_alerts.yml` - Disk usage monitoring
- `infra/420_system_healthcheck.yml` - System health pings

**Dependencies:**

- Uptime Kuma deployed (Layer 4B)
- `infra_secrets.yml` with Uptime Kuma credentials
- Python `uptime-kuma-api` package installed on lapy
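
Conceptually, each monitoring script boils down to measuring something and hitting an Uptime Kuma push URL. A minimal sketch of the disk-usage case, assuming a hypothetical push URL (the real playbooks template their own):

```bash
# Hypothetical disk-usage check reporting to an Uptime Kuma "push" monitor.
THRESHOLD=80
PUSH_URL="${PUSH_URL:-https://status.example.com/api/push/TOKEN}"  # placeholder
# Extract root filesystem usage as a bare percentage:
usage=$(df -P / | awk 'NR==2 { gsub(/%/, ""); print $5 }')
if [ "$usage" -lt "$THRESHOLD" ]; then status=up; else status=down; fi
# Uncomment to report (requires network access to the Uptime Kuma host):
# curl -fsS "${PUSH_URL}?status=${status}&msg=disk_${usage}pct" >/dev/null
echo "disk usage: ${usage}% (status=${status})"
```
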

#### On Nodito Only

**Playbook:**

- `infra/nodito/40_cpu_temp_alerts.yml` - CPU temperature monitoring

**Dependencies:**

- Uptime Kuma deployed (Layer 4B)
- `nodito_secrets.yml` with Uptime Kuma push URL

---

### Layer 7: Core Services (Depends on: Layers 3-4)

Essential services for personal infrastructure.

#### 7A: Vaultwarden (Password Manager)

**Host:** vipy

**Playbook:** `services/vaultwarden/deploy_vaultwarden_playbook.yml`

**Dependencies:**

- Caddy on vipy (Layer 3)
- Docker on vipy (Layer 2)
- Fail2ban on vipy (Layer 1)
- DNS record for the Vaultwarden subdomain

**Post-Deploy:**

- Create the first user account
- Run `services/vaultwarden/disable_vaultwarden_sign_ups_playbook.yml` to disable registrations

**Backup:** `services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml`

- Requires rsync on vipy and lapy

#### 7B: Forgejo (Git Server)

**Host:** vipy

**Playbook:** `services/forgejo/deploy_forgejo_playbook.yml`

**Dependencies:**

- Caddy on vipy (Layer 3)
- DNS record for the Forgejo subdomain

**Used By:**

- Personal blog (Layer 8)
- Any service pulling from git repos

#### 7C: LNBits (Lightning Wallet)

**Host:** vipy

**Playbook:** `services/lnbits/deploy_lnbits_playbook.yml`

**Dependencies:**

- Caddy on vipy (Layer 3)
- DNS record for the LNBits subdomain
- Python 3.12 via pyenv
- Poetry for dependency management

**Backup:** `services/lnbits/setup_backup_lnbits_to_lapy.yml`

- Requires rsync on vipy and lapy
- Backups are GPG-encrypted (requires GPG keys configured)

---

### Layer 8: Secondary Services (Depends on: Layer 7)

Services that depend on core services being available.

#### 8A: Personal Blog (Static Site)

**Host:** vipy

**Playbook:** `services/personal-blog/deploy_personal_blog_playbook.yml`

**Dependencies:**

- Caddy on vipy (Layer 3)
- Forgejo on vipy (Layer 7B) - blog content hosted in a Forgejo repo
- rsync on vipy (Layer 2)
- DNS record for the blog subdomain
- `PERSONAL_BLOG_DEPLOY_TOKEN` environment variable (Forgejo deploy token)

**Notes:**

- Auto-updates hourly via cron from the Forgejo repo
- Serves static files directly through Caddy
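
The hourly update is an ordinary cron job. A hypothetical crontab entry — the script name and log path are assumptions about what the playbook installs:

```
# Hypothetical cron entry: pull and redeploy the blog at the top of every hour
0 * * * * /usr/local/bin/update-personal-blog.sh >> /var/log/blog-update.log 2>&1
```
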

#### 8B: ntfy-emergency-app

**Host:** vipy

**Playbook:** `services/ntfy-emergency-app/deploy_ntfy_emergency_app_playbook.yml`

**Dependencies:**

- Caddy on vipy (Layer 3)
- Docker on vipy (Layer 2)
- ntfy on watchtower (Layer 4A)
- DNS record for the emergency app subdomain

**Notes:**

- Configured with the ntfy server URL and credentials
- Sends emergency notifications to ntfy topics

#### 8C: Memos (Note-taking)

**Host:** memos-box

**Playbook:** `services/memos/deploy_memos_playbook.yml`

**Dependencies:**

- Caddy on memos-box (Layer 3)
- DNS record for the memos subdomain

---

## Deployment Order Summary

### Phase 1: Foundation

1. Set up lapy as the Ansible control node
2. Configure domain and DNS
3. Deploy Layer 1 on all machines (users, firewall)
4. Deploy Layer 2 tools (rsync, Docker as needed)

### Phase 2: Web Infrastructure

5. Deploy Caddy (Layer 3) on vipy, watchtower, spacey, and memos-box

### Phase 3: Monitoring Foundation

6. Deploy ntfy on watchtower (Layer 4A)
7. Deploy Uptime Kuma on watchtower (Layer 4B)
8. Configure Uptime Kuma with ntfy notifications

### Phase 4: Mesh Network (Optional but Recommended)

9. Deploy Headscale on spacey (Layer 5)
10. Join machines to the mesh using the 920 playbook

### Phase 5: Infrastructure Monitoring

11. Deploy disk usage alerts on all machines (Layer 6)
12. Deploy system healthchecks on all machines (Layer 6)
13. Deploy CPU temperature alerts on nodito (Layer 6)

### Phase 6: Core Services

14. Deploy Vaultwarden on vipy (Layer 7A)
15. Deploy Forgejo on vipy (Layer 7B)
16. Deploy LNBits on vipy (Layer 7C)

### Phase 7: Secondary Services

17. Deploy the personal blog on vipy (Layer 8A)
18. Deploy ntfy-emergency-app on vipy (Layer 8B)
19. Deploy Memos on memos-box (Layer 8C)

### Phase 8: Backups

20. Configure all backup playbooks (to lapy)

---

## Critical Dependencies Map

```
Legend: → (depends on)

MONITORING CHAIN:
ntfy (Layer 4A) → Caddy (Layer 3)
Uptime Kuma (Layer 4B) → Caddy (Layer 3) + Docker (Layer 2) + ntfy (Layer 4A)
Disk Alerts (Layer 6) → Uptime Kuma (Layer 4B)
System Healthcheck (Layer 6) → Uptime Kuma (Layer 4B)
CPU Temp Alerts (Layer 6) → Uptime Kuma (Layer 4B)

WEB SERVICES CHAIN:
Caddy (Layer 3) → Firewall configured (Layer 1)
Vaultwarden (Layer 7A) → Caddy (Layer 3) + Docker (Layer 2)
Forgejo (Layer 7B) → Caddy (Layer 3)
LNBits (Layer 7C) → Caddy (Layer 3)
Personal Blog (Layer 8A) → Caddy (Layer 3) + Forgejo (Layer 7B)
ntfy-emergency-app (Layer 8B) → Caddy (Layer 3) + Docker (Layer 2) + ntfy (Layer 4A)
Memos (Layer 8C) → Caddy (Layer 3)

VPN CHAIN:
Headscale (Layer 5) → Caddy (Layer 3)
All machines can join mesh → Headscale (Layer 5)

BACKUP CHAIN:
All backups → rsync (Layer 2) on source + lapy
LNBits backups → GPG keys configured on lapy
```

---

## Host-Service Matrix

| Service | vipy | watchtower | spacey | nodito | memos-box |
|---------|------|------------|--------|--------|-----------|
| Caddy | ✓ | ✓ | ✓ | - | ✓ |
| Docker | ✓ | ✓ | - | - | - |
| Uptime Kuma | - | ✓ | - | - | - |
| ntfy | - | ✓ | - | - | - |
| Headscale | - | - | ✓ | - | - |
| Vaultwarden | ✓ | - | - | - | - |
| Forgejo | ✓ | - | - | - | - |
| LNBits | ✓ | - | - | - | - |
| Personal Blog | ✓ | - | - | - | - |
| ntfy-emergency-app | ✓ | - | - | - | - |
| Memos | - | - | - | - | ✓ |
| Disk Alerts | ✓ | ✓ | ✓ | ✓ | ✓ |
| System Healthcheck | ✓ | ✓ | ✓ | ✓ | ✓ |
| CPU Temp Alerts | - | - | - | ✓ | - |

---

## Pre-Deployment Checklist

### Before Starting

- [ ] SSH keys generated and added to VPS providers
- [ ] Domain name acquired and accessible
- [ ] Python venv created on lapy with Ansible installed
- [ ] `inventory.ini` created and populated with all host IPs
- [ ] `infra_vars.yml` configured with the root domain
- [ ] All VPSs initially accessible via SSH as root

### DNS Records to Configure

Create A records pointing to the appropriate IPs:

- Uptime Kuma subdomain → watchtower IP
- ntfy subdomain → watchtower IP
- Headscale subdomain → spacey IP
- Vaultwarden subdomain → vipy IP
- Forgejo subdomain → vipy IP
- LNBits subdomain → vipy IP
- Personal blog subdomain → vipy IP
- ntfy-emergency-app subdomain → vipy IP
- Memos subdomain → memos-box IP
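
In zone-file form, the records above look like the following hypothetical fragment — the subdomain labels are illustrative, not prescribed by the playbooks, and note that memos-box resolves to a private LAN address:

```
; Hypothetical zone fragment for example.com
status    IN A  206.189.63.167   ; Uptime Kuma -> watchtower
ntfy      IN A  206.189.63.167   ; ntfy -> watchtower
headscale IN A  165.232.73.4     ; Headscale -> spacey
vault     IN A  207.154.226.192  ; Vaultwarden -> vipy
git       IN A  207.154.226.192  ; Forgejo -> vipy
lnbits    IN A  207.154.226.192  ; LNBits -> vipy
blog      IN A  207.154.226.192  ; Personal blog -> vipy
emergency IN A  207.154.226.192  ; ntfy-emergency-app -> vipy
memos     IN A  192.168.1.149    ; Memos -> memos-box (LAN address)
```
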

### Secrets to Configure

- [ ] `infra_secrets.yml` created with Uptime Kuma credentials
- [ ] `nodito_secrets.yml` created with the Uptime Kuma push URL
- [ ] `NTFY_USER` and `NTFY_PASSWORD` environment variables for the ntfy deployment
- [ ] `PERSONAL_BLOG_DEPLOY_TOKEN` environment variable (from Forgejo)
- [ ] GPG keys configured on lapy (for encrypted backups)
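
The environment-variable secrets can be exported in the shell before running the relevant playbooks. A hypothetical sketch with placeholder values:

```bash
# Placeholder values — substitute real secrets (or load them from a secrets manager)
export NTFY_USER="admin"
export NTFY_PASSWORD="change-me"
export PERSONAL_BLOG_DEPLOY_TOKEN="forgejo-deploy-token-here"
```
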

---

## Notes

### Why This Order Matters

1. **Caddy First**: All web services need the reverse proxy, so Caddy must be deployed before any service that requires HTTPS access.

2. **Monitoring Early**: Deploying ntfy and Uptime Kuma early means all subsequent services can be monitored from the start, so infrastructure alerts catch issues immediately.

3. **Forgejo Before Blog**: The personal blog pulls content from Forgejo, so the git server must exist first.

4. **Headscale Separation**: Headscale runs on its own VPS (spacey) because vipy needs to be part of the mesh network and can't run the coordination server itself.

5. **Backup Setup Last**: Backups should be configured after services are stable and have initial data to back up.

### Machine Isolation Strategy

- **watchtower**: Runs the monitoring services (Uptime Kuma, ntfy) so they stay up when vipy fails
- **spacey**: Runs the Headscale coordination server, isolated from the mesh clients
- **vipy**: Main services server - most applications run here
- **nodito**: Local Proxmox server for home infrastructure
- **memos-box**: Dedicated server for the memos service

This isolation ensures monitoring remains functional even when primary services are down.