Personal Infrastructure Setup Guide
This guide walks you through setting up your complete personal infrastructure, layer by layer. Each layer must be completed before moving to the next one.
Automated Setup: Each layer has a bash script that handles the setup process. The scripts will:
- Check prerequisites
- Prompt for required variables
- Set up configuration files
- Execute playbooks
- Verify completion
Prerequisites
Before starting:
- You have a domain name
- You have VPS accounts ready
- You have nodito ready, with Proxmox installed and your SSH key in place
- You have SSH access to all machines
- You're running this from your laptop (lapy)
Layer 0: Foundation Setup
Goal: Set up your laptop (lapy) as the Ansible control node and configure basic settings.
Script: ./scripts/setup_layer_0.sh
What This Layer Does:
- Creates Python virtual environment
- Installs Ansible and required Python packages
- Installs Ansible Galaxy collections
- Guides you through creating inventory.ini with your machine IPs
- Guides you through creating infra_vars.yml with your domain
- Creates services_config.yml with centralized subdomain settings
- Creates infra_secrets.yml template for Uptime Kuma credentials
- Validates SSH keys exist
- Verifies everything is ready for Layer 1
Required Information:
- Your domain name (e.g., contrapeso.xyz)
- SSH key path (default: ~/.ssh/counterganzua)
- IP addresses for your infrastructure:
  - vipy (main VPS)
  - watchtower (monitoring VPS)
  - spacey (headscale VPS)
  - nodito (Proxmox server) - optional
- Note: VMs (like memos-box) will be created later on Proxmox and added to the nodito-vms group
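For reference, a finished inventory.ini typically looks something like this (group names and addresses below are illustrative; the script prompts you for the real values):
[vps]
vipy ansible_host=203.0.113.10
watchtower ansible_host=203.0.113.11
spacey ansible_host=203.0.113.12
[proxmox]
nodito ansible_host=192.168.1.50
[nodito-vms]
# VMs created on Proxmox (e.g., memos-box) are added here in later layers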
Manual Steps:
After running the script, you'll need to:
- Ensure your SSH key is added to all VPS root users (usually done by VPS provider)
- Ensure DNS is configured for your domain (nameservers pointing to your DNS provider)
Centralized Configuration:
The script creates ansible/services_config.yml which contains all service subdomains in one place:
- Easy to review all subdomains at a glance
- No need to edit multiple vars files
- Consistent Caddy settings across all services
- Edit this file to customize your subdomains before deploying services
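As a sketch, the subdomains: section of services_config.yml might look like this (the values are examples; pick whatever subdomains you prefer):
subdomains:
  ntfy: ntfy
  uptime_kuma: uptime
  headscale: headscale
  vaultwarden: vault
  forgejo: git
  lnbits: lnbits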
Verification:
The script will verify:
- ✓ Python venv exists and is activated
- ✓ Ansible installed
- ✓ Required Python packages installed
- ✓ Ansible Galaxy collections installed
- ✓ inventory.ini exists and is formatted correctly
- ✓ infra_vars.yml exists with domain configured
- ✓ services_config.yml created with subdomain settings
- ✓ infra_secrets.yml template created
- ✓ SSH key file exists
Run the Script:
cd /home/counterweight/personal_infra
./scripts/setup_layer_0.sh
Layer 1A: VPS Basic Setup
Goal: Configure users, SSH access, firewall, and fail2ban on VPS machines.
Script: ./scripts/setup_layer_1a_vps.sh
Can be run independently - doesn't require Nodito setup.
What This Layer Does:
For VPSs (vipy, watchtower, spacey):
- Creates the counterweight user with sudo access
- Configures SSH key authentication
- Disables root login (by design for security)
- Sets up UFW firewall with SSH access
- Installs and configures fail2ban
- Installs and configures auditd for security logging
Prerequisites:
- ✅ Layer 0 complete
- ✅ SSH key added to all VPS root users
- ✅ Root access to VPSs
Verification:
The script will verify:
- ✓ Can SSH to all VPSs as root
- ✓ VPS playbooks complete successfully
- ✓ Can SSH to all VPSs as the counterweight user
- ✓ Firewall is active and configured
- ✓ fail2ban is running
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_1a_vps.sh
Note: After this layer, you will no longer be able to SSH as root to VPSs (by design for security).
Layer 1B: Nodito (Proxmox) Setup
Goal: Configure the Nodito Proxmox server.
Script: ./scripts/setup_layer_1b_nodito.sh
Can be run independently - doesn't require VPS setup.
What This Layer Does:
For Nodito (Proxmox server):
- Bootstraps SSH key access for root
- Creates the counterweight user
- Updates and secures the system
- Disables root login and password authentication
- Switches to Proxmox community repositories
- Optionally sets up ZFS storage pool (if disks configured)
- Optionally creates Debian cloud template
Prerequisites:
- ✅ Layer 0 complete
- ✅ Root password for nodito
- ✅ Nodito configured in inventory.ini
Optional: ZFS Setup
For ZFS storage pool (optional):
- SSH into nodito: ssh root@<nodito-ip>
- List disk IDs: ls -la /dev/disk/by-id/ | grep -E "(ata-|scsi-|nvme-)"
- Note the disk IDs you want to use
- The script will help you create ansible/infra/nodito/nodito_vars.yml with the disk configuration (see the sketch below)
⚠️ Warning: ZFS setup will DESTROY ALL DATA on specified disks!
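As a sketch, the disk section of nodito_vars.yml might look like the following (the variable names here are illustrative; use the exact keys the Layer 1B script asks for):
# Illustrative only - confirm key names with the Layer 1B script
zfs_pool_name: tank
zfs_pool_disks:
  - /dev/disk/by-id/ata-EXAMPLE_SERIAL_1
  - /dev/disk/by-id/ata-EXAMPLE_SERIAL_2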
Verification:
The script will verify:
- ✓ Nodito bootstrap successful
- ✓ Community repos configured
- ✓ Can SSH to nodito as the counterweight user
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_1b_nodito.sh
Note: After this layer, you will no longer be able to SSH as root to nodito (by design for security).
Layer 2: General Infrastructure Tools
Goal: Install common utilities needed by various services.
Script: ./scripts/setup_layer_2.sh
What This Layer Does:
Installs essential tools on machines that need them:
rsync
- Purpose: Required for backup operations
- Deployed to: vipy, watchtower, lapy (and optionally other hosts)
- Playbook: infra/900_install_rsync.yml
Docker + Docker Compose
- Purpose: Required for containerized services
- Deployed to: vipy, watchtower (and optionally other hosts)
- Playbook: infra/910_docker_playbook.yml
Prerequisites:
- ✅ Layer 0 complete
- ✅ Layer 1A complete (for VPSs) OR Layer 1B complete (for nodito)
- ✅ SSH access as counterweight user
Services That Need These Tools:
- rsync: All backup operations (Uptime Kuma, Vaultwarden, LNBits, etc.)
- docker: Uptime Kuma, Vaultwarden, ntfy-emergency-app
Verification:
The script will verify:
- ✓ rsync installed on specified hosts
- ✓ Docker and Docker Compose installed on specified hosts
- ✓ counterweight user added to docker group
- ✓ Docker service running
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_2.sh
Note: This script is interactive and will let you choose which hosts get which tools.
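A quick manual spot-check after the script finishes (a sketch; substitute your host's IP, and it assumes the Docker Compose v2 plugin):
# Verify Docker, Compose, and group membership in one shot
ssh counterweight@<vipy-ip> "docker --version && docker compose version && id -nG"
# id -nG should include the docker group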
Layer 3: Reverse Proxy (Caddy)
Goal: Deploy Caddy reverse proxy for HTTPS termination and routing.
Script: ./scripts/setup_layer_3_caddy.sh
What This Layer Does:
Installs and configures Caddy web server on VPS machines:
- Installs Caddy from official repositories
- Configures Caddy to listen on ports 80/443
- Opens firewall ports for HTTP/HTTPS
- Creates the /etc/caddy/sites-enabled/ directory structure
- Sets up automatic HTTPS with Let's Encrypt
Deployed to: vipy, watchtower, spacey
Why Caddy is Critical:
Caddy provides:
- Automatic HTTPS - Let's Encrypt certificates with auto-renewal
- Reverse proxy - Routes traffic to backend services
- Simple configuration - Each service adds its own config file
- HTTP/2 support - Modern protocol support
Prerequisites:
- ✅ Layer 0 complete
- ✅ Layer 1A complete (VPS setup)
- ✅ SSH access as counterweight user
- ✅ Ports 80/443 available on VPSs
Services That Need Caddy:
All web services depend on Caddy:
- Uptime Kuma (watchtower)
- ntfy (watchtower)
- Headscale (spacey)
- Vaultwarden (vipy)
- Forgejo (vipy)
- LNBits (vipy)
- Personal Blog (vipy)
- ntfy-emergency-app (vipy)
Verification:
The script will verify:
- ✓ Caddy installed on all target hosts
- ✓ Caddy service running
- ✓ Ports 80/443 open in firewall
- ✓ Sites-enabled directory created
- ✓ Can reach Caddy default page
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_3_caddy.sh
Note: Caddy starts with an empty configuration. Services will add their own config files in later layers.
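To make the pattern concrete, here is a minimal sketch of the kind of per-service file a later layer drops into /etc/caddy/sites-enabled/ (hostname and port are illustrative; the playbooks generate the real configs):
# /etc/caddy/sites-enabled/uptime-kuma.caddy (illustrative)
uptime.contrapeso.xyz {
    # Caddy obtains and renews the Let's Encrypt certificate automatically
    reverse_proxy localhost:3001
}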
Layer 4: Core Monitoring & Notifications
Goal: Deploy ntfy (notifications) and Uptime Kuma (monitoring platform).
Script: ./scripts/setup_layer_4_monitoring.sh
What This Layer Does:
Deploys core monitoring infrastructure on watchtower:
4A: ntfy (Notification Service)
- Installs ntfy from official repositories
- Configures ntfy with authentication (deny-all by default)
- Creates admin user for sending notifications
- Sets up Caddy reverse proxy
- Deployed to: watchtower
4B: Uptime Kuma (Monitoring Platform)
- Deploys Uptime Kuma via Docker
- Configures Caddy reverse proxy
- Sets up data persistence
- Optionally sets up backup to lapy
- Deployed to: watchtower
Prerequisites (Complete BEFORE Running):
1. Previous layers complete:
- ✅ Layer 0, 1A, 2, 3 complete (watchtower must be fully set up)
- ✅ Docker installed on watchtower (from Layer 2)
- ✅ Caddy running on watchtower (from Layer 3)
2. Configure subdomains (in centralized config):
- ✅ Edit ansible/services_config.yml and customize subdomains under the subdomains: section:
  - Set ntfy: to your preferred subdomain (e.g., ntfy or notify)
  - Set uptime_kuma: to your preferred subdomain (e.g., uptime or kuma)
3. Create DNS records that match your configured subdomains:
- ✅ Create A record: <ntfy_subdomain>.<yourdomain> → watchtower IP
- ✅ Create A record: <uptime_kuma_subdomain>.<yourdomain> → watchtower IP
- ✅ Wait for DNS propagation (can take minutes to hours)
- ✅ Verify with: dig <subdomain>.<yourdomain> should return the watchtower IP (see the example after this list)
4. Prepare ntfy admin credentials:
- ✅ Decide on a username (default: admin)
- ✅ Decide on a secure password (the script will prompt you)
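For example, the DNS check might look like this (subdomain and IP are illustrative):
dig +short <ntfy_subdomain>.<yourdomain>
# Expected output: the watchtower IP, e.g. 203.0.113.11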
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_4_monitoring.sh
The script will prompt you for ntfy admin credentials during deployment.
Post-Deployment Steps (Complete AFTER Running):
The script will guide you through most of these, but here's what happens:
Step 1: Set Up Uptime Kuma Admin Account (Manual)
- Open a browser and visit: https://<uptime_kuma_subdomain>.<yourdomain>
- On first visit, you'll see the setup page
- Create admin username and password
- Save these credentials securely
Step 2: Update infra_secrets.yml (Manual)
- Edit ansible/infra_secrets.yml
- Add your Uptime Kuma credentials:
  uptime_kuma_username: "your-admin-username"
  uptime_kuma_password: "your-admin-password"
- Save the file
- This is required for automated ntfy setup and Layer 6
Step 3: Configure ntfy Notification (Automated)
The script will offer to do this automatically! If you completed Steps 1 & 2, the script will:
- Connect to Uptime Kuma via API
- Create ntfy notification configuration
- Test the connection
- No manual UI configuration needed!
Alternatively (Manual):
- In Uptime Kuma web UI, go to Settings → Notifications
- Click Setup Notification, choose ntfy
- Configure with your ntfy subdomain and credentials
Step 4: Final Verification (Automated)
The script will automatically verify:
- ✓ Uptime Kuma credentials in infra_secrets.yml
- ✓ Can connect to Uptime Kuma API
- ✓ ntfy notification is configured
- ✓ All post-deployment steps complete
If anything is missing, the script will tell you exactly what to do!
Step 5: Subscribe to Notifications on Your Phone (Optional - Manual)
- Install ntfy app: https://github.com/binwiederhier/ntfy-android
- Add a subscription:
  - Server: https://<ntfy_subdomain>.<yourdomain>
  - Topic: alerts (same as configured in Uptime Kuma)
  - Username: your ntfy admin username
  - Password: your ntfy admin password
- You'll now receive push notifications for all alerts!
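You can also test the subscription end to end with a plain HTTP publish (a sketch; it assumes the alerts topic and your admin credentials):
# Publish a test message to the alerts topic
curl -u <ntfy_username>:<ntfy_password> \
  -d "test message from watchtower" \
  https://<ntfy_subdomain>.<yourdomain>/alerts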
Pro tip: Run the script again after completing Steps 1 & 2, and it will automatically configure ntfy and verify everything!
Verification:
The script will automatically verify:
- ✓ DNS records are configured correctly (using dig)
- ✓ ntfy service running
- ✓ Uptime Kuma container running
- ✓ Caddy configs created for both services
After post-deployment steps, you can test:
- Visit https://<ntfy_subdomain>.<yourdomain> (should load the ntfy web UI)
- Visit https://<uptime_kuma_subdomain>.<yourdomain> (should load Uptime Kuma)
- Send a test notification in Uptime Kuma
Note: DNS validation requires the dig command. If it's not available, validation is skipped (you can continue, but SSL certificate issuance may fail until DNS resolves).
Why This Layer is Critical:
- All infrastructure monitoring (Layer 6) depends on Uptime Kuma
- All alerts go through ntfy
- Service availability monitoring needs Uptime Kuma
- Without this layer, you won't know when things break!
Layer 5: VPN Infrastructure (Headscale)
Goal: Deploy Headscale for secure mesh networking (like Tailscale, but self-hosted).
Script: ./scripts/setup_layer_5_headscale.sh
This layer is OPTIONAL - Skip to Layer 6 if you don't need VPN mesh networking.
What This Layer Does:
Deploys Headscale coordination server and optionally joins machines to the mesh:
5A: Deploy Headscale Server
- Installs Headscale on spacey
- Configures with deny-all ACL policy (you customize later)
- Creates namespace/user for your network
- Sets up Caddy reverse proxy
- Configures embedded DERP server for NAT traversal
- Deployed to: spacey
5B: Join Machines to Mesh (Optional)
- Installs Tailscale client on target machines
- Generates ephemeral pre-auth keys
- Automatically joins machines to your mesh
- Enables Magic DNS
- Can join: vipy, watchtower, nodito, lapy, etc.
Prerequisites (Complete BEFORE Running):
1. Previous layers complete:
- ✅ Layer 0, 1A, 3 complete (spacey must be set up)
- ✅ Caddy running on spacey (from Layer 3)
2. Configure subdomain (in centralized config):
- ✅ Edit ansible/services_config.yml and customize headscale: under the subdomains: section (e.g., headscale or vpn)
3. Create DNS record that matches your configured subdomain:
- ✅ Create A record: <headscale_subdomain>.<yourdomain> → spacey IP
- ✅ Wait for DNS propagation
- ✅ Verify with: dig <subdomain>.<yourdomain> should return the spacey IP
4. Decide on namespace name:
- ✅ Choose a namespace for your network (default: counter-net)
- ✅ This is set in headscale_vars.yml as headscale_namespace
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_5_headscale.sh
The script will:
- Validate DNS configuration
- Deploy Headscale server
- Offer to join machines to the mesh
Post-Deployment Steps:
Configure ACL Policies (Required for machines to communicate)
- SSH into spacey: ssh counterweight@<spacey-ip>
- Edit the ACL file: sudo nano /etc/headscale/acl.json
- Configure rules (example - allow all):
  {
    "ACLs": [
      {"action": "accept", "src": ["*"], "dst": ["*:*"]}
    ]
  }
- Restart Headscale: sudo systemctl restart headscale
Default is deny-all for security - you must configure ACLs for machines to talk!
Join Additional Machines Manually
For machines not in inventory (mobile, desktop):
- Install Tailscale client on device
- Generate a pre-auth key on spacey:
  ssh counterweight@<spacey-ip>
  sudo headscale preauthkeys create --user <namespace> --reusable
- Connect using your Headscale server:
  tailscale up --login-server https://<headscale_subdomain>.<yourdomain> --authkey <key>
Automatic Uptime Kuma Monitor:
The playbook will automatically create a monitor in Uptime Kuma:
- ✅ Headscale - monitors https://<subdomain>/health
- Added to the "services" monitor group
- Uses ntfy notification (if configured)
- Checks every 60 seconds
Prerequisites: Uptime Kuma credentials must be in infra_secrets.yml (from Layer 4)
Verification:
The script will automatically verify:
- ✓ DNS records configured correctly
- ✓ Headscale installed and running
- ✓ Namespace created
- ✓ Caddy config created
- ✓ Machines joined (if selected)
- ✓ Monitor created in Uptime Kuma "services" group
List connected devices:
ssh counterweight@<spacey-ip>
sudo headscale nodes list
Why Use Headscale:
- Secure communication between all your machines
- Magic DNS - access machines by hostname
- NAT traversal - works even behind firewalls
- Self-hosted - full control of your VPN
- Mobile support - use official Tailscale apps
Backup:
Optional backup to lapy:
ansible-playbook -i inventory.ini services/headscale/setup_backup_headscale_to_lapy.yml
Layer 6: Infrastructure Monitoring
Goal: Deploy automated monitoring for disk usage, system health, and CPU temperature.
Script: ./scripts/setup_layer_6_infra_monitoring.sh
What This Layer Does:
Deploys monitoring scripts that report to Uptime Kuma:
6A: Disk Usage Monitoring
- Monitors disk usage on specified mount points
- Sends alerts when usage exceeds threshold (default: 80%)
- Creates Uptime Kuma push monitors automatically
- Organizes monitors in host-specific groups
- Deploys to: All hosts (selectable)
6B: System Healthcheck
- Sends regular heartbeat pings to Uptime Kuma
- Alerts if system stops responding
- "No news is good news" monitoring
- Deploys to: All hosts (selectable)
6C: CPU Temperature Monitoring (Nodito only)
- Monitors CPU temperature on Proxmox server
- Alerts when temperature exceeds threshold (default: 80°C)
- Deploys to: nodito (if configured)
Prerequisites (Complete BEFORE Running):
1. Previous layers complete:
- ✅ Layer 0, 1A/1B, 4 complete
- ✅ Uptime Kuma deployed and configured (Layer 4)
- ✅ CRITICAL: infra_secrets.yml has Uptime Kuma credentials
2. Uptime Kuma API credentials ready:
- ✅ Must have completed Layer 4 post-deployment steps
- ✅ ansible/infra_secrets.yml must contain:
  uptime_kuma_username: "your-username"
  uptime_kuma_password: "your-password"
3. Python dependencies installed:
- ✅ uptime-kuma-api must be in requirements.txt
- ✅ Should already be installed from Layer 0
- ✅ Verify: pip list | grep uptime-kuma-api
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_6_infra_monitoring.sh
The script will:
- Verify Uptime Kuma credentials
- Offer to deploy disk usage monitoring
- Offer to deploy system healthchecks
- Offer to deploy CPU temp monitoring (nodito only)
- Test monitor creation and alerts
What Gets Deployed:
For each monitored host:
- Push monitor in Uptime Kuma (upside-down mode)
- Monitor group named {hostname} - infra
- Systemd service for the monitoring script
- Systemd timer for periodic execution
- Log file for monitoring history
Default settings (customizable):
- Disk usage threshold: 80%
- Disk check interval: 15 minutes
- Healthcheck interval: 60 seconds
- CPU temp threshold: 80°C
- Monitored mount point: / (root)
Customization Options:
Change thresholds and intervals:
# Disk monitoring with custom settings
ansible-playbook -i inventory.ini infra/410_disk_usage_alerts.yml \
-e "disk_usage_threshold_percent=85" \
-e "disk_check_interval_minutes=10" \
-e "monitored_mount_point=/home"
# Healthcheck with custom interval
ansible-playbook -i inventory.ini infra/420_system_healthcheck.yml \
-e "healthcheck_interval_seconds=30"
# CPU temp with custom threshold
ansible-playbook -i inventory.ini infra/nodito/40_cpu_temp_alerts.yml \
-e "temp_threshold_celsius=75"
Verification:
The script will automatically verify:
- ✓ Uptime Kuma API accessible
- ✓ Monitors created in Uptime Kuma
- ✓ Monitor groups created
- ✓ Systemd services running
- ✓ Can send test alerts
Check Uptime Kuma web UI:
- Monitors should appear organized by host
- Should receive test pings
- Alerts will show when thresholds exceeded
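You can also exercise a push monitor by hand (a sketch; copy the real push token from the monitor's page in the Uptime Kuma UI):
# Send a manual heartbeat to a push monitor
curl "https://<uptime_kuma_subdomain>.<yourdomain>/api/push/<push-token>?status=up&msg=OK"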
Post-Deployment:
Monitor your infrastructure:
- Open Uptime Kuma web UI
- See all monitors organized by host groups
- Configure notification rules per monitor
- Set up status pages (optional)
Test alerts:
# Trigger a disk usage alert (temporarily fill the disk past the threshold)
# Trigger a healthcheck alert (stop the healthcheck service on a host)
# Check ntfy for the resulting notifications
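One concrete way to run the disk test (a sketch; adjust the size so usage crosses your threshold, and expect the alert on the next scheduled check):
# Temporarily fill the disk past the threshold
fallocate -l 10G /tmp/disk_alert_test
# ...wait for the next disk check (default: every 15 minutes), then clean up
rm /tmp/disk_alert_test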
Why This Layer is Important:
- Proactive monitoring - Know about issues before users do
- Disk space alerts - Prevent services from failing
- System health - Detect crashed/frozen machines
- Temperature monitoring - Prevent hardware damage
- Organized - All monitors grouped by host
Layer 7: Core Services
Goal: Deploy core applications: Vaultwarden, Forgejo, and LNBits.
Script: ./scripts/setup_layer_7_services.sh
What This Layer Does:
Deploys main services on vipy:
7A: Vaultwarden (Password Manager)
- Deploys via Docker
- Configures Caddy reverse proxy
- Sets up fail2ban protection
- Enables sign-ups initially (disable after creating first user)
- Deployed to: vipy
7B: Forgejo (Git Server)
- Installs Forgejo binary
- Creates git user and directories
- Configures Caddy reverse proxy
- Enables SSH cloning
- Deployed to: vipy
7C: LNBits (Lightning Wallet)
- Installs system dependencies and uv (Python 3.12 tooling)
- Clones LNBits version v1.3.1
- Syncs dependencies with uv targeting Python 3.12
- Configures with FakeWallet backend (for testing)
- Creates systemd service
- Configures Caddy reverse proxy
- Deployed to: vipy
Prerequisites (Complete BEFORE Running):
1. Previous layers complete:
- ✅ Layer 0, 1A, 2, 3 complete
- ✅ Docker installed on vipy (Layer 2)
- ✅ Caddy running on vipy (Layer 3)
2. Configure subdomains (in centralized config):
- ✅ Edit ansible/services_config.yml and customize subdomains under the subdomains: section:
  - Set vaultwarden: to your preferred subdomain (e.g., vault or passwords)
  - Set forgejo: to your preferred subdomain (e.g., git or code)
  - Set lnbits: to your preferred subdomain (e.g., lnbits or wallet)
3. Create DNS records matching your subdomains:
- ✅ Create A record: <vaultwarden_subdomain>.<yourdomain> → vipy IP
- ✅ Create A record: <forgejo_subdomain>.<yourdomain> → vipy IP
- ✅ Create A record: <lnbits_subdomain>.<yourdomain> → vipy IP
- ✅ Wait for DNS propagation
Run the Script:
source venv/bin/activate
cd /home/counterweight/personal_infra
./scripts/setup_layer_7_services.sh
The script will:
- Validate DNS configuration
- Offer to deploy each service
- Configure backups (optional)
Post-Deployment Steps:
Vaultwarden:
- Visit https://<vaultwarden_subdomain>.<yourdomain>
- Create your first user account
- Important: Disable sign-ups after the first user:
  ansible-playbook -i inventory.ini services/vaultwarden/disable_vaultwarden_sign_ups_playbook.yml
- Optional: Set up backup to lapy
Forgejo:
- Visit https://<forgejo_subdomain>.<yourdomain>
- Create the admin account on first visit
- Default: registrations disabled for security
- SSH cloning works automatically after adding SSH key
LNBits:
- Visit https://<lnbits_subdomain>.<yourdomain>
- Create the superuser on first visit
- Important: The default uses FakeWallet (testing only)
- Configure a real Lightning backend (see the example after this list):
  - Edit /opt/lnbits/lnbits/.env on vipy
  - Or use the superuser UI to configure the backend
- Disable new user registration for security
- Optional: Set up encrypted backup to lapy
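For the backend switch, the relevant .env lines might look like this for an LND REST node (a sketch; variable names follow the LNBits docs, so verify them against your LNBits version):
# /opt/lnbits/lnbits/.env (illustrative excerpt)
LNBITS_BACKEND_WALLET_CLASS=LndRestWallet
LND_REST_ENDPOINT=https://<your-lnd-host>:8080
LND_REST_CERT=/path/to/tls.cert
LND_REST_MACAROON=/path/to/admin.macaroon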
Backup Configuration:
After services are stable, set up backups:
Vaultwarden backup:
ansible-playbook -i inventory.ini services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml
LNBits backup (GPG encrypted):
ansible-playbook -i inventory.ini services/lnbits/setup_backup_lnbits_to_lapy.yml
Note: Forgejo backups are not automated - backup manually or set up your own solution.
Automatic Uptime Kuma Monitors:
The playbooks will automatically create monitors in Uptime Kuma for each service:
- ✅ Vaultwarden - monitors https://<subdomain>/alive
- ✅ Forgejo - monitors https://<subdomain>/api/healthz
- ✅ LNBits - monitors https://<subdomain>/api/v1/health
All monitors:
- Added to "services" monitor group
- Use ntfy notification (if configured)
- Check every 60 seconds
- 3 retries before alerting
Prerequisites: Uptime Kuma credentials must be in infra_secrets.yml (from Layer 4)
Verification:
The script will automatically verify:
- ✓ DNS records configured
- ✓ Services deployed
- ✓ Docker containers running (Vaultwarden)
- ✓ Systemd services running (Forgejo, LNBits)
- ✓ Caddy configs created
Manual verification:
- Visit each service's subdomain
- Create admin/first user accounts
- Test functionality
- Check Uptime Kuma for new monitors in "services" group
Why These Services:
- Vaultwarden - Self-hosted password manager (Bitwarden compatible)
- Forgejo - Self-hosted Git server (GitHub/GitLab alternative)
- LNBits - Lightning Network wallet and accounts system
Layer 8: Secondary Services
Status: 🔒 Locked (Complete Layer 7 first)
Troubleshooting
Common Issues
SSH Connection Fails
- Verify VPS is running and accessible
- Check SSH key is in the correct location
- Ensure SSH key has correct permissions (600)
- Try manual SSH: ssh -i ~/.ssh/counterganzua root@<ip>
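To fix key permissions, for example:
chmod 600 ~/.ssh/counterganzua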
Ansible Not Found
- Make sure you've activated the venv: source venv/bin/activate
- Run the Layer 0 script again
DNS Not Resolving
- DNS changes can take up to 24-48 hours to propagate
- Use dig <subdomain>.<domain> to check DNS status
- You can proceed with setup; services will work once DNS propagates
Progress Tracking
Use this checklist to track your progress:
- [ ] Layer 0: Foundation Setup
- [ ] Layer 1A: VPS Basic Setup
- [ ] Layer 1B: Nodito (Proxmox) Setup
- [ ] Layer 2: General Infrastructure Tools
- [ ] Layer 3: Reverse Proxy (Caddy)
- [ ] Layer 4: Core Monitoring & Notifications
- [ ] Layer 5: VPN Infrastructure (Headscale)
- [ ] Layer 6: Infrastructure Monitoring
- [ ] Layer 7: Core Services
- [ ] Layer 8: Secondary Services
- [ ] Backups Configured