* Some services are designed to be accessible from the WAN through a friendly URL.
* You'll need a domain where you can set DNS records and create subdomains, as the guide assumes each service gets its own subdomain.
* Getting and configuring the domain is outside the scope of this repo. Whenever a service needs you to set up a subdomain, it will be mentioned explicitly.
* Add the domain as the `root_domain` variable in `ansible/infra_vars.yml`.
* The guide is agnostic to which provider you pick, but has been tested with VMs from https://99stack.com and contains some operations that are specifically relevant to their VPSs.
+ Boots with one of your SSH keys already authorized. If that's not the case, you'll have to manually add your public key to the machine before running the playbooks.
+ Another tiny one to monitor uptime. We use a separate machine so the monitoring service doesn't go down together with the main one.
+ A final one to run the headscale server, since the main VPS needs to be part of the mesh network and can't do so while also running the coordination server.
* An example inventory is provided at `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill in the `[vps]` group with host entries for each machine (`vipy` for services, `watchtower` for uptime monitoring, `spacey` for headscale); a sketch of the result is shown below.
* The first playbook, `01_basic_vps_setup_playbook.yml`, creates the user that will be used from then on. But since this user doesn't exist yet, you obviously need to run that first playbook as some other user. We assume your VPS provider has given you a root user, which is what you should pass as the running user in the next command.
* Run `ansible-playbook -i inventory.ini infra/01_user_and_access_setup_playbook.yml -e 'ansible_user="your root user here"'`
* Then, configure firewall access, fail2ban, and auditd with `ansible-playbook -i inventory.ini infra/02_firewall_and_fail2ban_playbook.yml`. Since the regular user now exists on the machines, there is no need to specify the user anymore.
* For all future playbooks targeting nodito, use the default configuration (no overrides needed).
Note that, by applying these playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth, but root login will be disabled.
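For reference, here is a minimal sketch of what a filled-in `ansible/inventory.ini` could look like. The host names come from this guide, but the IPs, key path, and exact variable placement are assumptions; `example.inventory.ini` remains the authoritative reference for which variables each group needs.

```ini
# Hypothetical values -- replace the IPs and key path with your own.
[vps]
vipy       ansible_host=203.0.113.10
watchtower ansible_host=203.0.113.11
spacey     ansible_host=203.0.113.12

[vps:vars]
ansible_user=counterweight
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```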
* Proxmox VE installations typically come with enterprise repositories enabled, which require a subscription. To avoid subscription warnings and use the community repositories instead:
* Run the repository switch with: `ansible-playbook -i inventory.ini infra/nodito/32_proxmox_community_repos_playbook.yml`
* This playbook will:
* Detect whether your Proxmox installation uses modern deb822 format (Proxmox VE 9) or legacy format (Proxmox VE 8)
* Remove enterprise repository files and create community repository files
* Disable subscription nag messages in both web and mobile interfaces
* Update Proxmox packages from the community repository
* Verify the changes are working correctly
* After running this playbook, clear your browser cache or perform a hard reload (Ctrl+Shift+R) before using the Proxmox VE Web UI to avoid UI display issues.
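If you want to double-check the result by hand, something along these lines should do (the exact file names under `/etc/apt/sources.list.d/` differ between the legacy and deb822 formats, so the grep is deliberately broad):

```sh
# A pve-no-subscription entry should be present and the enterprise entries gone.
grep -ri "no-subscription" /etc/apt/sources.list.d/

# apt should refresh cleanly, without 401 errors from the enterprise repository.
apt update
```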
* Run the ZFS pool setup with: `ansible-playbook -i inventory.ini infra/nodito/32_zfs_pool_setup_playbook.yml`
* This will:
* Validate Proxmox VE and ZFS installation
* Install ZFS utilities and kernel modules
* Create a RAID 1 (mirror) ZFS pool named `proxmox-storage` with optimized settings
* Configure ZFS pool properties (ashift=12, compression=lz4, atime=off, etc.)
* Export and re-import the pool for Proxmox compatibility
* Configure Proxmox to use the ZFS pool storage (zfspool type)
* Enable ZFS services for automatic pool import on boot
* **Warning**: This will destroy all data on the specified disks. Make sure you're using the correct disk IDs and that the disks don't contain important data.
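Before running it, double-check which disks you are pointing the playbook at, and verify the pool afterwards. The commands below are standard ZFS/udev tooling; only the pool name is taken from this guide:

```sh
# List stable disk identifiers -- use these rather than /dev/sdX names,
# which can change between boots.
ls -l /dev/disk/by-id/

# After the playbook has run, confirm the mirror is online and healthy.
zpool status proxmox-storage
zpool list proxmox-storage
```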
* Downloads the latest Debian generic cloud qcow2 image (override via `debian_cloud_image_url`/`debian_cloud_image_filename`)
* Imports it into your Proxmox storage (defaults to the configured ZFS pool) and builds VMID `9001` as a template
* Injects your SSH keys, enables qemu-guest-agent, configures DHCP networking, and sizes the disk (default 10GB)
* Drops a cloud-init snippet so clones automatically install qemu-guest-agent and can run upgrades on first boot
* Once it finishes, provision new machines with `qm clone 9001 <vmid> --name <vmname>` plus your usual cloud-init overrides.
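As a rough sketch of that last step (the VMID, name, sizes, and addresses are made up, and `scsi0` assumes the template's disk sits on the SCSI bus; the `qm` subcommands and flags are standard Proxmox CLI):

```sh
# Clone the template into a new VM (VMID 101 is arbitrary).
qm clone 9001 101 --name my-debian-vm --full

# Typical cloud-init overrides: static IP, more resources, a bigger disk.
qm set 101 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1
qm set 101 --cores 2 --memory 2048
qm resize 101 scsi0 +10G

qm start 101
```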
### Provision VMs with OpenTofu
* Prefer a declarative workflow? The `tofu/nodito` project clones VM definitions from the template automatically.
* Quick start (see `tofu/nodito/README.md` for full details):
1. Install OpenTofu, copy `terraform.tfvars.example` to `terraform.tfvars`, and fill in the Proxmox API URL/token plus your SSH public key.
2. Define VMs in the `vms` map (name, cores, memory, disk size, `ipconfig0`, optional `vlan_tag`); a sketch is shown at the end of this section. Disks default to the `proxmox-tank-1` ZFS pool.
3. Run `tofu init`, `tofu plan -var-file=terraform.tfvars`, and `tofu apply -var-file=terraform.tfvars`.
* Each VM is cloned from the `debian-13-cloud-init` template (VMID 9001), attaches to `vmbr0`, and boots with qemu-guest-agent + your keys injected via cloud-init. Updates to the tfvars map let you grow/shrink the fleet with a single `tofu apply`.
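A hedged sketch of what the `vms` map in `terraform.tfvars` might look like. The attribute names follow the description above (here the map key doubles as the VM name), but the authoritative schema lives in `tofu/nodito`:

```hcl
# Hypothetical entries -- adjust cores, memory, disk sizes, and addressing to your setup.
vms = {
  appbox = {
    cores     = 2
    memory    = 2048
    disk_size = 20
    ipconfig0 = "ip=192.168.1.60/24,gw=192.168.1.1"
    # vlan_tag = 10   # optional
  }
  scratch = {
    cores     = 1
    memory    = 1024
    disk_size = 10
    ipconfig0 = "ip=dhcp"
  }
}
```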
Some of the backups are stored encrypted for security. To enable this, fill in the gpg variables listed in `example.inventory.ini` under the `lapy` block.