commit 79e6a1a543 ("more stuff"), parent 6a43132bc8
18 changed files with 426 additions and 144 deletions
@@ -35,9 +35,9 @@ This describes how to prepare each machine before deploying services on them.
 
 ### Prepare Ansible vars
 
-* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill in with the values for your VPSs. `[vipy]` is the services VPS. `[watchtower]` is the watchtower VPS. `[spacey]`is the headscale VPS.
+* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill in the `[vps]` group with host entries for each machine (`vipy` for services, `watchtower` for uptime monitoring, `spacey` for headscale).
 
 * A few notes:
 
-* The guides assume you'll only have one VPS in the `[vipy]` group. Stuff will break if you have multiple, so avoid that.
+* The guides assume you'll only have one `vipy` host entry. Stuff will break if you have multiple, so avoid that.
 
 ### Create user and secure VPS access
@@ -48,6 +48,10 @@ This describes how to prepare each machine before deploying services on them.
 
 Note that, by applying these playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth.
 
+Checklist:
+
+- [ ] All 3 VPSs are accessible with the `counterweight` user
+- [ ] All 3 VPSs have UFW up and running
 
 ## Prepare Nodito Server
 
 ### Source the Nodito Server
@@ -61,7 +65,7 @@ Note that, by applying these playbooks, both the root user and the `counterweigh
 
 ### Prepare Ansible vars for Nodito
 
-* Add a `[nodito]` group to your `ansible/inventory.ini` (or simply use the one you get by copying `example.inventory.ini`) and fill in with values.
+* Ensure your inventory contains a `[nodito_host]` group and the `nodito` host entry (copy the example inventory if needed) and fill in with values.
 
 ### Bootstrap SSH Key Access and Create User
@@ -1,6 +1,6 @@
 # 02 VPS Core Services Setup
 
-Now that Vipy is ready, we need to deploy some basic services which are foundational for the apps we're actually interested in.
+Now that the VPSs are ready, we need to deploy some basic services which are foundational for the apps we're actually interested in.
 
 This assumes you've completed the markdown `01`.
@@ -28,6 +28,9 @@ Simply run the playbook:
 ansible-playbook -i inventory.ini infra/910_docker_playbook.yml
 ```
 
+Checklist:
+
+- [ ] All 3 VPSs respond to `docker version`
+- [ ] All 3 VPSs respond to `docker compose version`
 
 ## Deploy Caddy
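The Docker checklist above can be spot-checked in one shot with ansible ad-hoc commands. A sketch that only prints the commands (so it runs without an inventory or network access); the `vps` group name matches the example inventory:

```shell
# Sketch: the ad-hoc commands that would verify the Docker checklist
# across every host in the [vps] group. Printed, not executed.
for check in "docker version" "docker compose version"; do
  printf 'ansible vps -i inventory.ini -m command -a "%s"\n' "$check"
done
```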
@@ -40,6 +43,9 @@ ansible-playbook -i inventory.ini infra/910_docker_playbook.yml
 
 * Starting config will be empty. Modifying the caddy config file to add endpoints as we add services is covered by the instructions of each service.
 
+Checklist:
+
+- [ ] All 3 VPSs have Caddy up and running
 
 ## Uptime Kuma
@@ -47,9 +53,8 @@ Uptime Kuma gets used to monitor the availability of services, keep track of the
 
 ### Deploy
 
-* Decide what subdomain you want to serve Uptime Kuma on and add it to `services/uptime_kuma/uptime_kuma_vars.yml` on the `uptime_kuma_subdomain`.
+* Decide what subdomain you want to serve Uptime Kuma on and add it to `services/services_config.yml` on the `uptime_kuma` entry.
 * Note that you will have to add a DNS entry to point to the VPS public IP.
-* Make sure docker is available on the host.
 * Run the deployment playbook: `ansible-playbook -i inventory.ini services/uptime_kuma/deploy_uptime_kuma_playbook.yml`.
 
 ### Set up backups to Lapy
@@ -69,6 +74,49 @@ Uptime Kuma gets used to monitor the availability of services, keep track of the
 
 * Overwrite the data folder with one of the backups.
 * Start it up again.
 
+Checklist:
+
+- [ ] Uptime Kuma is accessible at the FQDN
+- [ ] The backup script runs fine
+- [ ] You have stored the credentials of the Uptime Kuma admin user
+
+## ntfy
+
+ntfy is a notifications server.
+
+### Deploy
+
+* Decide what subdomain you want to serve ntfy on and add it to `services/ntfy/ntfy_vars.yml` on the `ntfy_subdomain`.
+* Note that you will have to add a DNS entry to point to the VPS public IP.
+* Ensure the admin user credentials are set in `ansible/infra_secrets.yml` under `ntfy_username` and `ntfy_password`. This user is the only one authorised to send and read messages from topics.
+* Run the deployment playbook: `ansible-playbook -i inventory.ini services/ntfy/deploy_ntfy_playbook.yml`.
+* Run this playbook to create a notification entry in Uptime Kuma that points to your freshly deployed ntfy instance: `ansible-playbook -i inventory.ini services/ntfy/setup_ntfy_uptime_kuma_notification.yml`
+
+### Configure
+
+* You can visit the ntfy web UI at the FQDN you configured.
+* You can start using ntfy to send alerts with Uptime Kuma by visiting the Uptime Kuma UI and using the credentials for the ntfy admin user.
+* To receive alerts on your phone, install the official ntfy app: https://github.com/binwiederhier/ntfy-android.
+* You can also subscribe on the web UI on your laptop.
+
+### Backups
+
+Given that ntfy is almost stateless, no backups are made. If it blows up, simply set it up again.
+
+Checklist:
+
+- [ ] ntfy UI is reachable
+- [ ] You can see the notification in Uptime Kuma and test it successfully
+
+## VPS monitoring scripts
+
+### Deploy
+
+- Run playbooks:
+  - `ansible-playbook -i inventory.ini infra/410_disk_usage_alerts.yml --limit vps`
+  - `ansible-playbook -i inventory.ini infra/420_system_healthcheck.yml --limit vps`
+
+Checklist:
+
+- [ ] You can see both the system healthcheck and disk usage check for all VPSs in the Uptime Kuma UI.
 
 ## Vaultwarden
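A quick way to confirm the ntfy deployment end to end is a manual publish with the admin user's basic auth. A sketch with a hypothetical FQDN, topic, and credentials (the real ones live in `ansible/infra_secrets.yml`); the command is printed rather than executed so no server is needed:

```shell
# Sketch: what a manual test publish to ntfy looks like.
# FQDN, topic, and credentials below are placeholders.
NTFY_FQDN="ntfy.example.com"
NTFY_USER="admin"
NTFY_PASSWORD="secret"
TOPIC="test"
printf 'curl -u %s:%s -d "hello" https://%s/%s\n' \
  "$NTFY_USER" "$NTFY_PASSWORD" "$NTFY_FQDN" "$TOPIC"
```

If the published message shows up in the web UI (and on the phone app if you subscribed), the Uptime Kuma notification hook should work too.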
@@ -121,29 +169,6 @@ Forgejo is a git server.
 
 * SSH cloning should work out of the box (after you've set up your SSH pub key in Forgejo, that is).
 
-## ntfy
-
-ntfy is a notifications server.
-
-### Deploy
-
-* Decide what subdomain you want to serve ntfy on and add it to `services/ntfy/ntfy_vars.yml` on the `ntfy_subdomain`.
-* Note that you will have to add a DNS entry to point to the VPS public IP.
-* Before running the playbook, you should decide on a user and password for the admin user. This user is the only one authorised to send and read messages from topics. Once you've picked, export them in your terminal like this `export NTFY_USER=admin; export NTFY_PASSWORD=secret`.
-* In the same shell, run the deployment playbook: `ansible-playbook -i inventory.ini services/ntfy/deploy_ntfy_playbook.yml`.
-
-### Configure
-
-* You can visit the ntfy web UI at the FQDN you configured.
-* You can start using notify to send alerts with uptime kuma by visiting the uptime kuma UI and using the credentials for the ntfy admin user.
-* To receive alerts on your phone, install the official ntfy app: https://github.com/binwiederhier/ntfy-android.
-* You can also subscribe on the web UI on your laptop.
-
-### Backups
-
-Given that ntfy is almost stateless, no backups are made. If it blows up, simply set it up again.
 
 ## LNBits
 
 LNBits is a Lightning Network wallet and accounts system.
SCRIPT_PLAYBOOK_MAPPING.md (new file, 57 lines)
@@ -0,0 +1,57 @@
+# Script to Playbook Mapping
+
+This document describes which playbooks each setup script runs, and which machines they target.
+
+## Table
+
+| Script | Playbook | Target Machines/Groups | Notes |
+|--------|----------|------------------------|-------|
+| **setup_layer_0.sh** | None | N/A | Initial setup script - creates venv, config files |
+| **setup_layer_1a_vps.sh** | `infra/01_user_and_access_setup_playbook.yml` | `vps` (vipy, watchtower, spacey) | Creates counterweight user, configures SSH |
+| **setup_layer_1a_vps.sh** | `infra/02_firewall_and_fail2ban_playbook.yml` | `vps` (vipy, watchtower, spacey) | Configures UFW firewall and fail2ban |
+| **setup_layer_1b_nodito.sh** | `infra/nodito/30_proxmox_bootstrap_playbook.yml` | `nodito_host` (nodito) | Initial Proxmox bootstrap |
+| **setup_layer_1b_nodito.sh** | `infra/nodito/31_proxmox_community_repos_playbook.yml` | `nodito_host` (nodito) | Configures Proxmox community repositories |
+| **setup_layer_1b_nodito.sh** | `infra/nodito/32_zfs_pool_setup_playbook.yml` | `nodito_host` (nodito) | Sets up ZFS pool on Proxmox |
+| **setup_layer_1b_nodito.sh** | `infra/nodito/33_proxmox_debian_cloud_template.yml` | `nodito_host` (nodito) | Creates Debian cloud template for VMs |
+| **setup_layer_2.sh** | `infra/900_install_rsync.yml` | `all` (vipy, watchtower, spacey, nodito) | Installs rsync on all machines |
+| **setup_layer_2.sh** | `infra/910_docker_playbook.yml` | `all` (vipy, watchtower, spacey, nodito) | Installs Docker on all machines |
+| **setup_layer_3_caddy.sh** | `services/caddy_playbook.yml` | `vps` (vipy, watchtower, spacey) | Installs and configures Caddy reverse proxy |
+| **setup_layer_4_monitoring.sh** | `services/ntfy/deploy_ntfy_playbook.yml` | `watchtower` | Deploys ntfy notification service |
+| **setup_layer_4_monitoring.sh** | `services/uptime_kuma/deploy_uptime_kuma_playbook.yml` | `watchtower` | Deploys Uptime Kuma monitoring |
+| **setup_layer_4_monitoring.sh** | `services/uptime_kuma/setup_backup_uptime_kuma_to_lapy.yml` | `lapy` (localhost) | Configures backup of Uptime Kuma to laptop |
+| **setup_layer_4_monitoring.sh** | `services/ntfy/setup_ntfy_uptime_kuma_notification.yml` | `watchtower` | Configures ntfy notifications for Uptime Kuma |
+| **setup_layer_5_headscale.sh** | `services/headscale/deploy_headscale_playbook.yml` | `spacey` | Deploys Headscale mesh VPN server |
+| **setup_layer_5_headscale.sh** | `infra/920_join_headscale_mesh.yml` | `all` (vipy, watchtower, spacey, nodito) | Joins all machines to Headscale mesh (with --limit) |
+| **setup_layer_5_headscale.sh** | `services/headscale/setup_backup_headscale_to_lapy.yml` | `lapy` (localhost) | Configures backup of Headscale to laptop |
+| **setup_layer_6_infra_monitoring.sh** | `infra/410_disk_usage_alerts.yml` | `all` (vipy, watchtower, spacey, nodito, lapy) | Sets up disk usage monitoring alerts |
+| **setup_layer_6_infra_monitoring.sh** | `infra/420_system_healthcheck.yml` | `all` (vipy, watchtower, spacey, nodito, lapy) | Sets up system health checks |
+| **setup_layer_6_infra_monitoring.sh** | `infra/430_cpu_temp_alerts.yml` | `nodito_host` (nodito) | Sets up CPU temperature alerts for Proxmox |
+| **setup_layer_7_services.sh** | `services/vaultwarden/deploy_vaultwarden_playbook.yml` | `vipy` | Deploys Vaultwarden password manager |
+| **setup_layer_7_services.sh** | `services/forgejo/deploy_forgejo_playbook.yml` | `vipy` | Deploys Forgejo Git server |
+| **setup_layer_7_services.sh** | `services/lnbits/deploy_lnbits_playbook.yml` | `vipy` | Deploys LNbits Lightning wallet |
+| **setup_layer_7_services.sh** | `services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml` | `lapy` (localhost) | Configures backup of Vaultwarden to laptop |
+| **setup_layer_7_services.sh** | `services/lnbits/setup_backup_lnbits_to_lapy.yml` | `lapy` (localhost) | Configures backup of LNbits to laptop |
+| **setup_layer_8_secondary_services.sh** | `services/ntfy-emergency-app/deploy_ntfy_emergency_app_playbook.yml` | `vipy` | Deploys emergency ntfy app |
+| **setup_layer_8_secondary_services.sh** | `services/memos/deploy_memos_playbook.yml` | `memos-box` (VM on nodito) | Deploys Memos note-taking service |
+
+## Machine Groups Reference
+
+- **vps**: vipy, watchtower, spacey (VPS servers)
+- **nodito_host**: nodito (Proxmox server)
+- **nodito_vms**: memos-box and other VMs created on nodito
+- **lapy**: localhost (your laptop)
+- **all**: All machines in inventory
+- **watchtower**: Single VPS for monitoring services
+- **vipy**: Single VPS for main services
+- **spacey**: Single VPS for Headscale
+- **memos-box**: VM on nodito for Memos service
+
+## Notes
+
+- Scripts use the `--limit` flag to restrict playbooks that target `all` to specific hosts
+- Backup playbooks run on `lapy` (localhost) to configure backup jobs
+- Some playbooks are optional and may be skipped if hosts aren't configured
+- Layer 0 is a prerequisite for all other layers
@@ -1,17 +1,13 @@
-[vipy]
-207.154.226.192 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
+[vps]
+vipy ansible_host=207.154.226.192 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
+watchtower ansible_host=206.189.63.167 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
+spacey ansible_host=165.232.73.4 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
 
-[watchtower]
-206.189.63.167 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
+[nodito_host]
+nodito ansible_host=192.168.1.139 ansible_user=counterweight ansible_port=22 ansible_ssh_pass=noesfacilvivirenunmundocentralizado ansible_ssh_private_key_file=~/.ssh/counterganzua
 
-[spacey]
-165.232.73.4 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
+[nodito_vms]
+memos-box ansible_host=192.168.1.149 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
 
-[nodito]
-192.168.1.139 ansible_user=counterweight ansible_port=22 ansible_ssh_pass=noesfacilvivirenunmundocentralizado ansible_ssh_private_key_file=~/.ssh/counterganzua
-
-[memos-box]
-192.168.1.149 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/counterganzua
 
 # Local connection to laptop: this assumes you're running ansible commands from your personal laptop
@@ -45,7 +45,7 @@ Before starting:
 
 - watchtower (monitoring VPS)
 - spacey (headscale VPS)
 - nodito (Proxmox server) - optional
-- **Note:** VMs (like memos-box) will be created later on Proxmox and added to the `nodito-vms` group
+- **Note:** VMs (like memos-box) will be created later on Proxmox and added to the `nodito_vms` group
 
 ### Manual Steps:
 After running the script, you'll need to:
|
@ -218,45 +218,39 @@ setup_inventory_file() {
|
||||||
|
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
|
vps_entries=""
|
||||||
if [ -n "$vipy_ip" ]; then
|
if [ -n "$vipy_ip" ]; then
|
||||||
cat >> inventory.ini << EOF
|
vps_entries+="vipy ansible_host=$vipy_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key\n"
|
||||||
[vipy]
|
|
||||||
$vipy_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
|
||||||
|
|
||||||
EOF
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
if [ -n "$watchtower_ip" ]; then
|
if [ -n "$watchtower_ip" ]; then
|
||||||
cat >> inventory.ini << EOF
|
vps_entries+="watchtower ansible_host=$watchtower_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key\n"
|
||||||
[watchtower]
|
fi
|
||||||
$watchtower_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
if [ -n "$spacey_ip" ]; then
|
||||||
|
vps_entries+="spacey ansible_host=$spacey_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key\n"
|
||||||
EOF
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
if [ -n "$spacey_ip" ]; then
|
if [ -n "$vps_entries" ]; then
|
||||||
cat >> inventory.ini << EOF
|
cat >> inventory.ini << EOF
|
||||||
[spacey]
|
[vps]
|
||||||
$spacey_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
${vps_entries}
|
||||||
|
|
||||||
EOF
|
EOF
|
||||||
fi
|
fi
|
||||||
|
|
||||||
if [ -n "$nodito_ip" ]; then
|
if [ -n "$nodito_ip" ]; then
|
||||||
cat >> inventory.ini << EOF
|
cat >> inventory.ini << EOF
|
||||||
[nodito]
|
[nodito_host]
|
||||||
$nodito_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
nodito ansible_host=$nodito_ip ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
||||||
|
|
||||||
EOF
|
EOF
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Add nodito-vms placeholder for VMs that will be created later
|
# Add nodito_vms placeholder for VMs that will be created later
|
||||||
cat >> inventory.ini << EOF
|
cat >> inventory.ini << EOF
|
||||||
# Nodito VMs - These don't exist yet and will be created on the Proxmox server
|
# Nodito VMs - These don't exist yet and will be created on the Proxmox server
|
||||||
# Add them here once you create VMs on nodito (e.g., memos-box, etc.)
|
# Add them here once you create VMs on nodito (e.g., memos-box, etc.)
|
||||||
[nodito-vms]
|
[nodito_vms]
|
||||||
# Example:
|
# Example:
|
||||||
# 192.168.1.150 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key hostname=memos-box
|
# memos_box ansible_host=192.168.1.150 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=$ssh_key
|
||||||
|
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
|
|
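One caveat with the `vps_entries` accumulation above: in bash, `\n` inside double quotes stays a literal backslash-n, so unless something later expands it, the heredoc that writes `[vps]` would emit the entries on one long line. A sketch (hypothetical IPs) of building the same entries with real newlines embedded in the string:

```shell
# Sketch: accumulate [vps] host lines with actual newlines so that
# heredoc expansion emits one host per line. IPs are placeholders.
vipy_ip="203.0.113.10"
watchtower_ip="203.0.113.11"
vps_entries=""
if [ -n "$vipy_ip" ]; then
  vps_entries="${vps_entries}vipy ansible_host=${vipy_ip}
"
fi
if [ -n "$watchtower_ip" ]; then
  vps_entries="${vps_entries}watchtower ansible_host=${watchtower_ip}
"
fi
# Preview what would be appended to inventory.ini:
printf '[vps]\n%s' "$vps_entries"
```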
@@ -439,9 +433,9 @@ print_summary() {
 
     echo ""
 
     print_info "Note about inventory groups:"
-    echo "  • [nodito-vms] group created as placeholder"
+    echo "  • [nodito_vms] group created as placeholder"
     echo "  • These VMs will be created later on Proxmox"
-    echo "  • Add their IPs to inventory.ini once created"
+    echo "  • Add their host entries to inventory.ini once created"
     echo ""
 
     print_info "To test SSH access to a host:"
@@ -114,29 +114,63 @@ check_layer_0_complete() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
-    ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+    # Parse inventory.ini directly - more reliable than ansible-inventory
+    if [ -f "$ANSIBLE_DIR/inventory.ini" ]; then
+        # Look for the group section [target]
+        local in_section=false
+        local hosts=""
+        while IFS= read -r line; do
+            # Remove comments and whitespace
+            line=$(echo "$line" | sed 's/#.*$//' | xargs)
+            [ -z "$line" ] && continue
+
+            # Check if we're entering the target section
+            if [[ "$line" =~ ^\[$target\]$ ]]; then
+                in_section=true
+                continue
+            fi
+
+            # Check if we're entering a different section
+            if [[ "$line" =~ ^\[.*\]$ ]]; then
+                in_section=false
+                continue
+            fi
+
+            # If we're in the target section, extract hostname
+            if [ "$in_section" = true ]; then
+                local hostname=$(echo "$line" | awk '{print $1}')
+                if [ -n "$hostname" ]; then
+                    hosts="$hosts $hostname"
+                fi
+            fi
+        done < "$ANSIBLE_DIR/inventory.ini"
+        echo "$hosts" | xargs
+    fi
 }
 
 check_vps_configured() {
     print_header "Checking VPS Configuration"
 
+    # Get all hosts from the vps group
+    local vps_hosts=$(get_hosts_from_inventory "vps")
     local has_vps=false
-    for group in vipy watchtower spacey; do
-        local hosts=$(get_hosts_from_inventory "$group")
-        if [ -n "$hosts" ]; then
-            print_success "$group configured: $hosts"
+
+    # Check for expected VPS hostnames
+    for expected_host in vipy watchtower spacey; do
+        if echo "$vps_hosts" | grep -q "\b$expected_host\b"; then
+            print_success "$expected_host configured"
             has_vps=true
         else
-            print_info "$group not configured (skipping)"
+            print_info "$expected_host not configured (skipping)"
         fi
     done
 
     if [ "$has_vps" = false ]; then
         print_error "No VPSs configured in inventory.ini"
-        print_info "Add at least one VPS (vipy, watchtower, or spacey) to proceed"
+        print_info "Add at least one VPS (vipy, watchtower, or spacey) to the [vps] group to proceed"
         exit 1
     fi
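The direct INI parsing introduced above is easy to exercise in isolation. A standalone sketch of the same approach, scanning `[section]` headers and collecting the first token of each host line, run against a sample inventory with hypothetical hosts:

```shell
# Standalone sketch of the inventory-parsing approach used by
# get_hosts_from_inventory, against a throwaway sample file.
inv=$(mktemp)
cat > "$inv" <<'EOF'
[vps]
vipy ansible_host=203.0.113.10 ansible_user=counterweight
watchtower ansible_host=203.0.113.11 ansible_user=counterweight

[nodito_host]
nodito ansible_host=192.0.2.5 ansible_user=counterweight  # Proxmox box
EOF

get_hosts() {
  target="$1"; in_section=false; hosts=""
  while IFS= read -r line; do
    # Strip comments and surrounding whitespace, skip blanks
    line=$(echo "$line" | sed 's/#.*$//' | xargs)
    [ -z "$line" ] && continue
    if [ "$line" = "[$target]" ]; then in_section=true; continue; fi
    case "$line" in \[*\]) in_section=false; continue ;; esac
    if [ "$in_section" = true ]; then
      hosts="$hosts $(echo "$line" | awk '{print $1}')"
    fi
  done < "$inv"
  echo "$hosts" | xargs
}

get_hosts vps          # prints: vipy watchtower
get_hosts nodito_host  # prints: nodito
```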
@@ -154,20 +188,20 @@ check_ssh_connectivity() {
 
     local all_good=true
 
+    # Get all hosts from the vps group
+    local vps_hosts=$(get_hosts_from_inventory "vps")
+
     # Test VPSs (vipy, watchtower, spacey)
-    for group in vipy watchtower spacey; do
-        local hosts=$(get_hosts_from_inventory "$group")
-        if [ -n "$hosts" ]; then
-            for host in $hosts; do
-                print_info "Testing SSH to $host as root..."
-                if timeout 10 ssh -i "$ssh_key" -o StrictHostKeyChecking=no -o BatchMode=yes root@$host "echo 'SSH OK'" &>/dev/null; then
-                    print_success "SSH to $host as root: OK"
-                else
-                    print_error "Cannot SSH to $host as root"
-                    print_warning "Make sure your SSH key is added to root on $host"
-                    all_good=false
-                fi
-            done
+    for expected_host in vipy watchtower spacey; do
+        if echo "$vps_hosts" | grep -q "\b$expected_host\b"; then
+            print_info "Testing SSH to $expected_host as root..."
+            if timeout 10 ssh -i "$ssh_key" -o StrictHostKeyChecking=no -o BatchMode=yes root@$expected_host "echo 'SSH OK'" &>/dev/null; then
+                print_success "SSH to $expected_host as root: OK"
+            else
+                print_error "Cannot SSH to $expected_host as root"
+                print_warning "Make sure your SSH key is added to root on $expected_host"
+                all_good=false
+            fi
         fi
     done
@@ -265,17 +299,17 @@ verify_layer_1a() {
 
     local all_good=true
 
-    for group in vipy watchtower spacey; do
-        local hosts=$(get_hosts_from_inventory "$group")
-        if [ -n "$hosts" ]; then
-            for host in $hosts; do
-                if timeout 10 ssh -i "$ssh_key" -o StrictHostKeyChecking=no -o BatchMode=yes counterweight@$host "echo 'SSH OK'" &>/dev/null; then
-                    print_success "SSH to $host as counterweight: OK"
-                else
-                    print_error "Cannot SSH to $host as counterweight"
-                    all_good=false
-                fi
-            done
+    # Get all hosts from the vps group
+    local vps_hosts=$(get_hosts_from_inventory "vps")
+
+    for expected_host in vipy watchtower spacey; do
+        if echo "$vps_hosts" | grep -q "\b$expected_host\b"; then
+            if timeout 10 ssh -i "$ssh_key" -o StrictHostKeyChecking=no -o BatchMode=yes counterweight@$expected_host "echo 'SSH OK'" &>/dev/null; then
+                print_success "SSH to $expected_host as counterweight: OK"
+            else
+                print_error "Cannot SSH to $expected_host as counterweight"
+                all_good=false
+            fi
         fi
     done
@@ -106,20 +106,30 @@ check_layer_0_complete() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 - "$target" <<'PY' 2>/dev/null || echo ""
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(' '.join(data[target].get('hosts', [])))
+else:
+    hostvars = data.get('_meta', {}).get('hostvars', {})
+    if target in hostvars:
+        print(target)
+PY
 }
 
 check_nodito_configured() {
     print_header "Checking Nodito Configuration"
 
-    local nodito_hosts=$(get_hosts_from_inventory "nodito")
+    local nodito_hosts=$(get_hosts_from_inventory "nodito_host")
 
     if [ -z "$nodito_hosts" ]; then
         print_error "No nodito host configured in inventory.ini"
-        print_info "Add nodito to [nodito] group in inventory.ini to proceed"
+        print_info "Add the nodito host to the [nodito_host] group in inventory.ini to proceed"
         exit 1
     fi
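The group-or-hostname resolution in the heredoc above can be sketched standalone. One caution: piping JSON into `python3 - <<'PY'` makes the heredoc take over stdin, so the piped data is no longer readable by the script; the sketch below keeps the pipe and passes the script with `-c` instead. Sample JSON uses hypothetical hosts:

```shell
# Sketch: resolve a target that may be a group name or a bare hostname,
# from `ansible-inventory --list`-style JSON. Hosts are placeholders.
sample='{"vps": {"hosts": ["vipy", "watchtower", "spacey"]},
"_meta": {"hostvars": {"vipy": {}, "watchtower": {}, "spacey": {}}}}'

script='
import json, sys
data = json.load(sys.stdin)
target = sys.argv[1]
if target in data:
    print(" ".join(data[target].get("hosts", [])))
else:
    hostvars = data.get("_meta", {}).get("hostvars", {})
    if target in hostvars:
        print(target)
'

resolve() { echo "$sample" | python3 -c "$script" "$1"; }

resolve vps         # prints: vipy watchtower spacey
resolve watchtower  # prints: watchtower
```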
@@ -95,10 +95,20 @@ check_layer_0_complete() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 - "$target" <<'PY' 2>/dev/null || echo ""
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(' '.join(data[target].get('hosts', [])))
+else:
+    hostvars = data.get('_meta', {}).get('hostvars', {})
+    if target in hostvars:
+        print(target)
+PY
 }
 
 check_ssh_connectivity() {
@@ -95,10 +95,20 @@ check_layer_0_complete() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 - "$target" <<'PY' 2>/dev/null || echo ""
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(' '.join(data[target].get('hosts', [])))
+else:
+    hostvars = data.get('_meta', {}).get('hostvars', {})
+    if target in hostvars:
+        print(target)
+PY
 }
 
 check_target_hosts() {
@@ -55,6 +55,43 @@ confirm_action() {
 
     [[ "$response" =~ ^[Yy]$ ]]
 }
 
+get_hosts_from_inventory() {
+    local target="$1"
+    cd "$ANSIBLE_DIR"
+    ansible-inventory -i inventory.ini --list | \
+        python3 - "$target" <<'PY' 2>/dev/null || echo ""
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(' '.join(data[target].get('hosts', [])))
+else:
+    hostvars = data.get('_meta', {}).get('hostvars', {})
+    if target in hostvars:
+        print(target)
+PY
+}
+
+get_host_ip() {
+    local target="$1"
+    cd "$ANSIBLE_DIR"
+    ansible-inventory -i inventory.ini --list | \
+        python3 - "$target" <<'PY' 2>/dev/null || echo ""
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+hostvars = data.get('_meta', {}).get('hostvars', {})
+if target in hostvars:
+    print(hostvars[target].get('ansible_host', target))
+else:
+    hosts = data.get(target, {}).get('hosts', [])
+    if hosts:
+        first = hosts[0]
+        hv = hostvars.get(first, {})
+        print(hv.get('ansible_host', first))
+PY
+}
+
 ###############################################################################
 # Verification Functions
 ###############################################################################
@@ -87,7 +124,7 @@ check_prerequisites() {
     fi
 
     # Check if watchtower is configured
-    if ! grep -q "^\[watchtower\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "watchtower")" ]; then
         print_error "watchtower not configured in inventory.ini"
         print_info "Layer 4 requires watchtower VPS"
         ((errors++))
@@ -131,7 +168,7 @@ check_dns_configuration() {
     cd "$ANSIBLE_DIR"
 
     # Get watchtower IP
-    local watchtower_ip=$(ansible-inventory -i inventory.ini --list | python3 -c "import sys, json; data=json.load(sys.stdin); hosts=data.get('watchtower', {}).get('hosts', []); print(hosts[0] if hosts else '')" 2>/dev/null)
+    local watchtower_ip=$(get_host_ip "watchtower")
 
     if [ -z "$watchtower_ip" ]; then
         print_error "Could not determine watchtower IP from inventory"
@@ -431,7 +468,8 @@ verify_deployments() {
     local ssh_key=$(grep "ansible_ssh_private_key_file" "$ANSIBLE_DIR/inventory.ini" | head -n1 | sed 's/.*ansible_ssh_private_key_file=\([^ ]*\).*/\1/')
     ssh_key="${ssh_key/#\~/$HOME}"
 
-    local watchtower_host=$(ansible-inventory -i inventory.ini --list | python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('watchtower', {}).get('hosts', [])))" 2>/dev/null)
+    local watchtower_host
+    watchtower_host=$(get_hosts_from_inventory "watchtower")
 
     if [ -z "$watchtower_host" ]; then
         print_error "Could not determine watchtower host"
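The rewritten helper above stops shell-interpolating the group name into the Python one-liner and instead passes the target as an argument, accepting either a group name or a bare hostname. A minimal standalone sketch of that resolution logic, using an invented sample of `ansible-inventory --list` output (the host names and addresses are illustrative only):

```python
import json

# Invented sample of `ansible-inventory --list` JSON, for illustration only.
INVENTORY = """
{
  "_meta": {"hostvars": {"vipy": {"ansible_host": "203.0.113.10"},
                         "watchtower": {"ansible_host": "203.0.113.11"}}},
  "vps": {"hosts": ["vipy", "watchtower"]}
}
"""

def hosts_for(target, data):
    # Group name: space-joined member list; bare hostname: echoed back if known.
    if target in data:
        return " ".join(data[target].get("hosts", []))
    hostvars = data.get("_meta", {}).get("hostvars", {})
    return target if target in hostvars else ""

data = json.loads(INVENTORY)
print(hosts_for("vps", data))   # group lookup -> "vipy watchtower"
print(hosts_for("vipy", data))  # single-host lookup -> "vipy"
```

An unknown target falls through both branches and yields the empty string, which is what the callers' `[ -z ... ]` checks rely on.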
@@ -88,7 +88,7 @@ check_prerequisites() {
     fi
 
     # Check if spacey is configured
-    if ! grep -q "^\[spacey\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "spacey")" ]; then
         print_error "spacey not configured in inventory.ini"
         print_info "Layer 5 requires spacey VPS for Headscale server"
         ((errors++))
@@ -105,10 +105,40 @@ check_prerequisites() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(" ".join(data[target].get("hosts", [])))
+else:
+    hostvars = data.get("_meta", {}).get("hostvars", {})
+    if target in hostvars:
+        print(target)
+' "$target" 2>/dev/null || echo ""
+}
+
+get_host_ip() {
+    local target="$1"
+    cd "$ANSIBLE_DIR"
+    ansible-inventory -i inventory.ini --list | \
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+hostvars = data.get("_meta", {}).get("hostvars", {})
+if target in hostvars:
+    print(hostvars[target].get("ansible_host", target))
+else:
+    hosts = data.get(target, {}).get("hosts", [])
+    if hosts:
+        first = hosts[0]
+        hv = hostvars.get(first, {})
+        print(hv.get("ansible_host", first))
+' "$target" 2>/dev/null || echo ""
 }
 
 check_vars_files() {
@@ -135,7 +165,7 @@ check_dns_configuration() {
     cd "$ANSIBLE_DIR"
 
     # Get spacey IP
-    local spacey_ip=$(ansible-inventory -i inventory.ini --list | python3 -c "import sys, json; data=json.load(sys.stdin); hosts=data.get('spacey', {}).get('hosts', []); print(hosts[0] if hosts else '')" 2>/dev/null)
+    local spacey_ip=$(get_host_ip "spacey")
 
     if [ -z "$spacey_ip" ]; then
         print_error "Could not determine spacey IP from inventory"
@@ -189,10 +189,20 @@ EOFPYTHON
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(" ".join(data[target].get("hosts", [])))
+else:
+    hostvars = data.get("_meta", {}).get("hostvars", {})
+    if target in hostvars:
+        print(target)
+' "$target" 2>/dev/null || echo ""
 }
 
 ###############################################################################
@@ -87,7 +87,7 @@ check_prerequisites() {
     fi
 
     # Check if vipy is configured
-    if ! grep -q "^\[vipy\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "vipy")" ]; then
         print_error "vipy not configured in inventory.ini"
         print_info "Layer 7 requires vipy VPS"
         ((errors++))
@@ -104,10 +104,40 @@ check_prerequisites() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(" ".join(data[target].get("hosts", [])))
+else:
+    hostvars = data.get("_meta", {}).get("hostvars", {})
+    if target in hostvars:
+        print(target)
+' "$target" 2>/dev/null || echo ""
+}
+
+get_host_ip() {
+    local target="$1"
+    cd "$ANSIBLE_DIR"
+    ansible-inventory -i inventory.ini --list | \
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+hostvars = data.get("_meta", {}).get("hostvars", {})
+if target in hostvars:
+    print(hostvars[target].get("ansible_host", target))
+else:
+    hosts = data.get(target, {}).get("hosts", [])
+    if hosts:
+        first = hosts[0]
+        hv = hostvars.get(first, {})
+        print(hv.get("ansible_host", first))
+' "$target" 2>/dev/null || echo ""
 }
 
 check_dns_configuration() {
@@ -116,7 +146,7 @@ check_dns_configuration() {
     cd "$ANSIBLE_DIR"
 
     # Get vipy IP
-    local vipy_ip=$(ansible-inventory -i inventory.ini --list | python3 -c "import sys, json; data=json.load(sys.stdin); hosts=data.get('vipy', {}).get('hosts', []); print(hosts[0] if hosts else '')" 2>/dev/null)
+    local vipy_ip=$(get_host_ip "vipy")
 
     if [ -z "$vipy_ip" ]; then
         print_error "Could not determine vipy IP from inventory"
@@ -58,17 +58,40 @@ record_summary() {
 }
 
 get_hosts_from_inventory() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); print(' '.join(data.get('$group', {}).get('hosts', [])))" 2>/dev/null || echo ""
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+if target in data:
+    print(" ".join(data[target].get("hosts", [])))
+else:
+    hostvars = data.get("_meta", {}).get("hostvars", {})
+    if target in hostvars:
+        print(target)
+' "$target" 2>/dev/null || echo ""
 }
 
 get_primary_host_ip() {
-    local group="$1"
+    local target="$1"
     cd "$ANSIBLE_DIR"
     ansible-inventory -i inventory.ini --list | \
-        python3 -c "import sys, json; data=json.load(sys.stdin); hosts=data.get('$group', {}).get('hosts', []); print(hosts[0] if hosts else '')" 2>/dev/null || echo ""
+        python3 -c '
+import json, sys
+data = json.load(sys.stdin)
+target = sys.argv[1]
+hostvars = data.get("_meta", {}).get("hostvars", {})
+if target in hostvars:
+    print(hostvars[target].get("ansible_host", target))
+else:
+    hosts = data.get(target, {}).get("hosts", [])
+    if hosts:
+        first = hosts[0]
+        hv = hostvars.get(first, {})
+        print(hv.get("ansible_host", first))
+' "$target" 2>/dev/null || echo ""
 }
 
 check_prerequisites() {
@@ -112,14 +135,14 @@ check_prerequisites() {
         print_success "services_config.yml exists"
     fi
 
-    if ! grep -q "^\[vipy\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "vipy")" ]; then
         print_error "vipy not configured in inventory.ini"
         ((errors++))
     else
         print_success "vipy configured in inventory"
     fi
 
-    if ! grep -q "^\[memos-box\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "memos-box")" ]; then
        print_warning "memos-box not configured in inventory.ini (memos deployment will be skipped)"
     else
         print_success "memos-box configured in inventory"
@@ -173,8 +196,9 @@ check_dns_configuration() {
     fi
 
     local memos_ip=""
-    if grep -q "^\[memos-box\]" "$ANSIBLE_DIR/inventory.ini"; then
-        memos_ip=$(get_primary_host_ip "memos-box")
+    local memos_host=$(get_hosts_from_inventory "memos-box")
+    if [ -n "$memos_host" ]; then
+        memos_ip=$(get_primary_host_ip "$memos_host")
     fi
 
     local dns_ok=true
@@ -262,7 +286,7 @@ deploy_ntfy_emergency_app() {
 deploy_memos() {
     print_header "Deploying Memos"
 
-    if ! grep -q "^\[memos-box\]" "$ANSIBLE_DIR/inventory.ini"; then
+    if [ -z "$(get_hosts_from_inventory "memos-box")" ]; then
         print_warning "memos-box not in inventory. Skipping memos deployment."
         record_summary "${YELLOW}• memos${NC}: skipped (memos-box missing)"
         return 0
@@ -311,10 +335,8 @@ verify_services() {
         echo ""
     fi
 
-    if grep -q "^\[memos-box\]" "$ANSIBLE_DIR/inventory.ini"; then
     local memos_host
     memos_host=$(get_hosts_from_inventory "memos-box")
 
     if [ -n "$memos_host" ]; then
         print_info "Checking memos on memos-box ($memos_host)..."
         if timeout 5 ssh -i "$ssh_key" -o StrictHostKeyChecking=no -o BatchMode=yes counterweight@$memos_host "systemctl is-active memos" &>/dev/null; then
@@ -324,7 +346,6 @@ verify_services() {
         fi
         echo ""
     fi
-    fi
 }
 
 print_summary() {
@@ -48,9 +48,8 @@ vms = {
     data_disks = [
       {
         size_gb = 50
-        # optional overrides:
-        # storage = "proxmox-tank-1"
-        # slot = "scsi2"
+        # storage defaults to var.zfs_storage_name (proxmox-tank-1)
+        # optional: slot = "scsi2"
       }
     ]
   }
@@ -66,6 +65,8 @@ tofu plan -var-file=terraform.tfvars
 tofu apply -var-file=terraform.tfvars
 ```
 
+> VMs are created once and then protected: the module sets `lifecycle.prevent_destroy = true` and ignores subsequent config changes. After the initial apply, manage day‑2 changes directly in Proxmox (or remove the lifecycle block if you need OpenTofu to own ongoing updates).
+
 ### Notes
 - Clones are full clones by default (`full_clone = true`).
 - Cloud-init injects `cloud_init_user` and `ssh_authorized_keys`.
@@ -28,6 +28,20 @@ resource "proxmox_vm_qemu" "vm" {
   boot     = "c"
   bootdisk = "scsi0"
 
+  lifecycle {
+    prevent_destroy = true
+    ignore_changes = [
+      name,
+      cpu,
+      memory,
+      network,
+      ipconfig0,
+      ciuser,
+      sshkeys,
+      cicustom,
+    ]
+  }
+
   serial {
     id   = 0
     type = "socket"
@@ -23,8 +23,6 @@ vms = {
     data_disks = [
       {
         size_gb = 50
-        # optional: storage = "proxmox-tank-1"
-        # optional: slot = "scsi2"
       }
     ]
   }
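The `get_host_ip` / `get_primary_host_ip` additions in the script diffs above resolve a connection address the same way: prefer `ansible_host` from `_meta.hostvars`, and for a group fall back to its first member. A hedged sketch of that lookup, against an invented `ansible-inventory --list` sample (names and addresses are illustrative only):

```python
import json

# Invented `ansible-inventory --list` sample, for illustration only.
data = json.loads("""
{
  "_meta": {"hostvars": {"spacey": {"ansible_host": "198.51.100.7"},
                         "bare-host": {}}},
  "headscale": {"hosts": ["spacey"]}
}
""")

def host_ip(target):
    hostvars = data.get("_meta", {}).get("hostvars", {})
    if target in hostvars:
        # Direct host: ansible_host if set, else the inventory name itself.
        return hostvars[target].get("ansible_host", target)
    # Group: resolve via its first member.
    hosts = data.get(target, {}).get("hosts", [])
    if hosts:
        first = hosts[0]
        return hostvars.get(first, {}).get("ansible_host", first)
    return ""

print(host_ip("spacey"))     # direct host -> 198.51.100.7
print(host_ip("headscale"))  # group: first member's address -> 198.51.100.7
print(host_ip("bare-host"))  # no ansible_host set -> bare-host
```

This is why the rewritten `check_dns_configuration` callers work whether the inventory entry is an address (`ansible_host` unset, name echoed back) or a named host with `ansible_host` pointing at the public IP.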