Compare commits

10 commits

| Author | SHA1 | Message | Date |
| ------ | ---- | ------- | ---- |
|        | 5d92c7734f | immich notes | 2025-05-15 20:10:54 +02:00 |
|        | b035508c3e | stuff | 2025-05-15 12:25:08 +02:00 |
|        | 143ab7280d | some stuff on vaultwarden | 2024-01-14 13:01:14 +01:00 |
|        | 6764e6d094 | Stuff | 2023-12-23 16:59:51 +01:00 |
|        | d6b41b5e0f | Stuff | 2023-12-23 16:59:11 +01:00 |
|        | 91f9627a9b | Stuff | 2023-11-20 15:31:58 +01:00 |
| pablo  | 7a09b6f811 | Update 'ArgentoNAS/README.md' | 2023-11-02 15:14:10 +00:00 |
|        | 56685189ee | Some details on the NAS idea. | 2023-11-01 20:00:56 +01:00 |
| pablo  | 92d08fd29f | Bug on tailscaled hostinger | 2023-02-26 00:28:34 +01:00 |
| pablo  | a099c94848 | stuff | 2023-02-12 11:34:05 +01:00 |
12 changed files with 226 additions and 24 deletions

ArgentoNAS/README.md (new file, +23)

@@ -0,0 +1,23 @@
# ArgentoNAS
ArgentoNAS is my NAS server, deployed at my parents' place.
## Hardware
I'm looking at a humble desktop PC, starting out with a small NVMe SSD for the OS and 2x 4 TB HDDs for storage. I might expand at some point in the future.
I made this hardware selection in Neobyte: www.neobyte.es/configurador-pc?conf=1af724d758b67
Or this alternative in PCComponentes: https://www.pccomponentes.com/configurador/A1A888766
## OS
I'm gonna use TrueNAS just because it looks solid and is apparently what everyone out there is using.
## Videos
Here are some good videos explaining interesting stuff:
- Super in-depth explainer on ZFS RAID setups: https://www.youtube.com/watch?v=-AnkHc7N0zM
- How to replace failed drives in TrueNAS: https://www.youtube.com/watch?v=TvaK2I3LY68
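Since the storage plan is 2x 4 TB drives, the natural layout is a single ZFS mirror. TrueNAS sets this up from its web UI, but as a rough sketch of what it amounts to at the zpool level (the pool and device names are assumptions; check `lsblk` first):
```shell
# Check which block devices are the two 4 TB drives (names below are assumed)
lsblk -o NAME,SIZE,MODEL

# Create a two-disk mirror called "tank" (this is what the TrueNAS UI does under the hood)
zpool create tank mirror /dev/sda /dev/sdb

# Verify pool health and layout
zpool status tank
```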

Radicale/setup.md (new file, +37)

@@ -0,0 +1,37 @@
# Radicale
I'm tired of being held hostage by Google to have a calendar and contacts list.
I've looked at the options and decided to give Radicale (https://radicale.org) a shot as my self-hosted server, with DAVx5 (https://www.davx5.com) as my Android app.
## Installing server
I'll install the server on Frankie and redirect through Navaja.
* I'm following these instructions: https://radicale.org/v3.html#simple-5-minute-setup
I've crafted this nginx config:
```nginx
server {
    listen 80;
    server_name radicale.contrapeso.xyz;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name radicale.contrapeso.xyz;

    ssl_certificate         /certs/domain.cert.pem;
    ssl_certificate_key     /certs/private.key.pem;
    ssl_trusted_certificate /certs/intermediate.cert.pem;

    location / { # The trailing / is important!
        proxy_pass http://100.76.214.54:5232/radicale/; # The / is important!
        proxy_set_header X-Script-Name /radicale;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass_header Authorization;
    }
}
```
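On the Frankie side, the 5-minute setup roughly boils down to the following (a sketch: the storage folder, htpasswd path and user name are assumptions; `htpasswd` comes from apache2-utils):
```shell
# Install Radicale (per the 5-minute setup)
python3 -m pip install --upgrade radicale

# Quick test run, storing collections under ~/.var/lib/radicale/collections
python3 -m radicale --storage-filesystem-folder=~/.var/lib/radicale/collections

# For real use, create an htpasswd file so the server isn't wide open
# (path and user name are assumptions)
htpasswd -B -c /etc/radicale/users pablo
```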

@@ -0,0 +1,25 @@
# Automatic Porkbun Cert Renewal
I'm tired of manually setting up the certs on the Nginx server every 3 months.
I've found this tool to do it: https://github.com/porkbundomains/certbun
# How to deploy
1. On navaja, clone this repo: https://github.com/porkbundomains/certbun
2. Copy `config.json.example` to a `config.json` file.
3. Generate API keys following this: https://kb.porkbun.com/article/190-getting-started-with-the-porkbun-dns-api
4. Set the right paths for the cert files.
5. For the web server reload command, I simply trigger a docker compose down and up, since Nginx runs in a container.
6. Run it manually once to verify it all works fine.
7. Afterwards, cron it.
# Quirky issues
- Paths in the crontab entry should be absolute, otherwise funky shit happens (see the sketch below).
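A sketch of the crontab entry with everything spelled out as absolute paths. The entry point, install path, schedule and log file are all assumptions; check the certbun README for the real invocation:
```shell
# crontab -e (run as the user that owns the certbun checkout)
# Renew on the 1st of every month at 03:00 -- hypothetical script name and paths
0 3 1 * * /usr/bin/python3 /home/pablo/certbun/certbun.py /home/pablo/certbun/config.json >> /var/log/certbun.log 2>&1
```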

@@ -1,12 +0,0 @@
# Router RMQiP
The little things I always forget:
- It's accessed at 192.168.1.1
- The user is 1234
- The password is noesfacilvivirsinpi
# Set a fixed IP on the local network
- Go to the menu Network > LAN > DHCP Binding
- Paste the MAC of the device and the IP to assign

@@ -1,7 +0,0 @@
navaja pablo pass -> noesfacilvivirenunmundocentralizado
banky pablo pass -> noesfacilvivirenunmundocentralizado
umbrel pass -> noesfacilvivirenunmundocentralizado
oli pablo pass -> Cdcbvpt8
noesfacilvivirsinemail at gmail dot com -> noesfacilvivirsinpin

framework_screen_pains.md (new file, +39)

@@ -0,0 +1,39 @@
# How to get the home office monitor working
- Run the following commands
```shell
# Check the output and note the name of the display
xrandr --listmonitors
DISPLAY_NAME="write_the_name_here"

# Generate a "modeline" with cvt (syntax: cvt width height refreshrate)
cvt 1920 1080 59.80
# That prints something like:
#   1920x1080 59.79 Hz (CVT) hsync: 66.96 kHz; pclk: 172.50 MHz
#   Modeline "1920x1080_59.80"  172.50  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

# Register the new mode with xrandr (copy everything after "Modeline"):
xrandr --newmode "1920x1080_59.80" 172.50 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
# Then add it to the table of possible resolutions of the chosen output:
xrandr --addmode ${DISPLAY_NAME} 1920x1080_59.80

# The changes are lost after reboot; to set up the resolution persistently,
# create a ~/.xprofile file (see the sketch after this block).
```
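For reference, the persistent version as its own file. A sketch of `~/.xprofile`; `DP-1` is a placeholder output name, use whatever `xrandr --listmonitors` actually reports:
```shell
#!/bin/sh
# ~/.xprofile - re-register the custom mode on every login
# "DP-1" is a placeholder; replace with the real output name
xrandr --newmode "1920x1080_59.80" 172.50 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
xrandr --addmode DP-1 1920x1080_59.80
# Optionally select it right away:
# xrandr --output DP-1 --mode 1920x1080_59.80
```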

immich/setup.md (new file, +11)

@@ -0,0 +1,11 @@
# Immich setup
* I'm following this:
+ https://immich.app/docs/install/docker-compose
* I've installed the project in the Barracuda HDD
* Next I'm following this:
+ https://immich.app/docs/install/post-install
* Works just fine. I'll stick to the admin user for my own stuff
* Next, I set up networking. The usual reverse proxy. Works fine.
* Installed Android app, links easy.
* Okay, the only issue I had was that the backup from the GrapheneOS gallery to Immich was failing silently. I quickly suspected nginx limiting upload sizes (I had a video that was about 150 MB). I raised `client_max_body_size` to 2G in the `location` block of Immich's `server` entry (sketch below), and then everything works fine.
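A minimal sketch of what that nginx entry looks like with the size limit raised. The hostname, upstream address and port are assumptions; only `client_max_body_size` is the actual fix described above:
```nginx
server {
    listen 443 ssl;
    server_name immich.contrapeso.xyz;           # assumed hostname

    location / {
        client_max_body_size 2G;                 # the fix: nginx defaults to 1M
        proxy_pass http://192.168.1.10:2283;     # assumed Immich host and default port
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```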

mounting_with_sshfs.md (new file, +30)

@@ -0,0 +1,30 @@
If you ever get this error when mounting:
```
fuse: failed to open mountpoint for reading: Too many levels of symbolic links
```
The solution comes from this post: https://blog.luukhendriks.eu/2019/01/25/sshfs-too-many-levels-of-symbolic-links.html. Quoting the relevant bits:

I've found using sshfs for network mounts to be quite convenient. Especially on
my laptop, which I use in various places (i.e. outside of my own home, thus
outside of my own network): mounting `mydomain.nl:/some/path/on/my/server` will
be available to me everywhere, securely, because SSH.

However, in certain situations sshfs can throw an error that left me puzzled
for quite some time, multiple times already: "too many levels of symbolic links".

In my case, a fresh key pair on the server turned out to be the cause. The
sshfs was mounted by root (though as a normal user), but root had not connected
to the server after the key refresh. Ergo, the new fingerprint had not been seen
before. How this results in an error about symbolic links is beyond me, but it
did. The Arch wiki points this caveat out as well: "And most importantly, use
each sshfs mount at least once manually while root so the host's signature is
added to the /root/.ssh/known_hosts file."

Hope this saves someone from the headache it caused me.
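In practice the fix is to connect to the host as root once, so the new key lands in /root/.ssh/known_hosts, and then mount again. A sketch using the placeholder host and path from the quote above:
```shell
# One-off: accept the server's new host key as root
sudo ssh user@mydomain.nl true

# After that, the root-driven sshfs mount works again
sudo sshfs -o allow_other user@mydomain.nl:/some/path/on/my/server /mnt/server
```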

@@ -0,0 +1,39 @@
My notes on setting up a shitty 3-machine cluster to mess around with Proxmox clusters and HA.
## Router issues
Connecting three extra devices to the home network through wired connections is gonna get complicated because I'm running out of ports on my home router. From what I've read, I should buy a switch. There are "managed" switches (which offer config possibilities) and "unmanaged" or dumb switches that just... connect stuff. From what I've seen, I think I'm only gonna buy a dumb switch for now.
Today I tried to connect XQ1 to my network, but something is odd. When I tried to look for its IP in the DHCP server of my router, I couldn't find it listed there. I have noticed that the config page of the DHCP server mentions that the IP range goes from `192.168.1.128` to `192.168.1.254`. I also vaguely remember that there was some config thingie about IPs when configuring Proxmox during the install. I didn't pay any kind of attention when setting that up and just went ahead in full 'meh, whatever' style. So I probably fucked it up.
I'm gonna reconfigure Proxmox again and pay attention this time. I probably need to set those network details right in Proxmox so that the device is reachable on the network.
Okay, here's what I did:
- For the DNS server and Default Gateway fields: the right values can be found in the DHCP server section of the router's config webpage.
- As for the IP: I set an IP within the DHCP server's range. I saw in a video that this might become a problem, because if the DHCP server later assigns that same IP to another device a conflict can appear, but yeah, whatevah.
- Once everything is set up, the device is reachable at the IP that was configured in Proxmox EVEN THOUGH it doesn't appear in the DHCP device list. From the little I understand, the Proxmox box sets its own IP and does not rely on the router providing one for it, so that's the reason it doesn't appear there.
To make the Proxmox box reachable by name instead of by IP, I had to:
- Create an entry in the DNS server of the router
- Follow these instructions to add the router's DNS server so that Oli's Ubuntu would pick it up and use it (sketch after this list): https://askubuntu.com/questions/1280277/how-to-change-dns-server-permanently-on-ubuntu-20-04
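One way the permanent DNS change can look on an Ubuntu box that uses systemd-resolved (a sketch; the linked answer may go through netplan instead, and 192.168.1.1 as the router's DNS address is an assumption):
```shell
# Edit /etc/systemd/resolved.conf and set, under [Resolve]:
#   DNS=192.168.1.1
sudo nano /etc/systemd/resolved.conf

# Apply and check which DNS server is actually in use
sudo systemctl restart systemd-resolved
resolvectl status
```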
Today I decided I wanna change the IP of xq1 to move it outside of my router's DHCP range (sketch of the change after this list). My planned setup would be the following:
- xq1: 192.168.1.11
- xq2: 192.168.1.12
- xq3: 192.168.1.13
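A sketch of what changing a node's IP involves on Proxmox: the management address lives on the vmbr0 bridge in /etc/network/interfaces, and the node's own name-to-IP mapping in /etc/hosts (bridge name, netmask and gateway here are assumptions):
```shell
# 1. Edit /etc/network/interfaces and change the vmbr0 stanza, e.g.:
#        address 192.168.1.11/24
#        gateway 192.168.1.1
nano /etc/network/interfaces

# 2. Update the node's own entry in /etc/hosts to the new IP
nano /etc/hosts

# 3. Apply the change (or simply reboot the node)
systemctl restart networking
```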
## Node name issues
Okay, another fuck up: all my nodes have the same hostname and apparently changing the hostname of an existing node is a complete and utter mess.
I'm gonna reinstall proxmox once again on each node and add a proper hostname on each.
## Links
- Full Proxmox Course: https://www.youtube.com/watch?v=5j0Zb6x_hOk&list=PLT98CRl2KxKHnlbYhtABg6cF50bYa8Ulo&index=1&pp=iAQB
- Add DNS server in Ubuntu permanently: https://askubuntu.com/questions/1280277/how-to-change-dns-server-permanently-on-ubuntu-20-04

@@ -24,4 +24,17 @@ sudo tailscale up
# your credentials.
tailscale ip -4
```
## TUN
The first time I tried to run tailscale on a Hostinger VPS, I got the following
error: `failed to connect to local tailscaled; it doesn't appear to be running
(sudo systemctl start tailscaled ?)`.
I read something about `TUN` on a forum, and realised that the VPS panel in
Hostinger has a switch titled `TUN/TAP Adapter: It's a virtual network adapter
that will allow you to set up a VPN on your server.`. I activated it (it comes
off by default, see the check below) and that did the trick.
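Tailscale needs the /dev/net/tun device unless it is run in userspace-networking mode, so a quick sanity check after flipping the panel switch:
```shell
# The TUN device should exist once the TUN/TAP switch is enabled
ls -l /dev/net/tun    # expect something like: crw-rw-rw- 1 root root 10, 200 ... /dev/net/tun

# Then start the daemon and bring tailscale up again
sudo systemctl start tailscaled
sudo tailscale up
```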

@@ -1,3 +0,0 @@
noesfacilvivirenunmundocentralizado

@@ -10,4 +10,11 @@ To prevent users from registering freely, activate the following env var:
To activate the admin panel, you need to add the admin token as an env var,
like this: `ADMIN_TOKEN=<the-very-safe-token>`. Afterwards, you can enter the
admin panel by adding `/admin` in the URL.
## How to create new users
1. Go to the `docker-compose.yaml` and look for the env var `SIGNUPS_ALLOWED=false`.
2. Set it to `true` and restart the container.
3. Have the user go to the web UI and create an account for themselves.
4. Harden the instance again by setting the env var back to `false` and restarting the container (sketch below).
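A sketch of the toggle as shell commands, assuming the compose file sits in the current directory and the service is named `vaultwarden` (both assumptions):
```shell
# Temporarily allow signups
sed -i 's/SIGNUPS_ALLOWED=false/SIGNUPS_ALLOWED=true/' docker-compose.yaml
docker compose up -d vaultwarden

# ...the new user registers through the web UI...

# Lock it down again
sed -i 's/SIGNUPS_ALLOWED=true/SIGNUPS_ALLOWED=false/' docker-compose.yaml
docker compose up -d vaultwarden
```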