diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..8f24f11 --- /dev/null +++ b/.gitignore @@ -0,0 +1,14 @@ +# Deployment configuration (contains sensitive server details) +deploy.config + +# OS files +.DS_Store +Thumbs.db + +# Editor files +.vscode/ +.idea/ +*.swp +*.swo +*~ + diff --git a/README.md b/README.md index b448f02..32ee0c7 100644 --- a/README.md +++ b/README.md @@ -8,4 +8,6 @@ The `index.html` is ready in the `public` folder. ## How to deploy -Somehow get the `public` folder behind a webserver manually and sort out DNS. +1. Copy `deploy.config.example` to `deploy.config` +2. Fill in your server details in `deploy.config` (host, user, remote path) +3. Run `./deploy.sh` to sync the `public` folder to your remote webserver diff --git a/deploy.config.example b/deploy.config.example new file mode 100644 index 0000000..1c84f53 --- /dev/null +++ b/deploy.config.example @@ -0,0 +1,21 @@ +# Deployment Configuration +# Copy this file to deploy.config and fill in your server details +# deploy.config is gitignored to keep your credentials safe + +# Remote server hostname or IP address +REMOTE_HOST="example.com" + +# SSH username for the remote server +REMOTE_USER="username" + +# Remote path where the website should be deployed +# This should be the directory served by your webserver (e.g., /var/www/html, /home/username/public_html) +REMOTE_PATH="/var/www/html" + +# Optional: Path to SSH private key (if not using default ~/.ssh/id_rsa) +# Leave empty to use default SSH key +SSH_KEY="" + +# Optional: SSH port (defaults to 22 if not specified) +# SSH_PORT="22" + diff --git a/deploy.sh b/deploy.sh new file mode 100755 index 0000000..5cf9692 --- /dev/null +++ b/deploy.sh @@ -0,0 +1,34 @@ +#!/bin/bash + +# Deployment script for pablohere website +# This script syncs the public folder to a remote webserver + +set -e # Exit on error + +# Load deployment configuration +if [ ! -f "deploy.config" ]; then + echo "Error: deploy.config file not found!" + echo "Please copy deploy.config.example to deploy.config and fill in your server details." + exit 1 +fi + +source deploy.config + +# Validate required variables +if [ -z "$REMOTE_HOST" ] || [ -z "$REMOTE_USER" ] || [ -z "$REMOTE_PATH" ]; then + echo "Error: Required variables not set in deploy.config" + echo "Please ensure REMOTE_HOST, REMOTE_USER, and REMOTE_PATH are set." + exit 1 +fi + +# Use rsync to sync files +echo "Deploying public folder to $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH" +rsync -avz --delete \ + --exclude='.git' \ + --exclude='.DS_Store' \ + $SSH_OPTS \ + public/ \ + $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH + +echo "Deployment complete!" + diff --git a/public/index.html b/public/index.html index 101f398..1c18d0d 100644 --- a/public/index.html +++ b/public/index.html @@ -1,164 +1,230 @@ - + - Pablo here - - - + Pablo here + + + -
-

- Hi, Pablo here -

-

- Welcome to my website. Here I discuss thoughts and ideas. This is mostly professional. -

-
-

What you'll find here:

- -
-
-

About me

-

A few facts you might care about:

- -
-
-
-

Contact

-

You can contact me on:

- -

If you are looking for my CV, no need to reach out, you can fetch it - yourself here.

-

Good reason to reach out include:

- -

Bad reasons to reach out include:

- -
-
-
-

My projects

-

Some of the projects I've shared publicly:

- -

There are also some other projects that I generally keep private but might disclose under the right - circumstances. Some notable hints:

- -
-
-
-

Writings

-

Sometimes I like to jot down ideas and drop them here.

- -
-
- +
+

Hi, Pablo here

+

+ Welcome to my website. Here I discuss thoughts and ideas. This is mostly + professional. +

+
+

What you'll find here:

+ +
+
+

About me

+

A few facts you might care about:

+ +
+
+
+

Contact

+

You can contact me on:

+ +

+ If you are looking for my CV, no need to reach out, + you can fetch it yourself here. +

+

Good reasons to reach out include:

+ +

Bad reasons to reach out include:

+ +
+
+
+

My projects

+

Some of the projects I've shared publicly:

+ +

+ There are also some other projects that I generally keep private but + might disclose under the right circumstances. Some notable hints: +

+ +
+
+
+

Writings

+

Sometimes I like to jot down ideas and drop them here.

+ +
+
+ \ No newline at end of file diff --git a/public/keybase.txt b/public/keybase.txt new file mode 100644 index 0000000..43ffead --- /dev/null +++ b/public/keybase.txt @@ -0,0 +1,56 @@ +================================================================== +https://keybase.io/pablomartincalvo +-------------------------------------------------------------------- + +I hereby claim: + + * I am an admin of https://pablohere.contrapeso.xyz + * I am pablomartincalvo (https://keybase.io/pablomartincalvo) on keybase. + * I have a public key ASDgHxztDlU_R4hjxbkO21-rS4Iv1gABa3BPb_Aff7aNAgo + +To do so, I am signing this object: + +{ + "body": { + "key": { + "eldest_kid": "0120d9bde13d9012e681cef2edd668d70426f1f6ef69ce7dfae20b404096eca5b06f0a", + "host": "keybase.io", + "kid": "0120e01f1ced0e553f478863c5b90edb5fab4b822fd600016b704f6ff01f7fb68d020a", + "uid": "8e71277fbc0fb1fea28d60308f495d19", + "username": "pablomartincalvo" + }, + "merkle_root": { + "ctime": 1755635067, + "hash": "4f91af0b9c674e0f1d74a7cfad7abd15a7065cded92b96ac8a6abeb5c8553318599aa1bf7b065a3312e303506256b729b8b60b3a5dd06b68694423f4341a6a14", + "hash_meta": "6472dbf2ed33341fb30b6a0c5c5c7fb39c219dd0ffd03c6e08b68c788e0de60a", + "seqno": 27031070 + }, + "service": { + "entropy": "LEFJJ4FMmlJQWPPFEO4xHE5y", + "hostname": "pablohere.contrapeso.xyz", + "protocol": "https:" + }, + "type": "web_service_binding", + "version": 2 + }, + "client": { + "name": "keybase.io go client", + "version": "6.5.1" + }, + "ctime": 1755635082, + "expire_in": 504576000, + "prev": "37f12270050ab037897ccf6ef9451b1911cb505eca7c3842993b0b8925bc79b8", + "seqno": 31, + "tag": "signature" +} + +which yields the signature: + +hKRib2R5hqhkZXRhY2hlZMOpaGFzaF90eXBlCqNrZXnEIwEg4B8c7Q5VP0eIY8W5Dttfq0uCL9YAAWtwT2/wH3+2jQIKp3BheWxvYWTESpcCH8QgN/EicAUKsDeJfM9u+UUbGRHLUF7KfDhCmTsLiSW8ebjEIAnIWTmufZ017e9WLdI1LhKBPaZ3HzmTrgyASDvY3PwoAgHCo3NpZ8RA9a3xgkSTU6Ht7M7DCsy4ClMmoWFtDEqzX9/dqskeoH2DrJUZYVymBQE1nyB0p1GuXiZA1cP5WY5SDURWZ5bBC6hzaWdfdHlwZSCkaGFzaIKkdHlwZQildmFsdWXEIEJZ4g4HC5qXcqbFf6sJ8XuZyMtoppazFqr1zPu0LH5co3RhZ80CAqd2ZXJzaW9uAQ== + +And finally, I am proving ownership of this host by posting or +appending to this document. + +View my publicly-auditable identity here: https://keybase.io/pablomartincalvo + +================================================================== \ No newline at end of file diff --git a/public/my_cv.pdf b/public/my_cv.pdf index 27038a6..f3422a0 100644 Binary files a/public/my_cv.pdf and b/public/my_cv.pdf differ diff --git a/public/static/homophobic-socialist-drug-dealer.png b/public/static/homophobic-socialist-drug-dealer.png new file mode 100644 index 0000000..f013ebc Binary files /dev/null and b/public/static/homophobic-socialist-drug-dealer.png differ diff --git a/public/styles.css b/public/styles.css index 690b8b1..e7908b3 100644 --- a/public/styles.css +++ b/public/styles.css @@ -7,7 +7,8 @@ body { h1, h2, -h3 { +h3, +h4 { text-align: center; } diff --git a/public/writings/a-degraded-pool-with-a-healthy-disk.html b/public/writings/a-degraded-pool-with-a-healthy-disk.html new file mode 100644 index 0000000..521b7cc --- /dev/null +++ b/public/writings/a-degraded-pool-with-a-healthy-disk.html @@ -0,0 +1,133 @@ + + + + + Pablo here + + + + + + + +
+

+ Hi, Pablo here +

+

back to home

+
+

A degraded pool with a healthy disk

+

Part 2 of 3 in my "First ZFS Degradation" series. See also Part 1: The Setup and Part 3: The Fix.

+

The "Oh Shit" Moment

+

I wasn't even looking for trouble. I was clicking around the Proxmox web UI, exploring some storage views I hadn't noticed before, when I saw it: my ZFS pool was in DEGRADED state.

+

I opened the details. One of my two mirrored drives was listed as FAULTED.

+

I was very surprised. The box and its disks were brand new, with not even three months of runtime on them; I wasn't expecting hardware issues to come at me that fast. I SSH'd into the server and ran the command that would become my best friend over the next 24 hours:

+
zpool status -v proxmox-tank-1
+

No glitch. The pool was degraded. The drive had racked up over 100 read errors, 600+ write errors, and 129 checksum errors. ZFS had given up on it.

+
  NAME                                 STATE     READ WRITE CKSUM
+  proxmox-tank-1                       DEGRADED     0     0     0
+    mirror-0                           DEGRADED     0     0     0
+      ata-ST4000NT001-3M2101_WX11TN0Z  FAULTED    108   639   129  too many errors
+      ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

The good news: errors: No known data errors. ZFS was serving all my data from the healthy drive. Nothing was lost yet.

+

The bad news: I was running on a single point of failure. If AGAPITO2 (the still-healthy drive) decided to have a bad day too, I'd be in real trouble.

+

I tried the classic IT move: rebooting. The system came back up and ZFS immediately started trying to resilver (rebuild) the degraded drive. But within minutes, the errors started piling up again and the resilver stalled.

+

Time to actually figure out what was wrong.

+

The Diagnostic Toolbox

+

When a ZFS drive acts up, you have two main sources of truth: what the kernel sees happening at the hardware level, and what the drive itself reports about its health. These can be checked with dmesg and smartctl, respectively.

+

dmesg: The Kernel's Diary

+

The Linux kernel maintains a ring buffer of messages about hardware events, driver activities, and system operations. The dmesg command lets you read it. For disk issues, you want to grep for SATA-related keywords:

+
dmesg -T | egrep -i 'ata[0-9]|sata|reset|link|i/o error' | tail -100
+

The -T flag gives you human-readable timestamps instead of seconds-since-boot.

+

What I saw was... weird. Here's an excerpt:

+
[Fri Jan  2 22:25:13 2026] ata4.00: exception Emask 0x50 SAct 0x70220001 SErr 0xe0802 action 0x6 frozen
+[Fri Jan  2 22:25:13 2026] ata4.00: irq_stat 0x08000000, interface fatal error
+[Fri Jan  2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED
+[Fri Jan  2 22:25:13 2026] ata4: hard resetting link
+[Fri Jan  2 22:25:14 2026] ata4: SATA link down (SStatus 0 SControl 300)
+

Let me translate: the kernel tried to read from the drive on ata4, got a "fatal error," and responded by doing a hard reset of the SATA link. Then the link went down entirely. The drive just... disappeared.

+

But it didn't stay gone. A few seconds later:

+
[Fri Jan  2 22:25:24 2026] ata4: link is slow to respond, please be patient (ready=0)
+[Fri Jan  2 22:25:24 2026] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
+

The drive came back! At full speed! But then...

+
[Fri Jan  2 22:25:29 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec)
+[Fri Jan  2 22:25:29 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
+[Fri Jan  2 22:25:29 2026] ata4: limiting SATA link speed to 3.0 Gbps
+

It failed again. The kernel, trying to be helpful, dropped the link speed from 6.0 Gbps to 3.0 Gbps. Maybe a slower speed would be more stable?

+

It wasn't. The pattern repeated: connect, fail, reset, reconnect at a slower speed. 6.0 Gbps, then 3.0 Gbps, then 1.5 Gbps. Eventually:

+
[Fri Jan  2 22:27:06 2026] ata4.00: disable device
+

The kernel gave up entirely.

+

This isn't what a dying drive looks like. A dying drive throws read errors on specific bad sectors. This drive was connecting and disconnecting like someone was jiggling the cable. The kernel was calling it "interface fatal error", emphasis on interface.
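A side note in case you're following along on your own box: the kernel talks about ports like ata4, while ZFS talks about /dev/disk/by-id serial names, so it's worth double-checking that they refer to the same physical drive. The by-path symlinks bridge the two. This is a generic sketch; the exact PCI path strings will look different on your system:

+# Map ATA ports to block devices; entries look roughly like pci-0000:00:17.0-ata-4 -> ../../sdb
+ls -l /dev/disk/by-path/ | grep -i ata
+
+# Then match the block device against the serial-based IDs that zpool status shows
+ls -l /dev/disk/by-id/ | grep -i wx11tn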

+

smartctl: Asking the Drive Directly

+

Every modern hard drive has S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) — basically a built-in health monitor. The smartctl command lets you get info out of it.

+

First, the overall health check:

+
smartctl -H /dev/sdb
+
SMART overall-health self-assessment test result: PASSED
+

Okay, that looks great. But if the disk is healthy, what the hell is going on? And where were all those errors ZFS was spotting coming from?

+

Let's dig deeper with the extended info:

+
smartctl -x /dev/sdb
+

The key attributes I was looking for:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
AttributeValueWhat it means
Reallocated_Sector_Ct0Bad sectors the drive has swapped out. Zero is good.
Current_Pending_Sector0Sectors waiting to be checked. Zero is good.
UDMA_CRC_Error_Count0Data corruption during transfer. Zero is good.
Number of Hardware Resets39Times the connection has been reset. Uh...
+

All the sector-level health metrics looked perfect. No bad blocks, no pending errors, no CRC errors. The drive's magnetic platters and read/write heads were fine.

+

But 39 hardware resets? That's not normal. That's the drive (or its connection) getting reset nearly 40 times.

+

I ran the short self-test to be sure:

+
smartctl -t short /dev/sdb
+# Wait a minute...
+smartctl -l selftest /dev/sdb
+
# 1  Short offline       Completed without error       00%
+

The drive passed its own self-test. The platters spin, the heads move, the firmware works, and it can read its own data just fine.

+

Hypothesis

+

At this point, the evidence was pointing clearly away from "the drive is dying" and toward "something is wrong with the connection."

+

What the kernel logs told me: the drive keeps connecting and disconnecting. Each time it reconnects, the kernel tries slower speeds. Eventually it gives up entirely. This is what you see with an unstable physical connection.

+

What SMART told me: the drive itself is healthy. No bad sectors, no media errors, no signs of wear. But there have been dozens of hardware resets — the connection keeps getting interrupted.

+

The suspects, in order of likelihood:

+
    +
  1. SATA data cable: the most common culprit for intermittent connection issues. Cables go bad, or weren't seated properly in the first place.
  2. +
  3. Power connection: if the drive isn't getting stable power, it might brown out intermittently.
  4. +
  5. SATA port on the motherboard: less likely, but possible.
  6. +
7. PSU: power supply issues could affect the power rail feeding the drive. Unlikely, since both disks were fed from the same cable run, but still an option.
  8. +
+

Given that I had just built this server a few weeks earlier, and a good part of that happened after midnight... I was beginning to suspect that I simply hadn't plugged in the disk properly.

+

The Verdict

+

I was pretty confident now: the drive was fine, but the connection was bad. The prime suspect was the SATA data cable, most likely just not seated properly.

+

The fix would require shutting down the server, opening the case, and reseating (or replacing) cables. Before doing that, I wanted to take the drive offline cleanly and document everything.

+

In Part 3, I'll walk through exactly how I fixed it: the ZFS commands, the physical work, and the validation to make sure everything was actually okay afterward.

+

Continue to Part 3: The Fix

+

back to home

+
+
+ + + + + diff --git a/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html b/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html new file mode 100644 index 0000000..abefd1d --- /dev/null +++ b/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html @@ -0,0 +1,188 @@ + + + + Pablo here + + + + + + +
+

Hi, Pablo here

+

back to home

+
+
+

A note for the future: the tax bleeding in 2025

+

+ I hate taxes deeply. I fell through the rabbit hole of libertarian and + anarcho-capitalist ideas some years ago, and taxes have been repulsive + to me ever since. I go to great lengths to not pay them, and feel + deeply hurt every time they sting my wallet against my will.

+

+ I know life goes by fast, and what today is vivid in your memory fades + away bit by bit until it's gone. I'm truly hoping that, some day in + the future, the world will have changed for the better and people won't + be paying as much tax as we do today in the West. Since in that + bright, utopian future I'm dreaming of I might have forgotten how + bad things were on this matter in 2025, I've decided to make a + little entry here estimating how much tax I'm theoretically bleeding + on a yearly basis right now. So that we can someday look back in time + and wonder: "how the fuck did we tolerate that pillaging".

+

Inventory

+

+ Before going hard into the number crunching let's list all the tax + items I'm aware of being subject to: +

+ +

+ There may be some other small, less frequent taxes that I'm not + considering. These are the ones that will hit most people in my + country. +

+

The numbers

+

+ Okay, let's compute the hideous bill. I'll use a hypothetical + profile that roughly matches mine, with a few assumptions along the + way.

+ +

With those clear, let's see the actual figures:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Tax€/year
Income Tax (IRPF)22,401 €
Social Security (worker + employer) + 25,375 € + (worker 4,445 € + employer 20,930 €) +
VAT (blended basket)5,250 €
Real Estate Tax (IBI)1,000 €
Vehicle Tax (1.5 vehicles)225 €
Wealth Transfer (10% home, spread 50y)1,000 €
Inheritance (7% of 250k, spread 50y)350 €
Total55,602 €
+

+ So there you go. A peaceful existence as a tech professional living a + normal life leads to bleeding at least 55K€ per year, all while + getting an 85K€ salary. The tax rate sits at a wonderful 64%. How far + away is this from hardcore USSR-grade communism? +

+

+ And this is generous, since I didn't model (1) what gets stolen + through inflation diluting savings and (2) any capital gains tax this + profile might end up paying on whatever investments he makes with + his savings.

+

+ Then you'll see mainstream media puppets discussing why young people + don't have children. As if it were some kind of mystery. They're being + robbed of their children's bread left and right, while getting hypnotized + into believing that protecting themselves against this outrageous + robbery is somehow morally despicable.

+

Motherfuckers.

+
+

back to home

+
+
+ + diff --git a/public/writings/fixing-a-degraded-zfs-mirror.html b/public/writings/fixing-a-degraded-zfs-mirror.html new file mode 100644 index 0000000..e8f1f0a --- /dev/null +++ b/public/writings/fixing-a-degraded-zfs-mirror.html @@ -0,0 +1,188 @@ + + + + + Pablo here + + + + + + + +
+

+ Hi, Pablo here +

+

back to home

+
+

Fixing a Degraded ZFS Mirror: Reseat, Resilver, and Scrub

+

Part 3 of 3 in my "First ZFS Degradation" series. See also Part 1: The Setup and Part 2: Diagnosing the Problem.

+

The Game Plan

+

By now I was pretty confident about what was wrong: not a dying drive, but a flaky SATA connection. The fix should be straightforward. Just take the drive offline, shut down, reseat the cables, bring it back up, and let ZFS heal itself.

+

But I wanted to do this methodically. ZFS is forgiving, but I didn't want to make things worse by rushing.

+

Here was my plan:

+
    +
  1. Take the faulty drive offline in ZFS (tell ZFS "stop trying to use this drive")
  2. +
  3. Power down the server
  4. +
  5. Open the case, inspect and reseat cables
  6. +
  7. Boot up, verify the drive is detected
  8. +
  9. Bring the drive back online in ZFS
  10. +
  11. Let the resilver complete
  12. +
  13. Run a scrub to verify data integrity
  14. +
  15. Check SMART one more time
  16. +
+

Let's walk through each step.

+

Step 1: Taking the Drive Offline

+

Before touching hardware, I wanted ZFS to stop trying to use the problematic drive.

+

First, I set up some variables to avoid typos with that long disk ID:

+
DISKID="ata-ST4000NT001-3M2101_WX11TN0Z"
+DISKPATH="/dev/disk/by-id/$DISKID"
+

Then I took it offline:

+
zpool offline proxmox-tank-1 "$DISKID"
+

Checking the status afterward:

+
zpool status -v proxmox-tank-1
+
  NAME                                 STATE     READ WRITE CKSUM
+  proxmox-tank-1                       DEGRADED     0     0     0
+    mirror-0                           DEGRADED     0     0     0
+      ata-ST4000NT001-3M2101_WX11TN0Z  OFFLINE    108   639   129
+      ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

The state changed from FAULTED to OFFLINE. ZFS knows I intentionally took it offline rather than it failing on its own. The error counts are still there as a historical record, but ZFS isn't actively trying to use the drive anymore.

+

Time to shut down and get my hands dirty.

+

Step 2: Opening the Case

+

I powered down the server and opened up the Fractal Node 804. This case has a lovely design with drive bays accessible from the side. No reaching into weird corners of the case: just unscrew a couple of screws, slide the drive bay out, and there the drives are, handy and reachable.

+

I located AGAPITO1 (I had handwritten labels on the drives, lesson learned after many sessions of playing "which drive is which") and inspected the connections.

+

Here's the honest truth: everything looked fine. The SATA data cable was plugged in. The power connector was plugged in. Nothing was obviously loose or damaged. There was a bit of tension in the cable as it moved from one area of the case (where the motherboard is) to the drives area, but I really didn't think that was affecting the connection to either the drive or the motherboard itself.

+

But "looks fine" doesn't mean "is fine". So I did a full reseat:

+ +

I made sure each connector clicked in solidly. Then I closed up the case and hit the power button.

+

Step 3: Verifying Detection

+

The server booted up. Would Linux see the drive?

+
ls -l /dev/disk/by-id/ | grep WX11TN0Z
+
lrwxrwxrwx 1 root root  9 Jan  2 23:15 ata-ST4000NT001-3M2101_WX11TN0Z -> ../../sdb
+

The drive was there, mapped to /dev/sdb.

+

I opened a second terminal and started watching the kernel log in real time:

+
dmesg -Tw
+

This would show me immediately if the connection started acting flaky again. For now, it was quiet, showing just normal boot messages, the drive being detected successfully, etc. Nothing alarming.

+

Step 4: Bringing It Back Online

+

Moment of truth. I told ZFS to start using the drive again:

+
zpool online proxmox-tank-1 "$DISKID"
+

Immediately checked the status:

+
zpool status -v proxmox-tank-1
+
  pool: proxmox-tank-1
+ state: DEGRADED
+status: One or more devices is currently being resilvered.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Fri Jan  2 23:17:35 2026
+        0B resilvered, 0.00% done, no estimated completion time
+
+    NAME                                 STATE     READ WRITE CKSUM
+    proxmox-tank-1                       DEGRADED     0     0     0
+      mirror-0                           DEGRADED     0     0     0
+        ata-ST4000NT001-3M2101_WX11TN0Z  DEGRADED     0     0     0  too many errors
+        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

Two things to notice: the drive's error counters are now at zero (we're starting fresh), and ZFS immediately started resilvering. It still shows "too many errors" as the reason for the degraded state, but that's historical: ZFS remembers why the drive was marked bad before.

+

I kept watching both the status and the kernel log. No errors, no link resets.

+

Step 5: The Resilver

+

Resilvering is ZFS's term for rebuilding redundancy. Copying data from the healthy drive to the one that fell behind. In my case, the drive had been desynchronized for who knows how long (the pool had drifted 524GB out of sync before I noticed), so there was a lot to copy.

+

I shut down my VMs to reduce I/O contention and let the resilver have the disk bandwidth. Progress:

+
scan: resilver in progress since Fri Jan  2 23:17:35 2026
+      495G / 618G scanned, 320G / 618G issued at 100M/s
+      320G resilvered, 51.78% done, 00:50:12 to go
+

The kernel log stayed quiet the whole time. Everything was indicating the cable reseat had worked.
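If you want to babysit a resilver the lazy way, something like this works; just a convenience sketch, nothing ZFS-specific about it:

+# Terminal 1: refresh the pool status every 30 seconds
+watch -n 30 zpool status -v proxmox-tank-1
+
+# Terminal 2: follow the kernel log, filtered down to SATA-ish noise
+dmesg -Tw | grep --line-buffered -Ei 'ata[0-9]|sata|reset|i/o error'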

+

I went to bed and let it run overnight. The next morning:

+
scan: resilvered 495G in 01:07:58 with 0 errors on Sat Jan  3 00:25:33 2026
+

495 gigabytes resilvered in about an hour, zero errors. But the pool still showed DEGRADED with a warning about an "unrecoverable error." I was confused about this at first, but some research cleared it up: ZFS is cautious and wants human acknowledgement before declaring everything healthy again.

+
zpool clear proxmox-tank-1 ata-ST4000NT001-3M2101_WX11TN0Z
+

This command clears the error flags. Immediately:

+
  pool: proxmox-tank-1
+ state: ONLINE
+  scan: resilvered 495G in 01:07:58 with 0 errors on Sat Jan  3 00:25:33 2026
+
+    NAME                                 STATE     READ WRITE CKSUM
+    proxmox-tank-1                       ONLINE       0     0     0
+      mirror-0                           ONLINE       0     0     0
+        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0     0
+        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

Damn, seeing this felt nice.

+

Step 6: The Scrub

+

A resilver copies data to bring the drives back in sync, but it doesn't verify that all the existing data is still good. For that, you run a scrub. ZFS reads every block on the pool, verifies checksums, and repairs anything that doesn't match.

+
zpool scrub proxmox-tank-1
+

I let this run while I brought my VMs back up (scrubs can run in the background without blocking normal operations, though performance takes a hit). A few hours later:

+
scan: scrub repaired 13.0M in 02:14:22 with 0 errors on Sat Jan  3 11:03:54 2026
+
+    NAME                                 STATE     READ WRITE CKSUM
+    proxmox-tank-1                       ONLINE       0     0     0
+      mirror-0                           ONLINE       0     0     0
+        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0   992
+        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

Interesting. The scrub repaired 13MB of data and found 992 checksum mismatches on AGAPITO1. From what I read, checksum errors are typically a sign of the disk being in terrible shape and needing a replacement ASAP. That sounds scary, but I took the risk and assumed those were blocks that had been written incorrectly (or not at all) during the period when the connection was flaky, and not an issue with the disk itself. ZFS detected the bad checksums and healed them using the good copies from AGAPITO2.

+

I cleared the errors again and the pool was clean:

+
zpool clear proxmox-tank-1 ata-ST4000NT001-3M2101_WX11TN0Z
+

Step 7: Final Validation with SMART

+

One more check. I wanted to see if SMART had anything new to say about the drive after all that activity:

+
smartctl -x /dev/sdb | egrep -i 'overall|Reallocated|Pending|CRC|Hardware Resets'
+
SMART overall-health self-assessment test result: PASSED
+  5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
+197 Current_Pending_Sector  -O--C-   100   100   000    -    0
+199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
+0x06  0x008  4              41  ---  Number of Hardware Resets
+

Still passing. The hardware reset count went from 39 to 41 — just the reboots I did during this process.

+

For completeness, I ran the long self-test. The short test only takes a minute and does basic checks; the long test actually reads every sector on the disk, which for a 4TB drive takes... a while.

+
smartctl -t long /dev/sdb
+

The estimated time was about 6 hours. In practice, it took closer to 12. Running VMs in parallel probably didn't help.
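If you get impatient, you can poll how far along the test is; the drive reports a rough percentage remaining. A small sketch (the exact wording of the status line varies between drives):

+# The "Self-test execution status" section shows the percentage of the test remaining
+smartctl -c /dev/sdb | grep -A 2 -i 'self-test execution status'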

+

But eventually:

+
SMART Self-test log structure revision number 1
+Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
+# 1  Extended offline    Completed without error       00%      1563         -
+# 2  Short offline       Completed without error       00%      1551         -
+# 3  Short offline       Completed without error       00%      1462         -
+

The extended test passed. Every sector on the disk is readable. The drive is genuinely healthy — it was just the connection that was bad.

+

Lessons Learned

+ +

I'm happy I got to practice recovering from a faulty disk on such a tiny issue. I learned a lot fixing it, and I'm now even happier than before that I decided to go for this ZFS pool setup.
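The main follow-up I'm planning is to stop relying on accidentally spotting a degraded pool in the web UI. ZFS ships an event daemon (ZED) that can notify you on state changes, and even a dumb cron job gets you most of the way there. A rough sketch of the cron approach (the mail command is a placeholder; wire it up to whatever alerting you actually use):

+#!/bin/bash
+# zfs-health-check.sh (hypothetical): run from cron, nag me if any pool is unhealthy
+# `zpool status -x` prints "all pools are healthy" when there is nothing to report
+STATUS=$(zpool status -x)
+if [ "$STATUS" != "all pools are healthy" ]; then
+    echo "$STATUS" | mail -s "ZFS pool problem on $(hostname)" me@example.com
+fi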

+

Quick Reference: The Commands

+

For future me (and anyone else who ends up here with a degraded pool):

+
# Check pool status
+zpool status -v <pool>
+
+# Watch kernel logs in real time
+dmesg -Tw
+
+# Check SMART health
+smartctl -H /dev/sdX
+smartctl -x /dev/sdX
+
+# Take a drive offline before physical work
+zpool offline <pool> <disk>
+
+# Bring a drive back online
+zpool online <pool> <disk>
+
+# Clear error flags after recovery
+zpool clear <pool> <disk>
+
+# Run a scrub to verify all data
+zpool scrub <pool>
+
+# Run SMART self-tests
+smartctl -t short /dev/sdX  # Quick test (~1 min)
+smartctl -t long /dev/sdX   # Full surface scan (hours)
+smartctl -l selftest /dev/sdX  # Check test results
+

Thanks for reading! This was Part 3: The Fix. You might also enjoy Part 1: The Setup and Part 2: Diagnosing the Problem.

+

back to home

+
+
+ + + + + diff --git a/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html b/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html new file mode 100644 index 0000000..71ec88f --- /dev/null +++ b/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html @@ -0,0 +1,138 @@ + + + + + Pablo here + + + + + + +
+

Hi, Pablo here

+

back to home

+
+
+

Gresham's Law has nothing to do with Bitcoin

+

+ This is going to be a thorough explanation for a simple thing, but we + will take it slow since this topic somehow causes loads of confusion. +

+

+ Okay, so there are a lot of people in Bitcoin circles who talk about + Gresham's + Law. They often say, “Gresham's Law states that bad money drives out + good money”, then relate it to Bitcoin and the USD, and finally + proceed to reason all sorts of things on top of that. But here's + some very much needed clarification: Gresham's law has nothing to do + with Bitcoin's relationship to the USD. In fact, it has + nothing to do with Bitcoin, or with the current USD for that matter.

+ +

+ Gresham's Law is relevant to a very specific type of monetary system: + when we used coins that contained precious metals (spoiler: we don't + live in that period of history anymore). The law states that bad money + drives out good money, but what a lot of Bitcoiners seem to miss is + the actual meaning of “good” and “bad” in this context. People tend to + interpret “good” and “bad” as meaning “hard” and "easy" money, so they + reason something like: “Because Bitcoin is harder than the USD, + Gresham's law applies here.” But that is not what Gresham's law is + about at all. +

+ +

+ In the context of Gresham's law, “good” and “bad” refer to face value + versus commodity value. That doesn't ring a bell? Let me explain: +

+ +

+ Imagine a magic land where there is only one type of coin. There's no + other money — just this one coin. These coins state on their face that + they contain one gram of gold, and right now, they really do contain + one gram of gold. Everyone uses it, and everyone is happy. There's no + “bad” money, no “good” money — it's all nice and simple.

+ +

Now, let's spice it up a bit.

+ +

+ After some time, a cheeky bastard (typically, a king) comes along and + starts making coins that look exactly like the original coins. I'll + call these the bad coins. The original coins will be the good coins. + Both types of coins say on them “one gram of gold,” but the bad coins + only have half a gram of gold actually in them (hence why they are + bad). +

+ +

+ So, to recap:
+ - Good coins: one gram of gold on the coin, and actually one gram of + gold inside.
+ - Bad coins: one gram of gold on the coin, but only 0.5 grams of gold + inside. +

+ +

This is where Gresham's Law applies.

+ +

+ People in this coiny fantasy land are not stupid — they know that the + gold content is what matters. At some point, someone will realize the + bad coins don't have as much gold as they claim and will develop a + preference for the good ones. So, if I'm John the Blacksmith and I + want to buy some iron, and I have a stash of coins — some good, some + bad — I would rather keep the good coins and spend the bad coins. Why? + Because I want to keep as much gold as possible, of course. +

+ +

+ What happens eventually is that people grow into the habit of trying to get + rid of the bad coins and hold on to the good coins. They exploit the + confusion created by the fact that all coins have the same face value + (it says “one gram” on all coins, so everyone assumes they're worth + the same), even though the actual commodity value (the gold inside) + differs.[1] +

+ +

That is the quick explanation of Gresham's law.

+ +

+ Now, back to the original point: what are the face value and commodity + value of Bitcoin? +

+ +

+ That makes no sense! Bitcoin is not a physical coin with metal in + it. It has no concept of face and commodity value. And neither does the + USD nowadays. Therefore, Gresham's law has absolutely nothing to do + with Bitcoin, the USD and any preferences the world might develop + between the two. +

+ +

+ Hopefully, this explanation helps make things clear. From now on, if + you want to keep your public image intact, please refrain from + invoking Gresham's law when discussing Bitcoin and the USD — because doing so + shows you don't know what Gresham's Law is actually about. Don't feel + too bad if it happened to you though: it can happen even to + massive + exchanges with a great reputation.

+ +

+ [1] Not relevant to the point of this post, but it's worth noting + that the Gresham's Law situation is not guaranteed to happen in the + described scenario. If the difference between the good and bad coins + is massive, and no force opposes it, the market might jump to + Thiers' Law + instead.

+
+

back to home

+
+
+ + + \ No newline at end of file diff --git a/public/writings/is-your-drug-dealer-a-homophobic-socialist.html b/public/writings/is-your-drug-dealer-a-homophobic-socialist.html new file mode 100644 index 0000000..e3063c8 --- /dev/null +++ b/public/writings/is-your-drug-dealer-a-homophobic-socialist.html @@ -0,0 +1,94 @@ + + + + + Pablo here + + + + + + +
+

Hi, Pablo here

+

back to home

+
+
+

Is your drug dealer a homophobic socialist?

+

+ Lately, I've noticed a branch of + cancel + culture + I've come to find quite disturbing. I think it has mainly spread in + the US, though it's starting to happen in Europe too. It's + this tendency for people at companies to politically and morally judge + business counterparties and conclude that business + shouldn't be done with them because of it.

+ +

+ I experienced this firsthand during some afterwork beers, and for + some reason the scene got burned into my retina. A colleague of mine, + beer in hand, said something like, "We're working with this customer, + and they're unbearable because they complain a lot and challenge us + all the time when we run the monthly reconciliation. Plus, they're + from Israel." I was mind-blown at how casually that was dropped, with + not even a footnote-like explanation deemed necessary. I played my + five-year-old-child card and asked, "What's the problem with them + being in Israel?" She said, "Well, you know, they're in Israel and the + whole thing is happening. It's terrible. We shouldn't deal with them."

+ +

+ I couldn't hold it in: I asked her if her hairdresser was from Israel. + She looked at me completely puzzled: "I don't know. Why does that + matter?" I told her, "I don't know. Apparently, you're upset about + dealing with people from Israel, so I'm assuming you need to check if + everyone you do business with is from there, so you can stop if that's the + case." Silence fell and the air got thick. Someone jumped in with a + nervous joke to break up the tension that my childlike questions had + somehow brought to the room, and the conversation moved on.

+ +

+ Ever since that day, I've seen this kind of + social-justice-business-censor thinking pop up a lot. Since that fun + first encounter, whenever someone points out how business should + not be done with <whatever ideology/country/demographic they don't + like>, I started jokingly triggering them by asking, "Actually, are + you making sure your drug dealer isn't a homophobic socialist?" They + generally laugh, not grasping how their stances on politically + deciding to do or not do business with someone sound just as ridiculous to + me.

+ + + +

+ Here's what disturbs me: trade is a very civilized act. When we + trade—whether it's goods, services, or anything else—we're putting + aside our differences and doing something mutually beneficial. We both + walk away better off. We hurt no one. We make things a tiny bit better + overall. Deciding not to trade with someone because of some political + detail which is completely irrelevant to the trade itself is + backwards. Even if I didn't like communists, I wouldn't care if a + communist is selling me bananas. It just doesn't matter. +

+ +

+ Seeing people blow up trade over politics makes me sad. I think it's + ignorant and hateful. And I don't think they realize where that kind + of thinking can lead. +

+ +

+ In the end, I just hope people can leave politics out of business. + Let's do business and all be better off thanks to it. +

+
+

back to home

+
+
+ + + \ No newline at end of file diff --git a/public/writings/notes-and-lessons-from-my-departure-from-superhog.html b/public/writings/notes-and-lessons-from-my-departure-from-superhog.html new file mode 100644 index 0000000..a38a812 --- /dev/null +++ b/public/writings/notes-and-lessons-from-my-departure-from-superhog.html @@ -0,0 +1,203 @@ + + + + + Pablo here + + + + + + + +
+

+ Hi, Pablo here +

+

back to home

+
+
+

Notes for myself during my departure from Superhog

+

I'm writing this a few days before my last day at Superhog (now called Truvi). Having a few company + departures under my belt already, I know a bit about what comes next. I know one part of the drill is + that 99% of the details of what happened during my tenure at the company will disappear from + my memory almost completely, only triggered by eerily coincidental cues here and there every few years. + I will remember clearly a few crucial, exciting days and situations. I will also hold on well to the names and + faces of those with whom I worked closely, as well as my personal impression and judgement of them. I + will remember the office, and some details of what my daily life was like when I went there.

+

But most other things will be gone from my brain, surprisingly fast.

+

Knowing that experience is a great teacher, and regretting not doing this in the past, I've decided to + collect a few notes from my time at Superhog, hoping they will serve me in making the lessons I've + learnt here stick properly.

+ +
+

back to home

+
+
+ + + + \ No newline at end of file diff --git a/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html b/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html new file mode 100644 index 0000000..77301cb --- /dev/null +++ b/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html @@ -0,0 +1,120 @@ + + + + + Pablo here + + + + + + + +
+

+ Hi, Pablo here +

+

back to home

+
+

Why I Put My VMs on a ZFS Mirror

+

Part 1 of 3 in my "First ZFS Degradation" series. Also read Part 2: Diagnosing the Problem and Part 3: The Fix.

+

Why This Series Exists

+

A few weeks into running my new homelab server, I stumbled upon something I wasn't expecting to see that early: my ZFS pool was in "DEGRADED" state. One of my two mirrored drives had gone FAULTED.

+

This was the first machine I had set up with a ZFS mirror, precisely to be able to deal with disk issues smoothly, without losing data or taking downtime. Although spotting the problem felt like a pain in the ass, I was also happy: it gave me a chance to drill exactly the kind of disk maintenance I was hoping to do on this new server.

+

But here's the thing: when I was in the middle of it, I couldn't find a single resource that walked through the whole experience in detail. Plenty of docs explain what ZFS is. Plenty of forum posts have people asking "help my pool is degraded." But nothing that said "here's what it actually feels like to go through this, step by step, with all the commands and logs and reasoning behind the decisions."

+

So I wrote it down. I took a lot of notes during the process and crafted a more or less organized story from them. This three-part series is for fellow amateur homelabbers who are curious about ZFS, maybe a little intimidated by it, and want to know what happens when things go sideways. I wish I had found a very detailed log like this when I was researching ZFS initially. Hope it helps you.

+

The server and disks

+

My homelab server is a modest but capable box I built in late 2025. It has decent consumer hardware, but nothing remarkable. I'll only note that I currently have three disks in it:

+ +

The two IronWolf drives are where this story takes place. I labeled them AGAPITO1 and AGAPITO2 because... well, every pair of drives deserves a silly name. I have issues remembering serial numbers.

+

The server runs Proxmox and hosts most of my self-hosted life: personal services, testing VMs, and my Bitcoin infrastructure (which I share over at bitcoininfra.contrapeso.xyz). If this pool goes down, everything goes down.

+

Why ZFS?

+

I'll be honest: I didn't overthink this decision. ZFS is the default storage recommendation for Proxmox, it has a reputation for being rock-solid, and I'd heard enough horror stories about silent data corruption to want something with checksumming built in.

+

What I was most interested in was the ability to define RAID setups in software and deal easily with disks going in and out of them. I had never gone beyond the naive "one disk for the OS, one disk for data" setup in previous servers. After having disks fail on me in previous boxes, I decided it was time to gear up and do it properly this time. My main concern initially was just saving time: it's messy when a "simple" host has disk issues, and I hoped mirroring would let me spend less time cleaning up disasters.

+

Why a Mirror?

+

When I set up the pool, I had two 4TB drives. That gave me a few options:

+
    +
1. No redundancy (just pool the two disks together): maximum space (8TB usable), zero protection. One bad sector and you're crying.
  2. +
  3. Mirror: Half the space (4TB usable from 8TB raw), but everything is written to both drives. One drive can completely die and you lose nothing.
  4. +
  5. RAIDZ: Needs at least 3 drives, gives you parity-based redundancy. More space-efficient than mirrors at scale.
  6. +
+

I went with the mirror for a few reasons.

+

First, I only had two drives to start with, so RAIDZ wasn't even an option yet.

+

Second, mirrors are simple. Data goes to both drives. If one dies, the other has everything. No parity calculations, no write penalties, no complexity.

+

Third (and this is the one that sold me), mirrors let you expand incrementally. With ZFS, you can add more mirror pairs (called "vdevs") to your pool later. You can even mix sizes: start with two 4TB drives, add two 8TB drives later, and ZFS will use all of it. RAIDZ doesn't give you that flexibility; once you set your vdev width, you're stuck with it.
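For future me, that expansion should boil down to a single command along these lines (a sketch with placeholder device IDs, not something I've had to run on this pool yet):

+# Add a second mirror vdev to the existing pool; the by-id names below are placeholders
+zpool add proxmox-tank-1 mirror \
+    /dev/disk/by-id/ata-NEWDRIVE_SERIAL1 \
+    /dev/disk/by-id/ata-NEWDRIVE_SERIAL2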

+

When Would RAIDZ Make More Sense?

+

If you're starting with 4+ drives and you want to maximize usable space, RAIDZ starts looking attractive:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ConfigurationDrivesUsable SpaceFault Tolerance
Mirror250%1 drive
RAIDZ13~67%1 drive
RAIDZ1475%1 drive
RAIDZ2450%2 drives
RAIDZ26~67%2 drives
+

RAIDZ2 is popular for larger arrays because it can survive two drive failures, which matters more as you add drives (more drives = higher chance of one failing during a resilver).

+

But for a two-drive homelab that might grow to four drives someday, I felt a mirror was the right call. I can always add another mirror pair later.

+

The Pool: proxmox-tank-1

+

My ZFS pool is called proxmox-tank-1. Here's what it looks like when everything is healthy:

+
  pool: proxmox-tank-1
+ state: ONLINE
+config:
+
+    NAME                                 STATE     READ WRITE CKSUM
+    proxmox-tank-1                       ONLINE       0     0     0
+      mirror-0                           ONLINE       0     0     0
+        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0     0
+        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
+

That's it. One pool, one mirror vdev, two drives. The drives are identified by their serial numbers (the WX11TN0Z and WX11TN2P parts), which is important — ZFS uses stable identifiers so it doesn't get confused if Linux decides to shuffle around /dev/sda and /dev/sdb.
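For reference, building a pool like this from scratch is roughly one command. I set mine up through Proxmox, so take this as a sketch rather than exactly what ran on my box; ashift=12 is the usual choice for 4K-sector drives:

+# Create a two-disk mirror using the stable by-id device names
+zpool create -o ashift=12 proxmox-tank-1 mirror \
+    /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN0Z \
+    /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN2P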

+

All my Proxmox VMs store their virtual disks on this pool. When I create a new VM, I point its storage at proxmox-tank-1 and ZFS handles the rest.

+

What Could Possibly Go Wrong?

+

Everything was humming along nicely. VMs were running fine and I was feeling pretty good about my setup.

+

Then, a few weeks in, I was poking around the Proxmox web UI and noticed something that caught my eye.

+

The ZFS pool was DEGRADED. One of my drives — AGAPITO1, serial WX11TN0Z — was FAULTED.

+

In Part 2, I'll walk through how I diagnosed what was actually wrong. Spoiler: the drive itself was fine. The problem was much dumber than that.

+

Continue to Part 2: Diagnosing the Problem

+

back to home

+
+
+ + + + +