diff --git a/20251230_first_zfs_degradation.md b/20251230_first_zfs_degradation.md new file mode 100644 index 0000000..6710968 --- /dev/null +++ b/20251230_first_zfs_degradation.md @@ -0,0 +1,510 @@ +# First ZFS Degradation + + +## Problem + +On 2025-12-30 I was snooping around the Proxmox UI when I stumbled upon a storage view showing that my ZFS pool (which I use for the disks of all my VMs) was in a degraded state. Opening up the detail view, it appeared one of the disks was in FAULTED state. + +I attempted rebooting the host, which triggered a resilvering attempt. But the disk remained in the same state. + +## First diagnostic + +After researching a bit, I ran the following diagnostic steps: + +- I scanned the kernel logs with `dmesg -T | egrep -i 'ata[0-9]|sata|reset|link|i/o error|blk_update_request|scsi|sense' | tail -200`. + - The logs revealed that the disk connection seemed flaky, with the device disconnecting and reconnecting, going from being `sdb` to `sdc`. Also, the link speed was being downgraded step by step. +- I ran the `smartctl` health assessment and the `smartctl` short test. + - This revealed that the disk looked healthy, but that it was seeing repeated "Hardware resets". Apparently, this is consistent with the idea that the HDD is repeatedly connecting/disconnecting, as the kernel logs suggest. + + +So, after all this, the initial hypothesis is that the drive itself is in good shape, but there is a connection issue. The first candidates are that either the SATA data cable is not making proper contact, or the power supply to the disk is flickering. If that's not the case, the issue could lie in the SATA port on the motherboard, or in some other motherboard problem upstream of the SATA port. Finally, it could also be an issue in the PSU. + +## First solution attempt + + +### Plan + +With this info, I decided to try reseating the HDD's connections as a first solution. + +Rough plan: +- Pre-action + - Set some vars: + ``` + DISKID="$(ls -1 /dev/disk/by-id/ | grep -m1 WX11TN0Z)" + echo "$DISKID" + DISKPATH="/dev/disk/by-id/$DISKID" + echo "$DISKPATH" + ``` + - Log: + - Serial of the bad drive + - ZFS status: `zpool status -v proxmox-tank-1` + - Kernel log after boot and some minutes of normal operation: `dmesg -T | egrep -i 'ata4|sata|reset|link|exception|interface fatal|failed to IDENTIFY|qc timeout|I/O error|blk_update_request|Not Ready' | tail -200` + - Smartctl output: + ``` + readlink -f "$DISKPATH" + smartctl -x "$(readlink -f "$DISKPATH")" | egrep -i 'overall|Reallocated|Pending|Offline_Uncorrect|CRC|Error Log|Hardware Resets|COMRESET|SATA Phy' + ``` + - Reseat + - Remove the disk from the ZFS pool + - Take it offline with `zpool offline proxmox-tank-1 "$DISKID"` + - Check with `zpool status -v proxmox-tank-1` + - Power down the server + - Open up, check the disks. Map out the relation between the name tags and the serial numbers. + - Reseat both the power cable and the data cable of the faulty disk. + - Since we opened up, also check the cables of the other unit. + - Close again, reboot. + - Taking it back online and evaluating + - First, check that the OS sees the disk properly: + ``` + ls -l /dev/disk/by-id/ | grep WX11TN0Z + readlink -f "$DISKPATH" + ``` + - Start a terminal dedicated to monitoring kernel logs with `dmesg -Tw`. Leave it running while we go through the next steps. + - If the disk appears properly and there are no weird signals in kernel logs, add it back into the ZFS pool.
Monitor the resilvering process regularly with `zpool status`, and wait for resilvering to finish: + ``` + zpool online proxmox-tank-1 "$DISKID" + zpool status -v proxmox-tank-1 + ``` + - Run a scrub and monitor the disk state while it happens. Also, pay special attention to the kernel log output; we shouldn't see any SATA resets. The scrub can take an hour or two. + ``` + zpool scrub proxmox-tank-1 + watch -n 5 "zpool status -v proxmox-tank-1" + ``` + - If ZFS shows no problems, after ~30min of activity, check again with smartctl to evaluate disk health and confirm it's all good: + ``` + smartctl -x "$(readlink -f "$DISKPATH")" | egrep -i 'Reallocated|Pending|Offline_Uncorrect|CRC|Hardware Resets|COMRESET|SATA Phy' + ``` + - Decision: + - If the outcome is: + - No weird kernel log messages. + - ZFS reports a healthy status after resilvering and holds it consistently. + - smartctl reports the disk as healthy. + - Then the disk is healthy, the cable connection was the problem, and we're all good. + - If any of the previous checks fails, we're still in trouble. Based on the outputs, decide whether to suspect the cables, the disk or the motherboard next. + +### Execution notes + +- Pre-shutdown logs + - Disk ID: + ``` + DISKID="ata-ST4000NT001-3M2101_WX11TN0Z" + DISKPATH="/dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN0Z" + ``` + - ZFS Pool status with `zpool status -v proxmox-tank-1`: + ``` + pool: proxmox-tank-1 + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. + scan: resilver in progress since Fri Jan 2 22:25:31 2026 + 978G / 1.26T scanned at 1.92G/s, 26.0G / 524G issued at 52.3M/s + 0B resilvered, 4.96% done, no estimated completion time + config: + + NAME STATE READ WRITE CKSUM + proxmox-tank-1 DEGRADED 0 0 0 + mirror-0 DEGRADED 0 0 0 + ata-ST4000NT001-3M2101_WX11TN0Z FAULTED 108 639 129 too many errors + ata-ST4000NT001-3M2101_WX11TN2P ONLINE 0 0 0 + + errors: No known data errors + ``` + - Kernel logs coming from a clean boot after around 10 min of operation: + ``` + sudo dmesg -T | egrep -i 'ata4|sata|reset|link|exception|interface fatal|failed to IDENTIFY|qc timeout|I/O error|blk_update_request|Not Ready' | tail -200 + [Fri Jan 2 22:25:08 2026] ata4.00: ATA-11: ST4000NT001-3M2101, EN01, max UDMA/133 + [Fri Jan 2 22:25:08 2026] ata4.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA + [Fri Jan 2 22:25:08 2026] ata4.00: Features: NCQ-sndrcv + [Fri Jan 2 22:25:08 2026] ata4.00: configured for UDMA/133 + [Fri Jan 2 22:25:13 2026] ata4.00: exception Emask 0x50 SAct 0x70220001 SErr 0xe0802 action 0x6 frozen + [Fri Jan 2 22:25:13 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:25:13 2026] ata4: SError: { RecovComm HostInt PHYInt CommWake 10B8B } + [Fri Jan 2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/20:00:98:4e:04/00:00:38:00:00/40 tag 0 ncq dma 16384 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/00:88:d8:39:00/01:00:38:00:00/40 tag 17 ncq dma 131072 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/20:a8:98:49:04/00:00:38:00:00/40 tag 21 ncq dma 16384 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026]
ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/18:e0:a8:26:d4/00:00:38:00:00/40 tag 28 ncq dma 12288 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/30:e8:e8:ad:d5/00:00:38:00:00/40 tag 29 ncq dma 24576 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:13 2026] ata4.00: cmd 60/00:f0:00:2a:00/01:00:38:00:00/40 tag 30 ncq dma 131072 in + [Fri Jan 2 22:25:13 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:13 2026] ata4: hard resetting link + [Fri Jan 2 22:25:14 2026] ata4: SATA link down (SStatus 0 SControl 300) + [Fri Jan 2 22:25:14 2026] ata4: hard resetting link + [Fri Jan 2 22:25:19 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:25:24 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:25:24 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:25:24 2026] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300) + [Fri Jan 2 22:25:29 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:25:29 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:25:29 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:25:29 2026] ata4: limiting SATA link speed to 3.0 Gbps + [Fri Jan 2 22:25:29 2026] ata4: hard resetting link + [Fri Jan 2 22:25:30 2026] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 320) + [Fri Jan 2 22:25:30 2026] ata4.00: configured for UDMA/133 + [Fri Jan 2 22:25:30 2026] ata4: EH complete + [Fri Jan 2 22:25:33 2026] ata4.00: exception Emask 0x50 SAct 0x30008000 SErr 0xe0802 action 0x6 frozen + [Fri Jan 2 22:25:33 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:25:33 2026] ata4: SError: { RecovComm HostInt PHYInt CommWake 10B8B } + [Fri Jan 2 22:25:33 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:33 2026] ata4.00: cmd 60/a0:78:30:92:54/00:00:a2:00:00/40 tag 15 ncq dma 81920 in + [Fri Jan 2 22:25:33 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:33 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:33 2026] ata4.00: cmd 60/40:e0:50:ae:11/00:00:a0:00:00/40 tag 28 ncq dma 32768 in + [Fri Jan 2 22:25:33 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:33 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:33 2026] ata4.00: cmd 60/68:e8:30:0f:59/02:00:a2:00:00/40 tag 29 ncq dma 315392 in + [Fri Jan 2 22:25:33 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:33 2026] ata4: hard resetting link + [Fri Jan 2 22:25:34 2026] ata4: SATA link down (SStatus 0 SControl 320) + [Fri Jan 2 22:25:34 2026] ata4: hard resetting link + [Fri Jan 2 22:25:39 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:25:44 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:25:44 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:25:44 2026] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 320) + [Fri Jan 2 22:25:49 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:25:49 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:25:49 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:25:49 2026] ata4: limiting SATA link speed to 1.5 Gbps + [Fri Jan 2 22:25:49 2026] ata4: hard resetting link + [Fri Jan 2 22:25:50 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 
22:25:50 2026] ata4.00: configured for UDMA/133 + [Fri Jan 2 22:25:50 2026] ata4.00: limiting speed to UDMA/100:PIO4 + [Fri Jan 2 22:25:51 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:25:51 2026] ata4.00: configured for UDMA/100 + [Fri Jan 2 22:25:51 2026] ata4: EH complete + [Fri Jan 2 22:25:52 2026] ata4.00: exception Emask 0x50 SAct 0x80001001 SErr 0x20802 action 0x6 frozen + [Fri Jan 2 22:25:52 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:25:52 2026] ata4: SError: { RecovComm HostInt PHYInt } + [Fri Jan 2 22:25:52 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:52 2026] ata4.00: cmd 60/b8:00:c0:31:57/00:00:94:00:00/40 tag 0 ncq dma 94208 in + [Fri Jan 2 22:25:52 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:52 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:52 2026] ata4.00: cmd 60/58:60:c0:73:45/00:00:94:00:00/40 tag 12 ncq dma 45056 in + [Fri Jan 2 22:25:52 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:52 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:25:52 2026] ata4.00: cmd 60/58:f8:08:72:45/00:00:94:00:00/40 tag 31 ncq dma 45056 in + [Fri Jan 2 22:25:52 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:25:52 2026] ata4: hard resetting link + [Fri Jan 2 22:25:52 2026] ata4: SATA link down (SStatus 0 SControl 310) + [Fri Jan 2 22:25:52 2026] ata4: hard resetting link + [Fri Jan 2 22:25:57 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:26:02 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:02 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:02 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:08 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:26:08 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:26:08 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:26:08 2026] ata4: hard resetting link + [Fri Jan 2 22:26:08 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:08 2026] ata4.00: configured for UDMA/100 + [Fri Jan 2 22:26:08 2026] ata4: EH complete + [Fri Jan 2 22:26:10 2026] ata4.00: exception Emask 0x50 SAct 0x1800200 SErr 0x60802 action 0x6 frozen + [Fri Jan 2 22:26:10 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:26:10 2026] ata4: SError: { RecovComm HostInt PHYInt CommWake } + [Fri Jan 2 22:26:10 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:10 2026] ata4.00: cmd 60/58:48:40:00:09/00:00:7b:00:00/40 tag 9 ncq dma 45056 in + [Fri Jan 2 22:26:10 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:10 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:10 2026] ata4.00: cmd 60/40:b8:38:53:18/00:00:86:00:00/40 tag 23 ncq dma 32768 in + [Fri Jan 2 22:26:10 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:10 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:10 2026] ata4.00: cmd 60/58:c0:90:0f:0e/00:00:7f:00:00/40 tag 24 ncq dma 45056 in + [Fri Jan 2 22:26:10 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:10 2026] ata4: hard resetting link + [Fri Jan 2 22:26:10 2026] ata4: SATA link down (SStatus 0 SControl 310) + [Fri Jan 2 22:26:10 2026] ata4: hard resetting link + [Fri Jan 2 22:26:15 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:26:20 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:20 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:20 
2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:26 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:26:26 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:26:26 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:26:26 2026] ata4: hard resetting link + [Fri Jan 2 22:26:26 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:26 2026] ata4.00: configured for UDMA/100 + [Fri Jan 2 22:26:26 2026] ata4.00: limiting speed to UDMA/33:PIO4 + [Fri Jan 2 22:26:27 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:27 2026] ata4.00: configured for UDMA/33 + [Fri Jan 2 22:26:27 2026] ata4: EH complete + [Fri Jan 2 22:26:28 2026] ata4.00: exception Emask 0x50 SAct 0x800003 SErr 0x20802 action 0x6 frozen + [Fri Jan 2 22:26:28 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:26:28 2026] ata4: SError: { RecovComm HostInt PHYInt } + [Fri Jan 2 22:26:28 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:28 2026] ata4.00: cmd 60/58:00:f0:f8:23/00:00:ab:00:00/40 tag 0 ncq dma 45056 in + [Fri Jan 2 22:26:28 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:28 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:28 2026] ata4.00: cmd 60/58:08:80:fa:23/00:00:ab:00:00/40 tag 1 ncq dma 45056 in + [Fri Jan 2 22:26:28 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:28 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:28 2026] ata4.00: cmd 60/50:b8:48:f8:23/00:00:ab:00:00/40 tag 23 ncq dma 40960 in + [Fri Jan 2 22:26:28 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:28 2026] ata4: hard resetting link + [Fri Jan 2 22:26:28 2026] ata4: SATA link down (SStatus 0 SControl 310) + [Fri Jan 2 22:26:28 2026] ata4: hard resetting link + [Fri Jan 2 22:26:33 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:26:38 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:38 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:38 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:43 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:26:43 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:26:43 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:26:43 2026] ata4: hard resetting link + [Fri Jan 2 22:26:44 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:26:44 2026] ata4.00: configured for UDMA/33 + [Fri Jan 2 22:26:44 2026] ata4: EH complete + [Fri Jan 2 22:26:45 2026] ata4.00: exception Emask 0x50 SAct 0x801080 SErr 0x60802 action 0x6 frozen + [Fri Jan 2 22:26:45 2026] ata4.00: irq_stat 0x08000000, interface fatal error + [Fri Jan 2 22:26:45 2026] ata4: SError: { RecovComm HostInt PHYInt CommWake } + [Fri Jan 2 22:26:45 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:45 2026] ata4.00: cmd 60/50:38:18:df:c5/00:00:a8:01:00/40 tag 7 ncq dma 40960 in + [Fri Jan 2 22:26:45 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:45 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:45 2026] ata4.00: cmd 60/58:60:98:a0:33/00:00:a9:01:00/40 tag 12 ncq dma 45056 in + [Fri Jan 2 22:26:45 2026] ata4.00: status: { DRDY } + [Fri Jan 2 22:26:45 2026] ata4.00: failed command: READ FPDMA QUEUED + [Fri Jan 2 22:26:45 2026] ata4.00: cmd 60/58:b8:a0:e5:b9/00:00:1b:00:00/40 tag 23 ncq dma 45056 in + [Fri Jan 2 22:26:45 2026] ata4.00: status: 
{ DRDY } + [Fri Jan 2 22:26:45 2026] ata4: hard resetting link + [Fri Jan 2 22:26:45 2026] ata4: SATA link down (SStatus 0 SControl 310) + [Fri Jan 2 22:26:45 2026] ata4: hard resetting link + [Fri Jan 2 22:26:50 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:26:50 2026] ata4: SATA link down (SStatus 0 SControl 310) + [Fri Jan 2 22:26:51 2026] ata4: hard resetting link + [Fri Jan 2 22:26:56 2026] ata4: link is slow to respond, please be patient (ready=0) + [Fri Jan 2 22:27:01 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:27:01 2026] ata4: found unknown device (class 0) + [Fri Jan 2 22:27:01 2026] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310) + [Fri Jan 2 22:27:06 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec) + [Fri Jan 2 22:27:06 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4) + [Fri Jan 2 22:27:06 2026] ata4.00: revalidation failed (errno=-5) + [Fri Jan 2 22:27:06 2026] ata4.00: disable device + [Fri Jan 2 22:27:07 2026] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300) + [Fri Jan 2 22:27:07 2026] ata4.00: ATA-11: ST4000NT001-3M2101, EN01, max UDMA/133 + [Fri Jan 2 22:27:07 2026] ata4.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA + [Fri Jan 2 22:27:07 2026] ata4.00: Features: NCQ-sndrcv + [Fri Jan 2 22:27:07 2026] ata4.00: configured for UDMA/133 + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#7 Sense Key : Not Ready [current] + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#7 Add. Sense: Logical unit not ready, hard reset required + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 7126507288 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#12 Sense Key : Not Ready [current] + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#12 Add. Sense: Logical unit not ready, hard reset required + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 7133700248 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0 + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 630388008 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0 + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#23 Sense Key : Not Ready [current] + [Fri Jan 2 22:27:07 2026] sd 3:0:0:0: [sdb] tag#23 Add. Sense: Logical unit not ready, hard reset required + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 465167776 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0 + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 2576 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0 + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 1847276792 op 0x0:(READ) flags 0x0 phys_seg 14 prio class 0 + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 7814018064 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0 + [Fri Jan 2 22:27:07 2026] ata4: EH complete + [Fri Jan 2 22:27:07 2026] ata4.00: detaching (SCSI 3:0:0:0) + [Fri Jan 2 22:27:07 2026] I/O error, dev sdb, sector 1847277304 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0 + [Fri Jan 2 22:28:39 2026] r8169 0000:07:00.0 eno1: Link is Down + [Fri Jan 2 22:28:42 2026] r8169 0000:07:00.0 eno1: Link is Up - 1Gbps/Full - flow control off + ``` + - Outputs of suggested smartctl command: + ``` + SMART overall-health self-assessment test result: PASSED + Error logging capability: (0x01) Error logging supported. 
+ 5 Reallocated_Sector_Ct PO--CK 100 100 010 - 0 + 197 Current_Pending_Sector -O--C- 100 100 000 - 0 + 198 Offline_Uncorrectable ----C- 100 100 000 - 0 + 199 UDMA_CRC_Error_Count -OSRCK 200 200 000 - 0 + 0x01 SL R/O 1 Summary SMART error log + 0x02 SL R/O 5 Comprehensive SMART error log + 0x03 GPL R/O 5 Ext. Comprehensive SMART error log + 0x0c GPL R/O 2048 Pending Defects log + 0x10 GPL R/O 1 NCQ Command Error log + 0x11 GPL R/O 1 SATA Phy Event Counters log + 0x21 GPL R/O 1 Write stream error log + 0x22 GPL R/O 1 Read stream error log + SMART Extended Comprehensive Error Log Version: 1 (5 sectors) + If Selective self-test is pending on power-up, resume after 0 minute delay. + 0x03 0x020 4 0 --- Number of Reallocated Logical Sectors + 0x06 0x008 4 39 --- Number of Hardware Resets + 0x06 0x018 4 0 --- Number of Interface CRC Errors + Pending Defects log (GP Log 0x0c) + SATA Phy Event Counters (GP Log 0x11) + 0x000a 2 2 Device-to-host register FISes sent due to a COMRESET + 0x0001 2 0 Command failed due to ICRC error + ``` + + +- Taking the disk offline: + - Ran `sudo zpool offline proxmox-tank-1 "$DISKID"`. + - Once finished, the status read: + ``` + pool: proxmox-tank-1 + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. + scan: resilver in progress since Fri Jan 2 22:25:31 2026 + 1.04T / 1.26T scanned at 1.14G/s, 96.2G / 523G issued at 106M/s + 0B resilvered, 18.41% done, no estimated completion time + config: + + NAME STATE READ WRITE CKSUM + proxmox-tank-1 DEGRADED 0 0 0 + mirror-0 DEGRADED 0 0 0 + ata-ST4000NT001-3M2101_WX11TN0Z OFFLINE 108 639 129 + ata-ST4000NT001-3M2101_WX11TN2P ONLINE 0 0 0 + ``` +- Reseating + - I opened up the case. Generally it looked as if everything was properly seated. I reviewed the power and data ports on the disk, the power connection to the PSU cable and the data connection to the motherboard. Perhaps the faulty disk's power port might have been sliiiiiightly loose, but not that much. All pins were clearly making contact. To be honest, it didn't look improperly seated. + - In any case, I reseated both connectors on the disk, and also the data connector on the motherboard. +- Booting again + - Is the disk detected? + - `ls -l /dev/disk/by-id/ | grep WX11TN0Z` shows the disk is using `sdb`. + - `readlink -f "$DISKPATH"` agrees. + - First check of `dmesg` shows the disk successfully detected and attached as a SCSI drive. No further suspicious log lines. +- Adding the disk back to the pool + - I add the disk back with `sudo zpool online proxmox-tank-1 "$DISKID"`. + - First status right after: + ``` + pool: proxmox-tank-1 + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. + scan: resilver in progress since Fri Jan 2 23:17:35 2026 + 759G / 1.26T scanned at 8.43G/s, 0B / 618G issued + 0B resilvered, 0.00% done, no estimated completion time + config: + + NAME STATE READ WRITE CKSUM + proxmox-tank-1 DEGRADED 0 0 0 + mirror-0 DEGRADED 0 0 0 + ata-ST4000NT001-3M2101_WX11TN0Z DEGRADED 0 0 0 too many errors + ata-ST4000NT001-3M2101_WX11TN2P ONLINE 0 0 0 + + errors: No known data errors + + ``` + - It shows degraded, but 0 errors. I guess that's good news. + - Printing multiple times shows the resilvering process is ongoing and progressing. I'll let that run for a while.
+ - The writing part of the resilvering begins and the disk still shows 0 errors in the ZFS status. The kernel logs are quiet. The resilvering will take a while: the lagging disk must catch up on 524 GB of content. Dear god, for how long was that disk faulty??? + - The writing side of the resilvering is running at ~50MB/s. I'll shut down all the VMs in hopes of preventing contention for the disk IO. + - After around 30min, speed has increased to 100MB/s. + - The resilvering will take a long time (524 GB even at 100MB/s is still roughly an hour and a half) and it's late, so I'll go to sleep and continue tomorrow. + +- Other notes + - I labeled the two disks by hand as AGAPITO1 and AGAPITO2, but I never noted their serial numbers. Silly me. This is the relation: + - AGAPITO1 is ata-ST4000NT001-3M2101_WX11TN0Z. + - AGAPITO2 is ata-ST4000NT001-3M2101_WX11TN2P. + +## Side quests + +### Using and understanding dmesg + +#### Generalities + +The Linux kernel writes its messages into the kernel ring buffer, an in-memory log. The `dmesg` command reads from it. This log contains relatively low-level info from the kernel, including hardware-related operations. + +This is an in-memory structure because it starts very early in the boot process, right after the bootloader. Hence, information is held in memory before the disk filesystems are available and `syslog` can persist logs. + +Because of this in-memory nature, info on hardware failures is available in the kernel ring buffer before appearing in other persisted logs. + +Good keywords to `grep` the output of `dmesg` for hard-drive-related issues include: +- ata +- sata +- ahci +- link +- COMRESET +- PHY + +#### Specific examples + +Some examples of telling log lines: + +``` +ata4: hard resetting link +ata4: SATA link down (SStatus 0 SControl 300) +ata4: link is slow to respond, please be patient (ready=0) +ata4: found unknown device (class 0) +``` + +The disk is connecting and disconnecting, or appears slow, or doesn't report its identity because it isn't properly connected (`unknown device`). + +``` +ata4: limiting SATA link speed to 3.0 Gbps +ata4: limiting SATA link speed to 1.5 Gbps +``` + +Speed downgrades. + +``` +I/O error, dev sdb, sector 5414318144 op 0x0:(READ) +I/O error, dev sdb, sector 7814018064 op 0x0:(READ) +``` + +Failed read attempts. + + +#### Why is it called dmesg? + +`dmesg` stands for "diagnostic message". + +#### What about persistence of logs? + +Once userspace is available, `systemd-journald` reads the messages. The kernel exposes them in `/dev/kmsg` for streaming. They are copied into the in-memory journal and then flushed to disk according to journald's own policies and events. + +Kernel messages have log levels (EMERG, ALERT, CRIT, ...). The userspace processes decide which levels deserve disk persistence and which are simply left in the ring buffer. + +### Why do SATA drives get mentioned as SCSI drives in kernel log messages? + +SCSI is the protocol used by SAS drives. + +The Linux kernel uses the SCSI protocol as an emulated middleware layer between its storage subsystem and the actual ATA protocol spoken by the hard drive firmware. + + +### Understanding smartctl + +#### Naming and maintenance + +Stands for Self-monitoring, Analysis, and Reporting Technology Control. + +It's maintained by three randos, classic xkcd fragile software supply chain meme. + +#### About the commands + +`smartctl -t short /dev/sdx` prompts the drive to run a self-test. It's a very basic check that doesn't actually fully examine the disk.
A failure in the short test, though, basically guarantees that the disk is toast. It checks: +- That the drive's CPU, RAM, etc. work. +- That the onboard microcode execution works fine. +- It reads a small subset of the disk and verifies that it's readable. +- It moves the mechanical parts to check that they work. + +By contrast, the long test (`smartctl -t long /dev/sdx`) actually reads every block on the disk to verify it can be read correctly. + +`smartctl -l selftest /dev/sdx` just reads the results of the self-tests. + +#### On SATA vs SCSI (SAS) drives + +SAS drives are enterprise drives with much more advanced reporting features and a different approach to transparency. + +SATA drives run the vendor firmware, which keeps most details private and doesn't share them with Linux. The drive itself decides when to report failures back to smartmontools. + +SAS drives, on the other hand, provide more transparency, report failures earlier, and don't try to simplify and "hide" issues. + +In the context of ZFS, SAS helps the ZFS runtime realise faster that the disk is hiccup-y, so it can make the call to mark it degraded and rely on the other disks. SATA drives do not "confess" issues that fast, so the ZFS runtime keeps trying to use them and hitting errors, which makes reads/writes fail and performance go to shit until ZFS finally decides to mark the disk degraded. + +SAS drives don't fit on SATA ports: to plug them into a regular motherboard, you need a SAS controller. + + +## Open questions + +- How can I automate the monitoring of the ZFS pool or the disks to avoid this issue happening silently again? One candidate approach is sketched below.
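+ +A minimal sketch of one candidate approach (not something I have tested yet): poll `zpool status -x` from cron and send a mail when the pool is not healthy. `zpool status -x` only prints details for unhealthy pools, so the check reduces to a simple grep. The recipient address and the assumption that a working `mail` command exists on the host are hypothetical. + +``` +#!/bin/bash +# Hypothetical hourly ZFS health check, e.g. dropped into /etc/cron.hourly/. +# Assumes a working `mail` setup on the host; adjust RECIPIENT accordingly. +POOL="proxmox-tank-1" +RECIPIENT="admin@example.com" + +# `zpool status -x` prints "pool '...' is healthy" when all is well, +# and the full status output when the pool is DEGRADED/FAULTED. +STATUS="$(zpool status -x "$POOL")" + +if ! echo "$STATUS" | grep -q "is healthy"; then + echo "$STATUS" | mail -s "ZFS alert: $POOL is not healthy" "$RECIPIENT" +fi +``` + +The ZFS event daemon (`zed`) that ships with ZFS on Linux is probably the more standard long-term answer, since it reacts to pool events directly instead of polling.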