The nice thing about ZFS is that it will tell you exactly which file is borked. You won't get that from a traditional RAID/filesystem pairing, because the two layers don't know about each other; at best you'll get a notice that the block at some meaningless number is bad.

[root@muse ~]# zpool status -v
  pool: muse
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub in progress since Thu May 18 23:46:04 2017
    3.48T scanned out of 3.90T at 417M/s, 0h17m to go
    0 repaired, 89.25% done
config:

        NAME                                          STATE     READ WRITE CKSUM
        muse                                          ONLINE       0     0     2
          raidz2-0                                    ONLINE       0     0     4
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TZU5LS  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5AHPHK8  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1VJKLZ5  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TZURA5  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3YP9AJL  ONLINE       0     0     0
        spares
          ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3YP95XE    AVAIL

errors: Permanent errors have been detected in the following files:


        /data01/backups/1x.jarvis.revident.ca/libvirt/images/vs130.revident.ca.img
        muse/backups/1x.jarvis.revident.ca@20170423_193849-0400:/libvirt/images/vs130.revident.ca.img

This is bit rot in action.
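The reason ZFS can name the file is that it stores a checksum alongside every block pointer, so a scrub can tie a checksum mismatch back to the file that owns the block. Here's a rough sketch of that idea in Python — illustrative only, not how ZFS is actually implemented; the file names and block size are made up:

```python
import hashlib

BLOCK = 4096  # illustrative block size; ZFS records are typically much larger

def checksum_blocks(data: bytes) -> list[str]:
    """Checksum each block, like ZFS keeps a checksum per block pointer."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def scrub(files: dict[str, bytes], stored: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Re-read every file and report (filename, block index) for each mismatch."""
    errors = []
    for name, data in files.items():
        for i, csum in enumerate(checksum_blocks(data)):
            if csum != stored[name][i]:
                errors.append((name, i))
    return errors

# "Write" two files and record their checksums at write time.
files = {"vs130.img": b"A" * (3 * BLOCK), "ok.img": b"B" * (2 * BLOCK)}
stored = {name: checksum_blocks(data) for name, data in files.items()}

# Simulate bit rot: flip one bit in the middle of vs130.img.
rotten = bytearray(files["vs130.img"])
rotten[BLOCK + 10] ^= 0x01
files["vs130.img"] = bytes(rotten)

print(scrub(files, stored))  # → [('vs130.img', 1)]
```

A plain RAID layer checksums nothing and a plain filesystem trusts the disk, which is why neither alone can produce the file list `zpool status -v` prints above.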

--
Scott Sullivan
---
Talk Mailing List
[email protected]
https://gtalug.org/mailman/listinfo/talk
