Right at the first post of this thread I thought: "weren't you a big fan of my md5 directory hasher script?" :D
On Mon, Mar 02, 2026 at 03:30:54PM -0600, Dale wrote:
> What I was hoping, I could tell it to generate a checksum for an entire
> directory, the way it does for a file. Then I do the same thing on my
> main version and see if my main version matches the backup version. I
> think that even if this could be done, it would take days to generate it.

Definitely. But if you generate the checksum files once, you have them available and can use them to detect bitrot, and can reference them when copying only individual directories.

> I was hoping for something that takes less than an hour at least, maybe
> even 15 or 20 minutes. I'm getting the idea that what I'm wanting could
> take days or longer to perform. 42TBs of data is a lot to check. Some
> 64,000 files.

Well, as others mentioned, there is no way around reading them all in.

> I could just copy Frank's script over and run it for the files. It would
> report any errors. I'm sure that would take several days, maybe even a
> week or more, to run on all those files tho.

Well, to check whether a ZFS raid is still intact, one does a ZFS scrub, which is exactly what you describe: it reads everything and compares checksums. On my 6 TB drives at 80 % use, it takes 10½ hours.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The problem with FORTRAN jokes is that they never fit into a single li
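For the archive: the checksum-file workflow described above can be sketched with plain GNU coreutils (this is just a minimal example, not Frank's actual script, and the paths are placeholders):

```shell
# Generate a checksum list for a whole directory tree.
# Run from the root of the source copy; -print0/-0 keeps odd
# filenames (spaces, newlines) safe.
find . -type f -print0 | xargs -0 md5sum > /tmp/checksums.md5

# Later, on the backup copy (or on the same tree, to detect bitrot):
# -c re-hashes every listed file; --quiet prints only mismatches.
md5sum -c --quiet /tmp/checksums.md5 && echo "all files match"
```

On ZFS itself none of this is needed, of course: `zpool scrub <pool>` kicks off the equivalent check against ZFS's own checksums, and `zpool status` shows the result.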

