On 22/11/2023 04:16, i...@tutanota.com wrote:
Ever since I read a post on @misc from Nick Holland to someone asking
about running a large filesystem on OpenBSD, in which Nick wrote:

[...]

Then for every important big file use something like par2cmdline to
create parity data.
[...]

Of course backups are essential; that's not the point here.

Running a script that checks all the checksums is a "poor man's" version
of ZFS scrubbing. If bit rot is found, repair the file with the par2
parity data.
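The scrub-and-repair cycle described above can be sketched in a few lines of shell. This is a hypothetical illustration, not a tested production script: file names and layout are invented, the checksum tool is the Linux coreutils sha256sum (on OpenBSD the equivalent is sha256(1)), and the parity files are assumed to have been created earlier with `par2 create`.

```shell
#!/bin/sh
# Poor man's scrub sketch: store a sha256 manifest next to each archived
# file, re-verify the manifests periodically, and fall back to par2 repair
# when a file no longer matches. All paths here are illustrative.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Pretend this is an important archived file; the manifest is written once,
# at archive time (alongside "par2 create big-file.img" for parity data).
printf 'precious archive data\n' > big-file.img
sha256sum big-file.img > big-file.img.sha256

# The periodic scrub: re-check every manifest in the archive directory.
for manifest in *.sha256; do
    file="${manifest%.sha256}"
    if sha256sum -c "$manifest" >/dev/null 2>&1; then
        echo "OK  $file"
    else
        echo "ROT $file"
        # Bit rot detected: try to rebuild from the parity volumes,
        # if par2cmdline is installed and parity data exists.
        if command -v par2 >/dev/null 2>&1 && [ -e "$file.par2" ]; then
            par2 repair "$file.par2"
        fi
    fi
done
```

Run from cron, this gives periodic detection; how often to scrub, and how much par2 redundancy to create, depends on the media and on how much hassle the process you wrote down allows for.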


You already got many good pointers. Let me add one: if you're after *long term* archiving, establish a process first. Understand what you need, what your goals are, and how much hassle you can bear (err... afford). Write it down, on paper (stone would be an even better choice). Double-check that you're not working with data subject to regulations. If not, read some of them anyway; it's interesting.

Once the process is nailed down and you/your organization are willing to follow it, then look for tools. Someone in this thread correctly observed that papyrus and friends lasted centuries: right, because they were simple.

Good luck!
--
f
