Ever since I read a post on misc@ from Nick Holland, replying to someone
asking about running a large filesystem on OpenBSD, in which Nick wrote:

> ZFS is kinda the IPv6 of file systems.  A few good ideas trying to
> solve a one issue... and then they went way overboard trying to pack
> too much else into it.
>
> I've setup some cool stuff using ZFS (dynamically sized partitions,
> snapshots, zfs sends of snapshots to other machines, etc), but man, I
> spent a comical amount of time babysitting and fixing file system
> problems.  The 1980s are over, file systems should Just Work now. If
> you are babysitting them constantly, something ain't right.  If
> someone wants to add a ZFS-like "scrubbing" feature to ffs, I'd be
> all for it. But not for the penalties that come with ZFS.

I have been thinking about a simple way to get some of those benefits,
because ZFS just keeps getting bigger and more complex.

I was thinking something like this:

Running disks in RAID1 or RAID5 (pick your poison) with softraid.
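On OpenBSD that would look roughly like this with bioctl(8); the sd0/sd1
device names are just assumptions, and each disk first needs a RAID
partition in its disklabel:

```shell
# Sketch of a two-disk softraid(4) RAID1; sd0a/sd1a are hypothetical.
# First give each disk an 'a' partition with fstype RAID:
#   disklabel -E sd0
#   disklabel -E sd1
bioctl -c 1 -l sd0a,sd1a softraid0    # use -c 5 with 3+ chunks for RAID5
```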

Then, for every important big file, use something like par2cmdline to
create parity data.

par2cmdline can be used to verify and re-create files.
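The basic par2 workflow would be something like this; the 10% redundancy
level and the file name are just examples:

```shell
# Create parity data with 10% redundancy, then later check and repair.
par2 create -r10 bigfile.par2 bigfile   # writes bigfile.par2 + volume files
par2 verify bigfile.par2                # checks the file against the parity
par2 repair bigfile.par2                # reconstructs damaged blocks
```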

I would perhaps also create plain checksums for the files, because a
script that checks every file runs faster that way than par2 verify
does.

For smaller files, perhaps put them into a version control system with
integrity checking instead, rather than creating parity for each one.

Of course backups are still essential; this is not meant to replace
them.

Running a script that checks all checksums is a "poor man's" version of
ZFS scrubbing. If bit rot is found, repair the file with the par2
parity data.
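Tying the two together might look like this; it assumes the manifest
from above and a bigfile.par2-style parity file sitting next to each
protected file, and all the names are hypothetical:

```shell
#!/bin/sh
# "Poor man's scrub": re-verify the manifest, then attempt a par2
# repair on anything that fails the checksum.
cd /data || exit 1
sha256sum -c MANIFEST.sha256 2>/dev/null |
awk -F': ' '$NF != "OK" { print $1 }' |
while read -r f; do
    echo "possible bit rot in $f, trying par2 repair"
    par2 repair "$f.par2" "$f"
done
```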

For send/receive, if needed, I think rsync is adequate, as it also uses
checksums to validate the transfer of files.
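For example (host and paths hypothetical); rsync already checksums each
file as it transfers it, and --checksum additionally compares existing
files by checksum rather than by size and mtime:

```shell
# Mirror the data set to another machine, comparing by checksum.
rsync -av --checksum /data/ backup-host:/data/
```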

Any feedback? Do you do something similar on OpenBSD?

Cheers.
