On Feb 22 07:52:11, n...@holland-consulting.net wrote:
(this is a request for a "that's stupid", not a suggestion
of something people should do at this point)

An idea that's been floating around in my head, inspired
by the ZFS "scrubbing" idea: rather than build that "check
your data" process into the file system, just do something
periodically like this:

  # dd if=/dev/rsd0c of=/dev/null bs=1m
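To make that concrete, here is a minimal sketch of how the periodic
read-scrub could be wrapped for cron.  The script name and the wrapper
function are my own invention, not anything from the thread; the idea
is just that a drive-level read error surfaces as a nonzero dd exit
status.  (bs=64k is used here because it is accepted by both BSD and
GNU dd; the OpenBSD one-liner above uses bs=1m.)

```shell
#!/bin/sh
# readscrub.sh (hypothetical name): read every block of a raw device
# and report whether the whole pass succeeded.  A failing sector makes
# dd exit nonzero, which is the "check your data" signal.
scrub() {
    dev=$1
    if dd if="$dev" of=/dev/null bs=64k 2>/dev/null; then
        echo "scrub ok: $dev"
    else
        echo "scrub FAILED: $dev" >&2
        return 1
    fi
}

# Example cron entry (root's crontab), running the pass weekly:
#   0 3 * * 0  /usr/local/sbin/readscrub.sh /dev/rsd0c
```

Note this only detects unreadable sectors; unlike ZFS scrubbing it
cannot tell you whether readable data is *correct*, since there is no
checksum to compare against.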

There is a lot of prior art on this concept.

See https://www.nsc.liu.se/lcsc2007/presentations/LCSC_2007-kelemen.pdf
for an analysis of failure modes and frequencies.  A background
scrubbing tool is proposed there as well.  See

https://marc.info/?l=openbsd-ports&m=122889297621831&w=2

for a (probably terrible attempt at a) port.

There was another thread titled "Ensuring data integrity" which I
just noticed.  People there seem to start from the assumption that a
file system is the best way to store data.  A slightly less
conditioned view is to consider files, databases, and "objects" as
candidate solutions to a given data-storage problem.

"Just pick the best one."

"Objects", aka AWS S3 to many people, are easily available on OpenBSD
via minio, and its replication options are many.

Of course, if you have 20GB of files accumulated over 20 years, this
newfangled database stuff won't fly.


J
