On 15/11/2019 10:11, gwes wrote:
On 11/14/19 3:52 PM, Andrew Luke Nesbit wrote:
On 15/11/2019 07:44, Raymond, David wrote:
I hadn't heard about file corruption on OpenBSD. It would be good to
get to the bottom of this if it occurred.
I was surprised when I read mention of it too, without any real claim
or detailed analysis to back it up. This is why I added my disclaimer
about "correcting me if I'm wrong, because I don't want to spread
misinformation".
There was a thread a couple of months ago started by someone either pretty
ignorant or a troll.
The consensus answer: no more than any other OS, less than many.
Thank you gwes, for the clarification.
The thread is vaguely coming back to me now. I was dipping in and out
of it at the time, as I didn't have time to study the details.
One size definitely doesn't fit all.
That is pretty obvious. I never suggested a blanket rule, and I assume
that the OP is able to tailor any suggestion to their needs.
Backup strategies depend on the user's criteria, the cost of design,
and the cost of doing the backups - administration, storage, etc.
Sure. I don't yet have a personal archival system for long-term
storage that satisfies my specifications, because I don't have the
infrastructure and media to store it on. I plan on investing in LTO
tape but cannot afford the initial cost yet.
In an ideal world every version of every file lasts forever.
Given real limitations, versioning filesystems can't and don't.
Indeed. But having archival snapshots at various points in time
increases the _probability_ that the version of the file that you need
will be present if+when you need it.
If your data are critical, invest in a dozen or more portable
USB drives. Cycle them off-site. Reread them (not too often)
to check for decay.
Yes, this is part of the backup system that I'm designing for my NAS,
but it's not so much for archiving.
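The reread-for-decay step can be made concrete with a checksum manifest per drive. A minimal sketch in sh, assuming GNU coreutils' sha256sum (on OpenBSD the equivalent is cksum -a sha256); the mount point and manifest name are my own placeholders:

```shell
# First run records a manifest; later runs verify every file against it.
# A mismatch means the file was changed -- or the medium is decaying.
check_decay() {
    vol="$1"
    manifest="$vol/.sha256.manifest"
    if [ ! -f "$manifest" ]; then
        # record a checksum for every file on the volume
        (cd "$vol" && find . -type f ! -name '.sha256.manifest' \
            -exec sha256sum {} + > .sha256.manifest)
        echo "manifest written"
    else
        # verify; sha256sum -c exits non-zero and names any failed file
        (cd "$vol" && sha256sum --quiet -c .sha256.manifest) \
            && echo "volume OK"
    fi
}
```

Run it once when a drive is filled, then again on each off-site rotation, e.g. `check_decay /mnt/usb0`.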
If you have much $$$$ available, get a
modern tape system.
Yes, as I mentioned above LTO would be great if+when I can afford it.
The backup scheme in use over 50 years ago, and still suitable for many
circumstances, looks something like this:
daily backups held for 1 month
weekly backups held for 6-12 months
monthly backups held indefinitely offsite.
Hold times vary according to circumstances.
I think something like this is a good plan.
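Those hold-time rules can be sketched as a small classifier over dated backup names. This is my own illustration, assuming GNU date's -d option, a backup-per-day named by YYYY-MM-DD, monthlies taken on the 1st, and weeklies on Sundays:

```shell
# classify BACKUP_DATE TODAY -> keep-monthly | keep-daily | keep-weekly | expire
# Monthlies (1st of the month) are kept indefinitely, dailies for one
# month, Sunday weeklies for six months; everything else expires.
classify() {
    d="$1"; today="$2"
    dom=$(date -d "$d" +%d)                          # day of month, 01..31
    dow=$(date -d "$d" +%u)                          # day of week, 1=Mon .. 7=Sun
    month_ago=$(date -d "$today -1 month" +%F)
    six_months_ago=$(date -d "$today -6 months" +%F)
    if [ "$dom" = "01" ]; then
        echo keep-monthly
    elif [ "$d" \> "$month_ago" ]; then
        echo keep-daily
    elif [ "$dow" = "7" ] && [ "$d" \> "$six_months_ago" ]; then
        echo keep-weekly
    else
        echo expire
    fi
}
```

A pruning cron job would then delete whatever classifies as expire.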
The dump(8) program can assist this by storing deltas, so that
more frequent backups only contain changes since the previous
less frequent backup.
I've not used dump(8) before, thanks for the suggestion. I will have
a look at it.
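For reference, OpenBSD's dump(8) expresses this with dump levels 0-9: each run backs up only files changed since the most recent dump of a *lower* level. A hypothetical root crontab for the monthly/weekly/daily scheme (devices, paths, and times are all assumptions):

```shell
# m h dom mon dow  command
# level 0 = full; -a autosizes, -u records the run in /etc/dumpdates
0 1 1 * *    /sbin/dump -0au -f /backup/home.month.dump /home  # monthly full
0 1 * * 0    /sbin/dump -3au -f /backup/home.week.dump  /home  # weekly: since last full
0 1 * * 1-6  /sbin/dump -5au -f /backup/home.day.dump   /home  # daily: since last weekly
```

restore(8) then replays the full followed by the relevant deltas.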
The compromise between backup storage requirements and granularity
of recovery points can be mitigated. The way to do it depends on
the type and structure of the data:
Some data are really transient and can be left out.
Source code control systems (or whatever the name is this week)
are a good way for intermittent backups to capture a good history
of whatever data is around, if it's text.
I don't understand how SCMs are supposed to help with this...
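One common shape of the idea (my sketch, assuming git is available; the identity strings are placeholders): commit a text tree such as /etc on a schedule, and every commit becomes a recoverable point-in-time version between full backups, at low storage cost:

```shell
# Commit the current state of a directory; repeated calls build a
# browsable history (git log / git diff) of every change to the tree.
snapshot_tree() {
    dir="$1"
    ( cd "$dir" || exit 1
      [ -d .git ] || git init -q
      git add -A
      # -c supplies placeholder identity; commit is a no-op (exit 1,
      # swallowed by || true) when nothing has changed
      git -c user.name=snapshot -c user.email=snapshot@localhost \
          commit -q -m "snapshot $(date -u +%Y-%m-%dT%H:%MZ)" \
          >/dev/null 2>&1 || true
    )
}
```

A nightly cron entry calling `snapshot_tree /etc` is essentially what tools like etckeeper automate.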
DBs often have their own built-in backup mechanisms.
This underscores the difference between file system-level backups,
block-level backups, and (for DBs) application-level backups. In
particular I'm trying to figure out a generally applicable way of taking
a _consistent_ backup of a disk without resorting to single user mode.
I think COW file systems might help in this regard but I don't think
anything like this exists in OpenBSD.
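As one concrete example of the application-level approach (my illustration, assuming SQLite; file names are hypothetical): SQLite's `.backup` command uses its online backup API, so the copy is transactionally consistent even while the database is in use - which plain cp(1) on a busy database file cannot guarantee:

```shell
# Copy a live SQLite database to a consistent snapshot file.
# The online backup API pages the copy under SQLite's own locking,
# so the snapshot can never contain a torn write.
backup_sqlite() {
    src="$1"; dst="$2"
    sqlite3 "$src" ".backup '$dst'"
}
```

Larger databases offer analogous tools (e.g. PostgreSQL's pg_dump) for the same reason.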
Binary files can be regenerated if the source *and* environment
are backed up.
Storing the environment is a tricky problem that I haven't found an
entirely satisfactory solution for, yet.
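A partial measure (a sketch, not a full solution) is to archive a machine-readable manifest of the environment next to the source, so at least the kernel, compiler, and package versions are known at restore time. Which probes succeed depends on what's installed; absent tools are simply skipped:

```shell
# Write an environment manifest: kernel, compiler, installed packages.
record_env() {
    out="$1"
    {
        echo "# environment recorded $(date -u +%Y-%m-%dT%H:%MZ)"
        uname -a
        cc --version 2>/dev/null | head -n 1
        # package inventory: pkg_info on OpenBSD, dpkg -l on Debian-ish
        pkg_info 2>/dev/null || dpkg -l 2>/dev/null || true
    } > "$out"
}
```

It doesn't capture everything (library state, local patches), but it turns "what was on that box?" from archaeology into a lookup.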
been there, mounted the wrong tape... what write protect ring?
Ohhhh yeah... me too. My team inherited a hosted service and upon
auditing we discovered its backup system was stranger than fiction. But
it was so bizarre that we couldn't determine whether it was _supposed_
to be that way or if our reasoning was flawed. A classic type of problem.
OpenPGP key: EB28 0338 28B7 19DA DAB0 B193 D21D 996E 883B E5B9