Thanks, everyone!

While I do appreciate the many suggestions for backup systems, my task
is to determine our exposure to each of the risks from which the
"rules of thumb" for data backups are derived.  That means I need to
determine the likelihood and impact of each potential risk that
results in data loss.

Some of this will depend on other information sources, but the rate
of change (and the size of the files changed, as suggested) should be
subject to sampling and some basic analysis.  So I am looking for any
ready-made tools that provide this sort of information, or suggestions
as to what data (from Linux filesystems) might be most useful should I
have to construct my own tool/procedure for this.
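
For instance, a one-liner along these lines (untested sketch;
/export/data is just a stand-in path, and the 24-hour window is
arbitrary) would give a first cut at both the count and volume of
recent changes:

    # files modified in the last 24h on one filesystem (GNU find);
    # /export/data is a placeholder path
    find /export/data -xdev -type f -mtime -1 -printf '%s\n' \
        | awk '{n++; b+=$1} END {printf "%d files, %d bytes\n", n, b}'

Repeating that with different -mtime windows (or -newermt against a
reference timestamp) would cover the other intervals.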

(I especially appreciate the reminders that I can generate lists of
relevant files using 'find'.  Some of these are exceptionally large
filesystems, so anything that limits the number of files that have to
be examined will certainly help.  In fact, I could probably generate
an initial table of how many files have changed in the last $x, $x+1,
$x+2, ... time intervals without having to run and record the same
scan that many times, spaced that far apart.)
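
Something like this is what I have in mind for that single-scan table
(again untested, same placeholder path, day-sized buckets): dump
every file's mtime once, then bucket by age afterwards:

    # one scan: emit each file's mtime in epoch seconds, then
    # bucket by age in days to get a change-age histogram
    find /export/data -xdev -type f -printf '%T@\n' \
        | awk -v now="$(date +%s)" \
            '{ d = int((now - $1) / 86400); count[d]++ }
             END { for (d in count) print d " days: " count[d] " files" }' \
        | sort -n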

Other ideas besides regular "brute force" sampling are also certainly
welcome!  And I will look at some of the backup systems that were
mentioned, but we had already shortlisted a few solutions pending the
outcome of the risk analysis.


I might ask some of my data-forensics acquaintances for a completely
different perspective, as well...
