Justin,
Not sure what your timeline is, but we run a script that pulls the status
daily and compiles it into a db. That way there is not much overhead,
because it only covers the last 24 hours of backups, but the db gives us
the flexibility to run reports etc.
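Roughly along these lines, assuming bpimagelist is in the PATH and
sqlite3 as the db (the db path and table name here are just placeholders):

    #!/bin/sh
    # Daily cron job: pull the last 24 hours of backup images into the db.
    DB=/var/backup/status.db
    sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS images (pulled_at TEXT, raw TEXT);'
    # -hoursago 24 keeps the query down to just the last day's backups.
    bpimagelist -hoursago 24 -l | while IFS= read -r line; do
        esc=$(printf '%s' "$line" | sed "s/'/''/g")   # escape quotes for SQL
        sqlite3 "$DB" "INSERT INTO images VALUES (datetime('now'), '$esc');"
    done

One insert per line is slow in a big environment; a real job would batch
the inserts, but this keeps the sketch short.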
Thanks.
Phil
456-3136
-Original Message-
If one is to create a script to ensure that the files on the
filesystem are backed up before removing them, what is the best
data-store model for doing so?
Obviously, if you have 1,000,000 files in the catalog and you need
to check each of those, you do not want to run bplist -B -C -R 99
against every one of them.
Why not set up an archive schedule? That way, the files can be archived, and
NetBackup will ensure that they are on tape before removing them.
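Something along these lines, where the policy and schedule names are just
placeholders:

    # Archive the listed files; NetBackup deletes them only after the
    # archive completes with status 0.
    bparchive -p user_archive -s monthly_arch -f /tmp/filelist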
Bobby Williams
2205 Peterson Drive
Chattanooga, Tennessee 37421
423-296-8200
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]
The problem with that is two-fold:
1. We back up multiple copies of the data; therefore, the archive
option will not work.
2. What if a tape has an I/O error halfway through the archive process? Yikes.
Justin.
On 3/26/07, Bobby Williams [EMAIL PROTECTED] wrote:
Why not set up an archive schedule? That way, the files can be archived, and
NetBackup will ensure that they are on tape before removing them.
The problem I worry about with running a bplist on each file is the
network overhead and the load it puts on the master server. If
you have 50 servers with 1,000,000 files each, that would be 50
million network requests in total. I was thinking: dump the catalog onto
the local machine and build the check against that.
The good: one network connection to pull the data from the master to the client.
The bad: it will be a lot of data for a larger catalog, but one can
always compress it.
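As a rough sketch of that single-connection dump (the flags mirror the
bplist call above; the paths are placeholders):

    # One bplist call pulls the whole client catalog from the master;
    # gzip keeps the local copy small for larger catalogs.
    bplist -B -C "$(hostname)" -R 99 / > /tmp/catalog.txt
    gzip /tmp/catalog.txt
    # The per-file checks then run locally against /tmp/catalog.txt.gz
    # instead of one bplist round trip per file.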
The reason for this is, I am not sure how many of you have used
NetBackup 6.0MPx or for how long, but early on, maybe this is
Why not have a script that runs a backup followed by an archive? Check
the error code of the backup; if it is not 0, don't run the archive. If
it is 0, run the archive, which will automatically delete the files
when it completes successfully. It will not delete anything if it fails,
even with
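As a sketch, with hypothetical policy and schedule names:

    #!/bin/sh
    # Back up first; only archive (which deletes) if the backup returned 0.
    # -w makes the commands wait and return the job's exit status.
    bpbackup -w -p std_policy -s user_sched -f /tmp/filelist
    if [ $? -eq 0 ]; then
        # bparchive removes the files itself once the archive succeeds.
        bparchive -w -p std_policy -s arch_sched -f /tmp/filelist
    fi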
A nice idea; however, this is actually part of a much larger and more
complicated system in which certain files have to be kept for certain
retentions, both on disk and backed up to tape (think tape archival
with different retention rates). If I were to change the entire
architecture behind it, this may
The main reason for something like this overall is that you will get 5
kilobytes per second if you back up a filesystem with a lot of sparse
data.
Think of 500,000 directories and 60,000 files, with the 60,000 files
scattered across the 500,000 directories. 99% of the backup time is
NetBackup traversing the directory tree.
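You can see the traversal cost on its own with a plain directory walk
that reads no file data (/fs is a placeholder):

    # Walking the tree is what the backup spends its time on here.
    time find /fs -type d | wc -l    # ~500,000 directories
    time find /fs -type f | wc -l    # ~60,000 files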
And sure, yes, there is FlashBackup, but it does not work with an EXT3
filesystem! There is also VxFS for Linux, but think of the licensing
costs etc. for a lot of servers; not really a solution.
Justin.
On 3/26/07, Justin Piszcz [EMAIL PROTECTED] wrote:
2. What if a tape has an I/O error halfway through the archive
process? Yikes.
Nothing happens; you misunderstand bparchive. File deletion occurs only
after the backup completes with a status 0.
find / > foo; bplist <parameters> / | diff foo -
Make the appropriate sorting or exclusion adjustments first.
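A sorted variant of the same check, using the bplist flags from earlier
in the thread (paths with spaces or odd characters would need extra care):

    find /data -type f | sort > /tmp/on_disk
    bplist -B -C "$(hostname)" -R 99 /data | sort -u > /tmp/in_catalog
    # Files present on disk but never seen in the catalog:
    comm -23 /tmp/on_disk /tmp/in_catalog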
As a colleague of mine once put it: 'BANANA - Backups Are Not Archives,
Not Archives'. Practise this mantra. You should really think about
rearchitecting.
That said, if your data isn't important to your firm and you're
confident that you can restore from backups (you're making multiple
copies,