We have a NetApp filer with a few TB of data made up largely of millions 
of small files (roughly 30 million), and we are using several NDMP 
policies to back it up. The two main problems are the length of time the 
backups take (we usually have 2-3 backups running all day, every day) and 
that when there is a maintenance window or other event in the NBU domain, 
we have to kill the jobs and start over from the beginning, since NDMP 
policies give us no checkpoints.

For those of you who have faced a similar situation, how are you backing 
up this data?

Our current thinking is to move away from NDMP and rely on snapshots 
instead, getting the snapshot offsite either by backing it up or 
replicating it. We've also considered backing the data up over NFS; that 
would probably be slower, but we would get checkpoints.

I appreciate any other suggestions anyone has.

Rusty Major, MCSE, BCFP, VCS ▪ Sr. Storage Engineer ▪ SunGard 
Availability Services ▪ 757 N. Eldridge Suite 200, Houston TX 77079 ▪ 
281-584-4693
Keeping People and Information Connected® ▪ 
http://availability.sungard.com/ 
Think before you print 
CONFIDENTIALITY:  This e-mail (including any attachments) may contain 
confidential, proprietary and privileged information, and unauthorized 
disclosure or use is prohibited.  If you received this e-mail in error, 
please notify the sender and delete this e-mail from your system. 
_______________________________________________
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
