Everything Wanda has said here is true, but there is one tidbit she did not
mention.

If you collocate by filespace, the parallelism can be taken even further.
But this only makes sense if the filespaces are large; otherwise they waste
a lot of tape, unless you just have small tapes.  The prime example of this
is SAP backups, but it works for file system backups as well.  There are
always tradeoffs.
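For reference, filespace collocation is set per storage pool on the TSM server side; a minimal sketch from the administrative client, assuming a tape pool named TAPEPOOL (the pool name is hypothetical):

```shell
# Run from the TSM administrative client (dsmadmc).
# TAPEPOOL is a hypothetical storage pool name -- substitute your own.
update stgpool TAPEPOOL collocate=filespace

# Confirm the collocation setting took effect:
query stgpool TAPEPOOL format=detailed
```

Note that changing collocation only affects data written after the change; existing tapes are not reorganized.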

-----Original Message-----
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 26, 2001 3:00 PM
To: [EMAIL PROTECTED]
Subject: Re: backup/recovery


You can multi-thread your restores as well.

It doesn't occur automatically, but if your fileserver has more than one
drive/filesystem, just open a second GUI window and start a second or third
restore, one for each filesystem/drive.

Even if the restores call for all the same tapes, assuming you have more
than one tape drive you can get a lot of parallelism.  Restore number 2 may
have to wait for a tape until restore number 1 is finished with it, but then
while restore number 1 is processing the next tape, restore number 2 gets
its tape and gets work done, and so forth, until you either saturate your
network or run out of tape drives.
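The same trick works from the command line instead of the GUI; a sketch using the backup-archive client, one session per filesystem (the mount points /fs1 and /fs2 are hypothetical):

```shell
# Start one dsmc restore session per filesystem, in parallel.
# /fs1 and /fs2 are hypothetical mount points -- use your own.
dsmc restore "/fs1/*" -subdir=yes &
dsmc restore "/fs2/*" -subdir=yes &

# Wait for both restore sessions to finish.
wait
```

Each backgrounded dsmc opens its own session with the server, which is what lets the server mount tapes for both restores concurrently.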

This doesn't work too well on an NT system if your data is on multiple
LOGICAL C:, D:, E: drives that are really all the same physical drive, since
you create contention for the physical device.  But if your 300 GB is spread
across multiple physical disk drives, you should be able to improve your
throughput quite a bit.


************************************************************************
Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert
************************************************************************



-----Original Message-----
From: Anderson F. Nobre [mailto:[EMAIL PROTECTED]]
Sent: Saturday, December 22, 2001 8:10 PM
To: [EMAIL PROTECTED]
Subject: backup/recovery


Hi,

I have some doubts about how to improve the recovery time of a fileserver. I
mean, a fileserver with 300 GB takes 4 hours to back up, but to recover it
from scratch takes about 24 hours. I was thinking of using backup sets, but
in my opinion that just transfers the problem from the client to the server,
so I would spend 24 hours generating a backup set for this node.
As for why the recovery is so slow compared to the backup: I understand the
first reason is that I always do incremental backups; the second is that I
use multithreading on the client, whereas the recovery is always one
session. The other reason is the fragmentation of the files on the tape,
even using collocation=yes.
I've already created a mgmtclass so that the backup of directories goes to a
disk stgpool.
Does anyone have any suggestions on how to increase the speed of restores?

Regards,

Anderson
