> Sounds like you had both crappy tape drives, and just poor performance
> over the SCSI bus. From another email, I see you had a StoreVault
> thingy, which is NOT enterprise class Netapp hardware.
The tape drive is LTO-3 and the SCSI bus is Ultra160. The reason the
backup to tape took so long is because the data is a million small files,
and the filer didn't have any efficient way of generating anything like a
contiguous data stream. It must have been simply reading the filesystem
and walking the tree.

> Nice setup. How many snapshots can you store on the Dell or the Sun
> and how often have you had to restore from Tape?

We've never had to restore from tape; we just do it once in a while to be
sure we can. We're currently retaining a month of daily snapshots on the
filer itself, and another month on the secondary server. But the whole
system is only about four months old.

> Sure, restores from snapshots are trivial. Never argued they
> weren't. And I personally *like* snapshot restores. But when an
> engineer creates a 500+gb file during a simulation run, it will simply
> *kill* your snapshot reserve, and reduce the usefulness of snapshots
> remarkably.

Snapshot reserve? I never really saw the point of having a snapshot
reserve. I just have one big data pool, and I don't care how much space
the snapshots take, unless the whole disk starts to get full. Then we
have to rm some files and bump off some of the oldest snapshots.

We have a rule: since there are tons of compute servers and only one
central file server, no matter how many aggregate links you may have to
that filer, you don't want heavy-I/O sims running on it. Every compute
server has a /scratch area on local disk. Simulations write their
re-generatable output to local disk, so all machines can work in parallel
at local-disk speeds. All the source files stay on the filer, where they
are universally available and backed up.

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
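P.S. The small-files-vs-contiguous-stream point can be sketched with a
plain tar pipeline. This is a generic illustration, not the filer's
actual backup path; all paths are made up, and it writes to a regular
file (where a real run would target the tape device, e.g. /dev/nst0) so
the sketch is runnable anywhere:

```shell
# Sketch: instead of copying a million small files one by one, have one
# reader walk the tree and emit a single contiguous tar stream; dd then
# writes it in large fixed-size blocks, which is what a tape drive wants.
# Paths are illustrative; a real run would use of=/dev/nst0.
set -e
SRC=$(mktemp -d)            # stands in for the data volume
OUT=$(mktemp -u).tar        # stands in for the tape device
mkdir -p "$SRC/proj"
printf 'hello\n' > "$SRC/proj/file1.txt"
printf 'world\n' > "$SRC/proj/file2.txt"

tar cf - -C "$SRC" . | dd of="$OUT" bs=256k 2>/dev/null

# Verify the archive really contains the files.
tar tf "$OUT" | grep -q 'file1.txt' && echo "stream OK"
```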
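The "retain a window of snapshots, bump off the oldest when the disk
fills" scheme might look something like this for plain date-named
snapshot directories. The directory layout, names, and KEEP count are
all assumptions, and `head -n -N` is GNU coreutils:

```shell
# Sketch: keep the newest $KEEP date-named snapshot directories, remove
# the rest. Layout and names are hypothetical.
set -e
SNAPDIR=$(mktemp -d)        # stands in for the snapshot directory
for d in 2008-01-01 2008-01-02 2008-01-03 2008-01-04; do
    mkdir "$SNAPDIR/$d"
done

KEEP=2
# ISO dates sort chronologically, so everything before the last $KEEP
# entries is oldest and gets bumped off.
ls -1 "$SNAPDIR" | sort | head -n -"$KEEP" | while read -r old; do
    rm -rf "$SNAPDIR/${old:?}"
done

ls -1 "$SNAPDIR"
```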
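The /scratch rule could look like this inside a job wrapper; the paths
and the "simulation" command are stand-ins so the sketch runs anywhere:

```shell
# Sketch of the rule above: sources live on the (NFS-mounted) filer;
# heavy, re-generatable output goes to the compute node's local /scratch,
# so the single central filer never sees the heavy I/O. All paths are
# hypothetical.
set -e
FILER=$(mktemp -d)          # stands in for the central filer mount
SCRATCH=$(mktemp -d)        # stands in for local /scratch
printf 'input deck\n' > "$FILER/sim.in"

RUN="$SCRATCH/run1"
mkdir -p "$RUN"
# The sim reads its sources from the filer but writes bulk output locally.
tr 'a-z' 'A-Z' < "$FILER/sim.in" > "$RUN/out.dat"  # stand-in for the sim

# Only the sources on $FILER are backed up; $RUN/out.dat is
# re-generatable and stays on local disk.
cat "$RUN/out.dat"   # prints: INPUT DECK
```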
