Yup, I've seen the same type of bottleneck, especially on Windows file servers.

Alternatives: Check into creating image backups. Restoring an image will take a lot less time than restoring 10 million files individually (because you don't go through the file-create process for each individual file, as Kelly said). Then you "roll forward" by restoring files from TSM incrementals that are newer than the image. The question is whether you can find an optimum point where you take the image dumps often enough to still get your complete restore time within your SLAs. This works best with data that has a very low change rate.

Hardware solutions: physical mirrors or hardware-type FlashCopy for the filesystem; then the same "roll forward" idea by restoring files from TSM that are newer than the image. Physical mirrors or FlashCopy is probably going to be the most satisfactory solution in the long run.

Software solutions: you can experiment with virtual mount points in TSM (if this is a Unix system); that lets you back up the one filesystem as several. Then you can collocate by filesystem in TSM and use multiple restore streams. That may not help much if the disk itself is the bottleneck.

You have to find the solution that works for your particular OS and hardware. The BEST solution is to educate folks not to stick 10 million files in one filesystem in the first place...

Wanda Prather
"I/O, I/O, It's all about I/O" - (me)

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Wallace.Dwight
Sent: Wednesday, May 17, 2006 8:33 AM
To: [email protected]
Subject: Re: Disk-to-Disk Backup

Another interesting thought. We had thought about this one.

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp
Sent: Wednesday, May 17, 2006 3:12 AM
To: [email protected]
Subject: Re: Disk-to-Disk Backup

Before spending a ton of time optimizing this, realize that the impediment to fast restore is file-create time on the file server.
My testing has shown that we can create between 50K and 75K files per hour. 10 million files at 50K files per hour is a long time: 200 hours. It won't matter how many TB of disk pool you have...

Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
[EMAIL PROTECTED]

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Christoph Pilgram
Sent: Tuesday, May 16, 2006 7:43 AM
To: [email protected]
Subject: [ADSM-L] Disk-to-Disk Backup

Hi all,

Because we have problems meeting our service level agreements with customers for restoring big file servers (10 million files, 1 TB of disk space in one filesystem), we are thinking about no longer storing the backups on tape but on disk. Does anybody have experience with that kind of storage pool for about 40 TB of backup data? Does anybody use, for example, a "Data Domain DD460" or other systems using COS to reduce the amount of data?

Thanks for help
Chris
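
A quick back-of-the-envelope check of Kelly's figures earlier in the thread (the 50,000-75,000 files/hour create rates are his measurements; everything else here is just arithmetic):

```python
# Estimate full-restore wall-clock time when file-create rate,
# not data throughput, is the bottleneck.

def restore_hours(num_files: int, files_per_hour: float) -> float:
    """Hours needed to recreate num_files at the given create rate."""
    return num_files / files_per_hour

total_files = 10_000_000  # the 10-million-file server in this thread

# Kelly's measured range: 50K-75K file creates per hour.
worst = restore_hours(total_files, 50_000)  # 200.0 hours (~8.3 days)
best = restore_hours(total_files, 75_000)   # ~133.3 hours (~5.6 days)

print(f"At 50K files/hr: {worst:.1f} hours")
print(f"At 75K files/hr: {best:.1f} hours")
```

Even at the optimistic end of the range the restore runs for days, which is why the thread's suggestions (image restore plus roll-forward, or splitting the filesystem into multiple restore streams) attack the file-create bottleneck rather than raw storage-pool throughput.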
