If you are not using HSM, the virtual mountpoint approach is a good one. Since we have an integrated TSM/HSM system on GPFS, we can't do that. It takes a bit over 3 days to wade through 130+ million files. We don't have enough disk to use mmbackup, which basically 'journals' the changes. I do run separate incrementals on critical directories while the overall incremental runs.
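The "separate incrementals on critical directories" idea can be sketched as follows: split the filesystem into per-directory incremental streams so several dsmc processes can run side by side. This is a minimal sketch, not the poster's actual setup; /tmp/demo_gpfs stands in for the real GPFS mountpoint, and the dsmc commands are echoed rather than executed.

```shell
# Stand-in for a GPFS mountpoint (hypothetical path, for demonstration only)
FS=/tmp/demo_gpfs
mkdir -p "$FS/home" "$FS/projects" "$FS/scratch"

# Emit one "dsmc incremental" command per top-level directory;
# in practice each would be launched as its own background process.
# -subdir=yes makes the client recurse below the named directory.
for d in "$FS"/*/ ; do
    echo "dsmc incremental $d -subdir=yes"
done
```

Running each stream concurrently trades server sessions for wall-clock time; with 130+ million files the directory scan itself, not data movement, is usually the bottleneck.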
What version of GPFS and TSM are you using? We are a bit back-level at GPFS 3.3 and TSM 5.5, but are in the process of upgrading. I think a more current combination will allow for faster backups.

Gretchen Thiele
Princeton University

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[email protected]] On Behalf Of Lee, Gary
Sent: Thursday, February 02, 2012 9:50 AM
To: [email protected]
Subject: Re: [ADSM-L] million files backup

Is it organized into subdirectory trees? If so, virtual mountpoints might be a way to go.

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[email protected]] On Behalf Of Jorge Amil
Sent: Thursday, February 02, 2012 9:30 AM
To: [email protected]
Subject: [ADSM-L] million files backup

Hi everybody,

Does anyone know the best way to back up a filesystem that contains millions of files? Image backup is not possible because it is a GPFS filesystem and image backup is not supported there.

Thanks in advance,
Jorge
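For reference, Gary's virtual-mountpoint suggestion boils down to a few client-option lines: on a Unix TSM backup-archive client, VIRTUALMOUNTPOINT entries in dsm.sys make chosen subdirectories of the GPFS filesystem appear as separate filespaces, so each can be backed up (and expired) independently and in parallel. A minimal sketch follows; the server name and paths are hypothetical examples, and as Gretchen notes this approach does not combine with an integrated HSM setup.

```
* dsm.sys server stanza (sketch -- server name and paths are examples only)
SERVERNAME  TSMSERV1
   VIRTUALMOUNTPOINT  /gpfs/fs1/home
   VIRTUALMOUNTPOINT  /gpfs/fs1/projects
   VIRTUALMOUNTPOINT  /gpfs/fs1/scratch
```

Each entry can then be driven by its own client process, e.g. dsmc incremental /gpfs/fs1/projects, instead of one scan of the whole filesystem.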
