Some things to consider with large file systems, and Unix ones in particular:
1. Use CLI-type backups rather than GUI-type, for speed.
2. "Divide and conquer": very large file systems are conspicuous candidates for subdivision via the TSM Virtualmountpoint client option, which will greatly improve things overall. For that matter, oversized file systems are conspicuous candidates for splitting into multiple physical file systems.
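As a rough sketch of item 2 (the path names here are hypothetical - substitute the real subtrees): each Virtualmountpoint entry in the client's dsm.sys makes TSM treat that subtree as its own file space, so it is traversed and backed up independently:

```
* Hypothetical dsm.sys entries -- use your actual subtree paths.
* Each VirtualMountPoint causes TSM to treat the subtree as a
* separate file space, backed up and managed on its own.
VirtualMountPoint  /bigfs/projects
VirtualMountPoint  /bigfs/archive
VirtualMountPoint  /bigfs/scratch
```

With that in place, each file space can be backed up (or retried after a failure) without re-walking the whole original file system.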
In suspect areas of Unix file systems I would interactively run a 'find' command with the -ls option to test how quickly sub-trees of the file system can be traversed, to point out ringers. A single directory containing a huge number of files can induce a lot of overhead, and is ripe for re-architecting.
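The probing above can be sketched as a small shell loop (FS is a placeholder for the file system under suspicion; everything else is standard find/sort/wc): it counts the entries under each immediate subdirectory so oversized subtrees stand out, and you can watch which iterations are disproportionately slow.

```shell
# Sketch: rank the immediate subdirectories of a suspect file system
# by object count, so ringers stand out.  Point FS at the real
# file system; "." is only a placeholder default.
FS=${FS:-.}
for d in "$FS"/*/; do
    [ -d "$d" ] || continue
    # Count every entry under this subtree (directories + files).
    n=$(find "$d" 2>/dev/null | wc -l)
    printf '%8d  %s\n' "$n" "$d"
done | sort -rn | head
```

Running `time find <subtree> -ls > /dev/null` on the worst offenders from that list then gives a feel for raw traversal speed, which is roughly what the incremental backup has to do.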
The big problem in all file systems is that they are simply created and then ignored, and the users of those file systems fill them with whatever they want, organized at whim in most cases. The architecting of file systems for performance and organizational sanity is a greatly overlooked subject area.
Richard Sims
On Mar 28, 2005, at 5:22 PM, Zoltan Forray/AC/VCU wrote:
I am having issues backing up a large Linux server (client=5.2.3.0).
The TSM server is also on a RH Linux box (5.2.2.5).
This system has over 4.6M objects.
A standard incremental WILL NOT complete successfully. It usually hangs, times out, etc.
The troubles seem to be related to one particular directory with 40 subdirectories, comprising 1.4M objects (per the box owner).
If I point to this directory as a whole (via the web ba-client), and try to back it up in one shot, it displays the "inspecting objects" message and then never comes back.
If I drill down further and select the subdirs in groups of 10, it seems to back them up, with no problem.
So, one question I have is: is anyone out there backing up large Linux systems similar to this?
Any suggestions on what the problem could be?
Currently, I do not have access to the error log files, since this is a protected/firewalled system and I don't have the id/pw.
