If you look at the scheduler logs on the proxy agents, do you see a lot of file retries? If you do, it might help to use non-shared static or dynamic copy serialization on your copy groups. That'll save a bunch of stat(2) calls from dsmc. stat over NFS is very expensive, because many implementations will trigger a cache flush on the server for that file to get an accurate block count. Switching the copy serialization reduced some of our proxy node-based backup times by 10x.
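Copy serialization is changed on the TSM server, not in the client option file. A hedged sketch, assuming the default STANDARD domain, policy set, and management class names (substitute your own), run from a dsmadmc administrative session:

```
UPDATE COPYGROUP STANDARD STANDARD STANDARD STANDARD TYPE=BACKUP SERIALIZATION=STATIC
VALIDATE POLICYSET STANDARD STANDARD
ACTIVATE POLICYSET STANDARD STANDARD
```

SERIALIZATION=STATIC is the non-shared static behavior (back the file up only if it is not modified during the attempt, with no retries); SERIALIZATION=DYNAMIC backs the file up on the first attempt regardless of modification. Either avoids the retry loop that drives the extra stat(2) traffic. Remember the change only takes effect after the policy set is activated.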
-- Skylar Thompson ([email protected])
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

On 08/06/12 01:12 PM, Arbogast, Warren K wrote:
There is a Linux fileserver here that serves web content. It has 21 million files in one filesystem named /ip, with over 4,500 directories at the second level of the filesystem. The server is running the 6.3.0.0 client, and has 2 virtual CPUs and 16 GB of RAM. Resourceutilization is set to 10, and currently there are six client sessions running.

I am looking for ways to accelerate the backup of this server, since currently it never ends. The filesystem is NFS mounted, so a journal-based backup won't work. Recently, we added four proxy agents, and are splitting up the one big filesystem among them using include/exclude statements. Here is one of the agents' include/exclude files:

exclude /ip/[g-z]*/.../*
include /ip/[a-f]*/.../*

1) Since we added the proxies, the proxy backups are copying many thousands of files, as if this were the first backup of the server as a whole. Is that expected behavior?

2) Recently, the TSM server database has been growing faster than it usually does, and I'm wondering whether there could be any correlation between the ultra-long-running backup, the many thousands of files copied, and the faster pace of database growth.

3) The four proxies haven't made a big difference in the run time of the backup. Could something else be done to speed it up?

Thank you,
Keith Arbogast
Indiana University
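One way to make the alphabet split less lopsided than fixed ranges like [a-f] is to distribute the actual second-level directory names evenly across the proxy nodes and generate the include statements from that. A minimal sketch, assuming four proxies and illustrative directory names (the /ip paths and proxy labels are placeholders, not anything from the original post):

```python
def partition(names, n):
    """Distribute directory names across n proxy nodes, round-robin
    over the sorted list, so each node gets roughly len(names)/n dirs."""
    buckets = [[] for _ in range(n)]
    for i, name in enumerate(sorted(names)):
        buckets[i % n].append(name)
    return buckets

if __name__ == "__main__":
    # In practice these would come from listing the second level of /ip,
    # e.g. os.scandir("/ip"); hard-coded here for illustration.
    dirs = ["alpha", "beta", "delta", "epsilon", "gamma"]
    for node, bucket in enumerate(partition(dirs, 4)):
        for d in bucket:
            print(f"proxy{node}: include /ip/{d}/.../*")
```

Note the caveat implied by question 1: any change to the include/exclude pattern a proxy node sees can make previously backed-up files fall out of its view and new ones fall in, which tends to trigger large re-backups the first time through.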
