Hi All,

We use Spectrum Protect (TSM) to back up our Scale filesystems. We have the 
backup set up to use multiple nodes with the PROXY node function turned on 
(and, to some extent, we also use multiple target servers).
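
For illustration, the proxy side looks roughly like this (the node names here 
are made up):

   GRANT PROXYNODE TARGET=SCALE_FS AGENT=CLIENT1
   GRANT PROXYNODE TARGET=SCALE_FS AGENT=CLIENT2

and each client sends its data under the shared node name via 
"ASNODENAME SCALE_FS" in its dsm.sys stanza, so all the parallel client 
sessions land in the same node on the server.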

This all feels like it should be nice and parallel. On the TSM servers, we 
have disk pools for any “small” files to drop into (I think we set the cutoff 
at anything smaller than 20GB), to prevent lots of small files stalling tape 
drive writes.
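
The pool definition is along these lines (pool names made up; the MAXSIZE is 
what diverts the big files straight to the tape pool):

   DEFINE STGPOOL SCALE_DISK DISK MAXSIZE=20G NEXTSTGPOOL=SCALE_TAPE
          MIGPROCESS=6 HIGHMIG=80 LOWMIG=20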

Whilst digging into why our backups are slow at times, we found that the disk 
pool empties with a single thread (one tape drive). Looking at the docs:
https://www.ibm.com/support/pages/concurrent-migration-processes-and-constraints
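
You can watch this happening on the server: the pool says it is allowed six 
migration processes, but only one ever runs, e.g.

   QUERY STGPOOL SCALE_DISK F=D   (reports Migration Processes: 6)
   QUERY PROCESS                  (only ever shows a single Migration process)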

This implies that the number of concurrent migration processes is limited by 
the number of client nodes with data stored in the pool. i.e. because we back 
everything up as a single node via PROXY nodes, we are essentially limited to 
a single thread streaming out of the disk pool when it fills.

Have we understood this correctly? If so, this appears to defeat the whole 
purpose of PROXY nodes when you have lots of small files. Or is there some 
other setting we should be looking at to increase the number of threads when 
the disk pool is emptying? (The disk pool itself has Migration Processes: 6.)
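
i.e. we have already set something like:

   UPDATE STGPOOL SCALE_DISK MIGPROCESS=6

but if we are reading the page above correctly, the effective count is capped 
at the number of nodes with data in the pool, which for us (one proxy target 
node) is one.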

Thanks

Simon