I think the collocation settings of the target pool for the migration come
into play as well. If you have multiple filespaces associated with a node
and collocation is set to FILESPACE, then you should be able to get one
migration process per filespace rather than one per node/collocation group.
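
For example (the pool names below are placeholders for your own disk and
tape pools), you could check and adjust this from dsmadmc with something
like:

    QUERY STGPOOL TAPEPOOL FORMAT=DETAILED      (check the "Collocate?" field)
    UPDATE STGPOOL TAPEPOOL COLLOCATE=FILESPACE
    QUERY STGPOOL DISKPOOL FORMAT=DETAILED      (check "Migration Processes")

The number of concurrent migration processes should then be capped by the
disk pool's MIGPROCESS value and by the number of distinct filespaces with
data in the pool, rather than by the number of client nodes.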

On Mon, Jan 04, 2021 at 12:21:05PM +0000, Simon Thompson wrote:
> Hi All,
> 
> We use Spectrum Protect (TSM) to backup our Scale filesystems. We have the 
> backup setup to use multiple nodes with the PROXY node function turned on 
> (and to some extent also use multiple target servers).
> 
> This all feels like it is nice and parallel, on the TSM servers, we have disk 
> pools for any "small" files to drop into (I think we set anything smaller 
> than 20GB) to prevent lots of small files stalling tape drive writes.
> 
> Whilst digging into why we have slow backups at times, we found that the disk 
> pool empties with a single thread (one drive). And looking at the docs:
> https://www.ibm.com/support/pages/concurrent-migration-processes-and-constraints
> 
> This implies that we are limited to the number of client nodes with data 
> stored in the pool, i.e. because we have one node and PROXY nodes, we are 
> essentially limited to a single thread streaming out of the disk pool when it 
> is full.
> 
> Have we understood this correctly? If so, this appears to make the whole 
> purpose of PROXY nodes sort of pointless if you have lots of small files. Or 
> is there some other setting we should be looking at to increase the number of 
> threads when the disk pool is emptying? (The disk pool itself has Migration 
> Processes: 6)
> 
> Thanks
> 
> Simon

-- 
-- Skylar Thompson ([email protected])
-- Genome Sciences Department (UW Medicine), System Administrator
-- Foege Building S046, (206)-685-7354
-- Pronouns: He/Him/His
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
