-----gpfsug-discuss-boun...@spectrumscale.org wrote: -----
From: Simon Thompson
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Date: 01/04/2021 01:21PM
Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Protect and disk pools
Hi All,
We use Spectrum Protect (TSM) to back up our Scale filesystems. We have the backup set up to use multiple nodes with the PROXY node function turned on (and, to some extent, multiple target servers as well).
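By way of illustration, the proxy relationships are granted on the TSM server with something like the below (node names invented for the example), and each client node then sets ASNODENAME to the target node in its dsm.sys:

    GRANT PROXYNODE TARGET=SCALE_FS AGENT=BACKUP_NODE1
    GRANT PROXYNODE TARGET=SCALE_FS AGENT=BACKUP_NODE2

so all the filesystem data is stored under the single SCALE_FS node regardless of which agent sent it.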
This all feels nice and parallel. On the TSM servers, we have disk pools for any "small" files to drop into (I think we set the cutoff at anything smaller than 20GB) to prevent lots of small files stalling tape drive writes.
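(For completeness, and assuming our memory of the config is right: that size cutoff is just the MAXSIZE parameter on the disk pool, roughly along these lines with invented pool names, so anything over 20GB goes straight to the next pool rather than landing on disk first:

    DEFINE STGPOOL DISKPOOL DISK MAXSIZE=20G NEXTSTGPOOL=TAPEPOOL
)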
Whilst digging into why our backups are sometimes slow, we found that the disk pool empties with a single thread (one tape drive). Looking at the docs:
https://www.ibm.com/support/pages/concurrent-migration-processes-and-constraints
this implies that the number of concurrent migration processes is limited by the number of client nodes with data stored in the pool, i.e. because everything is stored under a single node via the PROXY function, we are essentially limited to a single thread streaming out of the disk pool when it is full.
Have we understood this correctly? If so, this appears to make the whole purpose of PROXY nodes somewhat pointless if you have lots of small files. Or is there some other setting we should be looking at to increase the number of threads when the disk pool is emptying? (The disk pool itself has Migration Processes: 6.)
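For reference (pool name invented for the example), that migration process count is the MIGPROCESS setting, i.e.:

    UPDATE STGPOOL DISKPOOL MIGPROCESS=6

and "QUERY STGPOOL DISKPOOL F=D" confirms "Migration Processes: 6". But if the page above is right, each migration process works on a different client node's data, so with everything consolidated under one proxy target node only one process ever has anything to do.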
Thanks
Simon
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss