Hi,

thanks a lot. About the requested information:

* Waiters were captured with 'mmdiag --waiters', run on one of the IO (NSD) nodes (see the sketch after this list for how the same data could be gathered from all NSD nodes at once).
* The storage and client clusters are connected over InfiniBand EDR. The GPFS client cluster consists of 3 chassis, each with 24 blades and an unmanaged EDR switch (24 ports for the blades, 12 external), of which 10 external EDR ports are currently cabled for external connectivity. The GPFS storage cluster has 2 IO nodes (a DSS G240, as mentioned in the previous e-mail), and each IO node has 4 EDR ports connected. Regarding the InfiniBand fabric, there are 2 top-level managed EDR switches configured with up/down routing, connecting the unmanaged switches in the chassis and the 2 managed InfiniBand switches for the storage (for redundancy).
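
In case it is useful, the same waiters could be collected from all NSD server nodes in one go with something along these lines (just a sketch; I am assuming the built-in 'nsdNodes' node class matches our IO nodes):

# Dump the current waiters from every NSD server node at once
/usr/lpp/mmfs/bin/mmdsh -N nsdNodes /usr/lpp/mmfs/bin/mmdiag --waiters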

If it would make debugging easier I can open a PMR, no problem at all. Mainly I was wondering what 'waiting for helper threads' means and what could be causing it.
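
In the meantime, to see whether the long 'for I/O completion' waiters line up with slow back-end I/O, I plan to look at the recent I/O history on the IO nodes, roughly like this (sketch; the exact output columns may vary per release):

# Recent I/O history on an NSD server node, including per-I/O latency,
# to check whether slow disk I/O explains the long waiters
/usr/lpp/mmfs/bin/mmdiag --iohist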

Thanks a lot for your help and best regards,
Marc
_________________________________________
Paul Scherrer Institut
High Performance Computing
Marc Caubet Serrabou
Building/Room: WHGA/019A
Forschungsstrasse, 111
5232 Villigen PSI
Switzerland

Telephone: +41 56 310 46 67
E-Mail: [email protected]
________________________________
From: [email protected] 
[[email protected]] on behalf of IBM Spectrum Scale 
[[email protected]]
Sent: Thursday, April 18, 2019 5:54 PM
To: gpfsug main discussion list
Cc: [email protected]
Subject: Re: [gpfsug-discuss] Performance problems + 
(MultiThreadWorkInstanceCond), reason 'waiting for helper threads'

We can try to provide some guidance on what you are seeing, but for a true analysis of performance issues customers should generally contact IBM Lab Based Services (LBS). We need some additional information to understand what is happening:

  *   On which node did you collect the waiters and what command did you run to 
capture the data?
  *   What is the network connection between the remote cluster and the storage 
cluster?

Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale 
(GPFS), then please post it to the public IBM developerWorks Forum at 
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract, please contact 1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.



From:        "Caubet Serrabou Marc (PSI)" <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        04/18/2019 11:41 AM
Subject:        [gpfsug-discuss] Performance problems + 
(MultiThreadWorkInstanceCond), reason 'waiting for helper threads'
Sent by:        [email protected]
________________________________



Hi all,

I would like to have some hints about the following problem:

Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads'
Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion
Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion
Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion

I am testing a new GPFS setup (a GPFS client cluster of computing nodes 
remotely mounting the storage GPFS cluster) and I am running 65 gpfsperf 
commands (1 command per client, in parallel) as follows:

/usr/lpp/mmfs/samples/perf/gpfsperf create seq 
/gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8
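
(For completeness, the 65 instances are started roughly like this; the node range below is illustrative and passwordless ssh from the admin node is assumed:)

# Start one gpfsperf instance per client node and wait for all of them
for i in $(seq -w 1 65); do
  ssh merlin-c-0${i} "/usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/\$(hostname).txt -fsync -n 24g -r 16m -th 8" &
done
wait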

I am unable to reach more than 6.5 GB/s (Lenovo DSS G240, GPFS 5.0.2-1, testing 
on a 'home' filesystem with a 1MB block size and 8KB subblocks). After several 
seconds I see many waiters for I/O completion (up to 5 seconds) and also the 
'waiting for helper threads' message shown above. Can somebody explain the 
meaning of this message to me? How could I improve this?
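
If it turns out that a tunable needs adjusting, I assume the change would be applied with something like the following (values and node class are purely illustrative, not a recommendation):

# Example only: change a client-side tunable and restart GPFS on those
# nodes so the new value takes effect ('computeNodes' is a hypothetical
# node class for the client cluster)
mmchconfig workerThreads=2048 -N computeNodes
mmshutdown -N computeNodes && mmstartup -N computeNodes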

Current config in the storage cluster is:

[root@merlindssio02 ~]# mmlsconfig
Configuration data for cluster merlin.psi.ch:
---------------------------------------------
clusterName merlin.psi.ch
clusterId 1511090979434548295
autoload no
dmapiFileHandleSize 32
minReleaseLevel 5.0.2.0
ccrEnabled yes
nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware
cipherList AUTHONLY
maxblocksize 16m
[merlindssmgt01]
ignorePrefetchLUNCount yes
[common]
pagepool 4096M
[merlindssio01,merlindssio02]
pagepool 270089M
[merlindssmgt01,dssg]
pagepool 57684M
maxBufferDescs 2m
numaMemoryInterleave yes
[common]
prefetchPct 50
[merlindssmgt01,dssg]
prefetchPct 20
nsdRAIDTracks 128k
nsdMaxWorkerThreads 3k
nsdMinWorkerThreads 3k
nsdRAIDSmallThreadRatio 2
nsdRAIDThreadsPerQueue 16
nsdClientCksumTypeLocal ck64
nsdClientCksumTypeRemote ck64
nsdRAIDFlusherFWLogHighWatermarkMB 1000
nsdRAIDBlockDeviceMaxSectorsKB 0
nsdRAIDBlockDeviceNrRequests 0
nsdRAIDBlockDeviceQueueDepth 0
nsdRAIDBlockDeviceScheduler off
nsdRAIDMaxPdiskQueueDepth 128
nsdMultiQueue 512
verbsRdma enable
verbsPorts mlx5_0/1 mlx5_1/1
verbsRdmaSend yes
scatterBufferSize 256K
maxFilesToCache 128k
maxMBpS 40000
workerThreads 1024
nspdQueues 64
[common]
subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch
adminMode central

File systems in cluster merlin.psi.ch:
--------------------------------------
/dev/home
/dev/t16M128K
/dev/t16M16K
/dev/t1M8K
/dev/t4M16K
/dev/t4M32K
/dev/test

And for the computing cluster:

[root@merlin-c-001 ~]# mmlsconfig
Configuration data for cluster merlin-hpc.psi.ch:
-------------------------------------------------
clusterName merlin-hpc.psi.ch
clusterId 14097036579263601931
autoload yes
dmapiFileHandleSize 32
minReleaseLevel 5.0.2.0
ccrEnabled yes
cipherList AUTHONLY
maxblocksize 16M
numaMemoryInterleave yes
maxFilesToCache 128k
maxMBpS 20000
workerThreads 1024
verbsRdma enable
verbsPorts mlx5_0/1
verbsRdmaSend yes
scatterBufferSize 256K
ignorePrefetchLUNCount yes
nsdClientCksumTypeLocal ck64
nsdClientCksumTypeRemote ck64
pagepool 32G
subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch
adminMode central

File systems in cluster merlin-hpc.psi.ch:
------------------------------------------
(none)

Thanks a lot and best regards,
Marc
_________________________________________
Paul Scherrer Institut
High Performance Computing
Marc Caubet Serrabou
Building/Room: WHGA/019A
Forschungsstrasse, 111
5232 Villigen PSI
Switzerland

Telephone: +41 56 310 46 67
E-Mail: [email protected]



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
