As it turned out, the 'authorized_keys' file placed in the
/var/mmfs/ssl directory of the NSD server for the new storage cluster 4
(4.1.1-14) needed an explicit entry of the following format in the
bracketed stanza associated with clients on cluster 0:
nistCompliance=off
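For illustration only, a stanza of that shape might look like the
following (the cluster name here is made up; only the nistCompliance
line is from the report above):

[cluster0.example.com]
nistCompliance=off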
Apparently the default for 4.1.x i
If you haven't already, measure the time directly on the CES node
command line, skipping the Windows and Samba overheads:
time ls -l /path
or
time ls -lR /path
depending on which you're interested in.
From: "Sven Oehme"
To: gpfsug main discussion list
Date: 05/09/2017 01:01 PM
Subject
ESS nodes have cache, but what matters most for this type of workload is
a very large metadata cache, which resides on the CES nodes for SMB/NFS
workloads. So if you know that your clients will use this 300k directory
a lot, you want a very large maxFilesToCache setting on these nodes.
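As a sketch only (the values are illustrative, not a sizing
recommendation, and a maxFilesToCache change typically takes effect
only after GPFS is restarted on the affected nodes):

mmlsconfig maxFilesToCache
mmchconfig maxFilesToCache=1000000 -N cesNodes
mmchconfig maxStatCache=1000000 -N cesNodes

cesNodes is the built-in node class covering the protocol nodes; size
the values to your directory counts and the memory available on those
nodes.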
I have a customer who is struggling (they already have a PMR open and it’s
being actively worked on now). I’m simply seeking understanding of potential
places to look. They have an ESS with a few CES nodes in front. Clients
connect via SMB to the CES nodes. One fileset has about 300k smallis
Hi, Jaime,
I'd suggest you trace the client while it is trying to connect and
check which addresses it actually tries to talk to. It is a bit
tedious, but you will be able to find this in the trace report file.
You might also get an idea of what's going wrong...
Mit freundlichen Grüßen / Kind regards