Bryan,

That looks like a really useful set of presentation slides! Thanks for sharing!

Which one in particular is the one Yuri gave that you’re referring to?

~jonathon


On 4/11/17, 9:51 AM, "[email protected] on behalf of Bryan Banister" <[email protected]> wrote:

    There are so many things to look at, and many tools for doing so (iostat, 
htop, nsdperf, mmdiag, mmhealth, mmlsconfig, mmlsfs, etc.).  I would recommend a 
review of the presentation that Yuri gave at the most recent GPFS User Group:
    https://drive.google.com/drive/folders/0B124dhp9jJC-UjFlVjJTa2ZaVWs
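
    For gathering those outputs in one pass, a minimal shell sketch along these
lines may help. It only runs each GPFS command if it is present on the node, and
the snapshot directory name is just an example:

```shell
#!/bin/sh
# Collect basic OS/GPFS diagnostic snapshots into a timestamped directory.
out="gpfs-snap-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$out"

# Run a command only if it exists, saving stdout+stderr to <command>.out.
run() {
    cmd="$1"; shift
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd" "$@" > "$out/$cmd.out" 2>&1
    fi
}

run iostat -x 1 3        # per-device I/O latency and utilisation
run mmdiag --waiters     # long-running GPFS waiters
run mmhealth node show   # overall node health
run mmlsconfig           # cluster configuration
run mmlsfs all           # file system attributes

echo "snapshot written to $out"
```

    Running it on a quiet node and again on a misbehaving one gives two
directories you can diff.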
    
    Cheers,
    -Bryan
    
    -----Original Message-----
    From: [email protected] 
[mailto:[email protected]] On Behalf Of Peter Childs
    Sent: Tuesday, April 11, 2017 3:58 AM
    To: gpfsug main discussion list <[email protected]>
    Subject: [gpfsug-discuss] Spectrum Scale Slow to create directories
    
    This is a curious issue which I'm trying to get to the bottom of.
    
    We currently have two Spectrum Scale file systems, both running GPFS 
4.2.1-1; some of the servers have been upgraded to 4.2.1-2.
    
    The older one, which was upgraded from GPFS 3.5, works fine: creating a 
directory is always fast, with no issues.
    
    The new one, which has nice new SSDs for metadata and hence should be 
faster, can take up to 30 seconds to create a directory, though it usually takes 
less than a second. The longer directory creates usually happen on busy nodes 
that have not used the new storage in a while (it's new, so we've not moved much 
of the data over yet), but they can also happen randomly anywhere, including on 
the NSD servers themselves (times of 3-4 seconds for a single directory create 
have been seen from the NSD servers).
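
    To catch these slow events and note when and where they happen, a small 
timing loop might help; this is just a sketch, with the probe path, iteration 
count, and threshold as placeholders to adjust:

```shell
#!/bin/sh
# Periodically time a directory create and log any create that exceeds a
# threshold (in whole seconds; the reported delays are 3-30s, so this is
# coarse but sufficient).
TARGET=${TARGET:-/tmp/mkdir-probe}   # set to a path on the new file system
THRESHOLD=${THRESHOLD:-1}
mkdir -p "$TARGET"

i=0
while [ "$i" -lt 5 ]; do             # raise the count for a real soak test
    d="$TARGET/probe.$$.$i"
    start=$(date +%s)
    mkdir "$d"
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$THRESHOLD" ]; then
        echo "$(date '+%F %T') slow mkdir: ${elapsed}s on $(hostname)"
    fi
    rmdir "$d"
    i=$((i + 1))
done
echo "probe finished: $i creates"
```

    Left running from cron or a screen session on a few nodes, the timestamps 
can then be lined up against waiter and network logs.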
    
    We've been pointed at the network and asked to check all network settings, 
and it's been suggested we build an admin network, but I'm not sure I entirely 
understand why or how that would help. It's a mixed 1G/10G network, with the NSD 
servers connected at 40G with an MTU of 9000.
    
    However, as I say, the older filesystem is fine, and it does not matter 
whether the nodes are connected to the old GPFS cluster or the new one (although 
the delay is worse on the old GPFS cluster). So I'm really playing spot the 
difference, and the network is not an obvious one.
    
    It's been suggested that we look at a trace when it occurs, but as it's 
difficult to reproduce, collecting one is difficult.
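
    One way around the reproduction problem is to leave tracing running and only 
keep a trace when a slow create is actually observed. A rough sketch, guarded so 
it degrades to a plain timing probe where mmtracectl is absent (the mmtracectl 
invocations and the threshold are illustrative; check the options for your GPFS 
level):

```shell
#!/bin/sh
# Trace-on-demand probe: start GPFS tracing, time a directory create,
# and report whether the trace window covered a slow event.
PROBE=${PROBE:-/tmp/trace-probe}
THRESHOLD=${THRESHOLD:-3}
mkdir -p "$PROBE"

have_trace=0
if command -v mmtracectl >/dev/null 2>&1; then
    mmtracectl --start           # tracing runs while we probe
    have_trace=1
fi

d="$PROBE/probe.$$"
start=$(date +%s)
mkdir "$d"
elapsed=$(( $(date +%s) - start ))
rmdir "$d"

[ "$have_trace" -eq 1 ] && mmtracectl --stop   # flush the trace files

if [ "$elapsed" -ge "$THRESHOLD" ]; then
    echo "slow create (${elapsed}s); keep this trace"
else
    echo "create took ${elapsed}s; no slow event this time"
fi
```

    Looping this from the nodes that show the delay means that when the 30-second 
create finally happens, a trace covering it already exists.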
    
    Any ideas would be most helpful.
    
    Thanks
    
    
    
    Peter Childs
    ITS Research Infrastructure
    Queen Mary, University of London
    _______________________________________________
    gpfsug-discuss mailing list
    gpfsug-discuss at spectrumscale.org
    http://gpfsug.org/mailman/listinfo/gpfsug-discuss
    

