Lab | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "Buterbaugh, Kevin L"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: Re: [gpfsug-discuss] Clarification of mmdiag --iohist output
Date: Thu, Feb 21, 2019 6:39
Kevin, I'm assuming you have seen the article on IBM developerWorks about the GPFS NSD queues. It provides useful background for analyzing the dump nsd information. Here are some thoughts on items you can investigate/consider.
If your NSD servers are doing both large (greater than
So with nsdMaxWorkerThreads at 1024, I used to specify minWorker the same way
... and tell everybody in the cluster: ignorePrefetchLunCount=yes. To adjust the
min/max workers to your infrastructure according to your needs: how many IOPS
and/or how much bandwidth, with your given block size, do you think can
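The knobs mentioned above are set with mmchconfig. A minimal sketch, with illustrative values only (size the thread counts from your own IOPS/bandwidth math, and note that the node class name `nsdNodes` is a placeholder):

```shell
# Illustrative values -- not recommendations. "nsdNodes" is a hypothetical node class.
# Raise the NSD worker thread ceiling, and (as suggested above) the floor to match:
mmchconfig nsdMaxWorkerThreads=1024 -N nsdNodes
mmchconfig nsdMinWorkerThreads=1024 -N nsdNodes
# Tell every node in the cluster to ignore the LUN count when sizing prefetch threads:
mmchconfig ignorePrefetchLunCount=yes
```

Changes to these parameters generally take effect on daemon restart; check the value actually in force with `mmdiag --config`.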
Hi All,
My thanks to Aaron, Sven, Steve, and whoever responded for the GPFS team. You
confirmed what I suspected … my example 10 second I/O was _from an NSD server_
… and since we’re in an 8 Gb FC SAN environment, it therefore means - correct me
if I’m wrong about this someone - that I’ve got
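To put a rough number on why a 10-second I/O is alarming here: a back-of-envelope estimate (assumed figures, not from the thread) of what a single 1 MiB transfer should cost on an 8 Gb FC link, taking ~800 MB/s as usable payload bandwidth:

```shell
# Back-of-envelope only: 8 Gb/s FC is roughly 800 MB/s of usable payload bandwidth.
# Expected wire time for a 1 MiB transfer, in milliseconds:
awk 'BEGIN { bw_mb_s = 800; io_mb = 1; printf "%.2f ms\n", io_mb / bw_mb_s * 1000 }'
# prints: 1.25 ms
```

That is four orders of magnitude below a 10 s service time, which is why the delay points at queueing or the storage array rather than link speed.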
From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Aaron Knister <aaron.knis...@gmail.com>
Sent: Sunday, February 17, 2019 8:26:23 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Clarification of mmdiag --iohist output
Hi Kevin,
It's funny you bring this up because I was
get network/client delays.
Sven
From: on behalf of Steve Crusan
Reply-To: gpfsug main discussion list
Date: Tuesday, February 19, 2019 at 12:29 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Clarification of mmdiag --iohist output
Context is key here. Where
Hi Kevin,
The I/O history shown by mmdiag --iohist depends on the node on which you run
the command.
If you run it on an NSD server node, it shows the time taken to complete/serve
the read or write I/O operation sent from the client node.
And
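In practice, hunting for outliers on the server side means filtering the history for slow entries. A sketch, assuming a column layout where the service time in milliseconds is field 6; the sample data below is made up, and you should verify the column positions against your own release's output before using the filter:

```shell
# Made-up sample shaped like mmdiag --iohist output (field 6 = time in ms here).
# On a live NSD server you would pipe instead: mmdiag --iohist | awk '...'
cat <<'EOF' > /tmp/iohist.sample
I/O_start_time RW buf_type disk:sectorNum nSec time_ms type NSD_node
14:32:01.1234 R data 2:104857600 2048 1.342 srv nsd02
14:32:02.5678 W data 3:209715200 2048 10012.456 srv nsd03
EOF
# Print any I/O that took longer than 500 ms
awk 'NR > 1 && $6 + 0 > 500 { print $1, $2, $6 " ms on", $8 }' /tmp/iohist.sample
# prints: 14:32:02.5678 W 10012.456 ms on nsd03
```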
Hi Kevin,
It's funny you bring this up because I was looking at this yesterday. My
belief is that it's the time from when the I/O request was queued
internally by the client to when the I/O response was received from the NSD
server, which means it absolutely includes the network RTT. It would
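That decomposition can be checked arithmetically: if the same I/O appears in iohist on both the client and the NSD server, the difference between the two reported times is roughly the network RTT plus client-side queueing. The figures below are illustrative, not from the original thread:

```shell
# Illustrative figures: client-observed time includes server service time plus
# network RTT and client-side queueing, per the explanation above.
awk 'BEGIN {
  client_ms = 10450.0   # time reported by mmdiag --iohist on the client
  server_ms = 10012.5   # time reported for the same I/O on the NSD server
  printf "network + client queueing: %.1f ms\n", client_ms - server_ms
}'
# prints: network + client queueing: 437.5 ms
```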
12 matches