I don't see a simple fix that can be implemented by tweaking a
general-purpose low-level synchronization primitive. It should be possible
to integrate GPFS better into the Linux IO accounting infrastructure, but
that would require some investigation and likely a non-trivial amount of
work.
Thanks, Yuri!
I thought calling io_schedule was the right thing to do because the NFS
client in the kernel did this directly until fairly recently. Now it
calls wait_on_bit_io, which I believe ultimately calls io_schedule.
Do you see a more targeted approach for having GPFS register its IO waits
as iowait?
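As a generic Linux sanity check (nothing GPFS-specific assumed here), waits
that go through io_schedule are accounted in the iowait field of /proc/stat,
so you can watch that counter while driving load. A rough sketch:

# Sample system-wide iowait ticks from /proc/stat; on the "cpu" line,
# awk field 6 is iowait (field 1 is the literal word "cpu").
before=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 10
after=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait ticks over 10s: $((after - before))"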
I would advise caution on using "mmdiag --iohist" heavily. In more recent
code streams (V4.1, V4.2) there's a problem with internal locking that
could, under certain conditions, lead to symptoms that look very
similar to sporadic network blockage. Basically, if "mmdiag --iohist" gets
You may also be interested in a panel session on the Friday of SC16:
http://sc16.supercomputing.org/presentation/?id=pan120=sess185
This isn't a user group event, but part of the technical programme for SC16,
though I'm sure you will recognise some of the names from the storage community.
Hello,
I know many of you may be planning your SC16 schedule already. We wanted to
give you a heads-up that a Spectrum Scale (GPFS) Users Group event is being
planned. The event will be much like last year's, with a combination of
technical updates and user experiences, and thus far is
Hi Richard,
You can of course change any of the other options with the "net conf"
command (/usr/lpp/mmfs/bin/net conf), as it's just stored in the Samba registry.
Of course whether or not you end up with a supported configuration is a
different matter...
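As a sketch of what that looks like (the "log level" parameter below is
just an illustrative example; whether changing it leaves you in a supported
configuration is exactly the caveat above):

# List the Samba registry configuration that CES/SMB is using:
/usr/lpp/mmfs/bin/net conf list
# Set an smb.conf-style option in the registry (example parameter only):
/usr/lpp/mmfs/bin/net conf setparm global "log level" 2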
When we first rolled out CES/SMB, there
There is a known performance issue that can possibly cause longer than
expected network time-outs if you are running iohist too often. So be
careful: it is best to collect it as a sample rather than all of the time.
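In other words, a low-frequency sampling loop is safer than polling
continuously (the 5-minute interval and log path here are arbitrary):

# Sample the IO history every 5 minutes rather than in a tight loop:
while true; do
    date >> /var/log/gpfs-iohist.log
    /usr/lpp/mmfs/bin/mmdiag --iohist >> /var/log/gpfs-iohist.log
    sleep 300
done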
Scott Fadden
Spectrum Scale - Technical Marketing
Phone: (503) 880-5833
Nice! Thanks, Bryan. I wonder what the implications are of setting it to
something high enough that we could capture data every 10s. I figure if
512 events only take me back 1 second, I would need to log on the order of
10k events to capture every 10 seconds and account for spikes in I/O.
-Aaron
Try this:
mmchconfig ioHistorySize=1024 # Or however big you want!
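Following Aaron's arithmetic above (512 events covering roughly 1 second),
a 10-second window might be sized like this; the 2x headroom factor is a
guess, not a tested value:

# ~512 events/second observed, so 10 seconds plus 2x headroom for spikes:
mmchconfig ioHistorySize=10240
# Verify the setting:
mmlsconfig ioHistorySize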
Cheers,
-Bryan
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Aaron Knister
Sent: Monday, August 29, 2016 1:05 PM
To: gpfsug main
That's an interesting idea. I took a look at mmdiag --iohist on a busy
node and it doesn't seem to capture more than literally 1 second of history.
Is there a better way to grab the data, or to have GPFS capture more of it?
Just to give some more context, as part of our monthly reporting
requirements
Try "mmfsadm dump iohist": it gives you a nice view of how long it takes
until an IO is processed. The statistic reports the time an IO takes from
GPFS <--> your block devices (including the path to them).
Mit freundlichen Grüßen / Kind regards
Olaf Weiser
EMEA Storage Competence
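A minimal way to pull and skim that dump (the exact output format varies by
release, so the line count here is arbitrary):

# Dump the recent IO history and look at the first entries; columns
# typically include the IO type, disk, and service time in milliseconds:
/usr/lpp/mmfs/bin/mmfsadm dump iohist | head -40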
There is the iohist data, which may have what you're looking for.
-Bryan
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Aaron Knister
Sent: Monday, August 29, 2016 12:54 PM
To:
Any reason you can't just use iostat or collectl or any of a number of
other standard tools to look at disk utilization?
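For instance, plain extended iostat samples already show per-device
utilization and latency (10-second interval chosen arbitrarily):

# Extended per-device stats every 10 seconds; watch await (latency) and
# %util (device busy percentage) for the NSD LUNs:
iostat -x 10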
On 08/29/2016 10:33 AM, Aaron Knister wrote:
Hi Everyone,
Would it be easy to have GPFS report iowait values in Linux? This would
be a huge help for us in determining
Hi all,
In the last few months several customers have asked for the option to use
multiple IBM Spectrum Protect servers to protect a single IBM Spectrum
Scale file system. Some of these customers reached the server scalability
limits; others wanted to increase the parallelism of the server
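For context, mmbackup can already spread a backup across multiple Spectrum
Protect servers via its --tsm-servers option; a minimal sketch (the file
system path, server names, and node class are made-up examples):

# Back up one file system using two Spectrum Protect servers in parallel:
/usr/lpp/mmfs/bin/mmbackup /gpfs/fs1 --tsm-servers tsm1,tsm2 -N backupnodes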
Hi,
When trying to register on the website, I get the error each time:
"Session expired. Please try again later."
Stef
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss