Aaron, Peter,
I'm monitoring the pagepool usage as:
buffers=`/usr/lpp/mmfs/bin/mmfsadm dump buffers | grep bufLen | awk '{ SUM += $7 } END { print SUM }'`
The result is in bytes. If your pagepool is huge, the execution can take
some time (~5 sec on a 100 GB pagepool).
--Alex
a few more weeks and we'll have a better answer than dump pgalloc ;-)
On Wed, May 2, 2018 at 6:07 AM Peter Smith wrote:
> "how do I see how much of the pagepool is in use and by what? I've looked
> at mmfsadm dump and mmdiag --memory and neither has provided me the
>
GPFS doesn't do flush on close by default unless explicitly asked by the
application itself, but you can configure that:
mmchconfig flushOnClose=yes
If you use O_SYNC or O_DIRECT, then each write ends up on the media
before we return.
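As a minimal sketch (the file name is just a placeholder, error handling
trimmed), opening with O_SYNC makes each write(2) synchronous:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* "testfile" is a placeholder path, not from the thread */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    const char *msg = "synchronous record\n";
    /* with O_SYNC, write(2) returns only after the data is on stable storage */
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    close(fd);
    return 0;
}

Note that O_DIRECT additionally expects suitably aligned buffers and
transfer sizes.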
sven
On Wed, Apr 11, 2018 at 7:06 AM Peter Serocka wrote:
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 02/05/2018 12:10
Subject: Re: [gpfsug-discuss] Confusing I/O Behavior
Sent by: gpfsug-discuss-boun...@spectrumscale.org
"how do I see how much of the pagepool is in use and by what? I've looked
at mmfsadm dum
"how do I see how much of the pagepool is in use and by what? I've looked
at mmfsadm dump and mmdiag --memory and neither has provided me the
information I'm looking for (or at least not in a format I understand)"
+1. Pointers appreciated! :-)
On 10 April 2018 at 17:22, Aaron Knister wrote:
pool than regular file data.
From: Bryan Banister <bbanis...@jumptrading.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 04/11/2018 12:51 PM
Subject: Re: [gpfsug-discuss] Confusing I/O Behavior
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Let’s keep in mind that line buffering is a concept within the standard
C library; if every log line triggers one write(2) system call, and it’s
not direct I/O, then multiple writes still get coalesced into a few
larger disk writes (as with the dd example).
A logging application might choose to
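For illustration, a minimal C sketch (mine, not from the thread) that
forces line buffering with setvbuf(3), so each complete log line maps to
one write(2):

#include <stdio.h>

int main(void)
{
    /* force line buffering -- a stdio (libc) concept, invisible to the kernel */
    setvbuf(stdout, NULL, _IOLBF, 8192);

    for (int i = 0; i < 100; i++)
        /* each '\n' flushes the stdio buffer: one write(2) per log line */
        printf("log line %d\n", i);

    return 0;
}

Even then, those per-line write(2) calls land in the pagepool first and
get coalesced on the way to disk, as described above.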
> From: Jonathan Buzzard <jonathan.buzz...@strath.ac.uk>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 11/04/2018 12:14
> Subject: Re: [gpfsug-discuss] Confusing I/O Behavior
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
>
> On Tue, 2018-04-10 at 23:43 +0200, Uwe Falke wrote:
Hi Aaron,
to how many different files do these tiny I/O requests go?
Mind that write aggregation gathers the I/O over a limited time (5 secs
or so) and ***per file***. It therefore makes a large difference whether
you write small chunks all to one file or to a large number of
individual files.
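As a hypothetical illustration of that difference (file names and counts
invented), the same total volume written both ways:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define NCHUNKS 1000
#define CHUNK   8192        /* tiny 8K writes, as in the job described */

static char buf[CHUNK];

int main(void)
{
    /* pattern A: all chunks into ONE file -- write-behind can aggregate */
    int fd = open("one.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (int i = 0; i < NCHUNKS; i++)
        write(fd, buf, CHUNK);
    close(fd);

    /* pattern B: one chunk per file -- little for write-behind to merge */
    for (int i = 0; i < NCHUNKS; i++) {
        char name[32];
        snprintf(name, sizeof name, "many.%04d.dat", i);
        fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        write(fd, buf, CHUNK);
        close(fd);
    }
    return 0;
}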
Chris,
The job runs across multiple nodes and the tiny 8K writes *should* be
to different files that are unique per-rank.
-Aaron
On 4/10/18 12:18 PM, Chris Hoffman wrote:
Hi Stumped,
Is this MPI job on one machine? Multiple nodes? Are the tiny 8K writes to the
same file or different ones?
Chris
From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Knister, Aaron S.
Debug messages are typically unbuffered or "line buffered". If that is
truly causing a performance problem AND you still want to collect the
messages -- you'll need to find a better way to channel and collect those
messages.
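One possible approach, sketched here with an invented buffer size: make
stderr fully buffered so debug chatter reaches the filesystem in larger
writes, and flush explicitly at checkpoints:

#include <stdio.h>

static char dbgbuf[1 << 16];    /* 64 KiB; the size here is arbitrary */

int main(void)
{
    /* fully buffer stderr so debug output is batched into larger writes */
    setvbuf(stderr, dbgbuf, _IOFBF, sizeof dbgbuf);

    for (int i = 0; i < 10000; i++)
        fprintf(stderr, "debug: step %d\n", i);

    fflush(stderr);             /* drain the remainder explicitly */
    return 0;
}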