In my case, GPFS storage is used to store VM images (KVM), hence the
small I/O.
I always see lots of small 4K writes, while the GPFS filesystem block
size is 8MB. I believe the reason for the small writes is that the Linux
kernel requests GPFS to initiate a periodic sync, which by default runs
every 5 seconds and can be controlled by "vm.dirty_writeback_centisecs".
I expected HAWC to help in such cases by hardening (coalescing) the
small writes in the "system" pool and flushing them to the "data" pool
at a larger block size.
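For reference, the current interval can be inspected like this (a
sketch; the sysctl knob is standard Linux, values are in centiseconds,
so 500 = 5 seconds):

```shell
# Show the periodic-writeback interval (500 centiseconds = 5 s default).
cat /proc/sys/vm/dirty_writeback_centisecs
# Equivalent via sysctl:
sysctl vm.dirty_writeback_centisecs
```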
Note - I am not doing direct I/O explicitly.
On 8/1/2016 14:49, Sven Oehme wrote:
When you say "synchronous write", what do you mean by that?
If you are talking about direct I/O (the O_DIRECT flag), those writes
don't leverage the HAWC data path; that is by design.
sven
On Mon, Aug 1, 2016 at 11:36 AM, Tejas Rao <[email protected]> wrote:
I have enabled the write cache (HAWC) by running the commands below.
The recovery logs are supposedly placed in the replicated system
metadata pool (SSDs). I do not have a "system.log" pool, as it is
only needed if recovery logs are stored on the client nodes.
mmchfs gpfs01 --write-cache-threshold 64K
mmchfs gpfs01 -L 1024M
mmchconfig logPingPongSector=no
I have recycled the daemon on all nodes in the cluster (including
the NSD nodes).
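One way to confirm the settings actually took effect is to query them
back (a sketch; these are standard mmlsfs/mmlsconfig queries, run
against the same filesystem name used above):

```shell
# Verify the HAWC write-cache threshold and the recovery log size.
mmlsfs gpfs01 --write-cache-threshold
mmlsfs gpfs01 -L
# Confirm the configuration override is active:
mmlsconfig logPingPongSector
```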
I still see small synchronous writes (4K) from the clients going
to the data drives (data pool). I am checking this by looking at
the "mmdiag --iohist" output. Should they not be going to the system pool?
Do I need to do something else? How can I confirm that HAWC is
working as advertised?
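One way to eyeball this from an iohist capture is to count the small
data-pool writes (a sketch; the field positions assumed below — RW in
column 2, buffer type in column 3, sector count in column 5 — should be
verified against the header of your own "mmdiag --iohist" output):

```shell
# Count small data writes in an iohist capture.
# 4 KiB = 8 x 512-byte sectors, so nSec <= 8 flags the small writes.
mmdiag --iohist | awk '$2 == "W" && $3 == "data" && $5 <= 8 { n++ } END { print n+0 }'
```

If HAWC were absorbing these writes, this count should drop and the
corresponding log writes should show up against the system pool instead.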
Thanks.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss