Hi,

On 15/06/15 15:43, Bob Peterson wrote:
> ----- Original Message -----
>> I'm assuming that these figures are bandwidth rather than times, since
>> that appears to show that the patch makes quite a large difference.
>> However, the reclen is rather small. In the 32 byte case, that's 128
>> writes for each new block that's being allocated, unless of course that
>> is 32k?
>>
>> Steve.
> Hi,
>
> To do this test, I'm executing this command:
>
> numactl --cpunodebind=0 --membind=0 /home/bob/iozone/iozone3_429/src/current/iozone -az -f /mnt/gfs2/iozone-gfs2 -n 2048m -g 2048m -y 32k -q 1m -e -i 0 -+n &> /home/bob/iozone.out
>
> According to iozone -h, specifying -y this way is 32K, not 32 bytes.
> The -q option is the maximum write size, in KB, so -q 1m is 1 MB writes.
> The -g option is the maximum file size, in KB, so -g 2048m is a 2048 MB (2 GB) file.
>
> So the test varies the write size from 32K to 1 MB, adjusting the number
> of writes to fill the file.
>
> Regards,
>
> Bob Peterson
> Red Hat File Systems

OK, that makes sense to me. I guess there is not a lot of searching for the rgrps going on, which is why we are not seeing a big gain in the rgrplvb case. If the fs were nearly full, perhaps we'd see more of a difference.

Either way, provided the rgrplvb code can continue to work, that is really the important thing, so I think we should be OK with this. I'd still very much like to see what can be done to reduce the page lookup cost more directly, though, since that seems to be the real issue here.

Steve.
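[Editorial illustration, not part of the original exchange: a minimal sketch of the record-length sweep the iozone command above performs. It assumes the usual -a auto-mode behaviour of doubling the record length between passes, here bounded by -y 32k and -q 1m, with each pass rewriting the whole -g 2048m (2048 MB) file, so the write count per pass is file_size / reclen.]

```python
# Sketch of the iozone sweep: record length doubles from -y 32k to -q 1m,
# and each pass covers the full 2048 MB file (-n 2048m -g 2048m), so the
# number of writes per pass is file_size / reclen.

FILE_SIZE_KB = 2048 * 1024  # -g 2048m: a 2048 MB (2 GB) file, in KB

reclen_kb = 32              # -y 32k: minimum record (write) size
while reclen_kb <= 1024:    # -q 1m: maximum record (write) size
    writes = FILE_SIZE_KB // reclen_kb
    print(f"reclen {reclen_kb:>4} KB -> {writes:>6} writes per pass")
    reclen_kb *= 2          # -a auto mode doubles the record length
```

At the small end (32 KB records) that is 65536 writes per pass, which is why the per-write page lookup cost dominates there.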
