Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-14 Thread Joe Julian
A jumbo ethernet frame can be 9000 bytes. The ethernet frame header is
at least 38 bytes and the minimum TCP/IP header size is 40 bytes, so the
two together are about 0.87% of the jumbo frame. Gluster's RPC also adds
a few bytes (not sure how many and I don't have time to test at the
moment, but for the sake of argument we'll say 20 bytes); even so, all
together, a full frame is about 99% efficient. If you write 20 bytes to
a file (for an extreme example) then you'll have your 20 bytes + RPC
header + TCP/IP header + ethernet header: 98 bytes of headers for 20
bytes of data, a 118-byte frame. With headers making up roughly 83% of
that frame, the packet is only about 17% efficient. That's per replica,
so if you have a replica 3 volume that's three individual frames with 98
bytes of headers each to write the same 20 bytes of data. Those go out
to the three servers, and the client waits for their responses. So you
have a network round trip + a tiny bit of latency for stacking the three
frames in the kernel + disk write latency. That's a lot of overhead, and
no networked storage handled this way can ever be as fast as writing to
a local disk.
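
To put rough numbers on that (using the 98 bytes of assumed headers from
above and an assumed 0.2 ms round trip per synchronous write; both are
illustrative figures, not measurements, and this ignores client-side
write-behind batching), a quick sketch of the efficiency and the
single-stream ceiling per write size:

    # data share = bs / (bs + headers); ceiling = one write per round trip
    awk -v hdr=98 -v rtt=0.0002 'BEGIN {
        n = split("20 512 4096", sizes)
        for (i = 1; i <= n; i++) {
            bs = sizes[i]
            printf "bs=%-5d  data share %5.1f%%   ceiling ~%8.2f MB/s\n",
                   bs, 100 * bs / (bs + hdr), bs / rtt / 1e6
        }
    }'

A 4096-byte write comes out around 97.7% payload per frame but only on
the order of 20 MB/s if every write had to wait for its own round trip,
which is why batching and bigger blocks matter so much.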


The question, however, is: does it need to be? Do you care if a single
thread is slower in a clustered environment than it would be on a local
raid stack? In good clustered engineering your workload is handled by
multiple threads spread over a cluster of workers. Overall, you have
more threads than you could run on a single machine, which lets you
service a greater total workload than you could without the cluster. I
refer to that as comparing apples to orchards.
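
One way to see the orchard effect on a setup like the one below (the
mount point matches the /mnt/gluster/r3 FUSE mount from the original
post; the choice of eight streams is arbitrary) is to compare a single
dd stream with several running in parallel and add up the reported
rates:

    # one writer: bound by per-request latency
    dd if=/dev/zero of=/mnt/gluster/r3/single bs=4096 count=262144 2>&1 | tail -1

    # eight writers: the aggregate is what the cluster as a whole can service
    for i in $(seq 1 8); do
        dd if=/dev/zero of=/mnt/gluster/r3/multi.$i bs=4096 count=262144 2>&1 | tail -1 &
    done
    wait

The single stream is the apples number; the sum of the eight is closer
to the orchard.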


On 04/13/18 10:58, Anastasia Belyaeva wrote:

Thanks a lot for your reply!

You guessed it right, though: mailing lists, various blogs,
documentation, videos and even source code at this point. Changing
some of the options does make performance slightly better, but
nothing particularly groundbreaking.


So, if I understand you correctly, no one has yet managed to get 
acceptable performance (relative to underlying hardware capabilities) 
with smaller block sizes? Is there an explanation for this?



2018-04-13 1:57 GMT+03:00 Vlad Kopylov:


Guess you went through the user lists and already tried something
like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html

I have the same exact setup, and below is as far as it got after
months of trial and error.
We all have roughly the same setup and the same issue with this; you
can find posts like yours on a daily basis.

On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
<anastasia@gmail.com> wrote:

Hello everybody!

I have 3 gluster servers (*gluster 3.12.6, Centos 7.2*; those
are actually virtual machines located on 3 separate physical
XenServer7.1 servers)

They are all connected via an infiniband network. Iperf3 shows
around *23 Gbit/s network bandwidth* between each pair of them.

Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2)
with a logical volume created on top of it, formatted with
*xfs*. Gluster top reports the following throughput:

root@fsnode2 ~ $ gluster volume top r3vol write-perf bs
4096 count 524288 list-cnt 0
Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
Throughput *631.82 MBps *time 3.3989 secs
Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
Throughput *566.96 MBps *time 3.7877 secs
Brick: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
Throughput *546.65 MBps *time 3.9285 secs


root@fsnode2 ~ $ gluster volume top r2vol write-perf bs
4096 count 524288 list-cnt 0
Brick: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
Throughput *539.60 MBps *time 3.9798 secs
Brick: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
Throughput *580.07 MBps *time 3.7021 secs


And two *pure replicated ('replica 2' and 'replica 3')*
volumes. The 'replica 2' volume is for testing purposes only.

Volume Name: r2vol
Type: Replicate
Volume ID: 4748d0c0-6bef-40d5-b1ec-d30e10cfddd9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
Brick2: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
Options Reconfigured:
nfs.disable: on

Volume Name: r3vol
Type: Replicate
Volume ID: b0f64c28-57e1-4b9d-946b-26ed6b499f29
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-13 Thread Anastasia Belyaeva
Thanks a lot for your reply!

You guessed it right, though: mailing lists, various blogs, documentation,
videos and even source code at this point. Changing some of the options
does make performance slightly better, but nothing particularly
groundbreaking.

So, if I understand you correctly, no one has yet managed to get acceptable
performance (relative to underlying hardware capabilities) with smaller
block sizes? Is there an explanation for this?


2018-04-13 1:57 GMT+03:00 Vlad Kopylov:

> Guess you went through the user lists and already tried something like this:
> http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
> I have the same exact setup, and below is as far as it got after months of
> trial and error.
> We all have roughly the same setup and the same issue with this; you can
> find posts like yours on a daily basis.
>
> On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva <
> anastasia@gmail.com> wrote:
>
>> Hello everybody!
>>
>> I have 3 gluster servers (*gluster 3.12.6, Centos 7.2*; those are
>> actually virtual machines located on 3 separate physical XenServer7.1
>> servers)
>>
>> They are all connected via an infiniband network. Iperf3 shows around *23
>> Gbit/s network bandwidth* between each pair of them.
>>
>> Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2) with a
>> logical volume created on top of it, formatted with *xfs*. Gluster top
>> reports the following throughput:
>>
>> root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count
>>> 524288 list-cnt 0
>>> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *631.82 MBps *time 3.3989 secs
>>> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *566.96 MBps *time 3.7877 secs
>>> Brick: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *546.65 MBps *time 3.9285 secs
>>
>>
>> root@fsnode2 ~ $ gluster volume top r2vol write-perf bs 4096 count
>>> 524288 list-cnt 0
>>> Brick: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Throughput *539.60 MBps *time 3.9798 secs
>>> Brick: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Throughput *580.07 MBps *time 3.7021 secs
>>
>>
>> And two *pure replicated ('replica 2' and 'replica 3')* volumes. The
>> 'replica 2' volume is for testing purposes only.
>>
>>> Volume Name: r2vol
>>> Type: Replicate
>>> Volume ID: 4748d0c0-6bef-40d5-b1ec-d30e10cfddd9
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Brick2: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>>
>>
>>
>>> Volume Name: r3vol
>>> Type: Replicate
>>> Volume ID: b0f64c28-57e1-4b9d-946b-26ed6b499f29
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Brick2: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Brick3: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>
>>
>>
>> *Client *is also gluster 3.12.6, Centos 7.3 virtual machine, *FUSE mount*
>>
>>
>>> root@centos7u3-nogdesktop2 ~ $ mount |grep gluster
>>> gluster-host.ibnet:/r2vol on /mnt/gluster/r2 type fuse.glusterfs
>>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_
>>> other,max_read=131072)
>>> gluster-host.ibnet:/r3vol on /mnt/gluster/r3 type fuse.glusterfs
>>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_
>>> other,max_read=131072)
>>
>>
>>
>> *The problem *is that there is a significant performance loss with
>> smaller block sizes. For example:
>>
>> *4K block size*
>> [replica 3 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s*
>>
>> [replica 2 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 12.0149 s, *89.4 MB/s*
>>
>> *512K block size*
>> [replica 3 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r3/file$RANDOM bs=512K count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 1073741824 bytes (1.1 GB) copied, 5.27207 s, *204 MB/s*
>>
>> [replica 2 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r2/file$RANDOM bs=512K count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 1073741824 bytes (1.1 GB) copied, 4.22321 s, *254 MB/s*
>>
>> With a bigger block size it's still not where I expect it to be, but at
>> least it starts to make some sense.
>>
>> I've been trying to solve this for a very long time with no luck.
>> I've already tried both kernel tuning (different 'tuned' profiles and the
>> ones recommended in the "Linux Kernel Tuning" section) and t

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-12 Thread Vlad Kopylov
Guess you went through the user lists and already tried something like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the same exact setup, and below is as far as it got after months of
trial and error.
We all have roughly the same setup and the same issue with this; you can
find posts like yours on a daily basis.

On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva wrote:

> Hello everybody!
>
> I have 3 gluster servers (*gluster 3.12.6, Centos 7.2*; those are
> actually virtual machines located on 3 separate physical XenServer7.1
> servers)
>
> They are all connected via an infiniband network. Iperf3 shows around *23
> Gbit/s network bandwidth* between each pair of them.
>
> Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2) with a logical
> volume created on top of it, formatted with *xfs*. Gluster top reports
> the following throughput:
>
> root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count 524288
>> list-cnt 0
>> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Throughput *631.82 MBps *time 3.3989 secs
>> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Throughput *566.96 MBps *time 3.7877 secs
>> Brick: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Throughput *546.65 MBps *time 3.9285 secs
>
>
> root@fsnode2 ~ $ gluster volume top r2vol write-perf bs 4096 count 524288
>> list-cnt 0
>> Brick: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>> Throughput *539.60 MBps *time 3.9798 secs
>> Brick: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>> Throughput *580.07 MBps *time 3.7021 secs
>
>
> And two *pure replicated ('replica 2' and 'replica 3')* volumes. The
> 'replica 2' volume is for testing purposes only.
>
>> Volume Name: r2vol
>> Type: Replicate
>> Volume ID: 4748d0c0-6bef-40d5-b1ec-d30e10cfddd9
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>> Brick2: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>> Options Reconfigured:
>> nfs.disable: on
>>
>
>
>> Volume Name: r3vol
>> Type: Replicate
>> Volume ID: b0f64c28-57e1-4b9d-946b-26ed6b499f29
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Brick2: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Brick3: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Options Reconfigured:
>> nfs.disable: on
>
>
>
> *Client *is also gluster 3.12.6, Centos 7.3 virtual machine, *FUSE mount*
>
>> root@centos7u3-nogdesktop2 ~ $ mount |grep gluster
>> gluster-host.ibnet:/r2vol on /mnt/gluster/r2 type fuse.glusterfs
>> (rw,relatime,user_id=0,group_id=0,default_permissions,
>> allow_other,max_read=131072)
>> gluster-host.ibnet:/r3vol on /mnt/gluster/r3 type fuse.glusterfs
>> (rw,relatime,user_id=0,group_id=0,default_permissions,
>> allow_other,max_read=131072)
>
>
>
> *The problem *is that there is a significant performance loss with
> smaller block sizes. For example:
>
> *4K block size*
> [replica 3 volume]
> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
> of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s*
>
> [replica 2 volume]
> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
> of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 12.0149 s, *89.4 MB/s*
>
> *512K block size*
> [replica 3 volume]
> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
> of=/mnt/gluster/r3/file$RANDOM bs=512K count=2048
> 2048+0 records in
> 2048+0 records out
> 1073741824 bytes (1.1 GB) copied, 5.27207 s, *204 MB/s*
>
> [replica 2 volume]
> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
> of=/mnt/gluster/r2/file$RANDOM bs=512K count=2048
> 2048+0 records in
> 2048+0 records out
> 1073741824 bytes (1.1 GB) copied, 4.22321 s, *254 MB/s*
>
> With a bigger block size it's still not where I expect it to be, but at
> least it starts to make some sense.
>
> I've been trying to solve this for a very long time with no luck.
> I've already tried both kernel tuning (different 'tuned' profiles and the
> ones recommended in the "Linux Kernel Tuning" section) and tweaking gluster
> volume options, including write-behind/flush-behind/
> write-behind-window-size.
> The latter, to my surprise, didn't make any difference. At first I thought
> it was a buffering issue, but it turns out it does buffer writes, just not
> very efficiently (well, at least that's what it looks like in the *gluster
> profile output*).
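>
> For reference, tweaks of that sort are applied per volume along these
> lines (the window size here is only an example value, not something that
> helped):
>
> gluster volume set r3vol performance.write-behind on
> gluster volume set r3vol performance.flush-behind on
> gluster volume set r3vol performance.write-behind-window-size 4MB
> # check what the volume currently has applied
> gluster volume get r3vol performance.write-behind-window-size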
>
> root@fsnode2 ~ $ gluster volume profile r3vol info clear
>> ...
>> Cleared stats.
>
>
> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 10.9743 s, 97.8 MB/s
>
>
>
>> root@fs