Dear Krutika,

Yes, I did, but I use 6 ports (1 Gbit/s each), and that is why reads
get slower.
Do you know of a way to force gluster to open more connections (client to server
and server to server)?
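For reference, this is roughly how I count the TCP connections the FUSE client
actually holds towards the bricks (a minimal sketch, assuming the mount is the
only glusterfs process on the host; run as root):

  # one line per brick endpoint, with the number of client connections to it
  ss -tnp | grep glusterfs | awk '{print $5}' | sort | uniq -c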

Thanks for the detailed explanation.

Best Regards,
Strahil Nikolov

On May 21, 2019 08:36, Krutika Dhananjay <kdhan...@redhat.com> wrote:
>
> So in our internal tests (with NVMe SSD drives and a 10G network), we found read
> performance to be better with choose-local
> disabled in a hyperconverged setup. See
> https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.
>
> With choose-local off, the read replica is chosen randomly (based on the hash
> value of the gfid of that shard).
> When it is enabled, reads always go to the local replica.
> We attributed the better performance with the option disabled to bottlenecks in
> gluster's rpc/socket layer. Imagine all read
> requests lined up to be sent over the same mount-to-brick connection, as
> opposed to being (nearly) randomly distributed
> over three such connections (because the replica count is 3).
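> A minimal sketch of how the option can be checked and toggled per volume (using
> your 'data_fast' volume name as the example):
>
>   gluster volume get data_fast cluster.choose-local        # show the current value
>   gluster volume set data_fast cluster.choose-local off    # spread reads across replicas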
>
> Did you run any tests indicating that "choose-local=on" gives better read
> performance than when it is disabled?
>
> -Krutika
>
> On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov <hunter86...@yahoo.com> wrote:
>>
>> Ok,
>>
>> so it seems that Darell's case and mine are different, as I use vdo.
>>
>> Now I have destroyed the storage domains, gluster volumes and vdo, and recreated
>> them (4 gluster volumes on a single vdo device).
>> This time vdo was created with '--emulate512=true' and no issues have been observed.
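>> For completeness, a rough sketch of how the vdo volume was recreated with
>> 512-byte emulation (the device and name below are placeholders, not my actual values):
>>
>>   vdo create --name=vdo_data --device=/dev/sdX --emulate512=true
>>   blockdev --getss /dev/mapper/vdo_data   # should report 512 with emulation on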
>>
>> Gluster volume options before 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.enable-shared-storage: enable
>>
>> Gluster volume options after 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Stopped
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: on
>> cluster.enable-shared-storage: enable
>>
>> After that, adding the volumes as storage domains (via the UI) worked without
>> any issues.
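>> (As far as I understand, a rough CLI equivalent of that UI action is applying
>> gluster's stock 'virt' group profile plus the oVirt ownership options, e.g.:
>>
>>   gluster volume set data_fast group virt
>>   gluster volume set data_fast storage.owner-uid 36
>>   gluster volume set data_fast storage.owner-gid 36
>>
>> though oVirt may set a few additional options on top of that.)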
>>
>> Can someone clarify why we now have 'cluster.choose-local: off', when in
>> oVirt 4.2.7 (gluster v3.12.15) we didn't have that?
>> I'm using storage that is faster than the network, and reading from the local
>> brick gives very high read speeds.
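>> If local reads really are the better choice for this setup, I assume the option
>> can simply be switched back per volume after the optimization, e.g.:
>>
>>   gluster volume set data_fast cluster.choose-local on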
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>