On Mon, Mar 23, 2020 at 11:44 PM Christian Reiss
<em...@christian-reiss.de> wrote:
>
> Hey folks,
>
> Gluster-related question: I have an SSD RAID that can do 2 GB/s writes
> and reads (actually above, but meh) in a 3-way HCI cluster connected
> over a 10 Gbit link, yet things are pretty slow inside Gluster.
> I have these settings:
>
> Options Reconfigured:
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.shd-max-threads: 8
> features.shard: on
> features.shard-block-size: 64MB
> server.event-threads: 8
> user.cifs: off
> cluster.shd-wait-qlength: 10000
> cluster.locking-scheme: granular
> cluster.eager-lock: enable
> performance.low-prio-threads: 32
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.choose-local: true
> client.event-threads: 16

These settings mean:

> performance.strict-o-direct: on
> network.remote-dio: enable

That you are using direct I/O on both the client and the server side.
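
If it helps to rule these options in or out, a rough sketch of how they can
be inspected and temporarily toggled with the gluster CLI ("myvol" below is
a placeholder volume name):

    # Show the current values of the direct I/O related options.
    gluster volume get myvol performance.strict-o-direct
    gluster volume get myvol network.remote-dio

    # Disable strict O_DIRECT for a comparison run, then re-enable it.
    gluster volume set myvol performance.strict-o-direct off
    gluster volume set myvol performance.strict-o-direct on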

> performance.client-io-threads: on
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> cluster.readdir-optimize: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.entry-self-heal: on
> cluster.data-self-heal-algorithm: full
> features.uss: enable
> features.show-snapshot-directory: on
> features.barrier: disable
> auto-delete: enable
> snap-activate-on-create: enable
>
> Writing inside /gluster_bricks yields those 2 GB/sec writes; reading
> does the same.

How did you test this?

Did you test reading from the storage on the server side using direct
I/O? If not, you were testing access to the server's buffer cache, which is
pretty fast.
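
For example, something like this (file path and size are placeholders)
shows the difference on the brick itself:

    # Write without direct I/O - the data likely lands in the page cache:
    dd if=/dev/zero of=/gluster_bricks/ddtest.bin bs=1M count=4096 conv=fsync

    # Buffered read - served largely from the page cache, so very fast:
    dd if=/gluster_bricks/ddtest.bin of=/dev/null bs=1M count=4096

    # Direct read - bypasses the page cache and actually hits the disks:
    dd if=/gluster_bricks/ddtest.bin of=/dev/null bs=1M count=4096 iflag=direct

    # Direct write - bypasses the page cache on the way down:
    dd if=/dev/zero of=/gluster_bricks/ddtest.bin bs=1M count=4096 oflag=direct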

> Reading inside the /rhev/data-center/mnt/glusterSD/ dir, reads go down to
> 366 MB/sec while writes plummet to 200 MB/sec.

This uses direct I/O, so unlike a buffered test on the brick it is not
served from the page cache.
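
One way to make the two sides directly comparable is to run the exact same
direct I/O job against a file on the brick and against a file on the FUSE
mount, e.g. with fio (mount path, file name and size below are placeholders):

    # Sequential direct write on the gluster mount:
    fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 \
        --ioengine=libaio \
        --filename=/rhev/data-center/mnt/glusterSD/server:_data/fio-test.bin

    # Sequential direct read of the same file:
    fio --name=seqread --rw=read --bs=1M --size=4G --direct=1 \
        --ioengine=libaio \
        --filename=/rhev/data-center/mnt/glusterSD/server:_data/fio-test.bin

Running the identical job against a file under /gluster_bricks gives a
like-for-like baseline on the same hardware.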

> Summed up: writing into the SSD RAID via the LVM/XFS gluster brick
> directory is fast; writing into the mounted gluster dir is horribly slow.
>
> The above can be seen and repeated on all 3 servers. The network can do
> the full 10 Gbit (tested with, among others, rsync and iperf3).
>
> Anyone with some idea on what's missing / going on here?

Please share the commands/configuration files used to perform the tests.

Adding storage folks who can help analyze this.

> Thanks folks,
> as always stay safe and healthy!

Nir

>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LWCO6URJZZHCBOLIKLOBVY7CW62MLBW2/
