More details on the tests you ran, along with gluster profile data captured
while the tests were running, would help with the analysis.
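For reference, a minimal way to capture that profile data (using the volume
name from your output below; run the slow workload between start and info):

```shell
# Start collecting per-brick latency and throughput statistics
gluster volume profile bottle-volume start

# ... reproduce the slow write workload in the VM ...

# Dump cumulative stats: FOP latencies, read/write block-size histograms
gluster volume profile bottle-volume info

# Stop profiling once the data is collected
gluster volume profile bottle-volume stop
```

The `info` output shows which file operations (e.g. WRITE, FSYNC) dominate
latency on each brick, which usually narrows down where the slowdown sits.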
As with my request to another user on this thread, you could also help by
providing feedback on the data-gathering Ansible scripts; please try out
https://github.com/gluster/gluster-ansible-maintenance/pull/4
These scripts gather data from the hosts and perform I/O tests on the
VM. +Sachidananda
URS <s...@redhat.com>
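To put a number on the write slowdown independently of the guest, a quick
sequential-write test with fio against the gluster FUSE mount on one of the
hosts is one option. This is a sketch: the mount path and file size are
placeholders to adjust for your setup, and `--direct=1` bypasses the page
cache so the result reflects the storage stack rather than host RAM:

```shell
# Sequential 1 MiB writes, O_DIRECT, 1 GiB total (path/size are placeholders)
fio --name=seqwrite --filename=/mnt/gluster-test/fio-testfile \
    --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio

# Matching sequential-read test for comparison with the write numbers
fio --name=seqread --filename=/mnt/gluster-test/fio-testfile \
    --rw=read --bs=1M --size=1G --direct=1 --ioengine=libaio
```

If the host-side numbers are fine but the guest is still slow, the problem
is more likely in the VM disk layer (cache mode, virtio drivers) than in
gluster itself.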

On Thu, Aug 15, 2019 at 1:39 PM <lyubomir.grancha...@bottleshipvfx.com>
wrote:

> Hi folks,
>
>  I have been experimenting with an oVirt cluster based on GlusterFS for the
> past few days (first-timer). The cluster is up and running; it consists
> of 4 nodes and has 4 replicas. When I try to deploy a Windows Server VM, I
> encounter the following issue: the VM's disk has good read speed
> (close to bare metal), but the write speed is very slow (about 10 times
> slower than it should be). Can anyone give me any suggestions,
> please? Thanks in advance! Here are the settings of the GlusterFS volume:
>
> Volume Name: bottle-volume
> Type: Replicate
> Volume ID: 869b8d1e-1266-4820-8dcd-4fea92346b90
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 4 = 4
> Transport-type: tcp
> Bricks:
> Brick1:
> cnode01.bottleship.local:/gluster_bricks/brick-cnode01/brick-cnode01
> Brick2:
> cnode02.bottleship.local:/gluster_bricks/brick-cnode02/brick-cnode02
> Brick3:
> cnode03.bottleship.local:/gluster_bricks/brick-cnode03/brick-cnode03
> Brick4:
> cnode04.bottleship.local:/gluster_bricks/brick-cnode04/brick-cnode04
> Options Reconfigured:
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> auth.allow: *
> user.cifs: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> server.allow-insecure: on
>
> Please let me know if you need any more configuration info or hardware
> specs.
>
> best,
>
> Lyubo
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JRD7377WQX6FQAZRLJ6YUHIYRRAFZLIY/
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDZG4R4LUMWEUSY3SWCDWS7PIAK2JYN6/