Hello,

I'm running a home setup with three nodes and two SATA SSDs per node.
Storage is GlusterFS, and the nodes are connected over 40 Gbit/s links.
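
For what it's worth, the links can be sanity-checked with iperf3 between two nodes (assuming iperf3 is installed; the IPs are the storage-network addresses from the brick list below):

iperf3 -s                        # on 10.9.9.102
iperf3 -c 10.9.9.102 -t 10 -P 4  # on 10.9.9.101, four parallel streams for 10 seconds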

Software version: 4.3.9.4-1.el7

I'm seeing a lot of I/O wait on the nodes (around 20%) and inside the VMs (around 50%).
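
If per-device numbers help, I can sample latency with iostat from the sysstat package, something like:

iostat -x 5 3    # three 5-second samples; watch %iowait plus r_await/w_await and %util per SSD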

gluster volume top vmstore write-perf bs 2014 count 1024 | grep Through
Throughput 635.54 MBps time 0.0032 secs
Throughput 614.89 MBps time 0.0034 secs
Throughput 622.31 MBps time 0.0033 secs
Throughput 643.07 MBps time 0.0032 secs
Throughput 621.75 MBps time 0.0033 secs
Throughput 609.26 MBps time 0.0034 secs

gluster volume top vmstore read-perf bs 2014 count 1024 | grep Through
Throughput 1274.62 MBps time 0.0016 secs
Throughput 1320.32 MBps time 0.0016 secs
Throughput 1203.93 MBps time 0.0017 secs
Throughput 1293.81 MBps time 0.0016 secs
Throughput 1213.14 MBps time 0.0017 secs
Throughput 1193.48 MBps time 0.0017 secs
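
To separate raw SSD latency from Gluster overhead, I could also run a direct-I/O test with fio against the brick filesystems, writing a scratch file next to (not inside) the brick directory; the file name here is just an example:

fio --name=bricktest --filename=/gluster_bricks/vmstore/fio-test \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=32 --size=1G --runtime=30 --time_based
rm /gluster_bricks/vmstore/fio-test    # clean up the scratch file afterwards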

Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 195e2a05-9667-4b8b-b0b7-82294631de50
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.9.9.101:/gluster_bricks/vmstore/vmstore
Brick2: 10.9.9.102:/gluster_bricks/vmstore/vmstore
Brick3: 10.9.9.103:/gluster_bricks/vmstore/vmstore
Brick4: 10.9.9.101:/gluster_bricks/S4CYNF0M219849L/S4CYNF0M219849L
Brick5: 10.9.9.102:/gluster_bricks/S4CYNF0M219836L/S4CYNF0M219836L
Brick6: 10.9.9.103:/gluster_bricks/S4CYNF0M219801Y/S4CYNF0M219801Y
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
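
If it would help, I can also capture per-brick latency stats with Gluster's built-in profiler while the VMs are under load:

gluster volume profile vmstore start
# ... let the workload run for a few minutes ...
gluster volume profile vmstore info
gluster volume profile vmstore stop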

Please help me analyse the root cause.

Many thanks
Metz