I did it following the {read,write}-perf examples reported in sections 12.6 and 12.7 of
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-running_the_volume_top_command,
here are the results:
https://cloud.ssis.sm/index.php/s/9bncnNSopnFReRS
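For reference, the invocations from those sections look roughly like this (the volume name gv1 and the bs/count values are my assumptions, adjust to your setup):
gluster volume top gv1 read-perf bs 4096 count 1000 list-cnt 10
gluster volume top gv1 write-perf bs 4096 count 1000 list-cnt 10
As far as I understand, each run performs a dd of bs*count bytes on every brick and reports the throughput in MB/s, plus the files with the highest read/write speeds.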
It's just a guess until there is proof from gluster's top read-perf/write-perf output.
Can you share at least the read-perf?
I'm pretty confident that the issue is not network-related, as cluster.choose-local forces all reads to be served locally (reducing network usage).
Best Regards, Strahil Nikolov
I tried to measure I/O using gluster volume top, but its results seem very cryptic to me (I need to analyze them in depth and don't have the time now).
Thank you very much for your analysis. If I understood correctly, the problem is that the consumer SSD cache is too weak to help in times under a small number ~15 not
I hope you do realize that modern consumer SSDs have a small cache (at least
according to https://www.storagereview.com/review/samsung-860-evo-ssd-review )
and we can't rule out the disks.
Use gluster's top command to view the read (17.2.6) and write (17.2.7)
performance of the bricks before (r
Parameter cluster.choose-local set to off.
I confirm the filesystems of the bricks are all XFS, as required.
I started the farm only as a test bench for an oVirt implementation, so I used 3 hosts based on Ryzen 5 desktop processors, each equipped with 4 DDR modules (4 x 32 GB) and 1 disk fo
Set it back to the original value.
The option picks the local brick for reading instead of picking the fastest one (which could be either a remote or a local one), which could help with bandwidth issues.
Can you provide details about the bricks like HW raid/JBOD, raid type
(0,5,6,10), stripe si
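In case it helps, a few commands that could collect that kind of information (the brick mount point /gluster_bricks/gv1 and the device names are assumptions on my side):
lsblk -o NAME,SIZE,ROTA,TYPE,MOUNTPOINT   # disk layout; ROTA=0 means SSD
cat /proc/mdstat                          # software RAID status, if any
xfs_info /gluster_bricks/gv1              # XFS geometry (sunit/swidth) of the brick filesystem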
During this time (Hosted-Engine hung), this appears on the host where it's supposed to have the Hosted-Engine running:
2022-09-15 13:59:27,762+ WARN (Thread-10) [virt.vm]
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutdown by QEMU Guest Agent
failed (agent probably inactive) (vm:5490)
2022-09-
The current set is:
[root@ovirt-node2 ~]# gluster volume get glen cluster.choose-local | awk '/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv0 cluster.choose-local | awk '/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv1 cluster.choose-local | awk
Can you test the backup after setting:
status=$(gluster volume get <VOLUME> cluster.choose-local | awk '/choose-local/ {print $2}')
gluster volume set <VOLUME> cluster.choose-local true
And after the test:
gluster volume set <VOLUME> cluster.choose-local $status
Best Regards, Strahil Nikolov
On Thu, Sep 15, 2022 at 12:
Sorry, I see that the editor stripped away all the leading spaces that indent the timestamp.
I retried the test, hoping to find the same error, and I found it on node3. I changed the code of the read routine:
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do
date +'Times
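The command above is cut off; a minimal sketch of such a read-latency loop, under my assumption of what it did (a timestamp plus a small direct read on the mount), could be:
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1 || exit 1
# create a small test file once (the name is arbitrary)
dd if=/dev/zero of=test-read-file bs=4096 count=1 conv=fsync
while sleep 0.1 ; do
  # timestamp with nanoseconds, then a 4 KiB O_DIRECT read; a stall here points at the gluster client
  date +'%H:%M:%S.%N'
  dd if=test-read-file of=/dev/null bs=4096 count=1 iflag=direct 2>&1 | grep copied
done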
Thank you for the analysis:
The versions are the latest distributed in the ovirt@centos8 distribution:
[root@ovirt-node2 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)'
ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
ovirt-node-ng-nodectl-4.4.2-1.el8
I see some entries that are not good:
[2022-09-11 03:50:26.131393 +] W [MSGID: 108001]
[afr-transaction.c:1016:afr_handle_quorum] 0-gv1-replicate-0:
228740f8-1d14-4253-b95b-47e5feb6a3cc: Failing WRITE as quorum is not met
[Invalid argument]
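When those quorum messages appear, a quick check of whether a brick or its self-heal daemon was down at that moment (assuming gv1 is the affected volume, which matches the 0-gv1-replicate-0 tag above) would be something like:
gluster volume status gv1               # all bricks and self-heal daemons should be online
gluster volume heal gv1 info summary    # pending and split-brain entries per brick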
When the backup runs, what is the output of 'glus
Hello. I did a full backup using Veeam but I recorded many errors in the gluster log.
This is the log (https://cloud.ssis.sm/index.php/s/KRimf5MLXK3Ds3d). The log is from the same node where the Veeam proxy and the backed-up VMs reside.
Both are running in the gv1 storage domain.
See that hours are
Yes, it seems so, but I cannot record any "error" on the interface; I have 0 TX errors and 0 RX errors. All three nodes are connected through a single switch. I set the MTU to 9000 to help gluster transfers, but I cannot record any error.
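For completeness, the counters and MTU path I would double-check on each node (the interface name enp5s0 is an assumption):
ip -s link show dev enp5s0      # RX/TX errors and dropped packets
ping -M do -s 8972 ovirt-node3  # verifies MTU 9000 end to end (8972 = 9000 - 28 bytes of headers)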
In the /var/log/vdsm/vdsm.log I log periodically in all
I suspect you have network issues. Check the gluster log on the client side:
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-:_.log
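A quick way to pull only warnings and errors out of that client log (the exact file name depends on the mount path, so the glob is an assumption):
grep -E '^\[.*\] (W|E) ' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log | tail -n 50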
Best Regards, Strahil Nikolov
On Tue, Sep 6, 2022 at 17:19, Diego Ercolani wrote:
I really don't understand, I was monitoring vdsm.log of one node (node2)
And I saw a complaint:
2022-09-06 14:08:27,105+ ERROR (check/loop) [storage.monitor] Error
checking path
/rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1/45b4f14c-8323-482f-90ab-99d8fd610018/dom_md/metadata
(monito
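One way to reproduce by hand what that monitor complains about is a small direct read of the same metadata file (my approximation of the check, not vdsm's exact code):
dd if=/rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1/45b4f14c-8323-482f-90ab-99d8fd610018/dom_md/metadata \
   of=/dev/null bs=4096 count=1 iflag=direct
If this stalls or fails while the backup is running, the problem is on the Gluster mount rather than in vdsm itself.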
I don't have disk problems, as I enabled smartd and I perform a periodic test (smartctl -t long); a way to check those test results is sketched below.
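To actually read the results of those long tests (the device name /dev/sda is an assumption):
smartctl -l selftest /dev/sda   # self-test log: shows whether the long tests completed without errors
smartctl -A /dev/sda            # attribute table: reallocated sectors, wear level, etc.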
In sanlock, however, I have some problems, and the gluster glheal logs are not clean either:
The last event I recorded is today at 00:28 (22/09/4 22:28 GMT); this is the time when node3 sent the mail:
ovirt-ho
Any sanlock errors to indicate storage problems? Have you checked the Gluster logs for errors or indications of network disruption?
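A couple of ways to check for those (the sanlock log path is the standard one; the grep pattern is just an assumption):
sanlock client status                        # current lockspaces and resources
grep -iE 'error|fail' /var/log/sanlock.log | tail -n 50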
Best Regards, Strahil Nikolov
On Thu, Sep 1, 2022 at 12:18, Diego Ercolani wrote:
Hello, I have a cluster made of 3 nodes in a "self-hosted-engine" topology.
I im
Versions are the latest:
ovirt-host-4.5.0-3.el8.x86_64
ovirt-engine-4.5.2.4-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64