Hi Peter,
I think Strahil means running the command: hosted-engine --set-maintenance
--mode=local. This is also possible from the oVirt UI, via the ribbon on
the hosts section;
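For reference, a minimal sketch of what "set the node in maintenance" looks like from the shell of the host itself (these are the standard hosted-engine CLI flags; run them on the affected host, not on the engine VM):

```shell
# Put this host into local maintenance so the HA agent stops scoring it
# as a candidate for the engine VM
hosted-engine --set-maintenance --mode=local

# Verify the maintenance state and overall HA status
hosted-engine --vm-status

# When finished, return the host to normal operation
hosted-engine --set-maintenance --mode=none
```

These commands require the oVirt hosted-engine HA services to be running, so they are shown here only as a command fragment, not something to run outside a hosted-engine host.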
From the logs it seems gluster has difficulty finding the shards, e.g.:
.shard/e5f699e2-de11-41be-bd
Hi Olaf, I tried running "gluster volume start hdd force" but sadly it did not change anything. The raid rebuild has finished now and everything seems to be fine:

md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4] sdaa1[8] sdy1[6]
      68364119040 blocks super 1.2 level 6, 5
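As a sanity check, the finished rebuild can be confirmed from /proc/mdstat: a rebuilding array shows a "recovery" or "resync" progress line, while a healthy one just reports "active". A minimal sketch (the sample line below is copied from the thread; on a live system read /proc/mdstat directly instead):

```shell
# Classify an md array's state from its /proc/mdstat line.
# Sample taken from the thread; replace with: grep '^md6' /proc/mdstat
mdstat_line='md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4] sdaa1[8] sdy1[6]'

case "$mdstat_line" in
  *recovery*|*resync*) state="rebuilding" ;;   # rebuild/resync in progress
  *active*)            state="active" ;;       # array up, no rebuild running
  *)                   state="inactive" ;;     # degraded or stopped
esac
echo "md6 state: $state"
```

With the sample line from the thread this reports the array as active.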
Hello Strahil, I tried restarting the glusterd.service on storage2 but it had no effect. What do you mean exactly by "set the node in maintenance"? Only the "ovirthostX" are available as compute hosts in oVirt. Or is that some other option in oVirt that I don't know about? The gluster volume itse
Hi Peter,
I see your raid array is rebuilding; could it be your xfs needs a repair,
using xfs_repair?
Did you try running "gluster v start hdd force"?
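A minimal sketch of the xfs_repair workflow Olaf is suggesting. The device and mount point below are assumptions for illustration (the thread's mdstat output suggests the brick sits on /dev/md6, but the mount point /gluster/hdd is hypothetical); xfs_repair must only be run on an unmounted filesystem:

```shell
# Stop the brick's filesystem before checking it -- xfs_repair refuses
# to run on a mounted filesystem. Paths are placeholders; adjust to
# your actual brick device and mount point.
umount /gluster/hdd

# Dry run first: -n means "no modify", it only reports problems
xfs_repair -n /dev/md6

# Only if -n reported errors, run the real repair:
# xfs_repair /dev/md6

mount /gluster/hdd
```

This is a destructive-maintenance command fragment, not runnable outside the affected host; taking the brick down like this is exactly why the node should be in maintenance first.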
Kind regards,
Olaf
On Thu, 24 Mar 2022 at 15:54, Peter Schmidt <
peterschmidt18...@yandex.com> wrote:
Hello everyone, I'm running an oVirt cluster on top of a distributed-replicate gluster volume and one of the bricks cannot be mounted anymore from my oVirt hosts. This morning I also noticed a stack trace and a spike in TCP connections on one of the three gluster nodes (storage2), which I have atta