On Wed, Sep 23, 2020 at 10:16 PM Strahil Nikolov via Users
wrote:
>
> >1) ..."I would give the engine a 'Windows'-style fix (a.k.a.
> reboot)" >how does one restart just the oVirt-engine?
> ssh to HostedEngine VM and run one of the following:
> - reboot
> - systemctl restart ovirt-engine.service
We do have a gluster volume UI sync issue and this is fixed in ovirt-4.4.2.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1860775
Could be the same issue.
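If you want to check whether the engine already carries that fix, a quick way
(a minimal check, assuming the standard package name) is to query the installed
engine version on the HostedEngine VM:

# on the HostedEngine VM; 4.4.2 or later should include the fix from BZ 1860775
rpm -q ovirt-engine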
>1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"
>>how does one restart just the oVirt-engine?
ssh to HostedEngine VM and run one of the following:
- reboot
- systemctl restart ovirt-engine.service
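For example, a minimal session could look like this (the FQDN below is a
placeholder for your HostedEngine VM, not taken from the thread):

ssh root@engine.example.local            # placeholder HostedEngine FQDN
systemctl restart ovirt-engine.service
systemctl status ovirt-engine.service    # confirm the engine came back up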
>2) I now show in shell 3 nodes, each with one brick for data; in oVirt
Engine I think I see some of the issue.
When you go under Volumes -> Data it notes two servers. When you choose
"add brick" it says the volume has 3 bricks but only two servers.
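One way to cross-check what the UI shows against gluster itself (volume name
'data' taken from later in the thread) would be:

gluster volume info data      # should list all 3 bricks with their server names
gluster volume status data    # shows whether each brick process is online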
So I went back to my deployment notes and walked through setup
yum install
The email client with this forum is a bit awkward. I was told that through this
web interface I could post images, since embedded ones in email get scraped
out, but I am not seeing how that is done. It seems to be text only.
1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)" how
does one restart
oVirt uses the /rhev/mnt... mountpoints.
Do you have those (for each storage domain)?
Here is an example from one of my nodes:
[root@ovirt1 ~]# df -hT | grep rhev
gluster1:/engine   fuse.glusterfs   100G   19G   82G   19%
By the way, did you add the third host in oVirt?
If not, maybe that is the real problem :)
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020 at 17:23:28 GMT+3, Jeremey Wise wrote:
It's like oVirt thinks there are only two nodes in gluster replication
# Yet
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).
I guess some of the engine's internal processes crashed/looped and it doesn't
see the reality.
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020 at 16:27:25 GMT+3, Jeremey Wise wrote:
Also, in some rare cases I have seen oVirt showing gluster as 2 out of 3 bricks
up, but usually it was a UI issue: you go to the UI and mark a "force start",
which will try to start any bricks that were down (it won't affect gluster) and
will wake up the UI task to verify brick status again.
Usually I first start with:
'gluster volume heal <volname> info summary'
Anything that is not 'Connected' is bad.
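For example, against the 'data' volume from this thread (repeat for your other
volumes as needed):

gluster volume heal data info summary    # every brick should report 'Connected'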
Yeah, the abstraction is not so nice, but the good thing is that you can always
extract the data from a single node left (it will require playing a little bit
with the quorum of the
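For reference, the quorum settings involved in that kind of single-node
recovery are roughly these (only a sketch of a last-resort step, not something
to run on a healthy cluster; volume name assumed):

gluster volume set data cluster.quorum-type none           # relax client-side quorum
gluster volume set data cluster.server-quorum-type none    # relax server-side quorum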
When I posted last, in the thread I pasted a rolling restart. And... now
it is replicating.
oVirt is still showing it wrong. BUT.. I did my normal test from each of the
three nodes:
1) Mount the Gluster file system with localhost as primary and the other two as
tertiary to a local mount (like a client
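Such a client-style mount would look roughly like this (hostnames and mount
point below are placeholders, not taken from the thread):

mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/data /mnt/data-test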
At around Sep 21 20:33 local time, you got a loss of quorum - that's not good.
Could it be a network 'hiccup'?
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020 at 15:05:16 GMT+3, Jeremey Wise wrote:
I did.
Here are all three nodes with restart. I find it odd ...
A replication issue could mean that one of the clients (FUSE mounts) is not
attached to all bricks.
You can check the number of clients via:
gluster volume status all client-list
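To narrow it down to a single volume, the same check can be run per volume; in
a healthy replica 3 setup each brick should list the same clients:

gluster volume status data client-list    # volume name taken from this thread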
As a prevention, just do a rolling restart:
- set a host in maintenance and mark it to stop glusterd service (I'm
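The per-node part of such a rolling restart, once a host is in maintenance with
the stop-gluster option, is roughly this (a sketch, one node at a time, waiting
for heals to finish before moving to the next):

systemctl restart glusterd
gluster volume status data               # bricks on this node should come back online
gluster volume heal data info summary    # wait until everything is 'Connected' again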
I did.
Here are all three nodes with restart. I find it odd ... there has been a
set of messages at the end (see below) which I don't know enough about, in
terms of what oVirt laid out, to know if it is bad.
###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer and it won't affect the brick processes.
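That can be verified directly: the brick PIDs should survive a glusterd restart
(volume name taken from this thread):

gluster volume status data    # note the brick PIDs
systemctl restart glusterd
gluster volume status data    # same PIDs - only the management daemon was restarted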
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020 at 01:43:36 GMT+3, Jeremey Wise wrote:
Start is not an option.
It notes two bricks, but the command line denotes three bricks and all present:
[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
Just select the volume and press "start". It will automatically mark "force
start" and will fix itself.
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020 at 20:53:15 GMT+3, Jeremey Wise wrote:
oVirt engine shows one of the gluster servers having an issue. I
You could try setting the host to maintenance and checking the stop gluster
option, then re-activating the host, or try restarting the glusterd service on
the host.
On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise wrote:
>
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of