Hello Team,

We are running a 3-way replica hyperconverged (HC) Gluster setup, configured
during the initial deployment from the Cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   (Gluster bricks)
       * /gluster_bricks/engine/engine/
       * /gluster_bricks/data/data/
       * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 have the same layout.
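
For reference, the replica-3 layout can be confirmed from any of the nodes
(a quick sketch; the volume names are taken from the brick paths above):

  # gluster volume info engine     # should report Type: Replicate, 1 x 3 = 3
  # gluster volume status engine   # all three bricks should show Online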

The hosted engine was running on NODE2.

- While moving NODE1 into maintenance mode (and stopping the Gluster
service, as the maintenance dialog prompts), the hosted engine instantly
went down, which surprised me, since a replica-3 volume should tolerate one
node going offline; see the quorum checks just below.
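
My understanding is that a replica-3 volume should only lose its data path
like that if quorum enforcement kicks in, so the quorum options may be
worth checking (a sketch; I am not claiming these are misconfigured here):

  # gluster volume get engine cluster.quorum-type          # client-side quorum
  # gluster volume get engine cluster.server-quorum-type   # server-side quorum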

- I started the Gluster service back on NODE1 and started the hosted
engine again. The hosted engine came up properly, but it keeps crashing
again and again within seconds of a successful start, seemingly because
the HE itself stops glusterd on NODE1. (Not sure about the cause, but I
cross-verified by checking the glusterd status; commands below.)
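
For completeness, this is roughly how I cross-verified on NODE1 each time
it went down (assuming the standard service name and log locations):

  # systemctl status glusterd
  # hosted-engine --vm-status
  # tail -f /var/log/glusterfs/glusterd.log
  # tail -f /var/log/ovirt-hosted-engine-ha/agent.log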

*Is it possible to clear the pending tasks, or to stop the HE from
shutting down glusterd on NODE1?*

*Or can we start the HE from one of the other Gluster nodes?*
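
In case it helps to frame the question, this is what I believe the manual
route would look like on NODE2 or NODE3, based on the hosted-engine CLI
(a sketch; please correct me if this is the wrong approach):

  # hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the VM
  # hosted-engine --vm-start                        # start the HE VM on this node
  # hosted-engine --set-maintenance --mode=none     # re-enable HA once things are stable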

Paste with more details: https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg


-- 

ABHISHEK SAHNI


IISER Bhopal
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7ETASYIKXRAGYZRBZIS6G743UHPKGCNA/

Reply via email to