Re-raising this for discussion. As I commented on the bug, Hosted Engine is such a special case in terms of setup, configuration, and migration that I'm not sure the engine itself is the right place to handle this. We have the option of changing the ha broker|agent to use the engine API to initiate migrations, but there's still a risk that the hosts in the secondary cluster will not be able to reach the storage, etc.
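Purely as a sketch of what the "use the engine API" option could look like: oVirt's REST API updates a VM via a PUT against /ovirt-engine/api/vms/{vm_id} with a <vm> representation, so moving the HE VM would mean sending a representation with a new cluster. Whether the engine would actually accept a cluster change for the HE VM is exactly the open question here, so this only shows the request shape; the engine URL, VM id, and cluster name below are placeholders, not real values.

```python
# Hypothetical sketch: build the PUT request that would reassign a VM's
# cluster through the oVirt REST API. No connection is made here; this
# only constructs the URL and XML body.
import xml.etree.ElementTree as ET

ENGINE_URL = "https://engine.example.com/ovirt-engine/api"  # placeholder


def build_cluster_update(vm_id: str, target_cluster: str):
    """Return (url, xml_body) for PUT /vms/{vm_id} changing the cluster."""
    vm = ET.Element("vm")
    cluster = ET.SubElement(vm, "cluster")
    name = ET.SubElement(cluster, "name")
    name.text = target_cluster
    return f"{ENGINE_URL}/vms/{vm_id}", ET.tostring(vm, encoding="unicode")


url, body = build_cluster_update("hosted-engine-vm-id", "SecondaryCluster")
print(url)
print(body)
```

The safety checks (HE storage domain reachable from the target cluster's hosts) would have to happen before any request like this is sent, which is why something like ansible was suggested for that step.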
It would be great to get this resolved if there's not currently a way to do it, but we need to decide on a long-term direction for it. Currently, HE can run on additional hosts in the datacenter as an emergency fallback, but it reverts once the HE cluster is back out of maintenance. My ideal would be to extend the hosted-engine utility with an additional parameter which reaches out to the Engine API to handle the needed database updates, after some safety checks (probably over ansible) to ensure that the HE storage domain is reachable from hosts in the other cluster. But I'm not a hosted engine expert. Is there currently a way to do this? If there isn't, do we want to add additional logic to the ha agent|broker, or reach out to the Engine?

On Tue, Jan 15, 2019 at 8:27 AM Douglas Duckworth <dod2...@med.cornell.edu> wrote:

> Hi
>
> I opened a Bugzilla at https://bugzilla.redhat.com/show_bug.cgi?id=1664777
> but no steps have been shared on how to resolve it. Does anyone know how
> this can be fixed without destroying the data center and building a new
> hosted engine?
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit <https://scu.med.cornell.edu>
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
> On Wed, Jan 9, 2019 at 10:22 AM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
>
>> Hi
>>
>> Should I open a Bugzilla to resolve this problem?
>>
>> Thanks,
>>
>> On Wed, Dec 19, 2018 at 1:13 PM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
>>
>>> Hello
>>>
>>> I am trying to migrate my hosted-engine VM to another cluster in the
>>> same data center.
>>> Hosts in both clusters have the same logical networks
>>> and storage, yet migrating the VM isn't an option.
>>>
>>> To get the hosted-engine VM onto the other cluster, I started the VM on
>>> a host in that other cluster using "hosted-engine --vm-start".
>>>
>>> However, HostedEngine is still associated with the old cluster, as shown
>>> in the attachment, so I cannot live migrate the VM. Does anyone know how
>>> to resolve this? With other VMs one can shut them down and then use the
>>> "Edit" option, but that will not work for HostedEngine.
>>>
>>> Thanks,
>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2GZK5PUBZIQLGLNZ2UUCSIES6HSZLHC/

-- 
Ryan Barry
Associate Manager - RHV Virt/SLA
rba...@redhat.com
M: +16518159306
IM: rbarry
<https://red.ht/sig>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/IRKEWL34BRPBQSMR5HGHT5RI6O2PQA63/