On Thu, Nov 1, 2018 at 6:13 AM David Johnson <[email protected]> wrote:
> The saga has ended and all is well.
>
> When I stopped all of the active VMs, the storage domains came back to
> normal and came online. At that point I had to delete the VMs that had
> been running unmanaged on the hosts; then I could import all of the
> objects from the storage domain into the data center. The only thing I
> lost was a Bitnami Redmine VM that had been installed from an OVA. This
> was no major loss, since I backed it up at the beginning of this
> adventure.
>
> In retrospect, the biggest issues I had were:
> * being overly cautious because I didn't know which operations were
>   destructive, and
> * finding the right actions in the GUI when I decided to take the plunge.
>
> Thank you all for your generous assistance.

Glad to hear that, thanks for the report!

Best regards,

> Regards,
> David Johnson
>
> On Tue, Oct 30, 2018 at 10:29 PM David Johnson
> <[email protected]> wrote:
>
>> Host 1 is now back in "normal" mode. It was stuck in "Preparing for
>> Maintenance" because the VMs were not fully under the control of the
>> cluster. Once I powered off the VMs, it finished what it was doing,
>> rebooted, and is now properly part of the cluster.
>>
>> However, the main data domain is still in "Maintenance" state, and I
>> can't find any way to make it "Active". My ISO partition is also
>> inactive, with no visible way to make it "Active".
>>
>> Regards,
>> David Johnson
>> Director of Development, Maxis Technology
>> 844.696.2947 ext 702 (o) | 479.531.3590 (c)
>> [email protected]
>> www.maxistechnology.com
>>
>> On Tue, Oct 30, 2018 at 4:22 PM David Johnson
>> <[email protected]> wrote:
>>
>>> Hi everyone, and thank you again for all of your help.
>>>
>>> Here is the latest update in the never-ending story.
>>>
>>> I worked through NFS woes on my SAN, configured a throwaway storage
>>> domain to bootstrap the system, and have reconnected the storage
>>> domains.
>>>
>>> The main data domain is in "Maintenance" state,
>>> Host 1 is in "Preparing for Maintenance" state with SPM = "normal", and
>>> Host 2 is in "Up" state with SPM = "SPM".
>>>
>>> I think the next step is to get host 1 either fully into or out of
>>> maintenance mode, and the storage domain out of maintenance mode.
>>>
>>> Regards,
>>> David Johnson
>>> Director of Development, Maxis Technology
>>> 844.696.2947 ext 702 (o) | 479.531.3590 (c)
>>> [email protected]
>>> www.maxistechnology.com
>>>
>>> On Mon, Oct 29, 2018 at 10:10 PM Nathan Lager <[email protected]>
>>> wrote:
>>>
>>>> The cluster will need a default storage domain before you can import
>>>> your existing domain; that much I do remember. What's causing your
>>>> error, I don't know. What sort of storage are you working with?
>>>>
>>>> On Mon, Oct 29, 2018, 6:04 PM David Johnson
>>>> <[email protected]> wrote:
>>>>
>>>>> Thank you everyone for all of your help. Here is where things stand
>>>>> now:
>>>>>
>>>>> I gave up trying to recover from backup. I wasn't able to mount the
>>>>> OVA. Since this is a test rack, I looked at the amount of time I've
>>>>> sunk into it already (about the same as it took to build from
>>>>> scratch the first time around) and decided that it was worthwhile to
>>>>> risk starting from scratch rather than become an overnight
>>>>> bit-bashing oVirt guru.
>>>>>
>>>>> I installed the 4.2 controller, upgraded the hosts to 4.2, and then
>>>>> added the two hosts to the default data center. At this point, all
>>>>> of the running VMs were visible to the controller, but the VMs do
>>>>> not appear to be managed by the controller.
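[Editor's note: the "NFS woes" mentioned above usually come down to export
options or ownership. A minimal sanity check from one of the hosts might
look like the following sketch; the SAN hostname and export path are
hypothetical placeholders, and the NFSv3 default is a common but not
universal oVirt configuration.]

```shell
# Hypothetical SAN host and export; substitute your own values.
SAN=san.example.com
EXPORT=/exports/ovirt-data

# Confirm the SAN actually advertises the export:
showmount -e "$SAN"

# Try the mount the way vdsm would (oVirt commonly negotiates NFSv3):
mkdir -p /tmp/nfs-probe
mount -t nfs -o vers=3 "$SAN:$EXPORT" /tmp/nfs-probe

# vdsm runs as vdsm:kvm (uid 36, gid 36); the export must be writable
# by that user, so probe with the same uid:
sudo -u '#36' touch /tmp/nfs-probe/probe && rm /tmp/nfs-probe/probe

umount /tmp/nfs-probe
```

If the `showmount` or the manual mount fails here, oVirt's "Problem while
trying to mount target" error will fail the same way, so this isolates the
problem to the SAN side.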
>>>>> I can't migrate them from one host to the other, for example.
>>>>>
>>>>> As near as I can tell, the next crucial step is to import the
>>>>> existing storage domains.
>>>>>
>>>>> There is no visible way to import an existing storage domain into an
>>>>> uninitialized data center (uninitialized data centers are not
>>>>> available to the Import Domain button), so I created a new share on
>>>>> the SAN for a bootstrap domain, and am unable to connect to it. The
>>>>> error message given by oVirt is "Error while executing action Add
>>>>> Storage Connection: Problem while trying to mount target", which is
>>>>> not very informative.
>>>>>
>>>>> Is there a command-line tool for importing the existing storage
>>>>> domains that will not choke on an uninitialized data center?
>>>>>
>>>>> Thank you in advance,
>>>>>
>>>>> On Mon, Oct 29, 2018 at 7:14 AM David Johnson
>>>>> <[email protected]> wrote:
>>>>>
>>>>>> Thank you for your generous help.
>>>>>>
>>>>>> 4.1 wouldn't install because a number of dependencies were pointing
>>>>>> to dead links. It felt like over half, although I'm sure it was a
>>>>>> more limited subset.
>>>>>>
>>>>>> I will give these suggestions a try.
>>>>>>
>>>>>> On Mon, Oct 29, 2018, 4:50 AM Yedidyah Bar David <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> On Mon, Oct 29, 2018 at 10:00 AM Simone Tiraboschi
>>>>>>> <[email protected]> wrote:
>>>>>>> >
>>>>>>> > Hi,
>>>>>>> > AFAIK https://resources.ovirt.org/pub/ovirt-4.1/ is still there;
>>>>>>> > maybe you hit some issues with other repos.
>>>>>>> >
>>>>>>> > If you want to take a shortcut,
>>>>>>> > https://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/ovirt-engine-appliance-4.1-20180124.1.el7.centos.noarch.rpm
>>>>>>> > contains the latest 4.1-based engine appliance.
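[Editor's note: Simone's shortcut can be sketched roughly as below. The
unpack path under usr/share/ovirt-engine-appliance/ is an assumption about
where the appliance RPM places the OVA, so verify it against the cpio
listing before pointing virt-install at it.]

```shell
RPM=ovirt-engine-appliance-4.1-20180124.1.el7.centos.noarch.rpm

# Fetch the 4.1 appliance RPM Simone linked:
curl -fLO "https://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/$RPM"

# Unpack it in place, without installing anything:
rpm2cpio "$RPM" | cpio -idm

# The OVA is a plain tar archive (disk image plus OVF descriptor); it
# should land under usr/share/ovirt-engine-appliance/ (path assumed):
tar -tvf usr/share/ovirt-engine-appliance/*.ova
```

From there, extract the disk image out of the OVA and boot it as an
isolated temporary VM with virt-install's import mode.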
>>>>>>> > You can extract the OVA file from there and use it to boot a
>>>>>>> > temporary VM with virt-install, then upgrade the engine there to
>>>>>>> > 4.2 in order to take a new 4.2-based backup that you can restore
>>>>>>> > wherever you prefer, with up-to-date rpms.
>>>>>>> >
>>>>>>> > On Mon, Oct 29, 2018 at 8:49 AM <[email protected]>
>>>>>>> > wrote:
>>>>>>> >>
>>>>>>> >> I'm looking for a hand in recovering my oVirt cluster from a
>>>>>>> >> hardware failure. The hard drive on my cluster controller
>>>>>>> >> failed, and I would like to recover from backup.
>>>>>>> >>
>>>>>>> >> The problem is, the cluster was 4.1, which is less than a year
>>>>>>> >> old, but was nevertheless removed from the active repositories
>>>>>>> >> back in May. 4.2 will not recover from 4.1 backups.
>>>>>>>
>>>>>>> If you are brave, you can also "cheat" - patch engine-backup to
>>>>>>> allow restoring 4.1. That's a trivial patch, and the main problem
>>>>>>> with it is that no one has tested it, and I do expect it might
>>>>>>> introduce subtle issues. But considering the alternatives, it
>>>>>>> might be a reasonable approach. If you do, try restoring first on
>>>>>>> an _isolated_ VM somewhere, see how the engine behaves after the
>>>>>>> restore (it will not work very well, because it will not manage to
>>>>>>> access its hosts - if you indeed isolated it well enough), and if
>>>>>>> it looks OK, try for real.
>>>>>>>
>>>>>>> See also e.g.:
>>>>>>>
>>>>>>> https://lists.ovirt.org/pipermail/users/2017-March/080346.html
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1425788
>>>>>>>
>>>>>>> That said, I'm not sure why 4.1 does not work for you. I think it
>>>>>>> should still work, although I didn't try it myself recently.
>>>>>>>
>>>>>>> Good luck and best regards,
>>>>>>>
>>>>>>> >>
>>>>>>> >> The storage domains are all intact (I think), and the hosts are
>>>>>>> >> still running (unmanaged).
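[Editor's note: assuming the patched engine-backup that Didi describes, the
round trip on the isolated VM would look roughly like the sketch below.
File names are placeholders, and the exact yum/engine-setup steps can
differ by release, so treat this as an outline rather than a recipe.]

```shell
BACKUP=engine-41-backup.tar.gz   # the 4.1 backup copied onto the VM

# Restore into a fresh engine (requires the patched engine-backup that
# accepts a 4.1-format file):
engine-backup --mode=restore --file="$BACKUP" --log=restore.log \
              --provision-db --restore-permissions
engine-setup --offline

# Upgrade the restored engine to 4.2 in place, then take a new backup
# in 4.2 format that an unpatched, current engine-backup can restore:
yum update 'ovirt*' && engine-setup
engine-backup --mode=backup --file=engine-42-backup.tar.gz --log=backup.log
```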
>>>>>>> >> I've tried to manually restore the engine from backups, but
>>>>>>> >> either the upgrade is reinitializing or I am missing something.
>>>>>>> >>
>>>>>>> >> Any ideas?
>>>>>>> >> _______________________________________________
>>>>>>> >> Users mailing list -- [email protected]
>>>>>>> >> To unsubscribe send an email to [email protected]
>>>>>>> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> >> oVirt Code of Conduct:
>>>>>>> >> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> >> List Archives:
>>>>>>> >> https://lists.ovirt.org/archives/list/[email protected]/message/HRIUSVMGGCIG5M5AWBWCP6VH2OVUHHIG/
>>>>>>>
>>>>>>> --
>>>>>>> Didi

--
Didi
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/VNFLKITG54PKJAZYFJ5OECVA7BWKVDUJ/

