Hi there, yes, I would call that (hopefully you take this as advice, not criticism) a lack of experience/knowledge on your side - and you are right: when cloudstack-management starts for the very first time, it creates the DB at version 4.0.0 (or so) and then executes all the SQL upgrade scripts until the DB is upgraded to version 4.15, and that takes time. The same goes if you upgrade, say, from 4.11 to 4.15 - on a large production DB (lots of usage data) that can take 30+ minutes; altering a 100 GB table to add a new column takes forever...
And yes, once you interrupt the upgrade/SQL execution process, the DB is messed up - there is no way to "start from where it stopped"; you have to drop the DB and deploy it from scratch. This is especially true when you are reattempting a failed upgrade: you need to drop the DB, import the older version from a backup, and then start again - if the upgrade process is interrupted once, it will fail every subsequent time.

You could wait for certain log lines to appear in the management-server log before proceeding with the rest of the playbook.

Cheers, fast man :)

On Sun, 14 Feb 2021 at 04:17, Nathan McGarvey <nathanmcgar...@gmail.com> wrote:
> All/any,
> I just spent an unfortunate amount of time debugging a CloudStack
> install (4.15) in what ended up being a self-caused race condition and
> just wanted others to be aware in case they ran into the same issue:
>
> I set up a [very ugly] ansible playbook to install the 4.15
> management server on a clean CentOS 8 install (yes, I know that is EOL
> at the end of this year)
>
> But I did something too fast and caused a race condition:
>
> I rebooted after the "cloudstack-setup-management" command completed.
>
> It turns out that I was interrupting the database upgrade scripts and
> would reboot to a non-functional manager with weird and
> non-deterministic errors when it would try to re-upgrade the database.
> (Lots of duplicate column, foreign key constraint, etc. errors.) The
> errors only depended on how far the migrations had gotten before the
> reboot occurred.
>
> It was really hard to track down just because I assumed that once
> "cloudstack-setup-management" was done, that implied that it was in a
> stable state for service stops/restarts and reboots. Obviously I was wrong.
>
> Hopefully someone finds this useful and may save time/frustration by
> being patient and watching the log file until all the database upgrade
> steps are complete.
>
> Thanks,
> -Nathan McGarvey

--
Andrija Panić
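P.S. The "wait for certain log lines" idea could be sketched roughly like this - a minimal Python helper that polls the management-server log for a completion marker before the playbook moves on. The log path and the marker regex here are assumptions for illustration; you would substitute the actual upgrade-completion message your CloudStack version writes:

```python
import re
import time

# Assumed default location of the management-server log (illustrative only).
LOG_PATH = "/var/log/cloudstack/management/management-server.log"

def wait_for_log_line(path, pattern, timeout=1800, interval=5):
    """Return True once a line matching `pattern` appears in `path`,
    or False if `timeout` seconds elapse first."""
    deadline = time.monotonic() + timeout
    regex = re.compile(pattern)
    while time.monotonic() < deadline:
        try:
            with open(path, errors="replace") as fh:
                if any(regex.search(line) for line in fh):
                    return True
        except FileNotFoundError:
            pass  # log file not created yet; keep waiting
        time.sleep(interval)
    return False
```

From Ansible itself, the same effect can likely be had with the built-in `wait_for` module and its `search_regex` parameter pointed at the log file, rather than shelling out to a script.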