All,

I have to revise my vote to a -1 (binding), as there is a regression in the VM deployment logic that currently seems to affect anything using non-shared storage (such as local storage).
For example, from my environment:

2024-07-12 11:49:29,799 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
2024-07-12 11:49:29,800 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Checking if host: 26 can access any suitable storage pool for volume: ROOT
2024-07-12 11:49:29,800 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Host: 26 cannot access pool: 22
2024-07-12 11:49:29,800 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Host: 26 can prepare access to pool: 22
2024-07-12 11:49:29,801 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Found a potential host id: 26 name: kvm1.onprem.in and associated storage pools for this VM
2024-07-12 11:49:29,801 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-Host(26)-Storage(Volume(1122|ROOT-->Pool(22))]
2024-07-12 11:49:29,801 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-43:ctx-7277b927 job-10187/job-10188 ctx-c5bfa38e) (logid:e1cb31f5) Deployment found - Attempt #1 - P0=VM instance {"id":852,"instanceName":"i-2-852-VM","type":"User","uuid":"2a0d3f68-f5ca-4679-bf52-7c7e7c9750b2"}, P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-Host(26)-Storage(Volume(1122|ROOT-->Pool(22))]

In the above example, it chooses KVM1 (in my env) as the host and the local storage pool of KVM2 as the storage pool for deploying a local-storage VM, which obviously fails. This happens due to an invalid check that returns true for local storage:
https://github.com/apache/cloudstack/blob/4.19/server/src/main/java/com/cloud/storage/StorageManagerImpl.java#L2795

This wasn't caught earlier because with NFS the check would still pass, NFS being shared primary storage.
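To illustrate the expectation (this is not CloudStack's actual code; the class, record and method names below are hypothetical), a "can prepare access" style of check should only report a pool as preparable for a host when the pool is shared and further driver/scope checks pass, never for a local pool that belongs to a different host. A minimal sketch of that guard, under those assumptions:

// Illustrative sketch only; HostView, PoolView and canPrepareAccess are
// hypothetical names and do not reflect the actual StorageManagerImpl API.
public class LocalPoolAccessCheck {

    record HostView(long id) {}
    // localHostId is null for shared pools, set to the owning host for local pools.
    record PoolView(long id, boolean shared, Long localHostId) {}

    // A local pool can only ever be "prepared" on the host it lives on;
    // returning true unconditionally is what lets the planner pair host 26
    // with another host's local pool 22, as in the log above.
    static boolean canPrepareAccess(HostView host, PoolView pool) {
        if (!pool.shared()) {
            return pool.localHostId() != null && pool.localHostId() == host.id();
        }
        // Shared pools: further scope/cluster/driver checks would apply here.
        return true;
    }

    public static void main(String[] args) {
        HostView kvm1 = new HostView(26);
        PoolView kvm2Local = new PoolView(22, false, 27L); // local pool of another host (owner id assumed)
        System.out.println(canPrepareAccess(kvm1, kvm2Local)); // prints false
    }
}

With a guard like this, pool 22 would not be treated as a candidate for host 26 in the example above.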
Regards.

________________________________
From: Nux <n...@li.nux.ro>
Sent: Friday, July 12, 2024 02:44
To: dev@cloudstack.apache.org <dev@cloudstack.apache.org>
Cc: users <us...@cloudstack.apache.org>
Subject: Re: [VOTE] Apache CloudStack 4.19.1.0 (RC1)

Rohit,

That's a very good point about the routines, thanks for sharing how to
restore them. Hope your marriage is intact after this little adventure. :)

I'll be doing some testing shortly.

On 2024-07-11 12:52, Rohit Yadav wrote:
> +0 (binding)
>
> * Tested fresh installation of EL8 packages, deployed a VM from a newly
>   created template on a newly created network; tested VM, volume &
>   network lifecycles using both mbx & health check tests passing
>   w/Trillian/BO
> * Tested upgrade from 4.19.0.2 EL8 to 4.19.1.0 RC1 EL8 pkgs using mbx;
>   tested upgrading systemvms, VRs (with/without cleanup and live patch on
>   an isolated network; deployed new VMs & volumes and tested old ones)
> * Felt dangerous and upgraded my CloudStack 4.19.0.2 homelab to 4.19.1.0
>   RC1 using deb packages (hope there are no major blockers to piss off
>   the wife :))
> * Upgrade went OK on the three Ubuntu 22.04 based KVM hosts
> * I hit an issue with idempotent routines missing on the cloud_usage db
>   (borrowed them from the mbx env with: mysqldump --no-create-db
>   --no-create-info --no-data --routines cloud_usage > cloud_usage-routines.sql
>   and applied this in my homelab env; likely my fault while moving DB
>   servers, where I forgot the -R option and so didn't back up the routines)
> * Tested homelab storage: nfs, local storage & ceph - OK; hosts upgraded OK
> * Post upgrade, after deleting systemvms I found the SSVM & CPVM stuck
>   while starting. I restarted the mgmt server and after a while the SSVM
>   came up but the CPVM was stuck. I repeated the same, but again the SSVM
>   came up while the CPVM struggled; after a few agent restarts it came up
>   eventually.
> * Post this, tested several VM, volume and network lifecycles - worked OK
>
> I've logged the issue here -
> https://github.com/apache/cloudstack/issues/9371 to triage further if
> it's an issue in RC1 or to help investigate if it was an env issue. I'm
> happy to change my vote to a +1 if it's just my env issue.
>
> Regards.
>
> ________________________________
> From: Suresh Kumar Anaparti <sureshanapa...@apache.org>
> Sent: Wednesday, July 10, 2024 17:45
> To: dev <dev@cloudstack.apache.org>; users <us...@cloudstack.apache.org>
> Subject: [VOTE] Apache CloudStack 4.19.1.0 (RC1)
>
> Hi All,
>
> I have created a 4.19.1.0 release (RC1), with the following artifacts up
> for testing and a vote:
>
> Git Branch and Commit SHA:
> https://github.com/apache/cloudstack/tree/4.19.1.0-RC20240710T1604
> Commit: 2dbd80d692d6f5a207f90a07ac0b7583a41b71cd
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.19.1.0/
>
> PGP release keys (signed using D6E0581ECF8A2FBE3FF6B3C9D7CEAE3A9E71D0AA):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open until 16th July 2024.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> For users' convenience, the packages from this release candidate (RC1)
> and the 4.19.1.0 systemvm templates are available here:
> https://download.cloudstack.org/testing/4.19.1.0-RC1/
> https://download.cloudstack.org/systemvm/4.19/
>
> Regards,
> Suresh