it looks like CloudStack tried to copy an image (/mnt/2c48f02e-c900-3f17-8de1-677e1e7d7af5/c47e8c51-83f4-3871-bc3d-4f87536d9c82.qcow2, I think it is the systemvm template) to ceph (rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae), but the image already exists in ceph.
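To confirm that on the Ceph side, something like the following may help (a sketch only — the pool and image names are taken from the log above, so double-check them against your setup before removing anything):

```shell
# Pool/image names from the agent log above; verify before acting on them.
POOL=csclx
IMAGE=4738f069-9bb0-4341-80d2-1e8baa96f8ae

# 1) Does the image exist, and is any client still attached to it?
#    "rbd status" prints "Watchers: none" when nothing has the image open.
rbd -p "$POOL" info "$IMAGE"
rbd -p "$POOL" status "$IMAGE"

# 2) Only if there are no watchers (i.e. no running VM is backed by it),
#    remove the stale copy so CloudStack can redo the template transfer:
# rbd -p "$POOL" rm "$IMAGE"
```

The `rm` is left commented out on purpose: if a VM created before the upgrade is still using that image, removing it would break the VM, which is why the template_spool_ref route exists.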
you may remove the image on ceph, or add a record in template_spool_ref to skip the copy if the image is being used by some VMs.

-Wei

On Tue, Feb 3, 2026 at 8:44 AM Jeremy Hansen <[email protected]> wrote:

> Thank you for the response. No, no encryption is used on these
> volumes/pools. I have VMs still running from before I did the CloudStack
> upgrade and they're still running. Please let me know what information is
> useful.
>
> -jeremy
>
> On Monday, Feb 02, 2026 at 11:27 PM, Wei ZHOU <[email protected]> wrote:
>
> Do you use volume encryption on ceph?
>
> -Wei
>
> On Tue, Feb 3, 2026 at 6:43 AM Jeremy Hansen <[email protected]> wrote:
>
> Pretty dead in the water here. Hopefully someone can give me a hint.
> Also, the docs mentioned Java 17 being required, yet 17 didn't allow the
> manager to even start. This is on Rocky 9. I had to downgrade to Java 11.
>
> On Monday, Feb 02, 2026 at 7:27 PM, Jeremy Hansen <[email protected]> wrote:
>
> 2026-02-03 03:25:42,761 WARN [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:e7581b89) Execution of process [1495568] for command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/2c48f02e-c900-3f17-8de1-677e1e7d7af5/c47e8c51-83f4-3871-bc3d-4f87536d9c82.qcow2 rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae:mon_host=mon.ceph:auth_supported=cephx:id=csclx:key=******:rbd_default_format=2:client_mount_timeout=30] failed.
> 2026-02-03 03:25:42,761 WARN [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:e7581b89) Process [1495568] for command [qemu-img convert -O raw -U --image-opts driver=qcow2,file.filename=/mnt/2c48f02e-c900-3f17-8de1-677e1e7d7af5/c47e8c51-83f4-3871-bc3d-4f87536d9c82.qcow2 rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae:mon_host=mon.ceph:auth_supported=cephx:id=csclx:key=******:rbd_default_format=2:client_mount_timeout=30] encountered the error: [qemu-img: rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae:mon_host=mon.ceph:auth_supported=cephx:id=csclx:key=******:rbd_default_format=2:client_mount_timeout=30: error while converting raw: error rbd create: File exists].
> 2026-02-03 03:25:42,762 ERROR [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-1:[]) (logid:e7581b89) Failed to convert from /mnt/2c48f02e-c900-3f17-8de1-677e1e7d7af5/c47e8c51-83f4-3871-bc3d-4f87536d9c82.qcow2 to rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae:mon_host=mon.ceph:auth_supported=cephx:id=csclx:key=******:rbd_default_format=2:client_mount_timeout=30 the error was: qemu-img: rbd:csclx/4738f069-9bb0-4341-80d2-1e8baa96f8ae:mon_host=mon.ceph:auth_supported=cephx:id=csclx:key=******:rbd_default_format=2:client_mount_timeout=30: error while converting raw: error rbd create: File exists
> 2026-02-03 03:25:42,762 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-1:[]) (logid:e7581b89) Attempting to remove storage pool 2c48f02e-c900-3f17-8de1-677e1e7d7af5 from libvirt
>
> ==> agent.err <==
> libvirt: Secrets Driver error : Secret not found: no secret with matching uuid '2c48f02e-c900-3f17-8de1-677e1e7d7af5'
>
> On Monday, Feb 02, 2026 at 6:47 PM, Jeremy Hansen <[email protected]> wrote:
>
> I went through the upgrade process to get my CloudStack cluster up to
> 4.22. Now my system VMs are continually restarting.
> How do I debug this? I see these errors in the logs:
>
> 2026-02-03 02:39:13,149 WARN [c.c.v.ClusteredVirtualMachineManagerImpl] (Work-Job-Executor-8:[ctx-b6f69008, job-1332/job-1342, ctx-c2b0bf26]) (logid:a730585f) Unable to contact resource.
> com.cloud.exception.StorageUnavailableException: Resource [StoragePool:1] is unreachable: Unable to create volume [{"name":"ROOT-211","uuid":"359e21ea-713a-48c2-b031-5f804d9bbcfe"}] due to [com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/2c48f02e-c900-3f17-8de1-677e1e7d7af5/c47e8c51-83f4-3871-bc3d-4f87536d9c82.qcow2 to 4738f069-9bb0-4341-80d2-1e8baa96f8ae].
>     at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1873)
>     at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:2024)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
>     at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
>     at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>     at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
>     at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
>     at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
>     at com.sun.proxy.$Proxy264.prepare(Unknown Source)
>     at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1485)
>     at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5951)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:102)
>     at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:6075)
>     at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:99)
>     at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:698)
>     at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>     at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>     at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:646)
>     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829)
>
> I left running VMs up while upgrading. This seems storage related but I
> haven't changed anything on the storage side.
>
> Any ideas?
>
> Thanks
> -jeremy
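
P.S. on the template_spool_ref option: before inserting anything, it may help to look at what the database already records for that primary storage pool and compare against a healthy row, since the exact columns vary by CloudStack version. A sketch — the "cloud" database/user name and pool_id=1 (from [StoragePool:1] in the trace) are assumptions, so adjust to your environment:

```shell
# Inspect existing template_spool_ref rows for the affected primary storage
# pool. Database name/user "cloud" and pool_id=1 are assumptions taken from
# a default install and the [StoragePool:1] reference in the stack trace.
mysql -u cloud -p cloud -e \
  "SELECT * FROM template_spool_ref WHERE pool_id = 1 \G"
```

If a row for the systemvm template is missing or stuck in a non-ready state, that is what triggers the re-copy on every system VM start.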
