Wei:
Even before I try to reproduce the problem by starting the SSVM,
I am seeing the following in the agent.log:
2026-03-28 08:16:53,821 INFO [kvm.resource.LibvirtConnection] (AgentRequest-Handler-4:[]) (logid:23bddf40) No existing libvirtd connection found. Opening a new one
2026-03-28 08:16:53,822 WARN [kvm.resource.LibvirtConnection] (AgentRequest-Handler-4:[]) (logid:23bddf40) Can not find a connection for Instance s-4-VM. Assuming the default connection.
2026-03-28 08:16:53,913 WARN [kvm.resource.LibvirtKvmAgentHook] (AgentRequest-Handler-4:[]) (logid:23bddf40) Groovy script '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not available. Transformations will not be applied.
2026-03-28 08:16:53,913 WARN [kvm.resource.LibvirtKvmAgentHook] (AgentRequest-Handler-4:[]) (logid:23bddf40) Shell script '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.sh' is not available. Transformations will not be applied.
2026-03-28 08:16:53,913 WARN [kvm.resource.LibvirtKvmAgentHook] (AgentRequest-Handler-4:[]) (logid:23bddf40) Groovy scripting engine is not initialized. Data transformation skipped.
2026-03-28 08:17:12,234 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-2:[]) (logid:7250f00d) Trying to fetch storage pool 9e60fcb7-2e52-3f44-a9be-2383efb8d95b from libvirt
2026-03-28 08:17:12,236 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-2:[]) (logid:7250f00d) Asking libvirt to refresh storage pool 9e60fcb7-2e52-3f44-a9be-2383efb8d95b
2026-03-28 08:17:20,677 WARN [kvm.resource.LibvirtComputingResource] (AgentRequest-Handler-3:[]) (logid:23bddf40) Setting System VM's [VM {id: "4", name: "s-4-VM", uuid: "818b559a-d646-4185-b1da-8dd6bc70892c", type: "SecondaryStorageVm"}] current memory as max memory [524288].
2026-03-28 08:17:20,678 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-3:[]) (logid:23bddf40) Trying to fetch storage pool 9e60fcb7-2e52-3f44-a9be-2383efb8d95b from libvirt
2026-03-28 08:17:20,697 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-3:[]) (logid:23bddf40) Trying to fetch storage pool 9e60fcb7-2e52-3f44-a9be-2383efb8d95b from libvirt
2026-03-28 08:17:20,721 INFO [kvm.storage.LibvirtStorageAdaptor] (AgentRequest-Handler-3:[]) (logid:23bddf40) Trying to fetch storage pool 9e60fcb7-2e52-3f44-a9be-2383efb8d95b from libvirt
2026-03-28 08:17:20,723 WARN [cloud.agent.Agent] (AgentRequest-Handler-3:[]) (logid:23bddf40) Caught: java.lang.NullPointerException: Cannot invoke "java.net.URI.getScheme()" because "uri" is null
    at com.cloud.network.Networks$BroadcastDomainType.getSchemeValue(Networks.java:180)
    at com.cloud.network.Networks$BroadcastDomainType.getValue(Networks.java:234)
    at com.cloud.hypervisor.kvm.resource.BridgeVifDriver.plug(BridgeVifDriver.java:258)
    at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.createVif(LibvirtComputingResource.java:3816)
    at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.createVifs(LibvirtComputingResource.java:3395)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:86)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:52)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
    at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:2240)
    at com.cloud.agent.Agent.processRequest(Agent.java:813)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1295)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
s-4-VM is the current identifier for the SSVM.
This entire excerpt now repeats at roughly a 15-second cadence.
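
For what it's worth, the NPE points at the NIC's broadcast URI being
null at the moment BridgeVifDriver plugs the VIF. Here is a minimal
standalone sketch of that failure mode (class and method names are mine
for illustration, not the CloudStack source):

import java.net.URI;

// Repro sketch of the failure above (hypothetical class, not CloudStack
// source). It mirrors the shape of the failing call: the scheme is read
// from a broadcast URI that was never set, with no null check in between.
public class BroadcastUriNpeDemo {
    static String getSchemeValue(URI uri) {
        return uri.getScheme(); // NPE when uri is null
    }

    public static void main(String[] args) {
        URI broadcastUri = null; // e.g. a NIC whose network has no broadcast domain set
        System.out.println(getSchemeValue(broadcastUri));
        // -> java.lang.NullPointerException: Cannot invoke
        //    "java.net.URI.getScheme()" because "uri" is null
    }
}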
Ron Gage
On Sat, 2026-03-28 at 10:35 +0100, Wei ZHOU wrote:
> Could you check the agent.log on the KVM host?
>
> -Wei
>
> On Sat, Mar 28, 2026 at 1:45 AM Ron Gage <[email protected]> wrote:
>
> > Hi all from Michigan:
> >
> > I am working with a new deployment of 4.21 on CentOS 9.
> > Specifically, I cannot get the SSVM to launch. The console proxy is
> > running just fine. The KVM host is running agent 4.21 on CentOS. At
> > this point, I only have this single host spun up.
> >
> > An excerpt from the management server log:
> > 2026-03-27 20:14:29,510 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) VM instance {"id":1,"instanceName":"s-1-VM","state":"Stopped","type":"SecondaryStorageVm","uuid":"da51fa62-e738-400c-a774-9a524953b1a7"} state transited from [Starting] to [Stopped] with event [OperationFailed]. VM's original host: null, new host: null, host before state transition: Host {"id":1,"name":"kvm01","type":"Routing","uuid":"976eda71-ea18-4123-bf46-85a1a7bd8042"}
> > 2026-03-27 20:14:29,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Hosts's actual total CPU: 10000 and CPU after applying overprovisioning: 10000
> > 2026-03-27 20:14:29,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Hosts's actual total RAM: (29.96 GB) 32169574400 and RAM after applying overprovisioning: (29.96 GB) 32169574400
> > 2026-03-27 20:14:29,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) release cpu from host: Host {"id":1,"name":"kvm01","type":"Routing","uuid":"976eda71-ea18-4123-bf46-85a1a7bd8042"}, old used: 1000, reserved: 0, actual total: 10000, total with overprovisioning: 10000; new used: 500,reserved:0; movedfromreserved: false,moveToReservered: false
> > 2026-03-27 20:14:29,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) release mem from host: Host {"id":1,"name":"kvm01","type":"Routing","uuid":"976eda71-ea18-4123-bf46-85a1a7bd8042"}, old used: (1.50 GB) 1610612736, reserved: (0 bytes) 0, total: (29.96 GB) 32169574400; new used: (1.00 GB) 1073741824, reserved: (0 bytes) 0; movedfromreserved: false, moveToReservered: false
> > 2026-03-27 20:14:29,522 ERROR [c.c.v.ClusteredVirtualMachineManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Unable to orchestrate start VM instance {"id":1,"instanceName":"s-1-VM","state":"Stopped","type":"SecondaryStorageVm","uuid":"da51fa62-e738-400c-a774-9a524953b1a7"} due to [Unable to get answer that is of class com.cloud.agent.api.StartAnswer].
> > 2026-03-27 20:14:29,526 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Done executing VM work job: com.cloud.vm.VmWorkStart{"dcId":0,"userId":1,"accountId":1,"vmId":1,"handlerName":"VirtualMachineManagerImpl"}
> > 2026-03-27 20:14:29,527 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Complete async job-26, jobStatus: FAILED, resultCode: 0, result:
> > rO0ABXNyAC9jb20uY2xvdWQudXRpbHMuZXhjZXB0aW9uLkNsb3VkUnVudGltZUV4Y2V [... base64-serialized job result snipped; the same exception appears decoded below ...]
> > 2026-03-27 20:14:29,527 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Publish async job-26 complete on message bus
> > 2026-03-27 20:14:29,527 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Wake up jobs related to job-26
> > 2026-03-27 20:14:29,527 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Update db status for job-26
> > 2026-03-27 20:14:29,529 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26, ctx-e8f12238]) (logid:23bddf40) Wake up jobs joined with job-26 and disjoin all subjobs created from job-26
> > 2026-03-27 20:14:29,537 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26]) (logid:23bddf40) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 1, job origin: 1
> > 2026-03-27 20:14:29,537 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl$5] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26]) (logid:23bddf40) Done executing com.cloud.vm.VmWorkStart for job-26
> > 2026-03-27 20:14:29,537 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-1:[ctx-97e1949b, job-1/job-26]) (logid:23bddf40) Remove job-26 from job monitoring
> > 2026-03-27 20:14:29,542 DEBUG [c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:[ctx-b26b71d7]) (logid:470c972d) received secondary storage vm alert
> > 2026-03-27 20:14:29,542 DEBUG [c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:[ctx-b26b71d7]) (logid:470c972d) Secondary Storage Vm creation failure in zone [Mi-connect].
> > 2026-03-27 20:14:29,544 WARN [c.c.a.AlertManagerImpl] (secstorage-1:[ctx-b26b71d7]) (logid:470c972d) alertType=[19] dataCenterId=[1] podId=[null] clusterId=[null] message=[Secondary Storage Vm creation failure in zone [Mi-connect]. Error details: Unable to allocate capacity on zone [1] due to [null].].
> > 2026-03-27 20:14:29,546 WARN [c.c.a.AlertManagerImpl] (secstorage-1:[ctx-b26b71d7]) (logid:470c972d) No recipients set in global setting 'alert.email.addresses', skipping sending alert with subject [Secondary Storage Vm creation failure in zone [Mi-connect]. Error details: Unable to allocate capacity on zone [1] due to [null].] and content [Secondary Storage Vm creation failure (zone Mi-connect)].
> > 2026-03-27 20:14:29,546 WARN [c.c.v.SystemVmLoadScanner$1] (secstorage-1:[ctx-b26b71d7]) (logid:470c972d) Unexpected exception Unable to orchestrate the start of VM instance {"instanceName":"s-1-VM","uuid":"da51fa62-e738-400c-a774-9a524953b1a7"}.
> > com.cloud.utils.exception.CloudRuntimeException: Unable to orchestrate the start of VM instance {"instanceName":"s-1-VM","uuid":"da51fa62-e738-400c-a774-9a524953b1a7"}.
> >     at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5830)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> >     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.base/java.lang.reflect.Method.invoke(Method.java:569)
> >     at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:102)
> >     at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5951)
> >     at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:99)
> >     at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:689)
> >     at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> >     at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> >     at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:637)
> >     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> >     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> >     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> >     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> >     at java.base/java.lang.Thread.run(Thread.java:840)
> >
> > What did I do wrong here?
> >
> >
> >
> > --
> > Ron Gage
> > Westland, MI
> >
--
Ron Gage
Westland, MI