Forgot to mention that it fails to deploy the Virtual Router; that's why the
deployment of guest VMs fails.

2024-02-27 11:24:04,410 DEBUG [c.c.a.m.ClusteredAgentAttache] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Seq 1-7417428586279207196: Forwarding Seq 1-7417428586279207196: { Cmd , MgmtId: 187740248600989, via: 1(CLDXEN1XCP3), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"44b911df-3138-414b-9604-e7254372ad9b","origUrl":"http://download.cloudstack.org/systemvm/4.18/systemvmtemplate-4.18.1-xen.vhd.bz2","uuid":"7752b709-1324-45ed-9132-56994506ad10","id":"328","format":"VHD","accountId":"2","checksum":"{SHA-512}2c9b9d65568ebd144c44b1bc6ad7f9f7671bd096a722e8c2838809706ae508af4ae6cbb2e10bb9db5c7afd00104db5b48ded63c8aea974bf28b1927be02952a1","hvm":"false","displayText":"systemvm-xenserver-4.18.1","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"CLOUDSTACKVOL1","name":"CLOUDSTACKVOL1","id":"1","poolType":"PreSetup","host":"localhost","path":"/CLOUDSTACKVOL1","port":"0","url":"PreSetup://localhost/CLOUDSTACKVOL1/?ROLE=Primary&STOREUUID=CLOUDSTACKVOL1","isManaged":"false"}},"name":"328-2-cd9c893a-f0af-3ac6-8c64-de9ec6723e0a","size":"(4.88 GB) 5242880000","hypervisorType":"XenServer","bootable":"false","uniqueName":"328-2-cd9c893a-f0af-3ac6-8c64-de9ec6723e0a","directDownload":"false","deployAsIs":"false"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"506d9923-039b-46f1-870c-a0c465fbf714","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"CLOUDSTACKVOL1","name":"CLOUDSTACKVOL1","id":"1","poolType":"PreSetup","host":"localhost","path":"/CLOUDSTACKVOL1","port":"0","url":"PreSetup://localhost/CLOUDSTACKVOL1/?ROLE=Primary&STOREUUID=CLOUDSTACKVOL1","isManaged":"false"}},"name":"ROOT-1133","size":"(4.88 GB) 5242880000","volumeId":"1182","vmName":"r-1133-VM","accountId":"49","format":"VHD","provisioningType":"THIN","poolId":"1","id":"1182","deviceId":"0","cacheMode":"NONE","hypervisorType":"XenServer","directDownload":"false","deployAsIs":"false"}},"executeInSequence":"true","options":{},"options2":{},"wait":"0","bypassHostMaintenance":"false"}}] } to 33622688187677

2024-02-27 11:24:04,451 DEBUG [c.c.a.t.Request] (AgentManager-Handler-3:null) (logid:) Seq 1-7417428586279207196: Processing: { Ans: , MgmtId: 187740248600989, via: 1, Ver: v1, Flags: 110, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: 44b911df-3138-414b-9604-e7254372ad9b failed due to The uuid you supplied was invalid.","wait":"0","bypassHostMaintenance":"false"}}] }


On Thu, Feb 29, 2024 at 10:16 AM Slavka Peleva <slav...@storpool.com> wrote:

> Hi Alex,
>
> I'm unfamiliar with Xen, but can you check whether the template
> `44b911df-3138-414b-9604-e7254372ad9b`
> exists in your primary storage? On the first deployment of a VM from a new
> template, CloudStack copies the template from secondary storage to primary.
> The template may be missing on primary (for some reason), but CloudStack
> still keeps a record for it in the database and tries to create a volume
> from it.
>
> Best regards,
> Slavka
>
> On Thu, Feb 29, 2024 at 8:49 AM Joan g <joang...@gmail.com> wrote:
>
>> Hi Wei,
>>
>> The storage also does not have any tags. Attaching logs.
>>
>> Alex
>>
>> On Thu, Feb 29, 2024 at 12:09 AM Wei ZHOU <ustcweiz...@gmail.com> wrote:
>>
>>> Hi Alex,
>>>
>>> The tags are the disk offering tags.
>>>
>>> Anyway, can you share the full log of the job?
>>>
>>> -Wei
>>>
>>> On Wed, Feb 28, 2024 at 6:47 PM Alex Paul <alex.chris.jun...@gmail.com>
>>> wrote:
>>>
>>> > Hi Wei,
>>> >
>>> > No tags are there. I don't think the logs print any tags either:
>>> >
>>> >
>>> > Found pools [[Pool[1|PreSetup]]] that match with tags [[]].
>>> >
>>> > VM deployments were fine two days ago. All of a sudden they started
>>> > failing.
>>> >
>>> >
>>> > Alex
>>> >
>>> > On Wednesday, February 28, 2024, Wei ZHOU <ustcweiz...@gmail.com>
>>> wrote:
>>> >
>>> > > Agree.
>>> > >
>>> > > Please also check if the pool has any tags.
>>> > >
>>> > > -Wei
>>> > >
>>> > > On Wed, Feb 28, 2024 at 5:31 PM Slavka Peleva
>>> > <slav...@storpool.com.invalid
>>> > > >
>>> > > wrote:
>>> > >
>>> > > > Hi Alex,
>>> > > >
>>> > > > Sorry for the question but is it possible that the cluster is
>>> > disabled? I
>>> > > > have faced a similar problem before while testing.
>>> > > >
>>> > > > Best regards,
>>> > > > Slavka
>>> > > >
>>> > > > On Wed, Feb 28, 2024 at 5:15 PM Alex Paul <
>>> alex.chris.jun...@gmail.com
>>> > >
>>> > > > wrote:
>>> > > >
>>> > > > > Hi Swen,
>>> > > > >
>>> > > > > Yes, only 3 TB of 15 TB is used, and I double-checked that the
>>> > > > > threshold is at 85%. :(
>>> > > > >
>>> > > > > Alex
>>> > > > >
>>> > > > > On Wed, Feb 28, 2024 at 7:32 PM <m...@swen.io> wrote:
>>> > > > >
>>> > > > > > Hi Alex,
>>> > > > > >
>>> > > > > > is there enough free space on the storage? Check your
>>> thresholds in
>>> > > > > global
>>> > > > > > settings.
>>> > > > > >
>>> > > > > > Regards,
>>> > > > > > Swen
>>> > > > > >
>>> > > > > > -----Ursprüngliche Nachricht-----
>>> > > > > > Von: Alex Paul <alex.chris.jun...@gmail.com>
>>> > > > > > Gesendet: Mittwoch, 28. Februar 2024 14:24
>>> > > > > > An: users@cloudstack.apache.org
>>> > > > > > Betreff: Storage in avoid set
>>> > > > > >
>>> > > > > > Hello Team,
>>> > > > > >
>>> > > > > > We've encountered an issue with our CloudStack setup on 3 XCP-ng
>>> > > > > > hosts. Suddenly, VM deployments are failing.
>>> > > > > >
>>> > > > > > Our configuration includes primary storage of type 'presetup'
>>> > > > > > using HBA. Interestingly, the storage functions properly in the
>>> > > > > > Xen cluster, allowing us to deploy VMs directly in Xen.
>>> > > > > >
>>> > > > > > However, in CloudStack, deployments are failing with the
>>> > > > > > following log:
>>> > > > > > "StoragePool is in avoid set, skipping this pool."
>>> > > > > >
>>> > > > > > Could someone please provide guidance on why it's in the avoid
>>> > > > > > set and how we can remove it from there?
>>> > > > > >
>>> > > > > > Full logs are provided below for reference.
>>> > > > > >
>>> > > > > >
>>> > > > > > 2024-02-27 11:24:04,879 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Looking for pools in dc [1], pod [1] and cluster [1]. Disabled pools will be ignored.
>>> > > > > > 2024-02-27 11:24:04,881 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Found pools [[Pool[1|PreSetup]]] that match with tags [[]].
>>> > > > > > 2024-02-27 11:24:04,885 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Checking if storage pool is suitable, name: CLOUDSTACKVOL1 ,poolId: 1
>>> > > > > > 2024-02-27 11:24:04,885 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) StoragePool is in avoid set, skipping this pool
>>> > > > > > 2024-02-27 11:24:04,885 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) ClusterScopeStoragePoolAllocator is returning [0] suitable storage pools [[]].
>>> > > > > > 2024-02-27 11:24:04,891 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) ZoneWideStoragePoolAllocator is returning [0] suitable storage pools [[]].
>>> > > > > > 2024-02-27 11:24:04,891 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) No suitable pools found for volume: Vol[1182|name=ROOT-1133|vm=1133|ROOT] under cluster: 1
>>> > > > > > 2024-02-27 11:24:04,891 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) No suitable pools found
>>> > > > > > 2024-02-27 11:24:04,891 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) No suitable storagePools found under this Cluster: 1
>>> > > > > > 2024-02-27 11:24:04,898 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Could not find suitable Deployment Destination for this VM under any clusters, returning.
>>> > > > > > 2024-02-27 11:24:04,900 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Searching all possible resources under this Zone: 1
>>> > > > > > 2024-02-27 11:24:04,903 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Listing clusters in order of aggregate capacity, that have (at least one host with) enough CPU and RAM capacity under this Zone: 1
>>> > > > > > 2024-02-27 11:24:04,910 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) Removing from the clusterId list these clusters from avoid set: [1]
>>> > > > > > 2024-02-27 11:24:04,911 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-1:ctx-5ab127f5 job-12670/job-12671 ctx-02929e21) (logid:97976fdd) No clusters found after removing disabled clusters and clusters in avoid list, returning.
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > Alex
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > >
>>> > > >
>>> > >
>>> >
>>>
>>
