From: Alejandro Ruiz Bermejo
Sent: Saturday, 15 June 16:15
Subject: Re: launch instance error
To: users@cloudstack.apache.org

Hi,
I made a fresh install of the CloudStack environment with 2 nodes:
management and server. The primary and secondary storage are inside the
management server, both using NFS.
1 Zone
1 Pod
1 Cluster
1 Host (LXC)
The secondary and console proxy VMs were created successfully and I
created an LXC
I would go with the 2nd approach - I don't expect the "same" issues, actually - you
can either add a new pod/cluster or just a new cluster in the same pod.
On Fri, 14 Jun 2019 at 21:25, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com> wrote:
> I created a new zone, pod and cluster and added the LXC host to the
I created a new zone, pod and cluster and added the LXC host to the new
cluster. CloudStack did everything for me. Since I am in a test environment
I used the same subnet for both zones.
I can do 2 things:
1. I will wipe the host and do a fresh install (only on the compute host),
but I will
I'm not sure how you moved the host to another Zone? (Please describe)
But anyway, I would wipe that host (perhaps some settings are kept locally,
etc.), and add it as a fresh host to a new cluster (could be in the same Pod as
well, or a new one), i.e. start from scratch please - since your setup
Yes, trying to find solutions I did create a new zone, pod and cluster and
moved the LXC host to it, but I had the same errors. So I moved it back to
my original LXC cluster in my original zone. I guess that's why it shows
those records.
I made all the movements using the UI, it seems like the
These were the outputs:
mysql> SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
    -> storage_pool.pool_type, storage_pool.created, storage_pool.removed,
    -> storage_pool.update_time, storage_pool.data_center_id, storage_pool.pod_id,
    -> storage_pool.used_bytes,
Right... based on the logs, it used a different SQL query to search for ZONE-wide
storage; execute this one please:
SELECT storage_pool.id, storage_pool.name, storage_pool.uuid,
storage_pool.pool_type, storage_pool.created, storage_pool.removed,
storage_pool.update_time, storage_pool.data_center_id,
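For reference, a minimal sanity check on the pool records themselves can be run like this - a sketch only, assuming the standard `cloud` database and zone id 1 as seen in the logs (adjust the id to your setup):

```sql
-- List all storage pools in zone 1, including removed ones,
-- to spot stale rows left over from the zone/cluster moves.
SELECT id, name, pool_type, scope, cluster_id, removed
FROM cloud.storage_pool
WHERE data_center_id = 1;
```

A non-NULL `removed` timestamp on a pool the planner still references would explain stale records surviving the moves.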
null, lastPolled: null, created: null}
2019-06-10 13:40:11,347 DEBUG [c.c.n.NetworkModelImpl]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7) (logid:0a0b5a30)
Deploy avoids pods: [], clusters: [], hosts: []
2019-06-10 13:40:11,361 DEBUG [c.c.d.FirstFitPlanner]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
FirstFitRoutingAllocator) (logid:0a0b5a30) Looking for hosts in dc: 1
pod:1 cluster:2
2019-06-10 13:40:11,372 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7
FirstFitRoutingAllocator) (logid:0a0b5a30) Host Allocator returning 0
suitable hosts
2019-06-10 13:40:11,374 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(API-Job-Executor-14:ctx-b92
Listing clusters in order of aggregate capacity, that have (at least one
host with) enough CPU and RAM capacity under this Zone: 1
2019-06-10 13:40:11,378 DEBUG [c.c.d.FirstFitPlanner]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7)
state transition: null
2019-06-10 13:40:11,964 DEBUG [c.c.r.ResourceLimitManagerImpl]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7) (logid:0a0b5a30)
Updating resource Type = volume count for Account = 2 Operation =
decreasing Amount
[c.c.r.ResourceLimitManagerImpl]
(API-Job-Executor-14:ctx-b92e08df job-41 ctx-3d41bde7) (logid:0a0b5a30)
Updating resource Type = cpu count for Account = 2 Operation = decreasing
Amount = 1
2019-06-10 13:40:12,957 DEBUG [c.c.r.ResourceLimitManagerImpl]
(API-Job-Execut
[o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Publish async
job-41 complete on message bus
2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Wake up jobs
related to job-41
job-41) (logid:0a0b5a30) Update db status for job-41
2019-06-10 13:40:13,125 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-14:ctx-b92e08df job-41) (logid:0a0b5a30) Wake up jobs
joined with job-41 and disjoin all subjobs created from job-41
2019-06-10 13:40:13,207 DEBUG [o.a.c.f.j.i.Async
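Since the log shows the Host Allocator returning 0 suitable hosts, one thing worth checking is the state of the LXC host as the database sees it. A sketch, assuming the standard `cloud` schema (column names as in 4.x; verify against your install):

```sql
-- Check that the LXC host is Up and Enabled and sits in the
-- cluster the planner is searching (cluster:2 in the log above).
SELECT id, name, status, resource_state, hypervisor_type, cluster_id
FROM cloud.host
WHERE type = 'Routing' AND removed IS NULL;
```

A host left in `Maintenance` or `Disabled` resource state, or attached to the wrong `cluster_id` after the moves, would make the allocator skip it.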
the checksum column on the vm_template table for that template and
> restart the management server?
>
> Regards,
> Nicolas Vazquez
>
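The suggestion above (clearing the stored checksum so the template is re-validated) can be sketched as follows - an assumption on my part, using the standard `cloud` schema and a placeholder template id:

```sql
-- Clear the stored checksum for the affected template (id 201 is a
-- placeholder); the management server will then re-validate it instead
-- of rejecting the template against a stale checksum.
UPDATE cloud.vm_template SET checksum = NULL WHERE id = 201;
```

Take a database backup before editing `cloud` tables by hand.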
> From: Alejandro Ruiz Bermejo
> Sent: Monday, June 10, 2019 12:38:52 PM
> To: users@cloudstack.apache.org
> Subject: launch instance error
Hi, I'm working with CloudStack 4.11.2.0.
This is my environment:
1 Zone
1 Pod
2 Clusters (LXC and KVM)
2 Hosts (one in each cluster)
I can launch VMs on the KVM cluster perfectly, but when I try to launch
a new VM with an LXC template I get this error:
2019-06-10 11:24:28,961 INFO