I have performed some tests and I found a workaround for the metrics problem.
I created a test environment in the laboratory with the main characteristics equal to those of production (ACS 4.11.2.0, all Ubuntu 16.04 OS, KVM, NFS as shared storage and advanced network). Then I added a Ubuntu 18.04 machine as a new primary storage.
I created a new VM on the new storage server and after a while the metrics appeared as on the first storage, so the storage is working. I destroyed this VM, created a new one on the first (old) storage and then migrated it to the new storage.
After migrating, I still don't understand why com.cloud.hypervisor.kvm.storage.LibvirtStoragePool doesn't find the volume d93d3c0a-3859-4473-951d-9b5c5912c76 that exists as the file 39148fe1-842b-433a-8a7f-85e90f316e04...
It's the only anomaly I have found. Where can I look again?
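One way to narrow down this anomaly is to check, directly on the hypervisor, which of the two names actually exists as a file under the pool's mount point. This is only a sketch: the /mnt/<pool-uuid> location assumes the KVM agent's usual NFS mount convention, and both identifiers are the ones quoted elsewhere in this thread.

```python
import os

def find_volume_file(pool_mount, volume_ids):
    """Return the volume ids that exist as plain files under the pool mount."""
    present = []
    for vid in volume_ids:
        if os.path.isfile(os.path.join(pool_mount, vid)):
            present.append(vid)
    return present

if __name__ == "__main__":
    # Pool UUID quoted later in this thread; adjust to your environment.
    mount = "/mnt/9af0d1c6-85f2-3c55-94af-6ac17cb4024c"
    ids = [
        "d93d3c0a-3859-4473-951d-9b5c5912c767",  # name CloudStack looks for
        "39148fe1-842b-433a-8a7f-85e90f316e04",  # file actually on disk
    ]
    print(find_volume_file(mount, ids))
```

If only the second name comes back, the file is there but the database still records the pre-migration path.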
On Mon, Jan 20, 2020 at 16:
Also, can you see the primary storage being mounted?
On Mon, Jan 20, 2020 at 12:33 PM Daan Hoogland wrote:
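Daan's question can be answered mechanically from the host by parsing the mount table for the pool's mount point. A minimal sketch, assuming the KVM agent's usual /mnt/<pool-uuid> convention; the helper takes the mounts text as a parameter so it can be pointed at /proc/mounts on a real host.

```python
def is_pool_mounted(pool_uuid, mounts_text):
    """Return True if /mnt/<pool_uuid> appears as a mount point."""
    target = "/mnt/" + pool_uuid
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == target:
            return True
    return False

if __name__ == "__main__":
    with open("/proc/mounts") as f:
        print(is_pool_mounted("9af0d1c6-85f2-3c55-94af-6ac17cb4024c", f.read()))
```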
Why do you think that, Charlie? Is it in the logs like that somewhere?
On Mon, Jan 20, 2020 at 9:52 AM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
Hi Daan,
in fact I find the volume file (39148fe1-842b-433a-8a7f-85e90f316e04) in the repository id = 3 (the new one) but it seems to me that the CloudStack system goes looking for the volume with its "old" name (path) that doesn't exist...
On Sat, Jan 18, 2020 at 21:41 Daan Hoogland <d
So Charlie,
d93d3c0a-3859-4473-951d-9b5c5912c767 is actually a valid disk? Does it exist on the backend NFS?
And the pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c, does it exist both in CloudStack and on the backend?
If both are answered with yes, you probably have a permissions issue, which might be i
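The permissions hypothesis is easy to check mechanically: verify that the user the agent runs as can actually create and remove a file on the mounted pool. On KVM hosts the agent typically runs as root, but NFS root_squash can still map it to an unprivileged user, which produces exactly this kind of silent failure. A minimal probe sketch (the function name is illustrative, not a CloudStack API):

```python
import os

def can_write(directory):
    """Try to create and remove a probe file; True means writable."""
    probe = os.path.join(directory, ".cs-write-probe")
    try:
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:
        return False
```

Run it against /mnt/<pool-uuid> as the agent's user on the affected host.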
Hi Daan and users,
the infrastructure is based on the Linux environment. The management server, hosts and storage are all Ubuntu 16.04 except the new storage server, which is Ubuntu 18.04. The hypervisor used is Qemu-KVM with NFS to share the storage.
We tried to add another primary storage and
There are some known issues in (4.13?) where a KVM host is asked to report statistics for other hypervisors' storage pools, and that can cause silly errors in the log - I would ignore those for now.
Better spin up a **new** VM on that new storage pool (not migrate over an existing VM) and give it some time
Hi all,
I keep getting the following error about a volume that has been migrated:
2020-01-10 11:21:28,701 DEBUG [c.c.a.t.Request]
(AgentManager-Handler-2:null) (logid:) Seq 15-6725563093524421010:
Processing: { Ans: , MgmtId: 220777304233416, via: 15, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.An
Charlie, I think you'll have to explain a bit more about your environment to get an answer. What type of storage is it? Where did you migrate the VM from and to? What type(s) of hypervisors are you using? Though saying *the* agent logs suggests KVM, you are still leaving people guessing a lot.
On
Hi all,
on a host that uses this new storage I found a problem that could be related to the fact that the statistics are not being updated.
After migrating a newly created VM to the new storage server, a series of messages like the following appear in the agent logs:
2020-01-07 17:30:45,868 WARN
Hi,
any idea about this problem?
On Tue, Dec 10, 2019 at 17:23 Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
Hi users,
I recently added a new primary NFS storage to my cluster (CloudStack 4.11.2 with KVM on the Ubuntu system).
It works well, but in the VM storage volume metrics the "physical size" and "usage" columns are empty, while in the rows of the VMs on the other primary storage I can see the r
Thanks, I've added that storage with a similar storage tag and it is suitable now.
-----Original Message-----
From: Devdeep [mailto:devd...@gmail.com]
Sent: Monday, November 16, 2015 8:57 AM
To: users@cloudstack.apache.org
Subject: Re: New Primary Storage marked as not suitable
The storage pool is marked as unsuitable probably because the storage tag on the service/disk offering with which the volume was created does not match the tag on the new primary storage (PrimaryStorage2). You may try updating the tag on the new primary and see if it addresses the problem.
Regards
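The suitability rule Devdeep describes can be stated precisely: every storage tag on the disk/service offering must also be present on the target pool. A minimal sketch of that check (the function name is hypothetical, not CloudStack's internal API):

```python
def pool_is_suitable(offering_tags, pool_tags):
    """A pool is suitable only if it carries every tag the offering requires."""
    return set(offering_tags) <= set(pool_tags)

# An offering created against the old pool's tag rejects the new pool:
print(pool_is_suitable(["PrimaryStorage1"], ["PrimaryStorage2"]))  # False
print(pool_is_suitable(["PrimaryStorage1"], ["PrimaryStorage1", "PrimaryStorage2"]))  # True
```

This is why adding the old tag to the new pool (as done above) makes it suitable.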
16, 2015 8:04 AM
To: users@cloudstack.apache.org
Subject: RE: New Primary Storage marked as not suitable
Does the second primary storage have enough space? I think it is showing the warning as not suitable because its volume size may be more (after applying the default thin provisioning factor, i.e.
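The capacity check alluded to here compares the allocated size against the pool capacity scaled by an overprovisioning factor, so a pool can look "not suitable" even with free bytes on disk. A rough sketch of that arithmetic; the 2.0 factor is only an example, check your own storage.overprovisioning.factor setting:

```python
def has_capacity(pool_capacity_bytes, allocated_bytes, new_volume_bytes,
                 overprovisioning_factor=1.0):
    """True if the new volume fits within capacity * overprovisioning factor."""
    usable = pool_capacity_bytes * overprovisioning_factor
    return allocated_bytes + new_volume_bytes <= usable

TB = 1024 ** 4
# 1 TB pool with 1.5 TB already thin-allocated: a 0.4 TB volume
# fits only because the factor raises usable capacity to 2 TB.
print(has_capacity(1 * TB, int(1.5 * TB), int(0.4 * TB),
                   overprovisioning_factor=2.0))
```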
Hello,
I've added a new Primary storage with tag "PrimaryStorage2" and am going to replace the old one (tag "PrimaryStorage1") with it. Both of them are VMFS and similar, but when I try to migrate volumes from PrimaryStorage1, the new PrimaryStorage2 has a "
October 2014 3:28 PM
To: users@cloudstack.apache.org
Subject: Templates and new primary storage
Hi all,
I recently added a new primary NFS storage to my CloudStack 4.3 system because the first was overloading.
Now when I create a new VM, the system continues to create them in the old storage, but I would like to force it to create the new machines in the new storage.
Does anyone have an idea o
-08-06 15:33:51,541 DEBUG [agent.manager.DirectAgentAttache]
(catalina-exec-21:null) Processing disconnect 66
- Original Message -
From: "Devdeep Singh"
To:
Sent: Tuesday, August 05, 2014 10:22 AM
Subject: RE: Host stuck in "Alert" status after adding a new primary sto
> From: Amir Abbasi [mailto:abb...@tebyanidc.ir]
> Sent: Sunday, August 3, 2014 3:59 PM
> To: users@cloudstack.apache.org
> Subject: Host stuck in "Alert" status after adding a new primary storage
Hi,
I've removed the new Primary storage but the Host still shows Alert status and
here is what I see in logs:
2014-08-03 14:51:09,401 DEBUG [cloud.host.Status] (AgentTaskPool-4:null) Agent
status update: [id = 55; name = 10.3.1.5; old status = Alert; event =
AgentDisconnected; new s
tion, I have to say, but it worked) but I can't find the virtual load balancers anywhere in the UI (they show up as "b-1234-VM" on my hypervisors).
Could I just delete them from the hypervisor and hope they get recreated?
I would like to be able to simply storage-motion the virtual
If primary NFS storage gets added to a cluster with insufficient privileges for the host to write, then the host will reboot.
This could be a problem! Is there a method to test 100% that it won't happen?
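There is no built-in 100% guarantee, but the risk can be reduced with a pre-flight check: mount the export by hand on the host and verify it is writable, as the right user, before ever telling CloudStack about it. A sketch of such a probe; it assumes you have already mounted the export yourself (e.g. `mount -t nfs <server>:/<export> /mnt/preflight` as root, with your own server and export), and it also flags root_squash by comparing the probe file's owner with the current user.

```python
import os
import uuid

def nfs_preflight(mount_point):
    """Probe an already-mounted export.

    Returns (writable, squashed): writable is True if a file could be
    created and removed; squashed is True if the file's owner differs
    from the current effective uid (e.g. NFS root_squash in effect).
    """
    probe = os.path.join(mount_point, "cs-preflight-" + uuid.uuid4().hex)
    try:
        with open(probe, "w") as f:
            f.write("probe")
    except OSError:
        return (False, None)
    squashed = os.stat(probe).st_uid != os.geteuid()
    os.remove(probe)
    return (True, squashed)
```

A (True, False) result before adding the pool is about as close to a guarantee as you can get from the outside.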