Hello,
I get the following error when I try to scale a VM via cloudmonkey: errortext
= Storage pool Primary1 does not have enough space to resize volume ROOT-37
I didn't request any volume resize; however, the volume is 60GB and Primary1
has 107GB of free space.
Please advise.
Thanks
Please share more details, e.g. the cmk command used.
On Tue, Jul 9, 2019, 14:54 Fariborz Navidan wrote:
> Hello,
>
> I get following error when I want to scale a VM via cloudmonkey: errortext
> = Storage pool Primary1 does not have enough space to resize volume ROOT-37
>
> I didn't request any
As the service offering is customized, I am trying to use the scaleVirtualMachine
API to add CPU cores, as the UI does not have such functionality. Below is the
command and cmk output.
(local) > scale virtualmachine id=8c2fc3b6-c71b-4ead-885e-ec468b21c05e
serviceofferingid=287f17af-193e-4ad3-aa09-7291a8d1eaf5
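For a customized (unconstrained) offering, cmk also needs the new CPU/RAM values passed in the details map; a sketch of what the full invocation could look like (the UUIDs are the ones from this thread, but the cpuNumber/memory values are illustrative assumptions, not taken from the thread):

```shell
# Sketch: scaling to 4 cores / 4096 MB on a custom compute offering.
# details[0].* is cmk's syntax for the API's map parameter.
cmk scale virtualmachine \
  id=8c2fc3b6-c71b-4ead-885e-ec468b21c05e \
  serviceofferingid=287f17af-193e-4ad3-aa09-7291a8d1eaf5 \
  details[0].cpuNumber=4 details[0].memory=4096
```

This requires a live CloudStack management server, so treat it as a template rather than something to copy verbatim.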
Hello,
I fixed the issue by changing the storage capacity disable threshold.
Thanks
On Tue, Jul 9, 2019 at 4:56 PM Andrija Panic
wrote:
> If I recall correctly, you are on KVM, and afaik scaling a VM while running
> is not supported for KVM (besides, template/VM needs to be marked as
>
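For reference, that threshold is a CloudStack configuration setting that can be changed via the updateConfiguration API; a sketch with cmk, assuming the setting meant here is `pool.storage.capacity.disablethreshold` (0.95 is just an example value):

```shell
# Raise the storage capacity disable threshold (default is 0.85) so
# allocation/resize is not blocked once the pool passes 85% usage.
cmk update configuration name=pool.storage.capacity.disablethreshold value=0.95
```

As noted later in the thread, raising the threshold works around the symptom rather than explaining why the capacity check fired in the first place.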
If I recall correctly, you are on KVM, and afaik scaling a VM while running
is not supported for KVM (besides, template/VM needs to be marked as
Dynamically scalable).
If you see the failure while the VM is stopped, can you advise if the VM is created
from a template or via an ISO file (because of root disk size
I would say that's avoiding the issue, but there might be a bug in
code/capacity checks, which you could actually file on GitHub if you have
time.
Thx
On Tue, Jul 9, 2019, 18:28 Fariborz Navidan wrote:
> Hello,
>
> I fixed the issue by changing the storage capacity disable threshold.
>
> Thanks
>
ACS will only offer DHCP leases to its VMs, via DHCP reservation. If you
have another DHCP server in your area, then it might be quicker to offer a
lease to a VM. You have to either remove your non-ACS DHCP server
completely, OR make sure it uses reservations for non-ACS servers/hosts, i.e.
NOT let
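The reservation approach described above might look roughly like this in an ISC dhcpd config; this is a sketch only - the subnet, range, MAC address, and host name are placeholders, not values from this thread:

```
# /etc/dhcp/dhcpd.conf (sketch): the external server only answers hosts
# it knows about, so it never leases to CloudStack VMs.
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.150;     # non-ACS range only
  deny unknown-clients;            # ignore DISCOVERs from unknown MACs
}

host fileserver1 {                 # one reservation per non-ACS machine
  hardware ethernet 00:16:3e:aa:bb:cc;
  fixed-address 10.0.0.110;
}
```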
Have a DHCP issue where a VM pulls from the ACS proxy properly sometimes and
other times it pulls from our normal DHCP server for end-points.
Network layout is flat, and ACS is using a basic network with security
groups. The IP range for ACS is within the range of our normal network, so VMs
and endpoints will
Hi ALL
I want VMs to automatically release computing resources (CPU and RAM) after
shutdown. I tried changing the Message.ReservedCapacityFreed.Flag parameter of
the VM to false, but when the VM is turned off, the CPU and memory in
resource_count are not released.
Excuse me, is there anything I haven't
My VM was assigned an IP from our endpoint DHCP server, not from the VR. Do I
need to add firewall rule(s) to force DHCP requests to the VR? I probably missed
a part of the setup w/ KVM hosts and/or within management when I defined the
zone/pod/...
This seems to be correct, VR is running on a different host
Yes, the race condition exists; I have been fortunate it hasn't been seen
outside of the ACS environment so far. From a network topology standpoint,
ideally I should isolate and route traffic to the Pod and use a firewall or
other gateway to control traffic.
I'll need to re-think my deployment and see if I need
Interesting
Proxied in to the VM:
pkill dhclient
dhclient -x
dhclient eth0
Got the IP I expected - odd.
On Tue, Jul 9, 2019 at 11:16 AM wrote:
>
> My vm was assigned an ip from our endpoint DHCP server, not from VR. Do I
> need to add firewall rule(s) to force DHCP request to VR? I probably missed
> a
Don't kill the DHCP client (don't force a renew of the IP), since again it will NOT
work if you repeat that a few times - a VM will broadcast DHCP discover
messages, all DHCP servers will receive it and all DHCP servers will offer a
lease/IP to your VM - the one DHCP server to be "quicker" to send its dhcp
Jesse,
You can experiment with firewall rules/SG, but in general you should not
have more than 1 DHCP server in a single network. I assume your VMs would
be assigned one part of the net/subnet, while your external DHCP server
should be serving your non-ACS infra - i.e. if your ACS network for VMs
Jesse:
As Andrija said, this is purely a DHCP issue; it has nothing to do with
CloudStack.
We have a similar setup here, where both ACS VM instances and non-ACS servers
exist on the same subnet, and they are served by separate DHCP servers (stock
ISC dhcp server on RHEL). Here is how we