Hello,
There are mixed reviews of running CloudStack on XenServer with Ceph as
storage (both primary and secondary). Any insight on this would be very
helpful.
Regards,
Ranjit
Hi Daan,
Yes, I did. But since the agent never reached a "connected" / "Up" state, all
attempts to disable maintenance mode failed with an error due to 'Client
in Alert state' or 'client not connected'.
Looking at the management server logs again, the whole process
takes less than 0.5 seconds.
Hello
Rebooting both the KVM host and the CM server solved the issue. VMs can now
reach full 1 Gbps speeds. Thank you for the input, guys.
On 9/30/22 13:20, Granwille Strauss wrote:
Attached shows I have all settings in place, but VM still limited to
200 Mbps
On 9/30/22 13:17, Granwille Strauss wrote:
Oh no, does this mean I need to delete my existing network and create a
new one? If yes, will the same public IPs be assigned to the existing VMs again?
On 9/30/22 12:56, Wei ZHOU wrote:
Yes.
The network rate is set not only on the virtual router but also on the
virtual machines.
Please also check global settings
network.throttling.rate
vm.network.throttling.rate
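As a sketch of how to check and change these globals (assuming CloudMonkey, `cmk`, is configured against the management server; the values below are examples, and these are expressed in Mbit/s):

```shell
# Inspect the current global throttling defaults (0 disables throttling):
cmk list configurations name=network.throttling.rate
cmk list configurations name=vm.network.throttling.rate

# Raise them (example values only) -- global settings generally require
# a management-server restart to take effect:
cmk update configuration name=network.throttling.rate value=1000
cmk update configuration name=vm.network.throttling.rate value=1000
```

Note that changing globals only affects offerings that inherit the default; offerings with an explicit network rate keep their own value.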
-Wei
On Fri, 30 Sept 2022 at 12:31, Ruben Bosch wrote:
Hi Granwille,
Check your network offerings. They come with a default 200 Mb/s rate limit.
Met vriendelijke groet / Kind regards,
Ruben Bosch
CLDIN
> On 30 Sep 2022, at 12:24, Granwille Strauss wrote:
Hello

My KVM host has a 1 Gbps port speed, and if I run a speed test on the KVM
host I get good 900+ Mbps speeds, as expected.

But when I SSH into a VM and run the same speed test against the same test
server, I get speeds of ~190 Mbps; it's as if there is a 200 Mbps limit. I
checked the documentation and I cannot see
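For anyone reproducing this comparison, a minimal iperf3 sketch (assuming iperf3 is installed on both the host and the VM; iperf.example.com is a placeholder for your test server):

```shell
# Run on the KVM host first:
iperf3 -c iperf.example.com -t 10

# Then run the exact same command inside the guest VM. If the host
# reaches ~940 Mbit/s while the VM caps out near 200 Mbit/s, a rate
# limit in the compute or network offering is the likely cause rather
# than the physical link.
iperf3 -c iperf.example.com -t 10
```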
That would be interesting indeed. Did you at any point (try to) take the host(s) out
of maintenance, Chris?
On Fri, Sep 30, 2022 at 12:32 AM vas...@gmx.de wrote:
> Short update.
>
> Was able to 'solve' this problem in the DB, changing the state from
> 'Maintenance' to 'Enabled'.
> Afterwards the host came
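For reference, the state change described above would look roughly like this against CloudStack's 'cloud' database (a sketch only: 'kvm-host-01' is a placeholder host name, and editing the DB directly bypasses the orchestration logic, so take a backup and make sure no agent transition is in flight first):

```shell
# Sketch: flip a host stuck in Maintenance back to Enabled.
# Back up the 'cloud' database before touching state directly.
mysql -u cloud -p cloud -e "
  UPDATE host
     SET resource_state = 'Enabled'
   WHERE name = 'kvm-host-01'
     AND resource_state = 'Maintenance';"
```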