We’re using Cinder IOPS throttling, which seems to work well. The default
volume type is rate limited so that standard requests don’t overload the
Ceph pool, and a high-IOPS volume type is reserved for the apps that need it.
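For reference, a volume-type setup like ours can be built with Cinder QoS
specs; a minimal sketch (type/spec names, limit values, and the placeholder
IDs are illustrative, not our actual configuration):

```shell
# Create a QoS spec with front-end (hypervisor-enforced) IOPS limits.
cinder qos-create standard-iops consumer=front-end \
    read_iops_sec=300 write_iops_sec=300

# Create the default volume type, then associate the QoS spec with it
# (use the IDs printed by the two commands above).
cinder type-create standard
cinder qos-associate <qos-spec-id> <volume-type-id>
```

A second, unthrottled (or higher-limit) type can then be offered for the
high-IOPS workloads.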

We’ve not used Glance throttling (but the load is much less in our environment).

Tim

From: Joe Topjian [mailto:[email protected]]
Sent: 23 October 2014 19:42
To: Craig Jellick
Cc: [email protected]; [email protected]
Subject: Re: [Openstack-operators] [nova] instance resource quota questions

I can confidently say that throttling will work with KVM. I think both 
virt_types will work since libvirt is controlling everything in the end.

One caveat about IO throttling to keep in mind is that the Nova settings are 
not applied to volumes -- just the root and ephemeral disk. We were unable to 
verify if IO throttling through Cinder worked due to this bug:

https://bugs.launchpad.net/nova/+bug/1362129

For bandwidth, I want to say that it is agnostic to nova-network and Neutron 
since it happens at the libvirt layer, but I am not 100% sure as I've never 
tested the settings on both.

It's safe to test these settings live (IMO). If you figure out the correct 
"virsh" commands to use to apply the settings, you can run them directly on the 
compute node against a test instance and no other instances will be affected.
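As a sketch of what those virsh commands might look like (instance name,
device names, and limit values below are illustrative; run them on the
compute node hosting the test instance):

```shell
# Apply disk IO limits live to one test instance's disk (e.g. vda).
virsh blkdeviotune instance-0000002a vda \
    --read-iops-sec 500 --write-iops-sec 500 --live

# Apply outbound bandwidth limits to the instance's interface
# (average,peak in KiB/s and burst in KiB).
virsh domiftune instance-0000002a vnet0 \
    --outbound 1024,2048,1024 --live

# Verify the settings took effect.
virsh blkdeviotune instance-0000002a vda
virsh domiftune instance-0000002a vnet0
```

Because --live changes are not persisted to the domain XML, they disappear
on the next hard reboot, which makes this a low-risk way to experiment.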

Hope that helps,
Joe

On Thu, Oct 23, 2014 at 11:20 AM, Craig Jellick 
<[email protected]> wrote:
Hello,

I have a few questions regarding the instance resource quota feature in nova 
which is documented here: https://wiki.openstack.org/wiki/InstanceResourceQuota

First, the section on disk IO states "IO throttling are handled by QEMU." Does
this mean that this feature only works when the hypervisor is QEMU (virt_type =
qemu in nova.conf) or will it also work with KVM (virt_type = kvm)?
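(For context, per that wiki page the disk IO limits are set as flavor extra
specs, along these lines; the flavor name and values here are just examples:)

```shell
# Cap root/ephemeral disk IO for instances of this flavor.
nova flavor-key m1.small set quota:disk_read_iops_sec=500
nova flavor-key m1.small set quota:disk_write_iops_sec=500
nova flavor-key m1.small set quota:disk_read_bytes_sec=10240000
nova flavor-key m1.small set quota:disk_write_bytes_sec=10240000
```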

Second, this feature also allows control over network bandwidth. Will that work 
if you are using neutron or does it only work if you’re using nova-network? Our 
setup is neutron w/ ml2+ovs with the ovs agent living on each compute node.
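(The bandwidth limits in question are also flavor extra specs, e.g. -- flavor
name and numbers are illustrative; average/peak are in KiB/s and burst in KiB:)

```shell
# Cap VIF bandwidth in both directions for instances of this flavor.
nova flavor-key m1.small set quota:vif_inbound_average=10240
nova flavor-key m1.small set quota:vif_inbound_peak=15360
nova flavor-key m1.small set quota:vif_inbound_burst=10240
nova flavor-key m1.small set quota:vif_outbound_average=10240
nova flavor-key m1.small set quota:vif_outbound_peak=15360
nova flavor-key m1.small set quota:vif_outbound_burst=10240
```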


/Craig J

_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
