From: Miguel Angel Ajo Pelayo [mailto:[email protected]]
Sent: Friday, April 08, 2016 4:17 PM
To: OpenStack Development Mailing List (not for usage questions)
<[email protected]>
Subject: [openstack-dev] [neutron] [nova] scheduling bandwidth resources /
NIC_BW_KB resource class
Hi!
In the context of [1] (generic resource pools / scheduling in nova) and [2]
(minimum bandwidth guarantees -egress- in neutron), I had a talk with Jay Pipes
a few weeks ago.
The idea was to leverage the generic resource pools and scheduling mechanisms
defined in [1] to find the right hosts and to track the total available bandwidth
per host (and per host "physical network"). Something in neutron (still to be
defined where) would notify the new API about the total amount of "NIC_BW_KB"
available on every host/physnet.
I believe that NIC bandwidth can be obtained from libvirt (see [4]); the only
piece missing is telling nova the mapping from physnet to network interface
name. (In the SR-IOV case this is already known.)
I see bandwidth (speed) as one of many capabilities of a NIC, so I think we
should handle all of them the same way, in this case via libvirt. I was thinking
of adding the NIC as a new resource to nova.
[4] -
<device>
  <name>net_enp129s0_e4_1d_2d_2d_8c_41</name>
  <path>/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/enp129s0</path>
  <parent>pci_0000_81_00_0</parent>
  <capability type='net'>
    <interface>enp129s0</interface>
    <address>e4:1d:2d:2d:8c:41</address>
    <link speed='40000' state='up'/>
    <feature name='rx'/>
    <feature name='tx'/>
    <feature name='sg'/>
    <feature name='tso'/>
    <feature name='gso'/>
    <feature name='gro'/>
    <feature name='rxvlan'/>
    <feature name='txvlan'/>
    <feature name='rxhash'/>
    <feature name='rdma'/>
    <capability type='80203'/>
  </capability>
</device>
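As a minimal sketch of reading the link speed out of a nodedev XML document like the one in [4]: the function name and the idea of deriving NIC_BW_KB from the Mbps speed are illustrative assumptions, not existing nova/neutron code; a real implementation would fetch the XML through the libvirt API.

```python
# Hypothetical helper: pull the link speed from a libvirt nodedev XML
# document such as the one shown in [4]. Illustrative only.
import xml.etree.ElementTree as ET

def nodedev_link_speed_mbps(xml_str):
    """Return the NIC link speed in Mbps (libvirt reports Mbps),
    or None if the device XML has no <link speed=...> element."""
    root = ET.fromstring(xml_str)
    link = root.find("./capability[@type='net']/link")
    if link is None or 'speed' not in link.attrib:
        return None
    return int(link.attrib['speed'])
```

How Mbps maps onto the NIC_BW_KB resource class (kilobits vs. kilobytes, per second or not) would still need to be pinned down in the spec.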
That part is quite clear to me.
From [1] I'm not sure which blueprint introduces the ability to schedule
based on resource allocation/availability itself
("resource-providers-scheduler" seems more like an optimization of the
scheduler/DB interaction, right?).
My understanding is that the resource provider blueprint is just a rough filter
of compute nodes before they are passed to the scheduler filters. The existing
filters [6] will do the accurate filtering of resources; see [5].
[5] -
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-04-04.log.html#t2016-04-04T16:24:10
[6] - http://docs.openstack.org/developer/nova/filter_scheduler.html
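The rough-cut-then-accurate-filters flow described above can be sketched as a toy two-stage filter. This is not nova code; all names here are illustrative assumptions.

```python
# Toy two-stage host selection: a cheap capacity cut on resource pools,
# then the accurate per-host filters (analogous to nova's filter chain [6]).

def rough_filter(hosts, resource_class, amount):
    """Discard hosts whose resource pool clearly can't satisfy the request."""
    return [h for h in hosts if h['pools'].get(resource_class, 0) >= amount]

def fine_filter(hosts, filters, spec):
    """Run the accurate filters; a host survives only if all of them pass."""
    return [h for h in hosts if all(f(h, spec) for f in filters)]
```

The point is that the pool-based cut only has to be conservative; correctness still rests with the fine-grained filters.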
And that brings me to another point: at the moment of filtering hosts, nova,
I guess, will have the neutron port information, and it has to somehow identify
whether the port is tied to a minimum bandwidth QoS policy.
That would require identifying that the port has a "qos_policy_id" attached
to it, then asking neutron for the specific QoS policy [3], then looking for
a minimum bandwidth rule (still to be defined) and extracting the required
bandwidth from it.
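The port -> qos_policy_id -> minimum-bandwidth-rule chain above could look roughly like this. The dict shapes and the 'minimum_bandwidth' rule type are assumptions (that rule is still to be defined); a real implementation would fetch the policy through the neutron client rather than a callable.

```python
# Sketch of the lookup chain: hypothetical payload shapes, loosely
# modelled on the QoS policy API in [3].

def required_bandwidth_kbps(port, get_policy):
    """Follow port -> qos_policy_id -> minimum bandwidth rule -> kbps."""
    policy_id = port.get('qos_policy_id')
    if policy_id is None:
        return None                    # no QoS policy attached to this port
    policy = get_policy(policy_id)     # e.g. a neutron API call
    for rule in policy.get('rules', []):
        if rule.get('type') == 'minimum_bandwidth':
            return rule['min_kbps']
    return None                        # policy has no minimum bandwidth rule
```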
I am not sure if that is the correct way to do it, but you could create a NIC
bandwidth filter (or NIC capabilities filter) and in it implement a way to
retrieve the QoS policy information using the neutron client.
That again moves some of the responsibility for examining and understanding
external resources into nova.
Could it make sense to make that part pluggable via stevedore? Then we would
provide something that takes the "resource id" (of a port, in this case) and
returns the requirements translated into resource classes (NIC_BW_KB in this
case).
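A possible shape for that pluggable translation is a small driver interface plus per-resource implementations; the class names, the entry-point namespace, and the lookup callable below are all hypothetical.

```python
# Hypothetical plugin interface for translating an external resource id
# into resource-class requirements, suitable for stevedore loading.
import abc

class ResourceRequirementTranslator(abc.ABC):
    """Given an external resource id, return {resource_class: amount}."""

    @abc.abstractmethod
    def translate(self, resource_id):
        raise NotImplementedError

class PortBandwidthTranslator(ResourceRequirementTranslator):
    """Illustrative driver for neutron ports: the bandwidth lookup
    (stubbed as a callable here) would go through the neutron client."""

    def __init__(self, bandwidth_lookup):
        self._lookup = bandwidth_lookup

    def translate(self, port_id):
        return {'NIC_BW_KB': self._lookup(port_id)}

# With stevedore, nova could load a driver by name from an entry-point
# namespace, e.g. (namespace and name are made up):
#   stevedore.driver.DriverManager('nova.resource_translators',
#                                  'neutron_port',
#                                  invoke_on_load=True).driver
```

Nova would then only deal in resource classes, with all neutron-specific knowledge living behind the driver.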
Best regards,
Miguel Ángel Ajo
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
[2] https://bugs.launchpad.net/neutron/+bug/1560963
[3] http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev