Re: [openstack-dev] [Neutron] Multiprovider API documentation

2014-04-11 Thread Robert Kukura


On 4/10/14, 6:35 AM, Salvatore Orlando wrote:
The bug for documenting the 'multi-provider' API extension is still 
open [1].
The bug report has a good deal of information, but perhaps it might be 
worth also documenting how ML2 uses the segment information, as this 
might be useful to understand when one should use the 'provider' 
extension and when the 'multi-provider' extension would be a better fit.


Unfortunately I do not understand enough how ML2 handles multi-segment 
networks, so I hope somebody from the ML2 team can chime in.
Here's a quick description of ML2 port binding, including how 
multi-segment networks are handled:


   Port binding is how the ML2 plugin determines the mechanism driver
   that handles the port, the network segment to which the port is
   attached, and the values of the binding:vif_type and
   binding:vif_details port attributes. Its inputs are the
   binding:host_id and binding:profile port attributes, as well as the
   segments of the port's network.

   When port binding is triggered, each registered mechanism driver's
   bind_port() function is called, in the order specified in the
   mechanism_drivers config variable, until one succeeds in binding, or
   all have been tried. If none succeed, the binding:vif_type attribute
   is set to 'binding_failed'. In bind_port(), each mechanism driver
   checks if it can bind the port on the binding:host_id host, using
   any of the network's segments, honoring any requirements it
   understands in binding:profile. If it can bind the port, the
   mechanism driver calls PortContext.set_binding() from within
   bind_port(), passing the chosen segment's ID, the values for
   binding:vif_type and binding:vif_details, and optionally, the port's
   status.

   A common base class for mechanism drivers supporting L2 agents
   implements bind_port() by iterating over the segments and calling a
   try_to_bind_segment_for_agent() function that decides whether the
   port can be bound based on the agents_db info periodically reported
   via RPC by that specific L2 agent. For network segment types of
   'flat' and 'vlan', the try_to_bind_segment_for_agent() function
   checks whether the L2 agent on the host has a mapping from the
   segment's physical_network value to a bridge or interface. For
   tunnel network segment types, try_to_bind_segment_for_agent() checks
   whether the L2 agent has that tunnel type enabled.


Note that, although ML2 can manage binding to multi-segment networks, 
neutron does not manage bridging between the segments of a multi-segment 
network. This is assumed to be done administratively.


Finally, at least in ML2, the providernet and multiprovidernet 
extensions are two different APIs to supply/view the same underlying 
information. The older providernet extension can only deal with 
single-segment networks, but is easier to use. The newer 
multiprovidernet extension handles multi-segment networks and 
potentially supports an extensible set of segment properties, but is 
more cumbersome to use, at least from the CLI. Either extension can be 
used to create single-segment networks with ML2. Currently, ML2 network 
operations return only the providernet attributes 
(provider:network_type, provider:physical_network, and 
provider:segmentation_id) for single-segment networks, and only the 
multiprovidernet attribute (segments) for multi-segment networks. It 
could be argued that all attributes should be returned from all 
operations, with a provider:network_type value of 'multi-segment' 
returned when the network has multiple segments. A blueprint in the 
works for Juno that lets each ML2 type driver define whatever segment 
properties make sense for that type may lead to eventual deprecation of 
the providernet extension.
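
To make the difference concrete, here is roughly what the two request
bodies look like when creating a network through each extension (shown as
Python dicts for a POST to /v2.0/networks; the network names, physical
network names, and segmentation IDs are made-up examples):

    # Single-segment network via the older 'provider' extension.
    single_segment = {
        "network": {
            "name": "example-vlan-net",                # made-up name
            "provider:network_type": "vlan",
            "provider:physical_network": "physnet1",   # made-up physical network
            "provider:segmentation_id": 101,
        }
    }

    # Multi-segment network via the 'multi-provider' extension's 'segments'
    # attribute, e.g. a VLAN segment and a GRE segment that are bridged
    # administratively, as noted above.
    multi_segment = {
        "network": {
            "name": "example-multi-net",
            "segments": [
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet1",
                 "provider:segmentation_id": 101},
                {"provider:network_type": "gre",
                 "provider:segmentation_id": 2001},
            ],
        }
    }

As described above, ML2 currently returns the first form with the three
provider:* attributes and the second form with only the 'segments'
attribute.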


Hope this helps,

-Bob



Salvatore


[1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron] Multiprovider API documentation

2014-04-11 Thread Salvatore Orlando
On 11 April 2014 19:11, Robert Kukura kuk...@noironetworks.com wrote:


 Note that, although ML2 can manage binding to multi-segment networks,
 neutron does not manage bridging between the segments of a multi-segment
 network. This is assumed to be done administratively.


Thanks Bob. I think the above paragraph is the answer I was looking for.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Multiprovider API documentation

2014-04-10 Thread Salvatore Orlando
The bug for documenting the 'multi-provider' API extension is still open
[1].
The bug report has a good deal of information, but perhaps it might be
worth also documenting how ML2 uses the segment information, as this might
be useful to understand when one should use the 'provider' extension and
when the 'multi-provider' extension would be a better fit.

Unfortunately I do not understand enough how ML2 handles multi-segment
networks, so I hope somebody from the ML2 team can chime in.

Salvatore


[1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev