Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-03 Thread Eichberger, German
+1 – Good discussion in this thread.

We once had the plan to go with Gantt (https://wiki.openstack.org/wiki/Gantt) 
rather than re-invent that wheel but… in any case we have a simple framework to 
start experimenting ;-)

German

From: Doug Wiegley
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, February 2, 2016 at 7:01 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends

The lbaas use case was something like having one flavor with hardware SSL 
offload and one that doesn’t, e.g. you can easily have multiple backends that 
can do both (in fact, you might even want to let the lower flavor provision 
onto the higher, if you have spare capacity on one and not the other). And the 
initial “scheduler” in such cases was supposed to be a simple round robin or 
hash, to be revisited later, including the inevitable rescheduling problem, or 
oversubscription issue. It quickly becomes the same hairy wart that nova has 
to deal with, and all are valid use cases.

doug


On Feb 2, 2016, at 6:43 PM, Kevin Benton wrote:


So flavors are for routers with different behaviors that you want the user to 
be able to choose from (e.g. High performance, slow but free, packet logged, 
etc). Multiple drivers are for when you have multiple backends providing the 
same flavor (e.g. The high performance flavor has several drivers for various 
bare metal routers).

On Feb 2, 2016 18:22, "rzang" wrote:
What advantage can we get from putting multiple drivers into one flavor over 
strictly limiting it to one flavor, one driver (or whatever it is called)?

Thanks,
Rui

-- Original --
From:  "Kevin Benton";>;
Send time: Wednesday, Feb 3, 2016 8:55 AM
To: "OpenStack Development Mailing List (not for usage 
questions)">;
Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends


Choosing from multiple drivers for the same flavor is scheduling. I didn't mean 
automatically selecting other flavors.

On Feb 2, 2016 17:53, "Eichberger, German" wrote:
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…

German

From: Doug Wiegley
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, February 1, 2016 at 8:17 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends

Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.

doug


On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:


Hi all,

I've been working on an implementation of the multiple L3 backends RFE[1] using 
the flavor framework and I've run into some snags with the use-cases.[2]

The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.

This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works fine.

Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread Kevin Benton
So flavors are for routers with different behaviors that you want the user
to be able to choose from (e.g. High performance, slow but free, packet
logged, etc). Multiple drivers are for when you have multiple backends
providing the same flavor (e.g. The high performance flavor has several
drivers for various bare metal routers).
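
The flavor/driver split described above can be sketched roughly as follows. This is an illustrative sketch only; `FLAVOR_DRIVERS` and `drivers_for_flavor` are hypothetical names, not the actual neutron flavor-framework API:

```python
# Sketch of the relationship Kevin describes: a flavor is a user-visible
# behavior class, and each flavor may be backed by several interchangeable
# driver backends. All names here are illustrative.

FLAVOR_DRIVERS = {
    # one user-visible flavor, several equivalent backends
    "high-performance": ["vendor_a_baremetal", "vendor_b_baremetal"],
    "slow-but-free": ["reference_l3_agent"],
    "packet-logged": ["logging_driver"],
}


def drivers_for_flavor(flavor):
    """Return the candidate backends that provide the requested flavor."""
    try:
        return FLAVOR_DRIVERS[flavor]
    except KeyError:
        raise ValueError("unknown flavor: %s" % flavor)
```

Choosing among the drivers returned for one flavor is the "scheduling" step discussed later in the thread.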
On Feb 2, 2016 18:22, "rzang"  wrote:

> What advantage can we get from putting multiple drivers into one flavor
> over strictly limiting it to one flavor, one driver (or whatever it is called)?
>
> Thanks,
> Rui
>
> -- Original --
> *From: * "Kevin Benton";;
> *Send time:* Wednesday, Feb 3, 2016 8:55 AM
> *To:* "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [neutron] - L3 flavors and issues with
> usecases for multiple L3 backends
>
> Choosing from multiple drivers for the same flavor is scheduling. I didn't
> mean automatically selecting other flavors.
On Feb 2, 2016 17:53, "Eichberger, German" wrote:
>
>> Not that you could call it scheduling. The intent was that the user could
>> pick the best flavor for his task (e.g. a gold router as opposed to a
>> silver one). The system then would “schedule” the driver configured for
>> gold or silver. Rescheduling wasn’t really a consideration…
>>
>> German
>>
>> From: Doug Wiegley <doug...@parksidesoftware.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Date: Monday, February 1, 2016 at 8:17 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
>> cases for multiple L3 backends
>>
>> Yes, scheduling was a big gnarly wart that was punted for the first pass.
>> The intention was that any driver you put in a single flavor had equivalent
>> capabilities/plumbed to the same networks/etc.
>>
>> doug
>>
>>
>> On Feb 1, 2016, at 7:08 AM, Kevin Benton <blak...@gmail.com> wrote:
>>
>>
>> Hi all,
>>
>> I've been working on an implementation of the multiple L3 backends RFE[1]
>> using the flavor framework and I've run into some snags with the
>> use-cases.[2]
>>
>> The first use cases are relatively straightforward where the user
>> requests a specific flavor and that request gets dispatched to a driver
>> associated with that flavor via a service profile. However, several of the
>> use-cases are based around the idea that there is a single flavor with
>> multiple drivers and a specific driver will need to be used depending on
>> the placement of the router interfaces. i.e. a router cannot be bound to a
>> driver until an interface is attached.
>>
>> This creates some painful coordination problems amongst drivers. For
>> example, say the first two networks that a user attaches a router to can be
>> reached by all drivers because they use overlays so the first driver chosen
>> by the framework works  fine. Then the user connects to an external network
>> which is only reachable by a different driver. Do we immediately reschedule
>> the entire router at that point to the other driver and interrupt the
>> traffic between the first two networks?
>>
>> Even if we are fine with a traffic interruption for rescheduling, what
>> should we do when a failure occurs half way through switching over because
>> the new driver fails to attach to one of the networks (or the old driver
>> fails to detach from one)? It would seem the correct API experience would
>> be switch everything back and then return a failure to the caller trying to
>> add an interface. This is where things get messy.
>>
>> If there is a failure during the switch back, we now have a single
>> router's resources smeared across two drivers. We can drop the router into
>> the ERROR state and re-attempt the switch in a periodic task, or maybe just
>> leave it broken.
>>
>> How should we handle this much orchestration? Should we pull in something
>> like taskflow, or maybe defer that use case for now?
>>
>> What I want to avoid is what happened with ML2 where error handling is
>> still a TODO in several cases. (e.g. Any post-commit update or delete
>> failures in mechanism drivers will not trigger a revert in state.)
>>
>> 1. https://bugs.launchpad.net/neutron/+bug/1461133
>> 2. https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
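
The switch-back failure mode discussed in the quoted message above boils down to all-or-nothing attach with best-effort revert. A minimal plain-Python sketch of those semantics (the `driver.attach`/`driver.detach` API is hypothetical, and this is not the actual taskflow integration being considered):

```python
class AttachError(Exception):
    """Raised when an attach fails and the partial work was reverted."""


def attach_all(driver, networks):
    """Attach a router to each network, reverting completed attaches
    if any one of them fails. Illustrative only."""
    done = []
    try:
        for net in networks:
            driver.attach(net)
            done.append(net)
    except Exception as exc:
        # Best-effort revert. If a detach fails here too, the router's
        # resources are "smeared" across backends -- the state the
        # thread suggests surfacing as ERROR and retrying periodically.
        for net_done in reversed(done):
            try:
                driver.detach(net_done)
            except Exception:
                pass
        raise AttachError("attach failed on %s: %s" % (net, exc))
```

This only covers a single driver; coordinating a full move between two drivers (detach from old, attach to new, revert both on failure) is the harder orchestration problem the thread leaves open.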

Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread rzang
What advantage can we get from putting multiple drivers into one flavor over 
strictly limiting it to one flavor, one driver (or whatever it is called)?


Thanks,
Rui


-- Original --
From:  "Kevin Benton";;
Send time: Wednesday, Feb 3, 2016 8:55 AM
To: "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends




Choosing from multiple drivers for the same flavor is scheduling. I didn't mean 
automatically selecting other flavors. 
 On Feb 2, 2016 17:53, "Eichberger, German"  wrote:
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…
 
 German
 
 From: Doug Wiegley
 Reply-To: "OpenStack Development Mailing List (not for usage questions)"
 Date: Monday, February 1, 2016 at 8:17 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends
 
 Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.
 
 doug
 
 
 On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
 
 
 Hi all,
 
 I've been working on an implementation of the multiple L3 backends RFE[1] 
using the flavor framework and I've run into some snags with the use-cases.[2]
 
 The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.
 
 This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works  fine. Then the user connects to an external network which is 
only reachable by a different driver. Do we immediately reschedule the entire 
router at that point to the other driver and interrupt the traffic between the 
first two networks?
 
 Even if we are fine with a traffic interruption for rescheduling, what should 
we do when a failure occurs half way through switching over because the new 
driver fails to attach to one of the networks (or the old driver fails to 
detach from one)? It would seem the correct API experience would be switch 
everything back and then return a failure to the caller trying to add an 
interface. This is where things get messy.
 
 If there is a failure during the switch back, we now have a single router's 
resources smeared across two drivers. We can drop the router into the ERROR 
state and re-attempt the switch in a periodic task, or maybe just leave it 
broken.
 
 How should we handle this much orchestration? Should we pull in something like 
taskflow, or maybe defer that use case for now?
 
 What I want to avoid is what happened with ML2 where error handling is still a 
TODO in several cases. (e.g. Any post-commit update or delete failures in 
mechanism drivers will not trigger a revert in state.)
 
 1. https://bugs.launchpad.net/neutron/+bug/1461133
 2. 
https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
 
 --
 Kevin Benton
 

Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread Doug Wiegley
The lbaas use case was something like having one flavor with hardware SSL 
offload and one that doesn’t, e.g. you can easily have multiple backends that 
can do both (in fact, you might even want to let the lower flavor provision 
onto the higher, if you have spare capacity on one and not the other). And the 
initial “scheduler” in such cases was supposed to be a simple round robin or 
hash, to be revisited later, including the inevitable rescheduling problem, or 
oversubscription issue. It quickly becomes the same hairy wart that nova has 
to deal with, and all are valid use cases.
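
The simple round robin or hash choice described above might look like the following sketch. `SimpleScheduler` and its methods are hypothetical names, not actual lbaas or neutron code:

```python
import itertools
import zlib


class SimpleScheduler:
    """Illustrative round-robin / hash choice among the equivalent
    drivers configured for a single flavor."""

    def __init__(self, drivers):
        self._drivers = list(drivers)
        self._rr = itertools.cycle(self._drivers)

    def round_robin(self):
        # successive calls walk through the drivers in order
        return next(self._rr)

    def hashed(self, resource_id):
        # stable mapping: the same resource always lands on the
        # same driver, which sidesteps (but does not solve) the
        # rescheduling problem mentioned above
        idx = zlib.crc32(resource_id.encode()) % len(self._drivers)
        return self._drivers[idx]
```

Neither strategy accounts for capacity, which is where the oversubscription and rescheduling warts come back in.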

doug


> On Feb 2, 2016, at 6:43 PM, Kevin Benton  wrote:
> 
> So flavors are for routers with different behaviors that you want the user to 
> be able to choose from (e.g. High performance, slow but free, packet logged, 
> etc). Multiple drivers are for when you have multiple backends providing the 
> same flavor (e.g. The high performance flavor has several drivers for various 
> bare metal routers).
> 
> On Feb 2, 2016 18:22, "rzang"  > wrote:
> What advantage can we get from putting multiple drivers into one flavor over 
> strictly limiting it to one flavor, one driver (or whatever it is called)?
> 
> Thanks,
> Rui
> 
> -- Original --
> From:  "Kevin Benton";>;
> Send time: Wednesday, Feb 3, 2016 8:55 AM
> To: "OpenStack Development Mailing List (not for usage 
> questions)" >;
> Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
> for multiple L3 backends
> 
> Choosing from multiple drivers for the same flavor is scheduling. I didn't 
> mean automatically selecting other flavors.
> 
> On Feb 2, 2016 17:53, "Eichberger, German" wrote:
> Not that you could call it scheduling. The intent was that the user could 
> pick the best flavor for his task (e.g. a gold router as opposed to a silver 
> one). The system then would “schedule” the driver configured for gold or 
> silver. Rescheduling wasn’t really a consideration…
> 
> German
> 
> From: Doug Wiegley
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Monday, February 1, 2016 at 8:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
> for multiple L3 backends
> 
> Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
> intention was that any driver you put in a single flavor had equivalent 
> capabilities/plumbed to the same networks/etc.
> 
> doug
> 
> 
> On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
> 
> 
> Hi all,
> 
> I've been working on an implementation of the multiple L3 backends RFE[1] 
> using the flavor framework and I've run into some snags with the use-cases.[2]
> 
> The first use cases are relatively straightforward where the user requests a 
> specific flavor and that request gets dispatched to a driver associated with 
> that flavor via a service profile. However, several of the use-cases are 
> based around the idea that there is a single flavor with multiple drivers and 
> a specific driver will need to be used depending on the placement of the 
> router interfaces. i.e. a router cannot be bound to a driver until an 
> interface is attached.
> 
> This creates some painful coordination problems amongst drivers. For example, 
> say the first two networks that a user attaches a router to can be reached by 
> all drivers because they use overlays so the first driver chosen by the 
> framework works  fine. Then the user connects to an external network which is 
> only reachable by a different driver. Do we immediately reschedule the entire 
> router at that point to the other driver and interrupt the traffic between 
> the first two networks?
> 
> Even if we are fine with a traffic interruption for rescheduling, what should 
> we do when a failure occurs half way through switching over because the new 
> driver fails to attach to one of the networks (or the old driver fails to 
> detach from one)? It would seem the correct API experience would be switch 
> everything back and then return a failure to