Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-03-05 Thread Samuel Bercovici
Hi,

In 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?usp=sharing
 referenced by the Wiki, I have added the section that addresses the items raised 
on the last IRC meeting.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Wednesday, February 26, 2014 7:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici; Eugene Nikanorov (enikano...@mirantis.com); Evgeny 
Fedoruk; Avishay Balderman
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip <-> pool relationship also needs to 
become any-to-any.
Eugene has rightly pointed out that the current "state" management will not 
handle such a relationship well.
To me this means that the "state" management is broken, not the model.
I will propose an update to the state management in the next few days.
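
As a rough illustration of what "any to any" could mean at the schema level, 
here is a minimal, hypothetical SQLAlchemy-style sketch; the table and column 
names are illustrative only, not the actual Neutron schema:

from sqlalchemy import Column, ForeignKey, MetaData, String, Table

metadata = MetaData()

# Hypothetical, purely logical entities: no backend or device references.
vips = Table('vips', metadata,
             Column('id', String(36), primary_key=True))
pools = Table('pools', metadata,
              Column('id', String(36), primary_key=True))

# The association table is what turns the current single vip->pool
# reference into an any-to-any (m:n) relationship.
vip_pool_associations = Table(
    'vip_pool_associations', metadata,
    Column('vip_id', String(36), ForeignKey('vips.id'), primary_key=True),
    Column('pool_id', String(36), ForeignKey('pools.id'), primary_key=True))

Presumably the per-association rows are also where per-relationship status 
would have to live, which is the state-management question mentioned above.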

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes <jaypi...@gmail.com> wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of 
implementation details.  One of the concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-28 Thread Stephen Balukoff
Hi folks!

Just one other thing I'd like to bring up here as well:


On Thu, Feb 27, 2014 at 4:00 AM, Eugene Nikanorov wrote:

>> I see IP address sharing as user intent, not an implementation detail.
>> The same backend may not be the only obstacle here.
>>
>> The backend is not exposed anyhow by the API, by the way.
>>
>> When you create a root object with a flavor, you really can't control
>> which driver it will be scheduled to.
>>
>> So even if there is a driver that somehow (how?) will allow the same IP on
>> different backends, the user just will not be able to create 2 vips that
>> share an IP address.
>>
>> Eugene, is your point that the logical model addresses the capability for
>> IP sharing but that it can't be scheduled correctly?
>
> That's one of the concerns, correct.
I also want to point out a practical limitation: in no IP network that I'm
aware of can two different devices share the same IP on the same layer-2
network and have this work. (I understand that two neutron ports connected to
the same neutron network or subnet are effectively on the same layer-2
network.)  I know that an active-standby topology can work here, but in this
case we're talking about two different VIPs sharing the same IP, not on the
same device, and both being active at the same time.  But... I've been wrong
before, and I just might not be aware of a technology which makes this work:
Do any of y'all know of any technology here which makes this feasible?

If not, then y'all must concede that this is one technological limitation
which is going to make it necessary for the user to actually specify
somehow that services collocated on the same IP must be collocated on the
same back-end (if a layer-2 topology is used).

It is possible to have two devices share the same IP in a layer-3 network
topology, but then there needs to also be some kind of logic there to
determine how packets get routed to each device (and this can break
stateful protocols like TCP if you're not careful)--  but again, this would
be "routed mode" load balancing, which I understand is not yet feasible
with Neutron LBaaS, correct?

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Eugene Nikanorov
>
> I see IP address sharing as user intent, not an implementation detail.
> The same backend may not be the only obstacle here.
>
> The backend is not exposed anyhow by the API, by the way.
>
> When you create a root object with a flavor, you really can't control
> which driver it will be scheduled to.
>
> So even if there is a driver that somehow (how?) will allow the same IP on
> different backends, the user just will not be able to create 2 vips that
> share an IP address.
>
>
>
> Eugene, is your point that the logical model addresses the capability for
> IP sharing but that it can't be scheduled correctly?
>
That's one of the concerns, correct.

> That is just not so simple. If you create a vip and a pool, one way or
> another it is a ready configuration that needs to be deployed, so the driver
> chooses the backend. Then you need to add objects to this configuration by,
> say, adding a vip with the same IP on a different port.
>
> I don't understand the issue described here.
>
Again, it's about working with the proper provider when creating/updating the
resource.
The user has no control over it, other than referencing the provider in an
indirect way, say by working with an object that is attached to the root object.


>
> Currently there is no way you can specify this through the API.
>
> You can specify the same IP address and another TCP port, but that call will
> just fail.
>
> Correct; as I have described, the current implementation allocates a
> neutron port on the first VIP, hence the second VIP will fail.
>
> This is an implementation detail; we can discuss how to address it. In the
> logical model I have removed the reference to the neutron port and noted
> this for further discussion.
>
Well, it may be an implementation detail, or it may be a part of the logical
model. A port is a logical abstraction; I don't see why it should necessarily
be a detail here. Anyway, I'd like to see suggestions on how to address that.
So far, all the ways to address it introduce these 'impl details' that
we're trying to get rid of.

> The API will not let the user control drivers; that's one of the reasons why
> it's not possible from a design standpoint.
>
> I do not see how this relates to controlling drivers. It is the driver
> implementation; the user should not need to control it.
>
That was about scheduling. The user will not have control over what backend
technology a newly created resource will use: neither the provider/driver,
nor a particular physical backend.


>
> Youcef, can we chat over IRC? I think we could clarify a lot more than over
> the ML.
>
>
>
> Thanks,
>
> Eugene.
>
>
>


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Samuel Bercovici


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Thursday, February 27, 2014 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


The point is to be able to share an IP address; it really means that two VIPs 
(as we understand them in the current model) need to reside within the same 
backend (technically, they need to share a neutron port).

Aren't we leaking some implementation detail here?
I see IP address sharing as user intent, not an implementation detail. The 
same backend may not be the only obstacle here.
The backend is not exposed anyhow by the API, by the way.
When you create a root object with a flavor, you really can't control which 
driver it will be scheduled to.
So even if there is a driver that somehow (how?) will allow the same IP on 
different backends, the user just will not be able to create 2 vips that share 
an IP address.

Eugene, is your point that the logical model addresses the capability for IP 
sharing but that it can't be scheduled correctly?


I'm not against introducing a wrapper entity that correlates the different 
config objects that logically make up one LB config, but I don't think it is 
needed from the logical object model POV, IMO. Yes, it might make the 
implementation of the object model for some drivers easier, and I'm OK with 
having it, if it helps. But strictly speaking it is not needed, because a 
driver doesn't have to choose a backend when the pool is created or when a vip 
is created, if it doesn't have enough info yet.
That is just not so simple. If you create a vip and a pool, one way or another 
it is a ready configuration that needs to be deployed, so the driver chooses 
the backend. Then you need to add objects to this configuration by, say, 
adding a vip with the same IP on a different port.
I don't understand the issue described here.

Currently there is no way you can specify this through the API.
You can specify the same IP address and another TCP port, but that call will 
just fail.
Correct; as I have described, the current implementation allocates a 
neutron port on the first VIP, hence the second VIP will fail.
This is an implementation detail; we can discuss how to address it. In the 
logical model I have removed the reference to the neutron port and noted this 
for further discussion.

E.g., we'll have a subtle limitation in the API instead of consistency.

It can wait until a vip/pool are created and attached to each other; then it 
would have a clearer idea of the backends eligible to host that whole LB 
configuration. Another driver, though, might be able to perform the 
configuration on its "backend" straight away on each API call, and still be 
able to comply with the object model.
The API will not let the user control drivers; that's one of the reasons why 
it's not possible from a design standpoint.
I do not see how this relates to controlling drivers. It is the driver 
implementation; the user should not need to control it.

Youcef, can we chat over IRC? I think we could clarify a lot more than over
the ML.

Thanks,
Eugene.


Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Hi Eugene,

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

Why do we need a new 'listener' concept? Since, as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

2) ProviderResourceAssociation remains on the instance object (our instance 
object is the VIP) as a relation attribute.
It is removed from the public API, though, so it cannot be specified on creation.
Remember that the provider is needed for REST call dispatching. The value of the 
provider attribute (e.g. ProviderResourceAssociation) is the result of scheduling.

3) As we discussed before, the pool->vip relation will be removed, but pool 
reuse by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Eugene Nikanorov
>
>
>
> The point is to be able to share an IP address; it really means that two
> VIPs (as we understand them in the current model) need to reside within the
> same backend (technically, they need to share a neutron port).
>
>
>
> Aren't we leaking some implementation detail here?
>
I see IP address sharing as user intent, not an implementation detail. The
same backend may not be the only obstacle here.
The backend is not exposed anyhow by the API, by the way.
When you create a root object with a flavor, you really can't control which
driver it will be scheduled to.
So even if there is a driver that somehow (how?) will allow the same IP on
different backends, the user just will not be able to create 2 vips that share
an IP address.


>
> I'm not against introducing a wrapper entity that correlates the different
> config objects that logically make up one LB config, but I don't think it
> is needed from the logical object model POV, IMO. Yes, it might make the
> implementation of the object model for some drivers easier, and I'm OK with
> having it, if it helps. But strictly speaking it is not needed, because a
> driver doesn't have to choose a backend when the pool is created or when a
> vip is created, if it doesn't have enough info yet.
>
That is just not so simple. If you create a vip and a pool, one way or another
it is a ready configuration that needs to be deployed, so the driver chooses
the backend. Then you need to add objects to this configuration by, say,
adding a vip with the same IP on a different port.
Currently there is no way you can specify this through the API.
You can specify the same IP address and another TCP port, but that call will
just fail.
E.g., we'll have a subtle limitation in the API instead of consistency.

> It can wait until a vip/pool are created and attached to each other; then
> it would have a clearer idea of the backends eligible to host that whole LB
> configuration. Another driver, though, might be able to perform the
> configuration on its "backend" straight away on each API call, and still be
> able to comply with the object model.
>
The API will not let the user control drivers; that's one of the reasons why
it's not possible from a design standpoint.

Youcef, can we chat over IRC? I think we could clarify a lot more than over
the ML.

Thanks,
Eugene.


>
> Youcef
>
>
>
> On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi wrote:
>
> Hi Eugene,
>
>
>
> 1) In order to allow a real multiple-'vips'-per-pool feature, we need the
> listener concept.
>
> It's not just a different TCP port but also a protocol, so session
> persistence and all SSL-related parameters should move to the listener.
>
>
>
> Why do we need a new 'listener' concept? Since, as Sam pointed out, we are
> removing the reference to a pool from the VIP in the current model, isn't
> this enough by itself to allow the model to support multiple VIPs per pool
> now?
>
>
>
> lb-pool-create   --> $POOL-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2
>
>
>
>
>
> Youcef
>
>
>
>
>
>
>
>
>
>
>
> *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
> *Sent:* Wednesday, February 26, 2014 1:26 PM
> *To:* Samuel Bercovici
> *Cc:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
> Hi Sam,
>
>
>
> I've looked over the document, couple of notes:
>
>
>
> 1) In order to allow a real multiple-'vips'-per-pool feature, we need the
> listener concept.
>
> It's not just a different TCP port but also a protocol, so session
> persistence and all SSL-related parameters should move to the listener.
>
>
>
> 2) ProviderResourceAssociation remains on the instance object (our
> instance object is the VIP) as a relation attribute.
>
> It is removed from the public API, though, so it cannot be specified on
> creation.
>
> Remember that the provider is needed for REST call dispatching. The value
> of the provider attribute (e.g. ProviderResourceAssociation) is the result
> of scheduling.
>
>
>
> 3) As we discussed before, the pool->vip relation will be removed, but pool
> reuse by different vips (e.g. different backends) will be forbidden for
> implementation simplicity, because this is definitely not a priority right
> now.
>
> I think it's a fair limitation that can be removed later.
>
>
>
> On workflows:
>
> WFs #2 and #3 are problematic. First off, sharing the same IP is not
> possible for another vip, for the following reason:
>

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Samuel Bercovici
+1

From: Youcef Laribi [mailto:youcef.lar...@citrix.com]
Sent: Thursday, February 27, 2014 10:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Eugene,

Thanks for the provided detail. See my comments below.

The point is to be able to share an IP address; it really means that two VIPs 
(as we understand them in the current model) need to reside within the same 
backend (technically, they need to share a neutron port).

Aren't we leaking some implementation detail here? Why is it that 2 VIPs using 
the same IP address have to be implemented on the same backend? Isn't this a 
driver/technology capability? If a certain driver *requires* that VIPs sharing 
the same IP address be on the same "backend" (whatever a "backend" means), it 
just needs to ensure that this is the case, but another driver might be able 
to support VIPs sharing the same IP on different backends. The user really 
shouldn't care. Did I miss some important detail? It feels like it, so please 
be patient with me :)

I'm sorry this all creates so much confusion.
In order to understand why we need an additional entity, you need to keep in 
mind the following things:
 1) We have a notion of a root object. From the user perspective it represents 
the logical instance; from the implementation perspective it also represents 
how that instance is mapped to a backend (agent, device), which 
flavor/provider/driver it has, etc.
 2) We're trying to change the vip-pool relationship to m:n; if the vip or 
pool remains the root object, that creates an inconsistency because a root 
object can be connected to another root object with different parameters.

I'm not against introducing a wrapper entity that correlates the different 
config objects that logically make up one LB config, but I don't think it is 
needed from the logical object model POV, IMO. Yes, it might make the 
implementation of the object model for some drivers easier, and I'm OK with 
having it, if it helps. But strictly speaking it is not needed, because a 
driver doesn't have to choose a backend when the pool is created or when a vip 
is created, if it doesn't have enough info yet. It can wait until a vip/pool 
are created and attached to each other; then it would have a clearer idea of 
the backends eligible to host that whole LB configuration. Another driver, 
though, might be able to perform the configuration on its "backend" 
straight away on each API call, and still be able to comply with the object 
model.

Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Hi Eugene,

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

Why do we need a new 'listener' concept? Since, as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

2) ProviderResourceAssociation remains on the instance object (our instance 
object is the VIP) as a relation attribute.
It is removed from the public API, though, so it cannot be specified on creation.
Remember that the provider is needed for REST call dispatching. The value of the 
provider attribute (e.g. ProviderResourceAssociation) is the result of scheduling.

3) As we discussed before, the pool->vip relation will be removed, but pool 
reuse by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not possible 
for another vip, for the following reason:
a vip is created (with the new model) with a flavor (or provider) and scheduled 
to a provider (and then to a particular backend); doing so for 2 vips makes 
address reuse impossible if we want to maintain a logical API, or otherwise we 
would need to expose implementation details that would allow us to connect two 
vips to the same backend.

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Youcef Laribi
Hi Eugene,

Thanks for the provided detail. See my comments below.

The point is to be able to share an IP address; it really means that two VIPs 
(as we understand them in the current model) need to reside within the same 
backend (technically, they need to share a neutron port).

Aren't we leaking some implementation detail here? Why is it that 2 VIPs using 
the same IP address have to be implemented on the same backend? Isn't this a 
driver/technology capability? If a certain driver *requires* that VIPs sharing 
the same IP address be on the same "backend" (whatever a "backend" means), it 
just needs to ensure that this is the case, but another driver might be able 
to support VIPs sharing the same IP on different backends. The user really 
shouldn't care. Did I miss some important detail? It feels like it, so please 
be patient with me :)

I'm sorry this all creates so much confusion.
In order to understand why we need an additional entity, you need to keep in 
mind the following things:
 1) We have a notion of a root object. From the user perspective it represents 
the logical instance; from the implementation perspective it also represents 
how that instance is mapped to a backend (agent, device), which 
flavor/provider/driver it has, etc.
 2) We're trying to change the vip-pool relationship to m:n; if the vip or 
pool remains the root object, that creates an inconsistency because a root 
object can be connected to another root object with different parameters.

I'm not against introducing a wrapper entity that correlates the different 
config objects that logically make up one LB config, but I don't think it is 
needed from the logical object model POV, IMO. Yes, it might make the 
implementation of the object model for some drivers easier, and I'm OK with 
having it, if it helps. But strictly speaking it is not needed, because a 
driver doesn't have to choose a backend when the pool is created or when a vip 
is created, if it doesn't have enough info yet. It can wait until a vip/pool 
are created and attached to each other; then it would have a clearer idea of 
the backends eligible to host that whole LB configuration. Another driver, 
though, might be able to perform the configuration on its "backend" 
straight away on each API call, and still be able to comply with the object 
model.

Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Hi Eugene,

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

Why do we need a new 'listener' concept? Since, as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

2) ProviderResourceAssociation remains on the instance object (our instance 
object is the VIP) as a relation attribute.
It is removed from the public API, though, so it cannot be specified on creation.
Remember that the provider is needed for REST call dispatching. The value of the 
provider attribute (e.g. ProviderResourceAssociation) is the result of scheduling.

3) As we discussed before, the pool->vip relation will be removed, but pool 
reuse by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not possible 
for another vip, for the following reason:
a vip is created (with the new model) with a flavor (or provider) and scheduled 
to a provider (and then to a particular backend); doing so for 2 vips makes 
address reuse impossible if we want to maintain a logical API, or otherwise we 
would need to expose implementation details that would allow us to connect two 
vips to the same backend.

On the open discussion questions:
I think most of them are resolved by following existing API expectations about 
status fields, etc.

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Eugene Nikanorov
Hi Youcef,

The point is to be able to share an IP address; it really means that two
VIPs (as we understand them in the current model) need to reside within the
same backend (technically, they need to share a neutron port).
We decided not to expose any 'colocation hint' (like loadbalancer_id) in
the API, so we really can't create two vips on one backend right now.

I'm sorry this all creates so much confusion.
In order to understand why we need an additional entity, you need to keep in
mind the following things:
 1) We have a notion of a root object. From the user perspective it represents
the logical instance; from the implementation perspective it also represents
how that instance is mapped to a backend (agent, device), which
flavor/provider/driver it has, etc.
 2) We're trying to change the vip-pool relationship to m:n; if the vip or
pool remains the root object, that creates an inconsistency because a root
object can be connected to another root object with different parameters.

To resolve issue #2 we can do basically two similar things:
- introduce another entity instead of a vip to make that m:n relationship
(a listener), or
- as was initially suggested, introduce an 'instance' entity to colocate
both vips.

Hope that helps.

Thanks,
Eugene.



On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi wrote:

>  Hi Eugene,
>
>
>
> 1) In order to allow a real multiple-'vips'-per-pool feature, we need the
> listener concept.
>
> It's not just a different TCP port but also a protocol, so session
> persistence and all SSL-related parameters should move to the listener.
>
>
>
> Why do we need a new 'listener' concept? Since, as Sam pointed out, we are
> removing the reference to a pool from the VIP in the current model, isn't
> this enough by itself to allow the model to support multiple VIPs per pool
> now?
>
>
>
> lb-pool-create   --> $POOL-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2
>
>
>
>
>
> Youcef
>
>
>
>
>
>
>
>
>
>
>
> *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
> *Sent:* Wednesday, February 26, 2014 1:26 PM
> *To:* Samuel Bercovici
> *Cc:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
> Hi Sam,
>
>
>
> I've looked over the document, couple of notes:
>
>
>
> 1) In order to allow a real multiple-'vips'-per-pool feature, we need the
> listener concept.
>
> It's not just a different TCP port but also a protocol, so session
> persistence and all SSL-related parameters should move to the listener.
>
>
>
> 2) ProviderResourceAssociation remains on the instance object (our
> instance object is the VIP) as a relation attribute.
>
> It is removed from the public API, though, so it cannot be specified on
> creation.
>
> Remember that the provider is needed for REST call dispatching. The value
> of the provider attribute (e.g. ProviderResourceAssociation) is the result
> of scheduling.
>
>
>
> 3) As we discussed before, the pool->vip relation will be removed, but pool
> reuse by different vips (e.g. different backends) will be forbidden for
> implementation simplicity, because this is definitely not a priority right
> now.
>
> I think it's a fair limitation that can be removed later.
>
>
>
> On workflows:
>
> WFs #2 and #3 are problematic. First off, sharing the same IP is not
> possible for another vip, for the following reason:
>
> a vip is created (with the new model) with a flavor (or provider) and
> scheduled to a provider (and then to a particular backend); doing so for 2
> vips makes address reuse impossible if we want to maintain a logical API, or
> otherwise we would need to expose implementation details that would allow us
> to connect two vips to the same backend.
>
>
>
> On the open discussion questions:
>
> I think most of them are resolved by following existing API expectations
> about status fields, etc.
>
> The main thing that allows us to go with existing API expectations is the
> notion of a 'root object'.
>
> The root object is the object whose status and admin_state show the real
> operability of the configuration, while from the implementation perspective
> it is a mounting point between the logical config and the backend.
>
>
>
> The real challenge of model #3 is the ability to share pools between
> different VIPs, e.g. between different flavors/providers/backends.
>
> The user may be unaware of it, but it requires really complex logic to
> handle statistics, healthchecks, etc.
>
> I think while we may leave this ability at the object model and API level,
> we will limit it, as I said previously.

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Youcef Laribi
Hi Eugene,

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

Why do we need a new 'listener' concept? Since, as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-'vips'-per-pool feature, we need the 
listener concept.
It's not just a different TCP port but also a protocol, so session persistence 
and all SSL-related parameters should move to the listener.

2) ProviderResourceAssociation remains on the instance object (our instance 
object is the VIP) as a relation attribute.
It is removed from the public API, though, so it cannot be specified on creation.
Remember that the provider is needed for REST call dispatching. The value of the 
provider attribute (e.g. ProviderResourceAssociation) is the result of scheduling.

3) As we discussed before, the pool->vip relation will be removed, but pool 
reuse by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not possible 
for another vip, for the following reason:
a vip is created (with the new model) with a flavor (or provider) and scheduled 
to a provider (and then to a particular backend); doing so for 2 vips makes 
address reuse impossible if we want to maintain a logical API, or otherwise we 
would need to expose implementation details that would allow us to connect two 
vips to the same backend.

On the open discussion questions:
I think most of them are resolved by following existing API expectations about 
status fields, etc.
The main thing that allows us to go with existing API expectations is the 
notion of a 'root object'.
The root object is the object whose status and admin_state show the real 
operability of the configuration, while from the implementation perspective it 
is a mounting point between the logical config and the backend.

The real challenge of model #3 is the ability to share pools between different 
VIPs, e.g. between different flavors/providers/backends.
The user may be unaware of it, but it requires really complex logic to handle 
statistics, healthchecks, etc.
I think while we may leave this ability at the object model and API level, we 
will limit it, as I said previously.

Thanks,
Eugene.


On Wed, Feb 26, 2014 at 9:06 PM, Samuel Bercovici <samu...@radware.com> wrote:
Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM

To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip <-> pool relationship also needs to 
become any-to-any.
Eugene has rightly pointed out that the current "state" management will not 
handle such a relationship well.
To me this means that the "state" management is broken, not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes <jaypi...@gmail.com> wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Eugene Nikanorov
Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-'vips'-per-pool feature, we need the
listener concept.
It's not just a different TCP port but also a protocol, so session
persistence and all SSL-related parameters should move to the listener.

2) ProviderResourceAssociation remains on the instance object (our
instance object is the VIP) as a relation attribute.
It is removed from the public API, though, so it cannot be specified on
creation.
Remember that the provider is needed for REST call dispatching. The value of
the provider attribute (e.g. ProviderResourceAssociation) is the result of
scheduling.

3) As we discussed before, the pool->vip relation will be removed, but pool
reuse by different vips (e.g. different backends) will be forbidden for
implementation simplicity, because this is definitely not a priority right
now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not
possible for another vip, for the following reason:
a vip is created (with the new model) with a flavor (or provider) and
scheduled to a provider (and then to a particular backend); doing so for 2
vips makes address reuse impossible if we want to maintain a logical API, or
otherwise we would need to expose implementation details that would allow us
to connect two vips to the same backend.

On the open discussion questions:
I think most of them are resolved by following existing API expectations
about status fields, etc.
The main thing that allows us to go with existing API expectations is the
notion of a 'root object'.
The root object is the object whose status and admin_state show the real
operability of the configuration, while from the implementation perspective
it is a mounting point between the logical config and the backend.

The real challenge of model #3 is the ability to share pools between different
VIPs, e.g. between different flavors/providers/backends.
The user may be unaware of it, but it requires really complex logic to handle
statistics, healthchecks, etc.
I think while we may leave this ability at the object model and API level, we
will limit it, as I said previously.

Thanks,
Eugene.



On Wed, Feb 26, 2014 at 9:06 PM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> I have added to the wiki page:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
> that points to a document that includes the current model + L7 + SSL.
>
> Please review.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
> *From:* Samuel Bercovici
> *Sent:* Monday, February 24, 2014 7:36 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Samuel Bercovici
> *Subject:* RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
> Hi,
>
>
>
> I also agree that the model should be purely logical.
>
> I think that the existing model is almost correct, but the pool should be
> made purely logical. This means that the vip <-> pool relationship also
> needs to become any-to-any.
>
> Eugene has rightly pointed out that the current "state" management will
> not handle such a relationship well.
>
> To me this means that the "state" management is broken, not the model.
>
> I will propose an update to the state management in the next few days.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
>
>
>
>
> *From:* Mark McClain [mailto:mmccl...@yahoo-inc.com]
>
> *Sent:* Monday, February 24, 2014 6:32 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
>
>
> On Feb 21, 2014, at 1:29 PM, Jay Pipes  wrote:
>
>
>
> I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
>
>
> I agree with Jay.  The API needs to be user-centric and free of
> implementation details.  One of the concerns I've voiced in some of the IRC
> discussions is that too many implementation details are exposed to the user.
>
>
>
> mark
>


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Samuel Bercovici
Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip <-> pool relationship also needs to 
become any-to-any.
Eugene has rightly pointed out that the current "state" management will not 
handle such a relationship well.
To me this means that the "state" management is broken, not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes <jaypi...@gmail.com> wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of 
implementation details.  One of the concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Jay Pipes
On Wed, 2014-02-26 at 16:11 +0400, Eugene Nikanorov wrote:

> On Wed, Feb 26, 2014 at 12:24 AM, Jay Pipes wrote:

> neutron l7-policy-create --type="uri-regex-matching" \
>  --attr=URIRegex="static\.example\.com.*"
>
> Presume the above returns an ID for the policy $L7_POLICY_ID. We could then
> assign that policy to operate on the front-end of the load balancer and
> spread load to the nginx nodes by doing:
>
> neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
>  --subnet-cidr=192.168.1.0/24
>
> We could then indicate to the balancer that all other traffic should be
> sent to only the Apache nodes:
>
> neutron l7-policy-create --type="uri-regex-matching" \
>  --attr=URIRegex="static\.example\.com.*" \
>  --attr="RegexMatchReverse=true"
>
> neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
>  --subnet-cidr=192.168.2.0/24
>
> That's cheating! :)

:)

> Once you have both static and webapp servers on one subnet, you'll have to
> introduce the notion of 'node groups', e.g. pools, and somehow refer to
> them within a single $BALANCER_ID.

Agreed. In fact, I had a hangout with Stephen yesterday evening to chat
about just this thing.

I admit that the notion of a named pool of instances would be necessary
in these cases.

That said, what it all boils down to is generating a list of backend IP
addresses. Whether we use a subnet_cidr or a named pool ID, all that is
happening is allowing the user to specify a group of nodes together.

So, I'd love it if both options were possible (i.e. allow subnet_id,
subnet_cidr, pool_id and pool_name when specifying groups of nodes with
balancer-apply-policy), as sketched below.
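
For illustration, something like the following hypothetical calls (the flag
names are made up, mirroring the earlier examples) would all just be different
ways of naming the same group of backend nodes:

# Hypothetical: four interchangeable node-group selectors
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --subnet-id=$SUBNET_ID
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --subnet-cidr=192.168.1.0/24
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --pool-id=$POOL_ID
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --pool-name=static-servers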
> 
> I think notions from the world of load balancing are unavoidable in the
> API and we should not try to get rid of them.
>  
> The biggest advantage to this proposed API and CLI is that we are not
> introducing any terminology into the Neutron LBaaS API that is not
> necessary when existing terms in the main Neutron API already exist to
> describe such things.
> But is there much point in this? We're introducing quite a lot even
> within this proposal: loadbalancer, l7-policy, healthchecks, etc.

Fair point. Was just brainstorming :)
> 
> You will note that I do not use the term "pool"
> above, since the concept of a subnet (and its associated CIDR) are
> already well-established objects in the Neutron API and can serve the
> exact same purpose for Neutron LBaaS API.
> The subnet is just not flexible enough. Not to say that some
> implementations may not support having nodes on different subnets,
> while still supporting L7 rules.

Agreed. Would just like it to be an option instead of forcing the user
to create a pool if they don't need to (i.e. the subnet would work just
fine...)

> > As far as hiding implementation details from the user: To a certain
> > degree I agree with this, and to a certain degree I do not: OpenStack
> > is a cloud OS fulfilling the needs of supplying IaaS. It is not a
> > PaaS. As such, the objects that users deal with largely are analogous
> > to physical pieces of hardware that make up a cluster, albeit these
> > are virtualized or conceptualized. Users can then use these conceptual
> > components of a cluster to build the (virtual) infrastructure they
> > need to support whatever application they want. These objects have
> > attributes and are expected to act in a certain way, which again, are
> > usually analogous to actual hardware.
> 
> 
> I disagree. A cloud API should strive to shield users of the cloud from
> having to understand underlying hardware APIs or object models.
>  
> I think Stephen's suggestion is not about underlying hardware API, but
> about the set of building blocks.
> Across all services, Libra/Atlas, ELB, LBaaS those blocks are the same
> no matter how we name them.

Sure, understood. Just trying to brainstorm a bit on how to keep
flexibility in the LBaaS API while also simplifying it as much as
possible.

Best,
-jay





Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Eugene Nikanorov
A couple of notes:


On Wed, Feb 26, 2014 at 12:24 AM, Jay Pipes  wrote:

>
>
> neutron l7-policy-create --type="uri-regex-matching" \
>  --attr=URIRegex="static\.example\.com.*"
>
> Presume the above returns an ID for the policy $L7_POLICY_ID. We could then
> assign that policy to operate on the front-end of the load balancer and
> spread load to the nginx nodes by doing:
>
> neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
>  --subnet-cidr=192.168.1.0/24
>
> We could then indicate to the balancer that all other traffic should be
> sent to only the Apache nodes:
>
> neutron l7-policy-create --type="uri-regex-matching" \
>  --attr=URIRegex="static\.example\.com.*" \
>  --attr="RegexMatchReverse=true"
>
> neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
>  --subnet-cidr=192.168.2.0/24

That's cheating! :)
Once you have both static and webapp servers on one subnet, you'll have to
introduce the notion of 'node groups',
e.g. pools, and somehow refer to them within a single $BALANCER_ID.

I think notions from the world of load balancing are unavoidable in the API
and we should not try to get rid of them.


> The biggest advantage to this proposed API and CLI is that we are not
> introducing any terminology into the Neutron LBaaS API that is not
> necessary when existing terms in the main Neutron API already exist to
> describe such things.

But is there much point in this? We're introducing quite a lot even within
this proposal: loadbalancer, l7-policy, healthchecks, etc.

> You will note that I do not use the term "pool"
> above, since the concept of a subnet (and its associated CIDR) are
> already well-established objects in the Neutron API and can serve the
> exact same purpose for Neutron LBaaS API.
>
The subnet is just not flexible enough. Not to say that some
implementations may not support having nodes on different subnets, while
still supporting L7 rules.


>
> > As far as hiding implementation details from the user:  To a certain
> > degree I agree with this, and to a certain degree I do not: OpenStack
> > is a cloud OS fulfilling the needs of supplying IaaS. It is not a
> > PaaS. As such, the objects that users deal with largely are analogous
> > to physical pieces of hardware that make up a cluster, albeit these
> > are virtualized or conceptualized. Users can then use these conceptual
> > components of a cluster to build the (virtual) infrastructure they
> > need to support whatever application they want. These objects have
> > attributes and are expected to act in a certain way, which again, are
> > usually analogous to actual hardware.
>
> I disagree. A cloud API should strive to shield users of the cloud from
> having to understand underlying hardware APIs or object models.
>

I think Stephen's suggestion is not about underlying hardware API, but
about the set of building blocks.
Across all services, Libra/Atlas, ELB, LBaaS those blocks are the same no
matter how we name them.

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Ed,

That sounds good to me, actually:  As long as 'cloud admin' API functions
are represented as well as 'simple user workflows', then I'm all for a
unified API that simply exposes more depending on permissions.
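
For illustration, with the policy.json mechanism Neutron already uses, that
kind of role-based exposure might look something like the following (the rule
names here are purely hypothetical):

{
    "lbaas:create_vip": "rule:admin_or_owner",
    "lbaas:update_vip": "rule:admin_or_owner",
    "lbaas:get_vip:provider_details": "role:admin",
    "lbaas:migrate_vip_backend": "role:admin"
}

A single endpoint would then serve both audiences, with the policy engine
deciding which operations and attributes a given role can touch.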

Stephen


On Tue, Feb 25, 2014 at 12:15 PM, Ed Hall  wrote:

>
>  On Feb 25, 2014, at 10:10 AM, Stephen Balukoff wrote:
>
> On Feb 25, 2014 at 3:39 AM, enikano...@mirantis.com wrote:
>
>> Agreed; however, actual hardware is beyond the logical LBaaS API but could
>> be a part of an admin LBaaS API.
>>
>
>  Aah yes--  In my opinion, users should almost never be exposed to
> anything that represents a specific piece of hardware, but cloud
> administrators must be. The logical constructs the user is exposed to can
> "come close" to what an actual piece of hardware is, but again, we should
> be abstract enough that a cloud admin can swap out one piece of hardware
> for another without affecting the user's workflow, application
> configuration, (hopefully) availability, etc.
>
>  I recall you said previously that the concept of having an 'admin API'
> had been discussed earlier, but I forget the resolution behind this (if
> there was one). Maybe we should revisit this discussion?
>
>  I tend to think that if we acknowledge the need for an admin API, as
> well as some of the core features it's going to need, and contrast this
> with the user API (which I think is mostly what Jay and Mark McClain are
> rightly concerned about), it'll start to become obvious which features
> belong where, and what kind of data model will emerge which supports both
> APIs.
>
>
>  [I’m new to this discussion; my role at my employer has been shifted from
> an internal to a community focus and I’m madly
> attempting to come up to speed. I’m a software developer with an
> operations focus; I’ve worked with OpenStack since Diablo
> as Yahoo’s team lead for network integration.]
>
> Two levels (user and admin) would be the minimum. But our experience over
> time is that even administrators occasionally
> need to be saved from themselves. This suggests that, rather than two or
> more separate APIs, a single API with multiple
> roles is needed. Certain operations and attributes would only be
> accessible to someone acting in an appropriate role.
>
>  This might seem over-elaborate at first glance, but there are other
> dividends: a single API is more likely to be consistent,
> and maintained consistently as it evolves. By taking a role-wise view the
> hierarchy of concerns is clarified. If you focus on
> the data model first you are more likely to produce an arrangement that
> mirrors the hardware but presents difficulties in
> representing and implementing user and operator intent.
>
>  Just some general insights/opinions — take for what they’re worth.
>
>   -Ed
>
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Jay Pipes
On Mon, 2014-02-24 at 18:07 -0800, Stephen Balukoff wrote:
> Hi y'all,
> 
> Jay, in the L7 example you give, it looks like you're setting SSL
> parameters for a given load balancer front-end. 

Correct. The example comes straight out of the same example in the ELB
API documentation. The only difference being that in my CLI commands there's
no mention of a listener, whereas in the ELB examples there is (since
the ELB API can only configure this on the load balancer by adding or
removing listener objects to/from the load balancer object).

> Do you have an example you can share where where certain traffic is
> sent to one set of back-end nodes, and other traffic is sent to a
> different set of back-end nodes based on the URL in the client
> request? (I'm trying to understand how this can work without the
> concept of 'pools'.)  

Great example. This is quite a common scenario -- consider serving
requests for static images or content from one set of nginx servers and
non-static content from another set of, say, Apache servers running
Tomcat or similar.

OK, I'll try to work through my ongoing CLI suggestions for the
following scenario:

* User has 3 Nova instances running nginx and serving static files.
These instances all have private IP addresses in subnet 192.168.1.0/24.
* User has 3 Nova instances running Apache and tomcat and serving
dynamic content. These instances all have private IP addresses in subnet
192.168.2.0/24
* User wants any traffic coming in to the balancer's front-end IP with a
URI beginning with "static.example.com" to get directed to any of the
nginx nodes
* User wants any other traffic coming in to the balancer's front-end IP
to get directed to any of the Apache nodes
* User wants sticky session handling enabled ONLY for traffic going to
the Apache nodes

Here is what some proposed CLI commands might look like in my
"user-centric flow of things":

# Assume we've created a load balancer with ID $BALANCER_ID using
# something like I showed in my original response:

neutron balancer-create --type=advanced --front=<front-end-params> \
 --back=<list-of-nodes> --algorithm="least-connections" \
 --topology="active-standby"

Note that in the above call, <list-of-nodes> includes **all of the Nova
instances that would be balanced across**, including all of the nginx
and all of the Apache instances.

Now, let's set up our static balancing. First, we'd create a new L7
policy, just like the SSL negotiation one in the previous example:

neutron l7-policy-create --type="uri-regex-matching" \
 --attr=URIRegex="static\.example\.com.*"

Presume the above returns an ID for the policy, $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer,
spreading load to the nginx nodes, by doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
 --subnet-cidr=192.168.1.0/24

We could then indicate to the balancer that all other traffic should be
sent to only the Apache nodes:

neutron l7-policy-create --type="uri-regex-matching" \
 --attr=URIRegex="static\.example\.com.*" \
 --attr="RegexMatchReverse=true"

Presume this second policy gets the ID $L7_POLICY2_ID. We then apply it
against the Apache subnet:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY2_ID \
 --subnet-cidr=192.168.2.0/24

> Also, what if the first group of nodes needs a different health check
> run against it than the second group of nodes?

neutron balancer-apply-healthcheck $BALANCER_ID $HEALTHCHECK_ID \
 --subnet-cidr=192.168.1.0/24

where $HEALTHCHECK_ID would be the ID of a simple healthcheck object.

The biggest advantage of this proposed API and CLI is that we are not
introducing any terminology into the Neutron LBaaS API that isn't
necessary, when existing terms in the main Neutron API already exist to
describe such things. You will note that I do not use the term "pool"
above, since a subnet (and its associated CIDR) is an already
well-established object in the Neutron API and can serve the exact same
purpose for the Neutron LBaaS API.
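(To make the subnet-based grouping concrete, here is a rough Python
sketch of how a driver might resolve a --subnet-cidr argument into the
set of member nodes it applies to. All names here are illustrative, not
an existing Neutron interface:)

import ipaddress

def members_in_cidr(members, cidr):
    # 'members' is assumed to be an iterable of objects with an
    # 'address' attribute holding the instance's IP as a string.
    net = ipaddress.ip_network(cidr)
    return [m for m in members
            if ipaddress.ip_address(m.address) in net]

# e.g. apply a health check only to the nginx group:
# nginx_nodes = members_in_cidr(balancer.members, "192.168.1.0/24")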

> As far as hiding implementation details from the user:  To a certain
> degree I agree with this, and to a certain degree I do not: OpenStack
> is a cloud OS fulfilling the needs of supplying IaaS. It is not a
> PaaS. As such, the objects that users deal with largely are analogous
> to physical pieces of hardware that make up a cluster, albeit these
> are virtualized or conceptualized. Users can then use these conceptual
> components of a cluster to build the (virtual) infrastructure they
> need to support whatever application they want. These objects have
> attributes and are expected to act in a certain way, which again, are
> usually analogous to actual hardware.

I disagree. A cloud API should strive to shield users of the cloud from
having to understand underlying hardware APIs or object models.

> If we were building a PaaS, the story would be a lot different--  but
> what we are building is a cloud OS that provides Infrastructure (as a
> service).

I still think we need to simplify the APIs as much as we can, and remove
the underlying implementation (which includes the da

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Ed Hall

On Feb 25, 2014, at 10:10 AM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:
On Feb 25, 2014 at 3:39 AM, Eugene Nikanorov <enikano...@mirantis.com> wrote:
Agree; however, actual hardware is beyond the logical LBaaS API but could be
part of an admin LBaaS API.

Aah yes--  In my opinion, users should almost never be exposed to anything that 
represents a specific piece of hardware, but cloud administrators must be. The 
logical constructs the user is exposed to can "come close" to what an actual 
piece of hardware is, but again, we should be abstract enough that a cloud 
admin can swap out one piece of hardware for another without affecting the 
user's workflow, application configuration, (hopefully) availability, etc.

I recall you said previously that the concept of having an 'admin API' had been 
discussed earlier, but I forget the resolution behind this (if there was one). 
Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well as 
some of the core features it's going to need, and contrast this with the user 
API (which I think is mostly what Jay and Mark McClain are rightly concerned 
about), it'll start to become obvious which features belong where, and what 
kind of data model will emerge which supports both APIs.

[I’m new to this discussion; my role at my employer has been shifted from an 
internal to a community focus and I’m madly
attempting to come up to speed. I’m a software developer with an operations 
focus; I’ve worked with OpenStack since Diablo
as Yahoo’s team lead for network integration.]

Two levels (user and admin) would be the minimum. But our experience over time 
is that even administrators occasionally
need to be saved from themselves. This suggests that, rather than two or more 
separate APIs, a single API with multiple
roles is needed. Certain operations and attributes would only be accessible to 
someone acting in an appropriate role.
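(Purely as an illustration of the idea -- none of this is an existing
Neutron mechanism -- role-gating inside a single API might look
something like this in Python:)

ADMIN_ONLY_ATTRS = {"backend_id", "scheduler_hints"}  # hypothetical attributes

def visible_attrs(resource, roles):
    # Filter a resource's attribute dict down to what the caller's roles
    # permit: admins see everything, other roles see the logical view only.
    if "admin" in roles:
        return dict(resource)
    return {k: v for k, v in resource.items()
            if k not in ADMIN_ONLY_ATTRS}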

This might seem over-elaborate at first glance, but there are other dividends: 
a single API is more likely to be consistent,
and maintained consistently as it evolves. By taking a role-wise view the 
hierarchy of concerns is clarified. If you focus on
the data model first you are more likely to produce an arrangement that mirrors 
the hardware but presents difficulties in
representing and implementing user and operator intent.

Just some general insights/opinions — take for what they’re worth.

 -Ed

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Eugene!

Responses inline:

On Tue, Feb 25, 2014 at 3:33 AM, Eugene Nikanorov
wrote:
>
> I'm really not sure what Mark McClain or some other folks see as
> implementation details. To me the 'instance' concept is as logical as
> others (vips/pool/etc). But anyway, it looks like the majority of those
> in the discussion see it as a redundant concept.
>

Maybe we should have a discussion around what qualifies as a 'logical
concept' or 'logical construct,' and why the 'loadbalancer' concept you've
been championing either does or does not qualify, so we're all (closer to
being) on the same page before we discuss model changes?



> Agree; however, actual hardware is beyond the logical LBaaS API but could be
> part of an admin LBaaS API.
>

Aah yes--  In my opinion, users should almost never be exposed to anything
that represents a specific piece of hardware, but cloud administrators must
be. The logical constructs the user is exposed to can "come close" to what
an actual piece of hardware is, but again, we should be abstract enough
that a cloud admin can swap out one piece of hardware for another without
affecting the user's workflow, application configuration, (hopefully)
availability, etc.

I recall you said previously that the concept of having an 'admin API' had
been discussed earlier, but I forget the resolution behind this (if there
was one). Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well
as some of the core features it's going to need, and contrast this with the
user API (which I think is mostly what Jay and Mark McClain are rightly
concerned about), it'll start to become obvious which features belong
where, and what kind of data model will emerge which supports both APIs.


Thanks,
Stephen



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Eugene Nikanorov
Hi Stephen,

My comments inline:


On Tue, Feb 25, 2014 at 6:07 AM, Stephen Balukoff wrote:

> Hi y'all,
>
> Jay, in the L7 example you give, it looks like you're setting SSL
> parameters for a given load balancer front-end. Do you have an example you
> can share where certain traffic is sent to one set of back-end nodes,
> and other traffic is sent to a different set of back-end nodes based on the
> URL in the client request? (I'm trying to understand how this can work
> without the concept of 'pools'.)  Also, what if the first group of nodes
> needs a different health check run against it than the second group of
> nodes?
>
Obviously any kind of loadbalancer API needs to have a concept of 'pool' if
we want to go beyond a single group of nodes.
So an API that is convenient at first glance will still need to introduce all
those concepts that we already have.


> As far as hiding implementation details from the user:  To a certain
> degree I agree with this, and to a certain degree I do not: OpenStack is a
> cloud OS fulfilling the needs of supplying IaaS. It is not a PaaS. As such,
> the objects that users deal with largely are analogous to physical pieces
> of hardware that make up a cluster, albeit these are virtualized or
> conceptualized. Users can then use these conceptual components of a cluster
> to build the (virtual) infrastructure they need to support whatever
> application they want. These objects have attributes and are expected to
> act in a certain way, which again, are usually analogous to actual hardware.
>
I'm really not sure what Mark McClain or some other folks see as
implementation details. To me the 'instance' concept is as logical as
others (vips/pool/etc). But anyway, it looks like the majority of those in
the discussion see it as a redundant concept.


> If we were building a PaaS, the story would be a lot different--  but what
> we are building is a cloud OS that provides Infrastructure (as a service).
>
> I think the concept of a 'load balancer' or 'load balancer service' is one
> of these building blocks that has attributes and is expected to act in a
> certain way. (Much the same way cinder provides "block devices" or swift
> provides an "object store.") And yes, while you can do away with a lot of
> the implementation details and use a very simple model for the simplest use
> case, there are a whole lot of load balancer use cases more complicated
> than that which don't work with the current model (or even a small
> alteration to the current model). If you don't allow for these more
> complicated use cases, you end up with users stacking home-built software
> load balancers behind the cloud OS load balancers in order to get the
> features they actually need. (I understand this is a very common topology
> with ELB, because ELB simply isn't capable of doing advanced things, from
> the user's perspective.) In my opinion, we should be looking well beyond
> what ELB can do.
>
Agree on ELB. Existing public APIs (ELB/Libra) are not much better, in terms
of feature coverage, than what we already have.


> :P Ideally, almost all users should not have to hack together their own
> load balancer because the cloud OS load balancer can't do what they need it
> to do.
>
Totally agree.


>
> Also, from a cloud administrator's point of view, the cloud OS needs to be
> aware of all the actual hardware components, virtual components, and other
> logical constructs that make up the cloud in order to be able to
> effectively maintain it.
>
Agree; however, actual hardware is beyond the logical LBaaS API but could be
part of an admin LBaaS API.


> Again, almost all the details of this should be hidden from the user. But
> these details must not be hidden from the cloud administrator. This means
> implementation details will be represented somehow, and will be visible to
> the cloud administrator.
> Yes, the focus needs to be on making the user's experience as simple as
> possible. But we shouldn't sacrifice powerful capabilities for a simpler
> experience. And if we ignore the needs of the cloud administrator, then we
> end up with a cloud that is next to impossible to practically administer.
>
> Do y'all disagree with this, and if so, could you please share your
> reasoning?
>
Personally I agree; it was always a priority for the API to accommodate both
simple and advanced scenarios.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Stephen Balukoff
Hi y'all,

Jay, in the L7 example you give, it looks like you're setting SSL
parameters for a given load balancer front-end. Do you have an example you
can share where certain traffic is sent to one set of back-end nodes,
and other traffic is sent to a different set of back-end nodes based on the
URL in the client request? (I'm trying to understand how this can work
without the concept of 'pools'.)  Also, what if the first group of nodes
needs a different health check run against it than the second group of
nodes?

As far as hiding implementation details from the user:  To a certain degree
I agree with this, and to a certain degree I do not: OpenStack is a cloud
OS fulfilling the needs of supplying IaaS. It is not a PaaS. As such, the
objects that users deal with largely are analogous to physical pieces of
hardware that make up a cluster, albeit these are virtualized or
conceptualized. Users can then use these conceptual components of a cluster
to build the (virtual) infrastructure they need to support whatever
application they want. These objects have attributes and are expected to
act in a certain way, which again, are usually analogous to actual hardware.

If we were building a PaaS, the story would be a lot different--  but what
we are building is a cloud OS that provides Infrastructure (as a service).

I think the concept of a 'load balancer' or 'load balancer service' is one
of these building blocks that has attributes and is expected to act in a
certain way. (Much the same way cinder provides "block devices" or swift
provides an "object store.") And yes, while you can do away with a lot of
the implementation details and use a very simple model for the simplest use
case, there are a whole lot of load balancer use cases more complicated
than that which don't work with the current model (or even a small
alteration to the current model). If you don't allow for these more
complicated use cases, you end up with users stacking home-built software
load balancers behind the cloud OS load balancers in order to get the
features they actually need. (I understand this is a very common topology
with ELB, because ELB simply isn't capable of doing advanced things, from
the user's perspective.) In my opinion, we should be looking well beyond
what ELB can do. :P Ideally, almost all users should not have to hack
together their own load balancer because the cloud OS load balancer can't
do what they need it to do.

I'm all for having the simplest workflow possible for the basic user-- and
using the principle of least surprise when assuming defaults so that when
they grow and their needs change, they won't often have to completely
rework the load balancer component in their cluster. But the model we use
should be sufficiently sophisticated to support advanced workflows.

Also, from a cloud administrator's point of view, the cloud OS needs to be
aware of all the actual hardware components, virtual components, and other
logical constructs that make up the cloud in order to be able to
effectively maintain it. Again, almost all the details of this should be
hidden from the user. But these details must not be hidden from the cloud
administrator. This means implementation details will be represented
somehow, and will be visible to the cloud administrator.

Yes, the focus needs to be on making the user's experience as simple as
possible. But we shouldn't sacrifice powerful capabilities for a simpler
experience. And if we ignore the needs of the cloud administrator, then we
end up with a cloud that is next to impossible to practically administer.

Do y'all disagree with this, and if so, could you please share your
reasoning?

Thanks,
Stephen




On Mon, Feb 24, 2014 at 1:24 PM, Eugene Nikanorov
wrote:

> Hi Jay,
>
> Thanks for suggestions. I get the idea.
> I'm not sure the essence of this API is much different from what we have
> now.
> 1) We operate on parameters of the loadbalancer rather than on
> vips/pools/listeners. No matter how we name them, the notions are there.
> 2) I see two opposite preferences: one is that user doesn't care about
> 'loadbalancer' in favor of pools/vips/listeners ('pure logical API')
> another is vice versa (yours).
> 3) The approach of providing $BALANCER_ID to pretty much every call
> solves all my concerns, I like it.
> Basically that was my initial code proposal (it's not exactly the same,
> but it's very close).
> The idea of my proposal was to have that 'balancer' resource plus being
> able to operate on vips/pools/etc.
> In this direction we could evolve from existing API to the API in your
> latest suggestion.
>
> Thanks,
> Eugene.
>
>
> On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes  wrote:
>
>> Thanks, Eugene! I've given the API a bit of thought today and jotted
>> down some thoughts below.
>>
>> On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
>> > Could you provide some examples -- even in the pseudo-CLI
>> > commands like
>> > I did below. It's really d

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Hi Jay,

Thanks for suggestions. I get the idea.
I'm not sure the essence of this API is much different from what we have
now.
1) We operate on parameters of the loadbalancer rather than on
vips/pools/listeners. No matter how we name them, the notions are there.
2) I see two opposite preferences: one is that user doesn't care about
'loadbalancer' in favor of pools/vips/listeners ('pure logical API')
another is vice versa (yours).
3) The approach of providing $BALANCER_ID to pretty much every call solves
all my concerns, I like it.
Basically that was my initial code proposal (it's not exactly the same, but
it's very close).
The idea of my proposal was to have that 'balancer' resource plus being
able to operate on vips/pools/etc.
In this direction we could evolve from existing API to the API in your
latest suggestion.

Thanks,
Eugene.


On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes  wrote:

> Thanks, Eugene! I've given the API a bit of thought today and jotted
> down some thoughts below.
>
> On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
> > Could you provide some examples -- even in the pseudo-CLI
> > commands like
> > I did below. It's really difficult to understand where the
> > limits are
> > without specific examples.
> > You know, I always look at the API proposal from implementation
> > standpoint also, so here's what I see.
> > In the cli workflow that you described above, everything is fine,
> > because the driver knows how and where to deploy each object
> > that you provide in your command, because it's basically a batch.
>
> Yes, that is true.
>
> > When we're talking about separate objects that form a loadbalancer -
> > vips, pools, members, it becomes unclear how to map them to backends
> > and at which point.
>
> Understood, but I think we can make some headway here. Examples below.
>
> > So here's an example I usually give:
> > We have 2 VIPs (in fact, one address and 2 ports listening for http
> > and https, now we call them listeners),
> > both listeners pass requests to a webapp server farm, and the http listener
> > also passes requests to static image servers by processing incoming
> > request URIs with L7 rules.
> > So object topology is:
> >
> >
> >  Listener1 (addr:80)   Listener2(addr:443)
> >| \/
> >| \/
> >|  X
> >|  / \
> >  pool1(webapp) pool2(static imgs)
> > sorry for that stone age pic :)
> >
> >
> > The proposal that we discuss can create such object topology by the
> > following sequence of commands:
> > 1) create-vip --name VipName address=addr
> > returns vip_id
> > 2) create-listener --name listener1 --port 80 --protocol http --vip_id
> > vip_id
> > returns listener_id1
> > 3) create-listener --name listener2 --port 443 --protocol https
> > --sl-params params --vip_id vip_id
> >
> > returns listener_id2
>
> > 4) create-pool --name pool1 
> >
> > returns pool_id1
> > 5) create-pool --name pool2 
> > returns pool_id2
> >
> > 6) set-listener-pool listener_id1 pool_id1 --default
> > 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
> >
> > 8) set-listener-pool listener_id2 pool_id1 --default
>
> > That's a generic workflow that allows you to create such config. The
> > question is at which point the backend is chosen.
>
> From a user's perspective, they don't care about VIPs, listeners or
> pools :) All the user cares about is:
>
>  * being able to add or remove backend nodes that should be balanced
> across
>  * being able to set some policies about how traffic should be directed
>
> I do realize that AWS ELB uses the term "listener" in its API, but
> I'm not convinced this is the best term. And I'm not convinced that
> there is a need for a "pool" resource at all.
>
> Could the above steps #1 through #6 be instead represented in the
> following way?
>
> # Assume we've created a load balancer with ID $BALANCER_ID using
> # Something like I showed in my original response:
>
> neutron balancer-create --type=advanced --front=<front-end> \
>  --back=<back-end-nodes> --algorithm="least-connections" \
>  --topology="active-standby"
>
> neutron balancer-configure $BALANCER_ID --front-protocol=http \
>  --front-port=80 --back-protocol=http --back-port=80
>
> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>  --front-port=443 --back-protocol=https --back-port=443
>
> Likewise, we could configure the load balancer to send front-end HTTPS
> traffic (terminated at the load balancer) to back-end HTTP services:
>
> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>  --front-port=443 --back-protocol=http --back-port=80
>
> No mention of listeners, VIPs, or pools at all.
>
> The REST API for the balancer-update CLI command above might be
> something like this:
>
> PUT /balancers/{balancer_id}
>
> with JSON body of request like so:
>
> {
>   "

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Jay Pipes
Thanks, Eugene! I've given the API a bit of thought today and jotted
down some thoughts below.

On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
> Could you provide some examples -- even in the pseudo-CLI
> commands like
> I did below. It's really difficult to understand where the
> limits are
> without specific examples.
> You know, I always look at the API proposal from implementation
> standpoint also, so here's what I see.
> In the cli workflow that you described above, everything is fine,
> because the driver knows how and where to deploy each object
> that you provide in your command, because it's basically a batch.

Yes, that is true.

> When we're talking about separate objects that form a loadbalancer -
> vips, pools, members, it becomes unclear how to map them to backends
> and at which point.

Understood, but I think we can make some headway here. Examples below.

> So here's an example I usually give:
> We have 2 VIPs (in fact, one address and 2 ports listening for http
> and https, now we call them listeners), 
> both listeners pass requests to a webapp server farm, and the http listener
> also passes requests to static image servers by processing incoming
> request URIs with L7 rules.
> So object topology is:
> 
> 
>  Listener1 (addr:80)   Listener2(addr:443)
>| \/
>| \/
>|  X
>|  / \
>  pool1(webapp) pool2(static imgs)
> sorry for that stone age pic :)
> 
> 
> The proposal that we discuss can create such object topology by the
> following sequence of commands:
> 1) create-vip --name VipName address=addr
> returns vip_id
> 2) create-listener --name listener1 --port 80 --protocol http --vip_id
> vip_id
> returns listener_id1
> 3) create-listener --name listener2 --port 443 --protocol https
> --sl-params params --vip_id vip_id
> 
> returns listener_id2

> 4) create-pool --name pool1 
> 
> returns pool_id1
> 5) create-pool --name pool2 
> returns pool_id2
> 
> 6) set-listener-pool listener_id1 pool_id1 --default
> 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
> 
> 8) set-listener-pool listener_id2 pool_id1 --default

> That's a generic workflow that allows you to create such config. The
> question is at which point the backend is chosen.

From a user's perspective, they don't care about VIPs, listeners or
pools :) All the user cares about is:

 * being able to add or remove backend nodes that should be balanced
across
 * being able to set some policies about how traffic should be directed

I do realize that AWS ELB uses the term "listener" in its API, but
I'm not convinced this is the best term. And I'm not convinced that
there is a need for a "pool" resource at all.

Could the above steps #1 through #6 be instead represented in the
following way?

# Assume we've created a load balancer with ID $BALANCER_ID using
# Something like I showed in my original response:

neutron balancer-create --type=advanced --front=<front-end> \
 --back=<back-end-nodes> --algorithm="least-connections" \
 --topology="active-standby"

neutron balancer-configure $BALANCER_ID --front-protocol=http \
 --front-port=80 --back-protocol=http --back-port=80

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=https --back-port=443

Likewise, we could configure the load balancer to send front-end HTTPS
traffic (terminated at the load balancer) to back-end HTTP services:

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=http --back-port=80

No mention of listeners, VIPs, or pools at all.

The REST API for the balancer-update CLI command above might be
something like this:

PUT /balancers/{balancer_id}

with JSON body of request like so:

{
  "front-port": 443,
  "front-protocol": "https",
  "back-port": 80,
  "back-protocol": "http"
}

And the code handling the above request would simply look to see if the
load balancer had a "routing entry" for the front-end port and protocol
of (443, https) and set the entry to route to back-end port and protocol
of (80, http).
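(A rough sketch of that handler logic, in Python; the "routing entry"
mapping is hypothetical, not an existing Neutron structure:)

def update_balancer(balancer, body):
    # 'balancer.routes' is assumed to map (front_port, front_protocol)
    # tuples to (back_port, back_protocol) tuples.
    front = (body["front-port"], body["front-protocol"])
    back = (body["back-port"], body["back-protocol"])
    balancer.routes[front] = back  # create or overwrite the routing entry
    return balancer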

For the advanced L7 policy heuristics, it makes sense to me to use a
similar strategy. For example (using a similar example from ELB):

neutron l7-policy-create --type="ssl-negotiation" \
 --attr=ProtocolSSLv3=true \
 --attr=ProtocolTLSv1.1=true \
 --attr=DHE-RSA-AES256-SHA256=true \
 --attr=Server-Defined-Cipher-Order=true

Presume above returns an ID for the policy $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer by
doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=443

There's no need to specify --front-port of course, since the policy only
applies to the front-end.

There is also no need to refer to a "listener" object, no need to call
anything a VIP, nor any reason to use the

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Folks,

So far everyone agrees that the model should be pure logical, but no one has
come up with the API and meaningful implementation details (at least at the
idea level) of such an object model.
As I've pointed out, a 'pure logical' object model has some API and user
experience inconsistencies that we need to sort out before we implement it.
I'd like to see real details proposed for such a 'pure logical' object model.

Let's also consider the cost of the change - it's easier to do it gradually
than rewrite it from scratch.

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 9:36 PM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> I also agree that the model should be pure logical.
>
> I think that the existing model is almost correct but the pool should be
> made pure logical. This means that the vip <-> pool relationship also needs
> to become any-to-any.
>
> Eugene has rightly pointed out that the current "state" management will
> not handle such a relationship well.
>
> To me this means that the "state" management is broken and not the model.
>
> I will propose an update to the state management in the next few days.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
>
>
>
>
> *From:* Mark McClain [mailto:mmccl...@yahoo-inc.com]
> *Sent:* Monday, February 24, 2014 6:32 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
>
>
> On Feb 21, 2014, at 1:29 PM, Jay Pipes  wrote:
>
>
>
>  I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
>
>
> I agree with Jay.  The API needs to be user-centric and free of
> implementation details.  One of my concerns I've voiced in some of the IRC
> discussions is that too many implementation details are exposed to the user.
>
>
>
> mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Samuel Bercovici
Hi,

I also agree that the model should be pure logical.
I think that the existing model is almost correct but the pool should be made 
pure logical. This means that the vip <-> pool relationship also needs to
become any-to-any.
Eugene has rightly pointed out that the current "state" management will not
handle such a relationship well.
To me this means that the "state" management is broken and not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes <jaypi...@gmail.com> wrote:


I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of
implementation details.  One of my concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Mark,

I'm not sure I understand what the implementation details are in the workflow
I have proposed in the email above - could you point them out?

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 8:31 PM, Mark McClain wrote:

>
>  On Feb 21, 2014, at 1:29 PM, Jay Pipes  wrote:
>
> I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
>
> I agree with Jay.  The API needs to be user-centric and free of
> implementation details.  One of my concerns I've voiced in some of the IRC
> discussions is that too many implementation details are exposed to the user.
>
>  mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Mark McClain

On Feb 21, 2014, at 1:29 PM, Jay Pipes <jaypi...@gmail.com> wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of
implementation details.  One of my concerns I’ve voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Eugene Nikanorov
>
>
>
> Could you provide some examples -- even in the pseudo-CLI commands like
> I did below. It's really difficult to understand where the limits are
> without specific examples.
>
You know, I always look at the API proposal from implementation standpoint
also, so here's what I see.
In the cli workflow that you described above, everything is fine, because
the driver knows how and where to deploy each object
that you provide in your command, because it's basically a batch.

When we're talking about separate objects that form a loadbalancer - vips,
pools, members, it becomes unclear how to map them to backends and at which
point.

So here's an example I usually give:
We have 2 VIPs (in fact, one address and 2 ports listening for http and
https, now we call them listeners),
both listeners pass requests to a webapp server farm, and the http listener also
passes requests to static image servers by processing incoming request URIs
with L7 rules.
So object topology is:

 Listener1 (addr:80)   Listener2(addr:443)
   | \/
   | \/
   |  X
   |  / \
 pool1(webapp) pool2(static imgs)
sorry for that stone age pic :)

The proposal that we discuss can create such object topology by the
following sequence of commands:
1) create-vip --name VipName address=addr
returns vip_id
2) create-listener --name listener1 --port 80 --protocol http --vip_id
vip_id
returns listener_id1
3) create-listener --name listener2 --port 443 --protocol https --sl-params
params --vip_id vip_id
returns listener_id2
4) create-pool --name pool1 
returns pool_id1
5) create-pool --name pool2 
returns pool_id2
6) set-listener-pool listener_id1 pool_id1 --default
7) set-listener-pool listener_id1 pool_id2 --l7policy policy
8) set-listener-pool listener_id2 pool_id1 --default

That's a generic workflow that allows you to create such config. The
question is at which point the backend is chosen.
In our current proposal the backend is chosen at step (1), and all further
objects implicitly go on the same backend as VipName.

The API allows the following addition:
8) create-vip --name VipName2 address=addr2
9) create-listener ... listener3 ...
10) set-listener-pool listener_id3 pool_id1

E.g. from an API standpoint the commands above are valid. But that particular
ability (pool1 is shared by two different backends) introduces lots of
complexity in the implementation and API, and that is what we would like to
avoid at this point.

So the proposal makes step #10 forbidden: the pool is already associated with
a listener on another backend, so we don't share it with listeners on
another one.
That kind of restriction introduces implicit knowledge about the
object-to-backend mapping into the API.
In my opinion it's not a big deal. Once we sort out those complexities, we
can allow that.
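(For illustration only, a minimal Python sketch of the check that would
make step #10 fail; the backend lookup is hypothetical:)

class PoolBackendConflict(Exception):
    pass

def validate_listener_pool(listener, pool, backend_of):
    # 'backend_of' is a hypothetical lookup returning the backend an
    # object is deployed on, or None if it is not yet deployed.
    pool_backend = backend_of(pool)
    if pool_backend is not None and pool_backend != backend_of(listener):
        raise PoolBackendConflict(
            "pool %s is already deployed on another backend" % pool.id)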

What do you think?

Thanks,
Eugene.




> > Looking at your proposal, it reminds me of a Heat template for a
> > loadbalancer.
> > It's fine, but we need to be able to operate on particular objects.
>
> I'm not ruling out being able to add or remove nodes from a balancer, if
> that's what you're getting at?
>
> Best,
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Jay Pipes
On Fri, 2014-02-21 at 22:58 +0400, Eugene Nikanorov wrote:
> Hi Jay,
> 
> Just a quick response:
> 
> The 'implementation detail in API' that we all are arguing about is
> some hint from the user about how logical configuration is mapped on
> the backend(s), not much detail IMO. 
> 
> Your proposed model has that, because you create the balancer at once
> and the driver can easily map submitted configuration to *some*
> backend or even decide how to split it.
> Things get more complicated when you need fine-grained control.

Could you provide some examples -- even in the pseudo-CLI commands like
I did below. It's really difficult to understand where the limits are
without specific examples.

> Looking at your proposal, it reminds me of a Heat template for a
> loadbalancer. 
> It's fine, but we need to be able to operate on particular objects.

I'm not ruling out being able to add or remove nodes from a balancer, if
that's what you're getting at?

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Eugene Nikanorov
Hi Jay,

Just a quick response:

The 'implementation detail in API' that we all are arguing about is some
hint from the user about how logical configuration is mapped on the
backend(s), not much detail IMO.

Your proposed model has that, because you create the balancer at once and
the driver can easily map submitted configuration to *some* backend or even
decide how to split it.
Things get more complicated when you need fine-grained control.

Looking at your proposal, it reminds me of a Heat template for a loadbalancer.
It's fine, but we need to be able to operate on particular objects.

Thanks,
Eugene.



On Fri, Feb 21, 2014 at 10:29 PM, Jay Pipes  wrote:

> On Thu, 2014-02-20 at 15:21 +0400, Eugene Nikanorov wrote:
>
> > I agree with Samuel here.  I feel the logical model and other
> > issues
> > (implementation etc.) are mixed in the discussion.
> >
> > A little bit. While ideally it's better to separate them, in my opinion
> > we need to have some 'fair bit' of implementation detail
> > in the API in order to reduce code complexity (I'll try to explain it at
> > the meeting). Currently these 'implementation details' are implied
> > because we deal with the simplest configurations, which map 1:1 to a
> > backend.
>
> I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
> The user of the OpenStack load balancer service would be able to call
> the API in the following way (which represents more how the user thinks
> about the problem domain):
>
> neutron balancer-type-list
>
> # Returns a list of balancer types (flavors) that might
> # look something like this perhaps (just an example off the top of my head):
>
> - simple:
> capabilities:
>   topologies:
> - single-node
>   algorithms:
> - round-robin
>   protocols:
> - http
>   max-members: 4
> - advanced:
> capabilities:
>   topologies:
> - single-node
> - active-standby
>   algorithms:
> - round-robin
> - least-connections
>   protocols:
> - http
> - https
>   max-members: 100
>
> # User would then create a new balancer from the type:
>
> neutron balancer-create --type=advanced --front=<front-end> \
>  --back=<back-end-nodes> --algorithm="least-connections" \
>  --topology="active-standby"
>
> # Neutron LBaaS goes off and does a few things, then perhaps
> # user would run:
>
> neutron balancer-show <balancer-id>
>
> # which might return the following:
>
> front:
>   ip: <vip-ip>
>   nodes:
> - <node-id>  <-- could be a hardware device ID or a VM ID
>   ip: <node-ip>
>   status: ACTIVE
> - <node-id>
>   ip: <node-ip>
>   status: STANDBY
> back:
>   nodes:
> - <node-id>  <-- could be ID of an appliance or a VM ID
>   ip: <node-ip>
>   status: ONLINE
> - <node-id>
>   ip: <node-ip>
>   status: ONLINE
> - <node-id>
>   ip: <node-ip>
>   status: OFFLINE
>
> No mention of pools, VIPs, or really much else other than a "balancer"
> and the balancer "type", which describes capabilities and restrictions
> for a class of balancers. All implementation details are hidden behind
> the API. How Neutron LBaaS stores the data behind the scenes should not
> influence the forward user-facing API.
>
> Just my two cents,
> -jay
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Jay Pipes
On Thu, 2014-02-20 at 15:21 +0400, Eugene Nikanorov wrote:

> I agree with Samuel here.  I feel the logical model and other
> issues
> (implementation etc.) are mixed in the discussion.
>  
> A little bit. While ideally it's better to separate them, in my opinion
> we need to have some 'fair bit' of implementation detail
> in the API in order to reduce code complexity (I'll try to explain it at
> the meeting). Currently these 'implementation details' are implied
> because we deal with the simplest configurations, which map 1:1 to a
> backend.

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

The user of the OpenStack load balancer service would be able to call
the API in the following way (which represents more how the user thinks
about the problem domain):

neutron balancer-type-list

# Returns a list of balancer types (flavors) that might
# look something like this perhaps (just an example off the top of my head):

- simple:
capabilities:
  topologies:
- single-node
  algorithms:
- round-robin
  protocols:
- http
  max-members: 4
- advanced:
capabilities:
  topologies:
- single-node
- active-standby
  algorithms:
- round-robin
- least-connections
  protocols:
- http
- https
  max-members: 100
   
# User would then create a new balancer from the type:

neutron balancer-create --type=advanced --front=<front-end> \
 --back=<back-end-nodes> --algorithm="least-connections" \
 --topology="active-standby"

# Neutron LBaaS goes off and does a few things, then perhaps
# user would run:

neutron balancer-show <balancer-id>

# which might return the following:

front:
  ip: <vip-ip>
  nodes:
- <node-id>  <-- could be a hardware device ID or a VM ID
  ip: <node-ip>
  status: ACTIVE
- <node-id>
  ip: <node-ip>
  status: STANDBY
back:
  nodes:
- <node-id>  <-- could be ID of an appliance or a VM ID
  ip: <node-ip>
  status: ONLINE
- <node-id>
  ip: <node-ip>
  status: ONLINE
- <node-id>
  ip: <node-ip>
  status: OFFLINE

No mention of pools, VIPs, or really much else other than a "balancer"
and the balancer "type", which describes capabilities and restrictions
for a class of balancers. All implementation details are hidden behind
the API. How Neutron LBaaS stores the data behind the scenes should not
influence the forward user-facing API.
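(As a sketch of how the service might enforce a type's capabilities at
create time -- illustrative Python, not a real implementation:)

def validate_create(flavor, request):
    # 'flavor' is a dict shaped like the balancer types listed above;
    # 'request' carries the balancer-create arguments.
    caps = flavor["capabilities"]
    if request["algorithm"] not in caps["algorithms"]:
        raise ValueError("algorithm not offered by this balancer type")
    if request["topology"] not in caps["topologies"]:
        raise ValueError("topology not offered by this balancer type")
    if len(request["back"]) > caps["max-members"]:
        raise ValueError("too many back-end members for this balancer type")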

Just my two cents,
-jay





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread IWAMOTO Toshihiro
At Thu, 20 Feb 2014 15:21:49 +0400,
Eugene Nikanorov wrote:
> 
> Hi Iwamoto,
> 
> 
> > I agree with Samuel here.  I feel the logical model and other issues
> > (implementation etc.) are mixed in the discussion.
> >
> 
> A little bit. While ideally it's better to separate them, in my opinion we
> need to have some 'fair bit' of implementation detail
> in the API in order to reduce code complexity (I'll try to explain it at the
> meeting). Currently these 'implementation details' are implied because we
> deal with the simplest configurations, which map 1:1 to a backend.

Exposing some implementation details in the API might not be ideal but
would be acceptable if it saves a lot of code complexity.

> > I'm failing to understand why the current model is unfit for L7 rules.
> >
> >   - pools belonging to a L7 group should be created with the same
> > provider/flavor by a user
> >   - pool scheduling can be delayed until it is bound to a vip to make
> > sure pools belonging to a L7 group are scheduled to one backend
> >
> > While that could be an option, it's not as easy as it seems.
> We've discussed that back at the HK summit but at that point decided that it's
> undesirable.

Could you give some more details/examples why my proposal above is
undesirable?
In my opinion, pool rescheduling happens anyway when an agent dies,
and calculating pool-vip grouping based on their connectivity is not hard.
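(For what it's worth, a tiny illustrative sketch of that calculation:
treat vip-pool bindings as graph edges and take connected components.
Nothing here is existing Neutron code.)

from collections import defaultdict

def l7_groups(bindings):
    # 'bindings' is an iterable of (vip_id, pool_id) pairs; ids are
    # assumed globally unique. Each returned set is a group of objects
    # that must be scheduled to a single backend.
    adj = defaultdict(set)
    for vip_id, pool_id in bindings:
        adj[vip_id].add(pool_id)
        adj[pool_id].add(vip_id)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups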


> > > I think grouping vips and pools is an important part of the logical model,
> > > even if some users may not care about it.
> >
> > One possibility is to provide an optional data structure to describe
> > grouping of vips and pools, on top of the existing pool-vip model.
> >
> That would be the 'loadbalancer' approach, #2 on the wiki page.
> So far we tend to introduce such grouping directly into the vip-pool
> relationship.
> I plan to explain that in more detail on the meeting.

My idea was to keep the 'loadbalancer' API optional for users who
don't care about grouping.

--
IWAMOTO Toshihiro


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread Eugene Nikanorov
Hi Iwamoto,


> I agree with Samuel here.  I feel the logical model and other issues
> (implementation etc.) are mixed in the discussion.
>

A little bit. While ideally it's better to separate it, in my opinion we
need to have some 'fair bit' of implementation details
in API in order to reduce code complexity (I'll try to explain it on the
meeting). Currently these 'implementation details' are implied because we
deal with simplest configurations which maps 1:1 to a backend.



> I'm failing to understand why the current model is unfit for L7 rules.
>
>   - pools belonging to a L7 group should be created with the same
> provider/flavor by a user
>   - pool scheduling can be delayed until it is bound to a vip to make
> sure pools belonging to a L7 group are scheduled to one backend
>
> While that could be an option, it's not as easy as it seems.
We've discussed that back at the HK summit but at that point decided that it's
undesirable.


> > I think grouping vips and pools is an important part of the logical model,
> > even if some users may not care about it.
>
> One possibility is to provide an optional data structure to describe
> grouping of vips and pools, on top of the existing pool-vip model.
>
That would be the 'loadbalancer' approach, #2 on the wiki page.
So far we tend to introduce such grouping directly into the vip-pool
relationship.
I plan to explain that in more detail on the meeting.


> Yes, there's little benefit in sharing pools at the cost of the
> complexity.
>
Right, that's the suggestion, but such an ability is also a consequence of a
pure logical config where backend considerations are not taken into account
in the API.

Hope to see you on the meeting!

Thanks,
Eugene.

>
> --
> IWAMOTO Toshihiro
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread IWAMOTO Toshihiro
At Tue, 18 Feb 2014 18:47:37 -0800,
Stephen Balukoff wrote:
> 
> Small correction to my option #4 (here as #4.1). Neutron port_id should be
> an attribute of the 'loadbalancer' object, not the 'cluster' object.
> (Though cluster should have a network_id attribute).

Hi Eugene and Stephen,
I'd like to see the wiki updated with plan #4 and the current issues
mentioned in the emails.  It'll greatly help me keep up with
the discussion.

Thanks.
--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread IWAMOTO Toshihiro
At Wed, 19 Feb 2014 20:23:04 +0400,
Eugene Nikanorov wrote:
> 
> Hi Sam,
> 
> My comments inline:
> 
> 
> On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici wrote:
> 
> >  Hi,
> >
> >
> >
> > I think we mix different aspects of operations. And try to solve a non
> > "problem".
> >
> Not really. The advanced features we're trying to introduce are incompatible
> with both the object model and the API.

I agree with Samuel here.  I feel the logical model and other issues
(implementation etc.) are mixed in the discussion.

I'm failing to understand why the current model is unfit for L7 rules.

  - pools belonging to a L7 group should be created with the same
provider/flavor by a user
  - pool scheduling can be delayed until it is bound to a vip to make
sure pools belonging to a L7 group are scheduled to one backend
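(A minimal sketch of that second point, assuming a hypothetical
scheduler; pools inherit the backend chosen for the vip they are bound
to, so an L7 group always lands on one backend:)

def bind_pool_to_vip(vip, pool, schedule_backend):
    # Defer backend scheduling of the pool until it is bound to a vip.
    # 'schedule_backend' is a hypothetical scheduler callable.
    if vip.backend is None:
        vip.backend = schedule_backend(vip)
    pool.backend = vip.backend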

I think the proposed changes introduce "implementation details", which as a
general rule are better hidden from users.

>  From APIs/Operations we are mixing the following models:
> >
> > 1.   Logical model (which as far as I understand is the topic of this
> > discussion) - tenants define what they need logically: vip->default_pool,
> > l7 association, ssl, etc.
> >
> That's correct. Tenant may or may not care about how it is grouped on the
> backend. We need to support both cases.
> 
> >  2.   Physical model - operator / vendor install and specify how
> > backend gets implemented.
> >
> > 3.   Deploying 1 on 2 - this is currently the driver's
> > responsibility. We can consider making it better but this should not impact
> > the logical model.
> >
> I think grouping vips and pools is an important part of the logical model,
> even if some users may not care about it.

One possibility is to provide an optional data structure to describe
grouping of vips and pools, on top of the existing pool-vip model.

> > I think this is not a "problem".
> >
> > In a logical model a pool which is part of L7 policy is a logical object
> > which could be placed at any backend and any existing vip<->pool, and
> > accordingly configure the backend that those vip<->pool are
> > deployed on.
> >
>  That's not how it currently works - that's why we're trying to address it.
> Having the pool shareable between backends at least requires moving the
> 'instance' role from the pool to some other entity, and that also changes a number of
> API aspects.
> 
> > If the same pool that was part of an l7 association will also be connected
> > to a vip as a default pool, then by all means this new vip<->pool pair can
> > be instantiated into some back end.
> >
> > The proposal to not allow this (ex: only allow pools that are connected to
> > the same lb-instance to be used for l7 association), brings the physical
> > model into the logical model.
> >
> So the proposal tries to address 2 issues:
> 1) in many cases it is desirable to know about the grouping of logical objects
> on the backend
> 2) currently a physical model is implied when working with pools, because the
> pool is the root and corresponds to a backend with 1:1 mapping
> 
> 
> >
> > I think that the current logical model is fine with the exception that the
> > two-way reference between vip and pool (vip<->pool) should be modified
> > with only vip pointing to a pool (vip->pool), which allows reusing the pool
> > with multiple vips.
> >
> Reusing pools by vips is not as simple as it seems.
> If those vips belong to 1 backend (which by itself requires the tenant to
> know about it) - that's no problem, but if they don't, then:
> 1) what would the 'status' attribute of the pool mean?
> 2) how would health monitors for the pool be deployed, and what would their
> statuses mean?
> 3) what would the pool statistics mean?
> 4) If the same pool is used on
> 
> To be able to preserve the existing meaningful healthmonitor, member and
> statistics APIs we will need to create associations for everything, or just
> change the API in a backward-incompatible way.
> My opinion is that it makes sense to limit such ability (reusing pools by
> vips deployed on different backends) in favor of simpler code; IMO it's
> really a big deal. A pool is lightweight enough that there is no need to
> share it as an object.

Yes, there's little benefit in sharing pools at the cost of the
complexity.

--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Stephen Balukoff
Hi guys!

This is a great discussion, and I'm glad y'all have been participating in
it thus far, eh! Thanks also for your patience digesting my mile-long posts.

My comments are in-line:


On Wed, Feb 19, 2014 at 3:47 PM, Youcef Laribi wrote:

>  Hi guys,
>
>
>
> I have been catching up on this interesting thread around the object
> model, so sorry in advance to jump in late in this debate, and if I missed
> some of the subtleties of the points being made so far.
>
>
>
> I tend to agree with Sam that the original intention of the current object
> model was never tied to a physical deployment. We seem to be confusing the
> tenant-facing object model which is completely logical (albeit with some
> “properties” or “qualities” that a tenant can express) from the
> deployment/implementation aspects of such a logical model (things like
> cluster/HA, one vs. multiple backends, virtual appliance vs. OS process,
> etc). We discussed in the past, the need for an Admin API (separate from
> the tenant API) where a cloud administrator (as opposed to a tenant) could
> manage the deployment aspects, and could construct different offerings that
> can be exposed to a tenant, but in the absence of such an admin API (which
> would necessarily be very technology-specific), this responsibility is
> currently shouldered by the drivers.
>

Looking at the original object model but not having been here for the
origin of these things, I suspect the original intent was to duplicate the
functionality of one major cloud provider's load balancing service and to
keep things as simple as possible. Keeping things as simple as they can be
is of course a desirable goal, but unfortunately the current object model
is too simplistic to support a lot of really desirable features that cloud
tenants are asking for. (Hence the addition of L7 and SSL necessitating a
model change, for example.)

I'm still of the opinion that HA at least should be one of these features--
and although it does speak to topology considerations, it should still be
doable in a purely logical way for the generic case. And I feel pretty
strongly that intelligence around core features (of which I'd say HA
capability is one-- I know of no commercial load balancer solution that
doesn't support HA in some form) should not be delegated solely to drivers.
In addition to intelligence around HA, not having greater visibility into
the components that do the actual load balancing is going to complicate
other features as well--  like auto-provisioning of load balancing
appliances or pseudo-appliances, statistics and monitoring, and scaling.
 And again, the more of these features we delegate to drivers, the more
clients are likely to experience vendor lock-in due to specific driver
implementations being different.

Maybe we should revisit the discussion around the need for an Admin API?
I'm not convinced that all admin API features would be tied to any specific
technology. :/  Most active-standby HA configurations, for example, use
some form of floating IP to achieve this (in fact, I can't think of any
that don't right now). And although specific implementations of how this is
done will vary, a 'floating IP' is a common feature here.
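
To make this concrete, here's a minimal sketch -- with entirely hypothetical
class and method names, not any real driver interface -- of how active-standby
HA could stay purely logical on the tenant side, with the floating-IP
mechanics left to the driver:

from dataclasses import dataclass

@dataclass
class LoadBalancer:
    name: str
    vip_address: str          # the address clients connect to
    ha_enabled: bool = False  # the only HA knob the tenant sees

class ExampleDriver:
    """Illustrative only; a real driver would talk to actual appliances."""

    def deploy(self, lb):
        primary = self._provision_appliance(lb)
        if lb.ha_enabled:
            # Hidden implementation detail: a standby appliance plus a
            # floating IP that can fail over between the pair.
            standby = self._provision_appliance(lb)
            self._bind_floating_ip(lb.vip_address, [primary, standby])
        else:
            self._bind_floating_ip(lb.vip_address, [primary])

    def _provision_appliance(self, lb):
        return 'appliance-for-%s' % lb.name  # stand-in for real provisioning

    def _bind_floating_ip(self, address, appliances):
        print('bind %s to %s' % (address, appliances))

ExampleDriver().deploy(LoadBalancer('web', '203.0.113.10', ha_enabled=True))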


> IMO a tenant should only care about whether VIPs/Pools are grouped
> together to the extent that the provider allows the tenant to express such
> a preference. Some providers will allow their tenants to express such a
> preference (e.g. because it might impact cost), and others might not as it
> wouldn’t make sense in their implementation.
>

Remind me to tell you about the futility of telling a client what he or she
should want sometime. :)

In all seriousness, though, we should come to a decision as to whether we
allow a tenant to make such decisions, and if so, exactly how far we let
them trespass onto operational / implementation concerns. Keep in mind that
what we decide here also directly impacts a tenant's ability to deploy load
balancing on a specific vendor's appliance. (Which, I've been led to
believe, is a feature some tenants are going to demand.)

I've heard some talk of a concept of 'flavors' which might solve this
problem, but I've not seen enough detail about this to be able to register
an opinion on it. In the absence of a better idea, I'm still plugging for
that whole "cluster + loadbalancer" concept alluded to in my #4.1 diagram
in this e-mail thread. :)


> Also, the mapping between pool and backend is not necessarily 1:1, and is
> not necessarily established at pool-creation time, as this is purely a
> driver implementation decision (I know that current implementations work
> this way, but another driver can choose a different approach). A driver
> could, for example, delay mapping a pool to a backend until a full LB
> configuration is completed (when the pool has members and a VIP is attached
> to the pool). A driver can also move these resources around between
> backends if it finds out it placed them in a non-optimal backend
> initially. A

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Youcef Laribi
Hi guys,

I have been catching up on this interesting thread around the object model, so
sorry in advance for jumping into this debate late, and if I missed some of
the subtleties of the points being made so far.

I tend to agree with Sam that the original intention of the current object
model was never tied to a physical deployment. We seem to be confusing the
tenant-facing object model, which is completely logical (albeit with some
"properties" or "qualities" that a tenant can express), with the
deployment/implementation aspects of such a logical model (things like
cluster/HA, one vs. multiple backends, virtual appliance vs. OS process, etc).
We discussed in the past the need for an Admin API (separate from the tenant
API) where a cloud administrator (as opposed to a tenant) could manage the
deployment aspects and construct different offerings that can be exposed to a
tenant, but in the absence of such an admin API (which would necessarily be
very technology-specific), this responsibility is currently shouldered by the
drivers.

IMO a tenant should only care about whether VIPs/Pools are grouped together to 
the extent that the provider allows the tenant to express such a preference. 
Some providers will allow their tenants to express such a preference (e.g. 
because it might impact cost), and others might not as it wouldn't make sense 
in their implementation.

Also, the mapping between pool and backend is not necessarily 1:1, and is not
necessarily established at pool-creation time, as this is purely a driver
implementation decision (I know that current implementations work this way,
but another driver can choose a different approach). A driver could, for
example, delay mapping a pool to a backend until a full LB configuration is
completed (when the pool has members and a VIP is attached to the pool). A
driver can also move these resources around between backends if it finds out
it placed them in a non-optimal backend initially. As long as the logical
model is realized and remains consistent from the tenant's point of view,
implementations should be free to achieve that goal in any way they see fit.
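
As a rough sketch of that deferred-mapping idea (all method names here are
made up for illustration and are not the actual plugin/driver interface):

class DeferredMappingDriver:
    def __init__(self):
        self.pending = {}    # pool_id -> logical config gathered so far
        self.placement = {}  # pool_id -> backend, assigned late

    def create_pool(self, pool):
        # No backend chosen yet: just record the logical object.
        self.pending[pool['id']] = {'pool': pool, 'members': [], 'vip': None}

    def create_member(self, pool_id, member):
        self.pending[pool_id]['members'].append(member)
        self._maybe_deploy(pool_id)

    def associate_vip(self, pool_id, vip):
        self.pending[pool_id]['vip'] = vip
        self._maybe_deploy(pool_id)

    def _maybe_deploy(self, pool_id):
        cfg = self.pending[pool_id]
        if cfg['vip'] and cfg['members'] and pool_id not in self.placement:
            # Only now is the configuration complete enough to place well.
            self.placement[pool_id] = self._pick_backend(cfg)

    def _pick_backend(self, cfg):
        return 'backend-1'  # stand-in for a real scheduling decision

d = DeferredMappingDriver()
d.create_pool({'id': 'p1'})
d.create_member('p1', {'address': '10.0.0.5'})
d.associate_vip('p1', {'id': 'v1'})
print(d.placement)  # {'p1': 'backend-1'}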

Youcef

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 19, 2014 8:23 AM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List; Mark McClain; Salvatore Orlando; 
sbaluk...@bluebox.net; Youcef Laribi; Avishay Balderman
Subject: Re: [Neutron][LBaaS] Object Model discussion

Hi Sam,

My comments inline:

On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici 
mailto:samu...@radware.com>> wrote:
Hi,

I think we are mixing different aspects of operations, and trying to solve a
non-"problem".
Not really: the advanced features we're trying to introduce are incompatible
with both the object model and the API.

From APIs/Operations we are mixing the following models:

1.   Logical model (which, as far as I understand, is the topic of this
discussion) - tenants define what they need logically: vip-->default_pool, l7
association, ssl, etc.
That's correct. A tenant may or may not care about how it is grouped on the
backend. We need to support both cases.

2.   Physical model - operator / vendor installs and specifies how the
backend gets implemented.

3.   Deploying 1 on 2 - this is currently the driver's responsibility. We
can consider making it better, but this should not impact the logical model.
I think grouping vips and pools is an important part of the logical model,
even if some users may not care about it.


I think this is not a "problem".
In a logical model, a pool which is part of an L7 policy is a logical object
which could be placed at any backend alongside any existing vip<-->pool, with
the backend that those vip<-->pool pairs are deployed on configured
accordingly.
That's not how it currently works - that's why we're trying to address it.
Making the pool shareable between backends at least requires moving the
'instance' role from the pool to some other entity, and that also changes a
number of API aspects.

If the same pool that was part of an L7 association will also be connected to
a vip as a default pool, then by all means this new vip<-->pool pair can be
instantiated onto some backend.
The proposal to not allow this (e.g. only allow pools that are connected to
the same lb-instance to be used for an l7 association) brings the physical
model into the logical model.
So the proposal tries to address 2 issues:
1) in many cases it is desirable to know about the grouping of logical objects
on the backend
2) currently the physical model is implied when working with pools, because
the pool is the root object and corresponds to a backend with a 1:1 mapping


I think that the current logical model is fine, with the exception that the
two-way reference between vip and pool (vip<-->pool) should be replaced with
only the vip pointing to a pool (vip-->pool), which allows reusing the pool
with multiple vips.
Reusing pools across vips is not as simple as it seems.
If those vips belong to one backend (which by itself requires the tenant to know

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Eugene Nikanorov
Hi Sam,

My comments inline:


On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> I think we are mixing different aspects of operations, and trying to solve
> a non-"problem".
>
Not really: the advanced features we're trying to introduce are incompatible
with both the object model and the API.

> From APIs/Operations we are mixing the following models:
>
> 1.   Logical model (which, as far as I understand, is the topic of this
> discussion) - tenants define what they need logically: vip-->default_pool,
> l7 association, ssl, etc.
>
That's correct. A tenant may or may not care about how it is grouped on the
backend. We need to support both cases.

>  2.   Physical model - operator / vendor installs and specifies how the
> backend gets implemented.
>
> 3.   Deploying 1 on 2 - this is currently the driver's
> responsibility. We can consider making it better, but this should not
> impact the logical model.
>
I think grouping vips and pools is an important part of the logical model,
even if some users may not care about it.


> I think this is not a "problem".
>
> In a logical model, a pool which is part of an L7 policy is a logical
> object which could be placed at any backend alongside any existing
> vip<-->pool, with the backend that those vip<-->pool pairs are deployed on
> configured accordingly.
>
That's not how it currently works - that's why we're trying to address it.
Making the pool shareable between backends at least requires moving the
'instance' role from the pool to some other entity, and that also changes a
number of API aspects.

> If the same pool that was part of an L7 association will also be connected
> to a vip as a default pool, then by all means this new vip<-->pool pair can
> be instantiated onto some backend.
>
> The proposal to not allow this (e.g. only allow pools that are connected to
> the same lb-instance to be used for an l7 association) brings the physical
> model into the logical model.
>
So the proposal tries to address 2 issues:
1) in many cases it is desirable to know about the grouping of logical
objects on the backend
2) currently the physical model is implied when working with pools, because
the pool is the root object and corresponds to a backend with a 1:1 mapping


>
> I think that the current logical model is fine, with the exception that the
> two-way reference between vip and pool (vip<-->pool) should be replaced
> with only the vip pointing to a pool (vip-->pool), which allows reusing the
> pool with multiple vips.
>
Reusing pools across vips is not as simple as it seems.
If those vips belong to one backend (which by itself requires the tenant to
know about backends) there's no problem, but if they don't, then:
1) what would the 'status' attribute of the pool mean?
2) how would health monitors for the pool be deployed, and what would their
statuses mean?
3) what would pool statistics mean?
4) If the same pool is used on

To preserve the existing, meaningful health monitor, member, and statistics
APIs we would need to create associations for everything, or change the API
in a backward-incompatible way.
My opinion is that it makes sense to limit this ability (reusing pools across
vips deployed on different backends) in favor of simpler code; IMO that
simplicity is a big deal. A pool is lightweight enough that it isn't worth
sharing as an object.
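
To illustrate what those associations might look like -- a sketch only, with
hypothetical structures rather than the actual Neutron schema -- 'status' and
statistics would have to move from the pool itself onto a per-(vip, pool)
record:

from dataclasses import dataclass, field

@dataclass
class Pool:
    id: str  # purely logical: no status of its own any more

@dataclass
class VipPoolAssociation:
    vip_id: str
    pool_id: str
    status: str = 'PENDING_CREATE'  # meaningful per deployment, not per pool
    stats: dict = field(default_factory=dict)  # e.g. bytes_in, connections

def pool_status(associations, pool_id):
    """A shared pool has one status per vip it serves, not a single status."""
    return {a.vip_id: a.status for a in associations if a.pool_id == pool_id}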

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Samuel Bercovici
Hi,

I think we are mixing different aspects of operations, and trying to solve a
non-"problem".

From APIs/Operations we are mixing the following models:

1.   Logical model (which, as far as I understand, is the topic of this
discussion) - tenants define what they need logically: vip-->default_pool, l7
association, ssl, etc.

2.   Physical model - operator / vendor installs and specifies how the
backend gets implemented.

3.   Deploying 1 on 2 - this is currently the driver's responsibility. We
can consider making it better, but this should not impact the logical model.

Another "problem", which all the new proposals are trying to solve, is placing 
a pools which can be a root/default for a vip <>pool relationship also as 
part association with l7 policy of another vip<>pool that is configured in 
another backend.
I think this is not a "problem".
In a logical model a pool which is part of L7 policy is a logical object which 
could be placed at any backend and any existing vip<>pool and accordingly 
configure the backend that those vip<>pool are deployed on.
If the same pool that was part of an L7 association will also be connected to
a vip as a default pool, then by all means this new vip<-->pool pair can be
instantiated onto some backend.
The proposal to not allow this (e.g. only allow pools that are connected to
the same lb-instance to be used for an l7 association) brings the physical
model into the logical model.

I think that the current logical model is fine, with the exception that the
two-way reference between vip and pool (vip<-->pool) should be replaced with
only the vip pointing to a pool (vip-->pool), which allows reusing the pool
with multiple vips. This also means that all those vips will be placed in the
same place as the pool they are pointing to as their default pool.
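
A sketch of that reference change (field names illustrative, not the actual
schema):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Pool:
    id: str
    # Note: no vip_id here any more -- the pool no longer knows its vip(s).

@dataclass
class Vip:
    id: str
    default_pool_id: Optional[str]  # vip-->pool, the only direction kept

# Two vips reusing one pool; per the proposal above, both would then be
# placed on the same backend as the pool they reference.
pool = Pool(id='pool-1')
vip_a = Vip(id='vip-a', default_pool_id=pool.id)
vip_b = Vip(id='vip-b', default_pool_id=pool.id)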

Regards,
-Sam.





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Tuesday, February 18, 2014 9:35 PM
To: OpenStack Development Mailing List
Cc: Youcef Laribi; Samuel Bercovici; sbaluk...@bluebox.net; Mark McClain; 
Salvatore Orlando
Subject: [Neutron][LBaaS] Object Model discussion

Hi folks,

Recently we were discussing the LBaaS object model with Mark McClain in order
to address several problems that we faced while approaching L7 rules and
multiple vips per pool.

To cut a long story short: with the existing workflow and model it's
impossible to use L7 rules, because each pool being created is an 'instance'
object in itself: it defines another logical configuration and can't be
attached to an existing configuration.
To address this problem, plus create a base for multiple vips per pool, the 
'loadbalancer' object was introduced (see 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance ).
However, this approach raised the concern of whether we want to make the user
care about an 'instance' object.

My personal opinion is that letting the user work with a 'loadbalancer' entity
is no big deal (and might even be useful for terminological clarity; the Libra
and AWS APIs have that), especially if the existing simple workflow is
preserved, so that the 'loadbalancer' entity is only required when working
with L7 or multiple vips per pool.

The alternative solution proposed by Mark is described here under #3:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
In (3) the root object of the configuration is the VIP, where all kinds of
bindings are made (such as provider, agent, device, router). To address the
'multiple vips' case another entity, 'Listener', is introduced, which receives
most attributes of the former 'VIP' (attribute sets are not finalized in those
pictures, so don't pay much attention to them).
If you take a closer look at the #2 and #3 proposals, you'll see that they are
essentially similar: in #3 the VIP object takes the instance/loadbalancer role
from #2.
Both the #2 and #3 proposals make sense to me because they address both
problems, L7 and multiple vips (or listeners).
My concern about #3 is that it redefines lots of workflow and API aspects, and
even if we manage to make the transition to #3 in a backward-compatible way,
it will be more complex in terms of code/testing than #2 (which is on review
already and works).
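
For readers skimming the wiki pages, here is a rough structural sketch of the
two proposals as I read them; the attribute names are illustrative and
deliberately minimal, not the finalized sets:

from dataclasses import dataclass

# Proposal #2: an explicit 'loadbalancer' instance is the root object.
@dataclass
class LoadBalancer2:
    id: str
    provider: str  # binding point for driver/agent/device

@dataclass
class Vip2:
    id: str
    loadbalancer_id: str
    address: str

# Proposal #3: the VIP itself is the root; 'Listener' absorbs most former
# VIP attributes so one VIP can expose several protocol ports.
@dataclass
class Vip3:
    id: str
    provider: str  # bindings move onto the VIP

@dataclass
class Listener3:
    id: str
    vip_id: str
    protocol: str
    protocol_port: int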

The whole thing is an important design decision, so please share your
thoughts, everyone.

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-18 Thread Eugene Nikanorov
Thanks for the quick response, Stephen,

See my comments inline:


On Wed, Feb 19, 2014 at 6:28 AM, Stephen Balukoff 
 wrote:

> Hi y'all!
>
> Eugene:  Are the arrows in your diagrams meaningful?
>
An arrow means 'one object references another'.


> Regarding existing workflows: Do we have any idea how widely the existing
> Neutron LBaaS is being deployed / used in the wild?  (In our environment,
> we don't use it yet because it's not sophisticated enough for many of our
> customers' needs.)  In other words, is breaking backward compatibility a
> huge concern?  In our environment, obviously it's not.
>
It's more a policy than a concern: we need to maintain compatibility for at
least one release cycle before deprecating workflow/API parts.

>
> I personally favor #3 as suggested, but again, I do doubt the need to have
> pools associated with a vip_id, or listener_id:  That is, in larger
> clusters it may be advantageous to have a single pool that is referenced by
> several listeners and VIPs.
>
Agreed, the pool can be shared; we concluded this as well during the
discussion with Mark.
Just one concern here - the pool currently has a 'subnet' attribute, which
means the subnet where members reside. More formally, it means that the
loadbalancer device should have a port on that subnet. (Say, in routed mode
the vip and pool may be on different subnets, and then the device should have
two ports, one on each of those subnets.)
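
A tiny sketch of that port requirement (purely an illustrative helper, not
plugin code):

def required_device_subnets(vip_subnet_id, pool_subnet_ids):
    """Subnets on which the backend device must hold a port."""
    return {vip_subnet_id, *pool_subnet_ids}

# Example: a vip on one subnet and two shared pools on two others means
# the device needs ports on three subnets.
print(required_device_subnets('subnet-vip', ['subnet-a', 'subnet-b']))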

If we keep the vip_id as an attribute of a pool (implying a pool can belong
> to only one vip), this isn't too bad--  you can duplicate the behavior by
> having multiple pools with the same actual member ips associated (though
> different member object instantiations, of course), and just make sure you
> update all of these "clone" pools whenever adding / removing members or
> changing healthmonitors, etc. It's certainly more housekeeping on the part
> of the application developer, though.
>
Right, it isn't a big deal.


> You mention in the notes that having the pools with a vip_id attribute
> solves a collocation problem. What is this specific collocation problem?
>
When making complex configurations with several pools (L7 rules usage) or
multiple VIPs (or Listeners), the user may want to have all of this in a
single 'logical configuration' for various reasons. One of the important
reasons is resource consumption: the user may want to consume/pay for one
backend, where the whole configuration will be deployed. With the existing API
and workflow that's not quite possible, because 1) the pool is the root object
and 2) the pool is associated with a backend at the point when it is created.
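
A toy sketch of the effect (the round-robin 'scheduler' here is invented
purely to show what scheduling at pool-create time implies; it is not the
actual plugin logic):

import itertools

_backends = itertools.cycle(['backend-1', 'backend-2'])

def create_pool_today(pool_name):
    # Scheduling happens here, before the tenant can express that this
    # pool belongs with an existing configuration.
    return {'pool': pool_name, 'backend': next(_backends)}

p1 = create_pool_today('web-default')
p2 = create_pool_today('web-static')   # an L7 target -- lands elsewhere
assert p1['backend'] != p2['backend']  # tenant ends up paying for two backends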

If we go with #3, I would keep IP address as an attribute of the VIP in
> this model.
>
Yes, that makes sense. I'd say that we need port_id there, rather than ip.


> As far as not having a 'loadbalancer' entity: Again, I think we're going
> to need something like this when we solve the HA or horizontal scalability
> problem. If we're going to break workflows with the introduction of L7
> rules, then I would prefer to approach the model changes that will need to
> happen for HA and horizontal scalability sooner rather than later, so that
> we don't have to (potentially) contemplate yet another
> workflow-backward-compatibility model change.
>

That will need further clarification. So far we have planned to introduce HA
in such a way that the user can't control it other than to enable/disable it,
so everything related to HA isn't really exposed via the API. With either
approach #2 or #3, HA is just a flag on the instance indicating that it is
deployed in HA mode. The driver then does whatever it thinks HA is.
While HA may require additional DB objects, such as additional
associations/bindings between the logical config and backends, those objects
are not part of the public API.


> Could you please describe what is the 'provider/flavor' attribute of the
> VIP being proposed?
>
Currently we have a notion of 'provider', which is a user-facing
representation of a 'driver', i.e. vendor-specific code that runs behind the
persistence layer and communicates with the physical backend. Currently the
Pool, as the root of a configuration, carries this attribute, so when any call
is handled for the pool or its child objects, that attribute is used to
dispatch the call to the appropriate driver.

Flavor is something more flexible (it's not there right now; we're working on
designing the framework) that would allow the user to choose capabilities
rather than vendors. In particular, it will allow having several
configurations for one driver.
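
A sketch of that dispatch, with a speculative flavor-to-provider step bolted
on (the flavor part is guesswork, since the framework is still being designed;
driver names and mappings are hypothetical):

class HaproxyDriver:
    def create_pool(self, pool):
        print('haproxy: create %s' % pool['id'])

class VendorDriver:
    def create_pool(self, pool):
        print('vendor-x: create %s' % pool['id'])

DRIVERS = {'haproxy': HaproxyDriver(), 'vendor-x': VendorDriver()}

# Hypothetical flavor -> provider resolution by requested capabilities.
FLAVORS = {'basic': 'haproxy', 'l7+ssl': 'vendor-x'}

def handle_create_pool(pool):
    # The root object's attribute decides which driver handles the call.
    provider = pool.get('provider') or FLAVORS[pool['flavor']]
    DRIVERS[provider].create_pool(pool)

handle_create_pool({'id': 'pool-1', 'flavor': 'basic', 'provider': None})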

As for the pictures - I have intentionally omitted some objects, like L7
rules, ssl objects, and health monitors, since the existing API around these
objects makes sense and at this point we don't have plans to change it.

Regarding picture #4, could you describe once again, in more detail, what
cluster and loadbalancer are in this scheme?

>
> Thoughts?
>
> Thanks,
> Stephen
>



[openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-18 Thread Eugene Nikanorov
Hi folks,

Recently we were discussing the LBaaS object model with Mark McClain in order
to address several problems that we faced while approaching L7 rules and
multiple vips per pool.

To cut a long story short: with the existing workflow and model it's
impossible to use L7 rules, because each pool being created is an 'instance'
object in itself: it defines another logical configuration and can't be
attached to an existing configuration.
To address this problem, plus create a base for multiple vips per pool, the
'loadbalancer' object was introduced (see
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance ).
However, this approach raised the concern of whether we want to make the user
care about an 'instance' object.

My personal opinion is that letting the user work with a 'loadbalancer'
entity is no big deal (and might even be useful for terminological clarity;
the Libra and AWS APIs have that), especially if the existing simple workflow
is preserved, so that the 'loadbalancer' entity is only required when working
with L7 or multiple vips per pool.

The alternative solution proposed by Mark is described here under #3:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
In (3) the root object of the configuration is the VIP, where all kinds of
bindings are made (such as provider, agent, device, router). To address the
'multiple vips' case another entity, 'Listener', is introduced, which
receives most attributes of the former 'VIP' (attribute sets are not
finalized in those pictures, so don't pay much attention to them).
If you take a closer look at the #2 and #3 proposals, you'll see that they
are essentially similar: in #3 the VIP object takes the instance/loadbalancer
role from #2.
Both the #2 and #3 proposals make sense to me because they address both
problems, L7 and multiple vips (or listeners).
My concern about #3 is that it redefines lots of workflow and API aspects,
and even if we manage to make the transition to #3 in a backward-compatible
way, it will be more complex in terms of code/testing than #2 (which is on
review already and works).

The whole thing is an important design decision, so please share your
thoughts, everyone.

Thanks,
Eugene.