Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-20 Thread loy wolfe
GP should support applying policy to existing OpenStack deployments, so
neither implicit mapping nor intercepting works well.

Maybe the explicit association model is best: associate an EPG with an
existing Neutron network object (policy automatically applied to all ports
on it), or with a single port object (policy applied only to that port).
This way GP is more loosely coupled with Neutron core than in the spec
sample, which boots a VM from a brand-new EP object, requires rewriting
nova vif-plug, and only supports new deployments. It is better suited to
putting GP in the orchestration layer, e.g. Heat, without touching nova
code. Booting a VM from an EPG can then be interpreted by orchestration
as: 1) create a port on the network associated with the EPG; 2) boot the
nova instance from that port. In the future we may also need a unified
abstract policy template across compute/storage/network.
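As a rough illustration of the two orchestration steps above, a Heat template could look something like the following sketch; the network name `epg_web_net` is a hypothetical pre-created network already associated with an EPG, and all resource names are illustrative:

```yaml
heat_template_version: 2013-05-23

resources:
  # Step 1: create a port on the network already associated with the EPG,
  # so the group's policy applies to the port automatically.
  web_port:
    type: OS::Neutron::Port
    properties:
      network_id: epg_web_net    # hypothetical EPG-associated network

  # Step 2: boot the VM from that port; nova vif-plug is untouched.
  web_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - port: { get_resource: web_port }
```

This keeps GP entirely above the Nova/Neutron boundary: Heat does the EPG-to-port translation, and Nova only ever sees an ordinary port.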

Also, it's not a good idea to intercept the neutron port-create API for
implicit EP binding (I don't know whether this has been removed by now),
because it severely breaks the hierarchical relationship between GP and
Neutron core. The link from the GP wiki to an ODL page clearly shows that
GP should be layered on top of both Neutron and ODL (first graph).

http://webcache.googleusercontent.com/search?q=cache:https://wiki.opendaylight.org/view/Project_Proposals:Application_Policy_Plugin#Relationship_with_OpenStack.2FNeutron_Policy_Model
(That page has hidden all its pictures since this week, so I have to give
the Google cache.)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-13 Thread Carlos Gonçalves
Hi Sumit,

My concern was not about sharing common configuration information between
GP drivers, but about sharing configuration between GP and ML2 (and any
other future plugins). When both are enabled, users need to configure
driver information (e.g., endpoint, username, password) twice where
applicable (e.g., when using ODL for both ML2 and GP). A common
configuration file could help here, yes.
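To make the duplication concrete, here is a sketch of what an operator's configuration could look like; the section names (`ml2_odl`, `gp_odl`) and option names are illustrative assumptions, not the actual ml2_conf.ini or GP schema:

```ini
; --- ML2 side (illustrative) ---
[ml2_odl]
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin

; --- GP side (illustrative) ---
; The GP ODL policy driver needs the very same endpoint and credentials,
; but reads its own section, so everything is entered a second time.
[gp_odl]
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin
```

A shared section (or a common configuration file read by both drivers) would remove the duplication.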

Thanks,
Carlos Goncalves

On Thu, Jun 12, 2014 at 6:05 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

 Hi Carlos,

 I noticed that the point you raised here had not been followed up. So
 if I understand correctly, your concern is related to sharing common
 configuration information between GP drivers, and ML2 mechanism
 drivers (when used in the mapping)? If so, would a common
 configuration file shared between the two drivers help to address
 this?

 Thanks,
 ~Sumit.

 On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt
 wrote:
  Hi,
 
  On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:
 
  GP like any other Neutron extension can have different implementations.
  Our idea has been to have the GP code organized similar to how ML2 and
  mechanism drivers are organized, with the possibility of having
  different drivers for realizing the GP API. One such driver (analogous
  to an ML2 mechanism driver I would say) is the mapping driver that was
  implemented for the PoC. I certainly do not see it as the only
  implementation. The mapping driver is just the driver we used for our
  PoC implementation in order to gain experience in developing such a
  driver. Hope this clarifies things a bit.
 
 
  The code organisation adopted to implement the PoC for the GP is indeed
  very similar to the one ML2 is using. There is one aspect I think GP
  will hit soon if it continues with its current code base, where
  multiple (policy) drivers will be available and, as Mohammad put it,
  are analogous to ML2 mech drivers but independent from ML2's. I'm
  unaware, however, whether the following problem has already been
  brought to discussion or not.

  From here I see the GP effort going, besides some code refactoring, I'd
  say expanding the supported policy drivers is the next goal, with ODL
  support probably next. Now, administrators enabling GP ODL support will
  have to configure ODL data twice (host, user, password) if they're
  using ODL as an ML2 mech driver too, because policy drivers share no
  information with ML2 ones. This becomes more troublesome if ML2 is
  configured to load multiple mech drivers.

  With that said, if it makes any sense, a different implementation
  should be considered: one that somehow allows mech drivers living under
  the ML2 umbrella to be extended; BP [1] [2] may be a first step towards
  that end, I'm guessing.
 
  Thanks,
  Carlos Gonçalves
 
  [1]
  https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
  [2] https://review.openstack.org/#/c/89208/
 
 


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-13 Thread Mohammad Banikazemi
I had tried to address the comment on the review board where Carlos had
raised the same issue. Should have posted here as well.

https://review.openstack.org/#/c/96393/

Patch Set 3:
Carlos, the plan is not to have multiple drivers enforcing policies, at
least not right now. With respect to drivers using the same config
options, we can have a given group policy driver and possibly an ML2
mechanism driver use the same config namespace. Does this answer your
questions?

Best,

Mohammad
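A shared config namespace along the lines Mohammad suggests could look like the sketch below, using stdlib configparser as a stand-in for oslo.config; the `[ml2_odl]` section name and option names are illustrative assumptions, not the real schema:

```python
import configparser

# Hypothetical shared namespace: both the ML2 mechanism driver and the GP
# policy driver read the same [ml2_odl] section, so the ODL endpoint and
# credentials are configured exactly once.
SHARED_CONF = """
[ml2_odl]
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin
"""

def load_odl_opts(parser: configparser.ConfigParser) -> dict:
    """Return the ODL connection options from the shared section."""
    section = parser["ml2_odl"]
    return {
        "url": section["url"],
        "username": section["username"],
        "password": section["password"],
    }

parser = configparser.ConfigParser()
parser.read_string(SHARED_CONF)

# Both drivers resolve to the same connection info from one section.
ml2_opts = load_odl_opts(parser)
gp_opts = load_odl_opts(parser)
```

In real Neutron the options would be registered once with oslo.config under a common group; the point here is only that the two drivers read one namespace rather than duplicating it.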



From:   Sumit Naiksatam sumitnaiksa...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   06/12/2014 01:10 PM
Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver



Hi Carlos,

I noticed that the point you raised here had not been followed up. So
if I understand correctly, your concern is related to sharing common
configuration information between GP drivers, and ML2 mechanism
drivers (when used in the mapping)? If so, would a common
configuration file shared between the two drivers help to address
this?

Thanks,
~Sumit.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt
wrote:
 Hi,

 On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:

 GP like any other Neutron extension can have different implementations.
 Our idea has been to have the GP code organized similar to how ML2 and
 mechanism drivers are organized, with the possibility of having
 different drivers for realizing the GP API. One such driver (analogous
 to an ML2 mechanism driver I would say) is the mapping driver that was
 implemented for the PoC. I certainly do not see it as the only
 implementation. The mapping driver is just the driver we used for our
 PoC implementation in order to gain experience in developing such a
 driver. Hope this clarifies things a bit.


 The code organisation adopted to implement the PoC for the GP is indeed
 very similar to the one ML2 is using. There is one aspect I think GP
 will hit soon if it continues with its current code base, where
 multiple (policy) drivers will be available and, as Mohammad put it,
 are analogous to ML2 mech drivers but independent from ML2's. I'm
 unaware, however, whether the following problem has already been
 brought to discussion or not.

 From here I see the GP effort going, besides some code refactoring, I'd
 say expanding the supported policy drivers is the next goal, with ODL
 support probably next. Now, administrators enabling GP ODL support will
 have to configure ODL data twice (host, user, password) if they're
 using ODL as an ML2 mech driver too, because policy drivers share no
 information with ML2 ones. This becomes more troublesome if ML2 is
 configured to load multiple mech drivers.

 With that said, if it makes any sense, a different implementation
 should be considered: one that somehow allows mech drivers living under
 the ML2 umbrella to be extended; BP [1] [2] may be a first step towards
 that end, I'm guessing.

 Thanks,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
 [2] https://review.openstack.org/#/c/89208/




Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-12 Thread Sumit Naiksatam
Hi Carlos,

I noticed that the point you raised here had not been followed up. So
if I understand correctly, your concern is related to sharing common
configuration information between GP drivers, and ML2 mechanism
drivers (when used in the mapping)? If so, would a common
configuration file shared between the two drivers help to address
this?

Thanks,
~Sumit.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 Hi,

 On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:

 GP like any other Neutron extension can have different implementations. Our
 idea has been to have the GP code organized similar to how ML2 and mechanism
 drivers are organized, with the possibility of having different drivers for
 realizing the GP API. One such driver (analogous to an ML2 mechanism driver
 I would say) is the mapping driver that was implemented for the PoC. I
 certainly do not see it as the only implementation. The mapping driver is
 just the driver we used for our PoC implementation in order to gain
 experience in developing such a driver. Hope this clarifies things a bit.


 The code organisation adopted to implement the PoC for the GP is indeed
 very similar to the one ML2 is using. There is one aspect I think GP
 will hit soon if it continues with its current code base, where
 multiple (policy) drivers will be available and, as Mohammad put it,
 are analogous to ML2 mech drivers but independent from ML2's. I'm
 unaware, however, whether the following problem has already been
 brought to discussion or not.

 From here I see the GP effort going, besides some code refactoring, I'd
 say expanding the supported policy drivers is the next goal, with ODL
 support probably next. Now, administrators enabling GP ODL support will
 have to configure ODL data twice (host, user, password) if they're
 using ODL as an ML2 mech driver too, because policy drivers share no
 information with ML2 ones. This becomes more troublesome if ML2 is
 configured to load multiple mech drivers.

 With that said, if it makes any sense, a different implementation
 should be considered: one that somehow allows mech drivers living under
 the ML2 umbrella to be extended; BP [1] [2] may be a first step towards
 that end, I'm guessing.

 Thanks,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
 [2] https://review.openstack.org/#/c/89208/






Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-27 Thread Mohammad Banikazemi

Thanks for the continued interest in discussing Group Policy (GP). I
believe these discussions with the larger Neutron community can benefit the
GP work.

GP like any other Neutron extension can have different implementations. Our
idea has been to have the GP code organized similar to how ML2 and
mechanism drivers are organized, with the possibility of having different
drivers for realizing the GP API. One such driver (analogous to an ML2
mechanism driver I would say) is the mapping driver that was implemented
for the PoC. I certainly do not see it as the only implementation. The
mapping driver is just the driver we used for our PoC implementation in
order to gain experience in developing such a driver. Hope this clarifies
things a bit.
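As a rough illustration of the ML2-like organization described above, where a plugin fans GP API calls out to pluggable policy drivers, here is a minimal sketch; the `PolicyDriver` interface and method names are my own illustrative assumptions, and real Neutron would load drivers via stevedore entry points rather than a hard-coded list:

```python
class PolicyDriver:
    """Hypothetical interface a GP policy driver implements."""
    def create_endpoint_group_postcommit(self, context):
        raise NotImplementedError

class MappingDriver(PolicyDriver):
    """PoC-style driver: realizes GP by mapping onto Neutron resources."""
    def create_endpoint_group_postcommit(self, context):
        # A real mapping driver would create/associate a neutron network here.
        return "mapped EPG %s to a neutron network" % context["epg_id"]

class PolicyDriverManager:
    """Analogous to ML2's mechanism-driver manager: fans each GP API
    operation out to every configured driver."""
    def __init__(self, drivers):
        self.drivers = list(drivers)

    def create_endpoint_group_postcommit(self, context):
        return [d.create_endpoint_group_postcommit(context)
                for d in self.drivers]

manager = PolicyDriverManager([MappingDriver()])
results = manager.create_endpoint_group_postcommit({"epg_id": "web"})
```

The mapping driver is then just one entry in the manager's list; a controller-backed driver could sit alongside it without GP itself being defined in terms of networks and subnets.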

Please note that for better or worse we have produced several documents
during the previous cycle. We have tried to collect them on the GP wiki
page [1]. The latest design document [2] should give a broad view of the GP
extension and the model being proposed. The PoC document [3] may clarify
our PoC plans and where the mapping driver stands with respect to the
other pieces of the work. (Please note that some parts of the plan as
described in the PoC document were not implemented.)

Hope my explanation and these documents (and other documents available on
the GP wiki) are helpful.

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Neutron/GroupPolicy - GP wiki page
[2] https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/ - GP design document
[3] https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/ - GP PoC document




From:   Armando M. arma...@gmail.com
To: OpenStack Development Mailing List, (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   05/26/2014 09:46 PM
Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver




On May 26, 2014 4:27 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Armando,

 I think there are a couple of things that are being mixed up here, at
 least as I see this conversation :). The mapping driver is simply one
 way of implementing GP. Ideally I would say, you do not need to
 implement GP in terms of other Neutron abstractions even though you may
 choose to do so. A network controller could realize the connectivities
 and policies defined by GP independent of, say, networks and subnets. If
 we agree on this point, then how we organize the code will be different
 than the case where GP is always defined as something on top of the
 current neutron API. In other words, we shouldn't organize the overall
 code for GP based solely on the use of the mapping driver.


The mapping driver is embedded in the policy framework that Bob had
initially proposed. If I understood what you're suggesting correctly, it
makes very little sense to diverge or come up with a different framework
alongside the legacy driver later on, otherwise we may end up in the same
state of the core plugins': monolithic vs ml2-based. Could you clarify?

 In the mapping driver (aka the legacy driver) for the PoC, GP is
 implemented in terms of other Neutron abstractions. I agree that using
 python-neutronclient for the PoC would be fine and, as Bob has
 mentioned, it would probably have been the best/easiest way of having
 the PoC implemented in the first place. The calls to
 python-neutronclient in my understanding could eventually be easily
 replaced with direct calls after refactoring, which leads me to ask a
 question concerning the following part of the conversation (copied here
 again):


Not sure why we keep bringing this refactoring up: my point is that if GP
were to be implemented the way I'm suggesting the refactoring would have no
impact on GP...even if it did, replacing remote with direct calls should be
avoided IMO.




 [Bob:]

   The overhead of using python-neutronclient is that unnecessary
   serialization/deserialization are performed, as well as socket
   communication through the kernel. This is all required between
   processes, but not within a single process. A well-defined and
   efficient mechanism to invoke resource APIs within the process, with
   the same semantics as incoming REST calls, seems like a generally
   useful addition to neutron. I'm hopeful the core refactoring effort
   will provide this (and am willing to help make sure it does), but we
   need something we can use until that is available.
  

 [Armando:]

  I appreciate that there is a cost involved in relying on distributed
  communication, but this must be negligible considering what needs to
  happen end-to-end. If the overhead being referred to here is the price
  to pay for having a more dependable system (e.g. because things can be
  scaled out and/or made reliable independently), then I think this is a
  price worth paying.

  I do hope that the core refactoring is not aiming at what you're
  suggesting, as it sounds in exact opposition to some of the OpenStack
  design

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-27 Thread Armando M.
Hi Mohammad,

Thanks, I understand now. I appreciate that the mapping driver is one way
of doing things and that the design has been socialized for a while. I
wish I could follow infinite channels, but unfortunately the openstack
information overload is astounding and sometimes I fail :) Gerrit is the
channel I strive to follow, and this is where I saw the code for the
first time, hence my feedback.

It's worth noting that the PoC design document is (as it should be) very
high level and most of my feedback applies to the implementation decisions
being made. That said, I still have doubts that an ML2 like approach is
really necessary for GP and I welcome inputs to help me change my mind :)

Thanks
Armando
On May 27, 2014 5:04 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Thanks for the continued interest in discussing Group Policy (GP). I
 believe these discussions with the larger Neutron community can benefit the
 GP work.

 GP like any other Neutron extension can have different implementations.
 Our idea has been to have the GP code organized similar to how ML2 and
 mechanism drivers are organized, with the possibility of having different
 drivers for realizing the GP API. One such driver (analogous to an ML2
 mechanism driver I would say) is the mapping driver that was implemented
 for the PoC. I certainly do not see it as the only implementation. The
 mapping driver is just the driver we used for our PoC implementation in
 order to gain experience in developing such a driver. Hope this clarifies
 things a bit.

 Please note that for better or worse we have produced several documents
 during the previous cycle. We have tried to collect them on the GP wiki
 page [1]. The latest design document [2] should give a broad view of the GP
 extension and the model being proposed. The PoC document [3] may clarify
 our PoC plans and where the mapping driver stands wrt other pieces of the
 work.  (Please note some parts of the plan as described in the PoC document
 was not implemented.)

 Hope my explanation and these documents (and other documents available on
 the GP wiki) are helpful.

 Best,

 Mohammad

 [1] https://wiki.openstack.org/wiki/Neutron/GroupPolicy - GP wiki page
 [2] https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/ - GP design document
 [3] https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/ - GP PoC document



 From: Armando M. arma...@gmail.com
 To: OpenStack Development Mailing List, (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Date: 05/26/2014 09:46 PM
 Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
 driver




 On May 26, 2014 4:27 PM, Mohammad Banikazemi m...@us.ibm.com wrote:
 
  Armando,
 
  I think there are a couple of things that are being mixed up here, at
 least as I see this conversation :). The mapping driver is simply one way
 of implementing GP. Ideally I would say, you do not need to implement the
 GP in terms of other Neutron abstractions even though you may choose to do
 so. A network controller could realize the connectivities and policies
 defined by GP independent of say networks, and subnets. If we agree on this
 point, then how we organize the code will be different than the case where
 GP is always defined as something on top of current neutron API. In other
 words, we shouldn't organize the overall code for GP based solely on the
 use of the mapping driver.

 The mapping driver is embedded in the policy framework that Bob had
 initially proposed. If I understood what you're suggesting correctly, it
 makes very little sense to diverge or come up with a different framework
 alongside the legacy driver later on, otherwise we may end up in the same
 state of the core plugins': monolithic vs ml2-based. Could you clarify?
 
  In the mapping driver (aka the legacy driver) for the PoC, GP is
 implemented in terms of other Neutron abstractions. I agree that using
 python-neutronclient for the PoC would be fine and as Bob has mentioned it
 would have been probably the best/easiest way of having the PoC implemented
 in the first place. The calls to python-neutronclient in my understanding
 could be eventually easily replaced with direct calls after refactoring
 which lead me to ask a question concerning the following part of the
 conversation (being copied here again):

 Not sure why we keep bringing this refactoring up: my point is that if GP
 were to be implemented the way I'm suggesting the refactoring would have no
 impact on GP...even if it did, replacing remote with direct calls should be
 avoided IMO.

 
 
  [Bob:]
 
The overhead of using python

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-27 Thread Sumit Naiksatam
There seems to be a fair bit of confusion with the PoC/prototype
patches. As such, and per reviewer feedback to introduce the Endpoint
Group related patch sooner rather than later, we will start a new
series. You will see the first patch land shortly, and we can
incrementally make progress from there.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 Hi,

 On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:

 GP like any other Neutron extension can have different implementations. Our
 idea has been to have the GP code organized similar to how ML2 and mechanism
 drivers are organized, with the possibility of having different drivers for
 realizing the GP API. One such driver (analogous to an ML2 mechanism driver
 I would say) is the mapping driver that was implemented for the PoC. I
 certainly do not see it as the only implementation. The mapping driver is
 just the driver we used for our PoC implementation in order to gain
 experience in developing such a driver. Hope this clarifies things a bit.


 The code organisation adopted to implement the PoC for the GP is indeed
 very similar to the one ML2 is using. There is one aspect I think GP
 will hit soon if it continues with its current code base, where
 multiple (policy) drivers will be available and, as Mohammad put it,
 are analogous to ML2 mech drivers but independent from ML2's. I'm
 unaware, however, whether the following problem has already been
 brought to discussion or not.

 From here I see the GP effort going, besides some code refactoring, I'd
 say expanding the supported policy drivers is the next goal, with ODL
 support probably next. Now, administrators enabling GP ODL support will
 have to configure ODL data twice (host, user, password) if they're
 using ODL as an ML2 mech driver too, because policy drivers share no
 information with ML2 ones. This becomes more troublesome if ML2 is
 configured to load multiple mech drivers.

 With that said, if it makes any sense, a different implementation
 should be considered: one that somehow allows mech drivers living under
 the ML2 umbrella to be extended; BP [1] [2] may be a first step towards
 that end, I'm guessing.

 Thanks,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
 [2] https://review.openstack.org/#/c/89208/




Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-26 Thread Mohammad Banikazemi

Armando,

I think there are a couple of things that are being mixed up here, at least
as I see this conversation :). The mapping driver is simply one way of
implementing GP. Ideally I would say, you do not need to implement the GP
in terms of other Neutron abstractions even though you may choose to do so.
A network controller could realize the connectivities and policies defined
by GP independent of say networks, and subnets. If we agree on this point,
then how we organize the code will be different than the case where GP is
always defined as something on top of current neutron API. In other words,
we shouldn't organize the overall code for GP based solely on the use of
the mapping driver.

In the mapping driver (aka the legacy driver) for the PoC, GP is
implemented in terms of other Neutron abstractions. I agree that using
python-neutronclient for the PoC would be fine and, as Bob has mentioned,
it would probably have been the best/easiest way of having the PoC
implemented in the first place. The calls to python-neutronclient in my
understanding could eventually be easily replaced with direct calls after
refactoring, which leads me to ask a question concerning the following
part of the conversation (copied here again):


[Bob:]
  The overhead of using python-neutronclient is that unnecessary
  serialization/deserialization are performed, as well as socket
  communication through the kernel. This is all required between
  processes, but not within a single process. A well-defined and
  efficient mechanism to invoke resource APIs within the process, with
  the same semantics as incoming REST calls, seems like a generally
  useful addition to neutron. I'm hopeful the core refactoring effort
  will provide this (and am willing to help make sure it does), but we
  need something we can use until that is available.

[Armando:]
 I appreciate that there is a cost involved in relying on distributed
 communication, but this must be negligible considering what needs to
 happen end-to-end. If the overhead being referred to here is the price
 to pay for having a more dependable system (e.g. because things can be
 scaled out and/or made reliable independently), then I think this is a
 price worth paying.

 I do hope that the core refactoring is not aiming at what you're
 suggesting, as it sounds in exact opposition to some of the OpenStack
 design principles.

From the summit sessions (in particular the session by Mark on
refactoring the core), I too was under the impression that there will be
a way of invoking the Neutron API within the plugin with the same
semantics as through the REST API. Is this a misunderstanding?
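For illustration only, here is a toy sketch of the two invocation styles being debated, with a stand-in plugin class rather than real Neutron or python-neutronclient signatures; the point is that an in-process mechanism would keep REST-call semantics while skipping the JSON/socket round trip:

```python
import json

class CorePlugin:
    """Toy stand-in for the Neutron core plugin (not the real API)."""
    def create_port(self, context, port):
        return dict(port, id="port-1")

def create_port_in_process(plugin, port):
    # Direct call: no serialization, no socket hop through the kernel --
    # but today this path bypasses the REST layer's side effects
    # (quota enforcement, DHCP/nova notifications).
    return plugin.create_port(context=None, port=port)

def create_port_via_client(plugin, port):
    # Stand-in for python-neutronclient: request and response pass
    # through JSON serialization as they would over HTTP, which is the
    # overhead Bob describes, but the full API pipeline runs server-side.
    wire_request = json.dumps({"port": port})
    result = plugin.create_port(context=None,
                                port=json.loads(wire_request)["port"])
    return json.loads(json.dumps(result))

plugin = CorePlugin()
p1 = create_port_in_process(plugin, {"network_id": "net-1"})
p2 = create_port_via_client(plugin, {"network_id": "net-1"})
assert p1 == p2  # same result; only the transport differs
```

What the core refactoring would ideally provide is the first call path with the second path's semantics: quota checks and notifications included, serialization excluded.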

Best,

Mohammad







Armando M. arma...@gmail.com wrote on 05/24/2014 01:36:35 PM:

 From: Armando M. arma...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/24/2014 01:38 PM
 Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver

 On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:
 
  On 5/23/14, 10:54 PM, Armando M. wrote:
 
  On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:
 
  On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
 
  Hi Armando:

  Those are good points. I will let Bob Kukura chime in on the specifics
  of how we intend to do that integration. But if what you see in the
  prototype/PoC were our final design for integration with Neutron core,
  I would be worried about that too. That specific part of the code
  (events/notifications for DHCP) was done in that way just for the
  prototype - to allow us to experiment with the part that was new and
  needed experimentation: the APIs and the model.

  That is the exact reason that we did not initially check the code into
  gerrit - so that we would not confuse the review process with the
  prototype process. But we were requested by other cores to check in
  even the prototype code as WIP patches to allow for review of the API
  parts. That can unfortunately create this very misunderstanding. For
  the review, I would recommend not the WIP patches, as they contain the
  prototype parts as well, but just the final patches that are not marked
  WIP. If you see such issues in that part of the code, please DO raise
  them, as that would be code that we intend to upstream.

  I believe Bob did discuss the specifics of this integration issue with
  you at the summit, but like I said it is best if he represents that
  side himself.
 
  Armando and Mandeep,
 
  Right, we do need a workable solution for the GBP driver to invoke
  neutron
  API operations, and this came up at the summit.
 
  We started out in the PoC directly calling the plugin, as is
currently
  done
  when creating ports for agents. But this is not sufficient because
the
  DHCP
  notifications, and I think the nova notifications, are needed for VM
  ports.
  We also really should be generating the other notifications,
enforcing
  quotas, etc

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-26 Thread Armando M.
On May 26, 2014 4:27 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Armando,

 I think there are a couple of things that are being mixed up here, at
least as I see this conversation :). The mapping driver is simply one way
of implementing GP. Ideally I would say, you do not need to implement the
GP in terms of other Neutron abstractions even though you may choose to do
so. A network controller could realize the connectivities and policies
defined by GP independent of, say, networks and subnets. If we agree on this
point, then how we organize the code will be different than the case where
GP is always defined as something on top of current neutron API. In other
words, we shouldn't organize the overall code for GP based solely on the
use of the mapping driver.

The mapping driver is embedded in the policy framework that Bob had
initially proposed. If I understood what you're suggesting correctly, it
makes very little sense to diverge or come up with a different framework
alongside the legacy driver later on; otherwise we may end up in the same
state as the core plugins: monolithic vs ML2-based. Could you clarify?

 In the mapping driver (aka the legacy driver) for the PoC, GP is
implemented in terms of other Neutron abstractions. I agree that using
python-neutronclient for the PoC would be fine and as Bob has mentioned it
would have been probably the best/easiest way of having the PoC implemented
in the first place. The calls to python-neutronclient in my understanding
could be eventually easily replaced with direct calls after refactoring
which leads me to ask a question concerning the following part of the
conversation (being copied here again):

Not sure why we keep bringing this refactoring up: my point is that if GP
were to be implemented the way I'm suggesting the refactoring would have no
impact on GP...even if it did, replacing remote with direct calls should be
avoided IMO.



 [Bob:]

   The overhead of using python-neutronclient is that unnecessary
   serialization/deserialization are performed as well as socket
communication
   through the kernel. This is all required between processes, but not
within a
   single process. A well-defined and efficient mechanism to invoke
resource
   APIs within the process, with the same semantics as incoming REST
calls,
   seems like a generally useful addition to neutron. I'm hopeful the
core
   refactoring effort will provide this (and am willing to help make
sure it
   does), but we need something we can use until that is available.
  

 [Armando:]

  I appreciate that there is a cost involved in relying on distributed
  communication, but this must be negligible considering what needs to
  happen end-to-end. If the overhead being referred to here is the price to
  pay for having a more dependable system (e.g. because things can be
  scaled out and/or made reliable independently), then I think this is a
  price worth paying.
 
  I do hope that the core refactoring is not aiming at what you're
  suggesting, as it sounds in exact opposition to some of the OpenStack
  design principles.


 From the summit sessions (in particular the session by Mark on
refactoring the core), I too was under the impression that there will be a
way of invoking Neutron API within the plugin with the same semantics as
through the REST API. Is this a misunderstanding?

That was not my understanding, but I'll let Mark chime in on this.

Many thanks
Armando

 Best,

 Mohammad

 Armando M. arma...@gmail.com wrote on 05/24/2014 01:36:35 PM:

  From: Armando M. arma...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/24/2014 01:38 PM
  Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver

 
  On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:
  
   On 5/23/14, 10:54 PM, Armando M. wrote:
  
   On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:
  
   On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
  
   Hi Armando:
  
   Those are good points. I will let Bob Kukura chime in on the
specifics of
   how we intend to do that integration. But if what you see in the
   prototype/PoC was our final design for integration with Neutron
core, I
   would be worried about that too. That specific part of the code
   (events/notifications for DHCP) was done in that way just for the
   prototype
   - to allow us to experiment with the part that was new and needed
   experimentation, the APIs and the model.
  
   That is the exact reason that we did not initially check the code to
   gerrit
   - so that we do not confuse the review process with the prototype
   process.
   But we were requested by other cores to check in even the prototype
code
   as
   WIP patches to allow for review of the API parts. That can
unfortunately
   create this very misunderstanding. For the review, I would
recommend not
   the
   WIP patches, as they contain the prototype parts as well, but just

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-25 Thread Jay Pipes

On 05/24/2014 01:36 PM, Armando M. wrote:

I appreciate that there is a cost involved in relying on distributed
communication, but this must be negligible considering what needs to
happen end-to-end. If the overhead being referred to here is the price to
pay for having a more dependable system (e.g. because things can be
scaled out and/or made reliable independently), then I think this is a
price worth paying.


Yes, I agree 100%.

best,
-jay



Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Robert Kukura


On 5/23/14, 10:54 PM, Armando M. wrote:

On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

Hi Armando:

Those are good points. I will let Bob Kukura chime in on the specifics of
how we intend to do that integration. But if what you see in the
prototype/PoC was our final design for integration with Neutron core, I
would be worried about that too. That specific part of the code
(events/notifications for DHCP) was done in that way just for the prototype
- to allow us to experiment with the part that was new and needed
experimentation, the APIs and the model.

That is the exact reason that we did not initially check the code to gerrit
- so that we do not confuse the review process with the prototype process.
But we were requested by other cores to check in even the prototype code as
WIP patches to allow for review of the API parts. That can unfortunately
create this very misunderstanding. For the review, I would recommend not the
WIP patches, as they contain the prototype parts as well, but just the final
patches that are not marked WIP. If you see such issues in that part of the
code, please DO raise that as that would be code that we intend to upstream.

I believe Bob did discuss the specifics of this integration issue with you
at the summit, but like I said it is best if he represents that side
himself.

Armando and Mandeep,

Right, we do need a workable solution for the GBP driver to invoke neutron
API operations, and this came up at the summit.

We started out in the PoC directly calling the plugin, as is currently done
when creating ports for agents. But this is not sufficient because the DHCP
notifications, and I think the nova notifications, are needed for VM ports.
We also really should be generating the other notifications, enforcing
quotas, etc. for the neutron resources.

I am at a loss here: if you say that you couldn't fit at the plugin
level, that is because it is the wrong level!! Sitting above it and
redoing all the glue code around it to add DHCP notifications etc.
continues the bad practice within the Neutron codebase where there is
not a good separation of concerns: for instance everything is cobbled
together, like the DB and plugin logic. I appreciate that some design
decisions have been made in the past, but there's no good reason for a
nice new feature like GP to continue this bad practice; this is why I
feel strongly about the current approach being taken.
Armando, I am agreeing with you! The code you saw was a proof-of-concept 
implementation intended as a learning exercise, not something intended 
to be merged as-is to the neutron code base. The approach for invoking 
resources from the driver(s) will be revisited before the driver code is 
submitted for review.



We could just use python-neutronclient, but I think we'd prefer to avoid the
overhead. The neutron project already depends on python-neutronclient for
some tests, the debug facility, and the metaplugin, so in retrospect, we
could have easily used it in the PoC.

I am not sure I understand what overhead you mean here. Could you
clarify? Actually looking at the code, I see a mind boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver and the core plugin: they are all
entangled together. For instance, when creating an endpoint the GP
plugin ends up calling the mapping driver that in turn ends up calling
the GP plugin itself! If this is not overhead I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of core
plugin, mechanisms drivers etc, but I would argue that it poorly
applies to the context of GP.
The overhead of using python-neutronclient is that unnecessary 
serialization/deserialization are performed as well as socket 
communication through the kernel. This is all required between 
processes, but not within a single process. A well-defined and efficient 
mechanism to invoke resource APIs within the process, with the same 
semantics as incoming REST calls, seems like a generally useful addition 
to neutron. I'm hopeful the core refactoring effort will provide this 
(and am willing to help make sure it does), but we need something we can 
use until that is available.
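
To make the serialization argument concrete, here is a minimal, self-contained
sketch in plain Python. None of this is actual Neutron or python-neutronclient
code; FakePlugin, create_port_via_rest, and create_port_in_process are invented
stand-ins. The REST-style path pays for a JSON round-trip on both request and
response (plus, in reality, a socket hop and auth handling), while an
in-process call with the same semantics passes native dicts straight through:

```python
import json

class FakePlugin:
    """Stand-in for a core plugin's resource logic (hypothetical)."""
    def create_port(self, port):
        # validation, quota checks, and notifications would happen here
        return dict(port, id="port-1")

def create_port_via_rest(plugin, port):
    # REST-style path: the request body is serialized to JSON, sent over a
    # socket, and deserialized server-side; the response makes the same trip.
    wire_request = json.dumps({"port": port})
    body = json.loads(wire_request)["port"]
    result = plugin.create_port(body)
    return json.loads(json.dumps({"port": result}))["port"]

def create_port_in_process(plugin, port):
    # In-process path: same semantics (the plugin still validates, enforces
    # quotas, and notifies), but no marshalling or kernel round-trip.
    return plugin.create_port(dict(port))

plugin = FakePlugin()
port = {"name": "gp-ep-1", "network_id": "net-1"}
assert create_port_via_rest(plugin, port) == create_port_in_process(plugin, port)
```

Whether that overhead matters in practice is exactly the point being debated
in this thread; the sketch only illustrates what each path does, not which
one is right.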


One lesson we learned from the PoC is that the implicit management of 
the GP resources (RDs and BDs) is completely independent from the 
mapping of GP resources to neutron resources. We discussed this at the 
last GP sub-team IRC meeting, and decided to package this functionality 
as a separate driver that is invoked prior to the mapping_driver, and 
can also be used in conjunction with other GP back-end drivers. I think 
this will help improve the structure and readability of the code, and it 
also shows the applicability of the ML2-like nature of the driver API.
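
As a rough illustration of that ordering (all class and method names here are
invented for the sketch; this is not the actual GBP driver API), an ML2-style
manager can call an implicit-resource driver ahead of the mapping driver, so
that defaults such as RDs/BDs exist before any mapping happens:

```python
class ImplicitPolicyDriver:
    """Hypothetical driver: fills in implicit resources before mapping."""
    def create_endpoint_group_precommit(self, context):
        # supply a default routing domain (RD) if none was given explicitly
        context.setdefault("routing_domain", "default-rd")

class MappingDriver:
    """Hypothetical driver: maps GP resources onto Neutron resources."""
    def create_endpoint_group_precommit(self, context):
        context["neutron_network"] = "net-for-%s" % context["name"]

class PolicyDriverManager:
    """ML2-style manager: invokes drivers in a fixed, configured order."""
    def __init__(self, drivers):
        self.drivers = drivers

    def create_endpoint_group_precommit(self, context):
        for driver in self.drivers:
            driver.create_endpoint_group_precommit(context)

manager = PolicyDriverManager([ImplicitPolicyDriver(), MappingDriver()])
ctx = {"name": "web-tier"}
manager.create_endpoint_group_precommit(ctx)
assert ctx["routing_domain"] == "default-rd"
assert ctx["neutron_network"] == "net-for-web-tier"
```

Packaging the implicit-resource logic as its own driver, as described above,
means a GP back-end other than the Neutron mapping could reuse it simply by
being placed after it in the driver list.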


You are 

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Armando M.
On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 10:54 PM, Armando M. wrote:

 On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

 Hi Armando:

 Those are good points. I will let Bob Kukura chime in on the specifics of
 how we intend to do that integration. But if what you see in the
 prototype/PoC was our final design for integration with Neutron core, I
 would be worried about that too. That specific part of the code
 (events/notifications for DHCP) was done in that way just for the
 prototype
 - to allow us to experiment with the part that was new and needed
 experimentation, the APIs and the model.

 That is the exact reason that we did not initially check the code to
 gerrit
 - so that we do not confuse the review process with the prototype
 process.
 But we were requested by other cores to check in even the prototype code
 as
 WIP patches to allow for review of the API parts. That can unfortunately
 create this very misunderstanding. For the review, I would recommend not
 the
 WIP patches, as they contain the prototype parts as well, but just the
 final
 patches that are not marked WIP. If you see such issues in that part of the
 code, please DO raise that as that would be code that we intend to
 upstream.

 I believe Bob did discuss the specifics of this integration issue with
 you
 at the summit, but like I said it is best if he represents that side
 himself.

 Armando and Mandeep,

 Right, we do need a workable solution for the GBP driver to invoke
 neutron
 API operations, and this came up at the summit.

 We started out in the PoC directly calling the plugin, as is currently
 done
 when creating ports for agents. But this is not sufficient because the
 DHCP
 notifications, and I think the nova notifications, are needed for VM
 ports.
 We also really should be generating the other notifications, enforcing
 quotas, etc. for the neutron resources.

 I am at a loss here: if you say that you couldn't fit at the plugin
 level, that is because it is the wrong level!! Sitting above it and
 redoing all the glue code around it to add DHCP notifications etc.
 continues the bad practice within the Neutron codebase where there is
 not a good separation of concerns: for instance everything is cobbled
 together, like the DB and plugin logic. I appreciate that some design
 decisions have been made in the past, but there's no good reason for a
 nice new feature like GP to continue this bad practice; this is why I
 feel strongly about the current approach being taken.

 Armando, I am agreeing with you! The code you saw was a proof-of-concept
 implementation intended as a learning exercise, not something intended to be
 merged as-is to the neutron code base. The approach for invoking resources
 from the driver(s) will be revisited before the driver code is submitted for
 review.


 We could just use python-neutronclient, but I think we'd prefer to avoid
 the
 overhead. The neutron project already depends on python-neutronclient for
 some tests, the debug facility, and the metaplugin, so in retrospect, we
 could have easily used it in the PoC.

 I am not sure I understand what overhead you mean here. Could you
 clarify? Actually looking at the code, I see a mind boggling set of
 interactions going back and forth between the GP plugin, the policy
 driver manager, the mapping driver and the core plugin: they are all
 entangled together. For instance, when creating an endpoint the GP
 plugin ends up calling the mapping driver that in turn ends up calling
 the GP plugin itself! If this is not overhead I don't know what is!
 The way the code has been structured makes it very difficult to read,
 let alone maintain and extend with other policy mappers. The ML2-like
 nature of the approach taken might work well in the context of core
 plugin, mechanisms drivers etc, but I would argue that it poorly
 applies to the context of GP.

 The overhead of using python-neutronclient is that unnecessary
 serialization/deserialization are performed as well as socket communication
 through the kernel. This is all required between processes, but not within a
 single process. A well-defined and efficient mechanism to invoke resource
 APIs within the process, with the same semantics as incoming REST calls,
 seems like a generally useful addition to neutron. I'm hopeful the core
 refactoring effort will provide this (and am willing to help make sure it
 does), but we need something we can use until that is available.


I appreciate that there is a cost involved in relying on distributed
communication, but this must be negligible considering what needs to
happen end-to-end. If the overhead being referred to here is the price to
pay for having a more dependable system (e.g. because things can be
scaled out and/or made reliable independently), then I think this is a
price worth paying.

I do hope that the core refactoring is not aiming at what you're
suggesting,