Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
Are there any parity features you are aware of that aren't receiving
adequate developer/reviewer time? I'm not aware of any parity features that
are in a place where throwing more engineers at them is going to speed
anything up. Maybe Mark McClain (Nova parity leader) can provide some
better insight here, but that is the impression I've gotten as an active
Neutron contributor observing the ongoing parity work.

Given that, pointing to the Nova parity work seems a bit like a red
herring. This new API is being developed orthogonally to the existing API
endpoints and I don't think it was ever the expectation that Nova would
switch to this during the Juno timeframe anyway. The new API will not be
used during normal operation and should not impact the existing API at all.


On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:

 On 08/05/2014 07:28 PM, Joe Gordon wrote:
 
 
 
  On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura kuk...@noironetworks.com wrote:
 
  On 8/4/14, 4:27 PM, Mark McClain wrote:
  All-
 
  tl;dr
 
  * Group Based Policy API is the kind of experimentation we should
  be attempting.
  * Experiments should be able to fail fast.
  * The master branch does not fail fast.
  * StackForge is the proper home to conduct this experiment.
  The disconnect here is that the Neutron group-based policy sub-team
  that has been implementing this feature for Juno does not see this
  work as an experiment to gather data, but rather as an important
  innovative feature to put in the hands of early adopters in Juno and
  into widespread deployment with a stable API as early as Kilo.
 
 
  The group-based policy BP approved for Juno addresses the critical
  need for a more usable, declarative, intent-based interface for
  cloud application developers and deployers, that can co-exist with
  Neutron's current networking-hardware-oriented API and work nicely
  with all existing core plugins. Additionally, we believe that this
  declarative approach is what is needed to properly integrate
  advanced services into Neutron, and will go a long way towards
  resolving the difficulties so far trying to integrate LBaaS, FWaaS,
  and VPNaaS APIs into the current Neutron model.
 
  Like any new service API in Neutron, the initial group policy API
  release will be subject to incompatible changes before being
  declared stable, and hence would be labeled "experimental" in
  Juno. This does not mean that it is an experiment where failing
  fast is an acceptable outcome. The sub-team's goal is to stabilize
  the group policy API as quickly as possible, making any needed
  changes based on early user and operator experience.
 
  The L and M cycles that Mark suggests below to revisit the status
  are a completely different time frame. By the L or M cycle, we
  should be working on a new V3 Neutron API that pulls these APIs
  together into a more cohesive core API. We will not be in a position
  to do this properly without the experience of using the proposed
  group policy extension with the V2 Neutron API in production.
 
 
  If we were failing miserably, or if serious technical issues were
  being identified with the patches, some delay might make sense. But,
  other than Mark's -2 blocking the initial patches from merging, we
  are on track to complete the planned work in Juno.
 
  -Bob
 
 
 
  As a member of nova-core, I find this whole discussion very startling.
  Putting aside the concerns over technical details and the pain of having
  in tree experimental APIs (such as nova v3 API), neutron still isn't the
  de-facto networking solution from nova's perspective and it won't be
  until neutron has feature and performance parity with nova-network. In
  fact due to the slow maturation of neutron, nova has moved nova-network
  from 'frozen' to open for development (with a few caveats).  So unless
  this new API directly solves some of the gaps in [0], I see no reason to
  push this into Juno. Juno hardly seems to be the appropriate time to
  introduce a new not-so-stable API; Juno is the time to address all the
  gaps [0] and hit feature and performance parity with nova-network.
 
 
  [0]
 
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage

 I would agree.

 There has been a pretty regular issue with Neutron team members working
 on new features instead of getting Neutron to feature parity with Nova
 network so we can retire the thing. This whole push for another API at
 this stage makes no sense to me.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Gary Kotton


On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

On 08/05/2014 01:23 PM, Gary Kotton wrote:
 Ok, thanks for the clarification. This means that it will not be done
 automagically as it is today - the tenant will need to create a Neutron
 port and then pass that through.

FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
like to get rid of automatic port creation, but can't do that in the
current stable API.

Can you elaborate on what you mean here? What are the issues with port
creation? 


-- 
Russell Bryant



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Gary Kotton
Correct, this work is orthogonal to the parity work, which I understand is 
coming along very nicely. Do new features in Nova also require parity? For 
example, 
https://blueprints.launchpad.net/nova/+spec/better-support-for-multiple-networks
enables the MTU to be configured directly instead of via a configuration 
variable. At the moment it seems like a moving target.

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 9:12 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

Are there any parity features you are aware of that aren't receiving adequate 
developer/reviewer time? I'm not aware of any parity features that are in a 
place where throwing more engineers at them is going to speed anything up. 
Maybe Mark McClain (Nova parity leader) can provide some better insight here, 
but that is the impression I've gotten as an active Neutron contributor 
observing the ongoing parity work.

Given that, pointing to the Nova parity work seems a bit like a red herring. 
This new API is being developed orthogonally to the existing API endpoints and 
I don't think it was ever the expectation that Nova would switch to this during 
the Juno timeframe anyway. The new API will not be used during normal operation 
and should not impact the existing API at all.


On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:
On 08/05/2014 07:28 PM, Joe Gordon wrote:



 On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura kuk...@noironetworks.com wrote:

 On 8/4/14, 4:27 PM, Mark McClain wrote:
 All-

 tl;dr

 * Group Based Policy API is the kind of experimentation we should
 be attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.
 The disconnect here is that the Neutron group-based policy sub-team
 that has been implementing this feature for Juno does not see this
 work as an experiment to gather data, but rather as an important
 innovative feature to put in the hands of early adopters in Juno and
 into widespread deployment with a stable API as early as Kilo.


 The group-based policy BP approved for Juno addresses the critical
 need for a more usable, declarative, intent-based interface for
 cloud application developers and deployers, that can co-exist with
 Neutron's current networking-hardware-oriented API and work nicely
 with all existing core plugins. Additionally, we believe that this
 declarative approach is what is needed to properly integrate
 advanced services into Neutron, and will go a long way towards
 resolving the difficulties so far trying to integrate LBaaS, FWaaS,
 and VPNaaS APIs into the current Neutron model.

 Like any new service API in Neutron, the initial group policy API
 release will be subject to incompatible changes before being
 declared stable, and hence would be labeled "experimental" in
 Juno. This does not mean that it is an experiment where failing
 fast is an acceptable outcome. The sub-team's goal is to stabilize
 the group policy API as quickly as possible, making any needed
 changes based on early user and operator experience.

 The L and M cycles that Mark suggests below to revisit the status
 are a completely different time frame. By the L or M cycle, we
 should be working on a new V3 Neutron API that pulls these APIs
 together into a more cohesive core API. We will not be in a position
 to do this properly without the experience of using the proposed
 group policy extension with the V2 Neutron API in production.


 If we were failing miserably, or if serious technical issues were
 being identified with the patches, some delay might make sense. But,
 other than Mark's -2 blocking the initial patches from merging, we
 are on track to complete the planned work in Juno.

 -Bob



 As a member of nova-core, I find this whole discussion very startling.
 Putting aside the concerns over technical details and the pain of having
 in tree experimental APIs (such as nova v3 API), neutron still isn't the
 de-facto networking solution from nova's perspective and it won't be
 until neutron has feature and performance parity with nova-network. In
 fact due to the slow maturation of neutron, nova has moved nova-network
 from 'frozen' to open for development (with a few caveats).  So unless
 this new API directly solves some of the gaps in [0], I see no reason to
 push this into Juno. Juno hardly seems to be the appropriate time to
 introduce a new not-so-stable 

Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-06 Thread Jesse Pretorius

 In many cases the users I've spoken to who are looking for a live path out
 of nova-network on to neutron are actually completely OK with some API
 service downtime (metadata service is an API service by their definition).
 A little 'glitch' in the network is also OK for many of them.

 Contrast that with the original proposal in this thread (snapshot VMs in
 old nova-network deployment, store in Swift or something, then launch VM
 from a snapshot in new Neutron deployment) - it is completely unacceptable
 and is not considered a migration path for these users.


There have been several discussions and ideas over time. I've tried to
collect as many as I've had exposure to on the whiteboard here:
https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

Generally speaking we've all agreed before that while zero downtime would
be great, minimal downtime would be just fine. If it means a little API
downtime and some packet loss while an instance is replugged (maybe doing a
suspend while this happens would be safer) then it's still a big win.
Having to snap, delete and redeploy is less desirable but if there's an
automated process for that which can be controlled by the user (not the
admin) then that may be ok too.
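The ordering that makes "minimal downtime" work (suspend, replug, resume) can be sketched as a simple orchestration. Everything below is an illustrative stand-in for the real Nova/Neutron operations, not actual OpenStack API calls:

```python
def migrate_instance(instance, suspend, replug, resume):
    # Suspending first bounds the packet loss to the replug window;
    # resuming in a finally block bounds the downtime even if the
    # replug step fails and has to be retried by the operator.
    suspend(instance)
    try:
        replug(instance)
    finally:
        resume(instance)

# Record the order of operations with stub callbacks.
events = []
migrate_instance(
    "vm-1",
    suspend=lambda vm: events.append(("suspend", vm)),
    replug=lambda vm: events.append(("replug", vm)),
    resume=lambda vm: events.append(("resume", vm)),
)
```

The snapshot/redeploy alternative replaces the replug step with a full delete-and-recreate, which is why it reads as a much heavier operation from the user's point of view.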

Best regards,

Jesse


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:



 On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 01:23 PM, Gary Kotton wrote:
  Ok, thanks for the clarification. This means that it will not be done
  automagically as it is today ­ the tenant will need to create a Neutron
  port and then pass that through.
 
 FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
 like to get rid of automatic port creation, but can't do that in the
 current stable API.

 Can you elaborate on what you mean here? What are the issues with port
 creation?


Having nova-compute create ports for neutron is problematic if timeouts
occur between nova and neutron, as you have to garbage-collect neutron
ports in nova to clean up (which was the cause of several bugs in the
cache handling that allowed ports to leak into the info_cache in nova).
Pushing this out to the tenant means less orchestration for nova to do.
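The failure mode described here can be sketched in a few lines. The functions below are illustrative stand-ins, not the real Nova/Neutron APIs; the point is that once nova creates the port on the tenant's behalf, nova also owns the cleanup path whenever the boot fails:

```python
import uuid

# Illustrative stand-ins only -- not real Nova/Neutron calls.
ports = {}  # pretend Neutron port table

def create_port(network_id):
    port_id = str(uuid.uuid4())
    ports[port_id] = network_id
    return port_id

def delete_port(port_id):
    ports.pop(port_id, None)

def boot_server(port_id, fail=False):
    if fail:
        raise TimeoutError("nova -> neutron call timed out")
    return "server-for-" + port_id

def boot_with_cleanup(network_id, fail=False):
    """If nova creates the port itself, it must garbage-collect the
    port whenever the boot fails -- otherwise the port leaks, which is
    the bug class mentioned above."""
    port_id = create_port(network_id)
    try:
        return boot_server(port_id, fail=fail)
    except TimeoutError:
        delete_port(port_id)  # without this line, the port leaks
        raise
```

With tenant-created ports, the try/except disappears from nova entirely: the tenant passes an existing port id to the boot call and owns the port's lifecycle.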



 --
 Russell Bryant
 


Re: [openstack-dev] [Neutron] l2pop problems

2014-08-06 Thread Mathieu Rohon
Hi Zang,

On Tue, Aug 5, 2014 at 1:18 PM, Zang MingJie zealot0...@gmail.com wrote:
 Hi Mathieu:

 We have deployed the new l2pop described in the previous mail in our
 environment, and works pretty well. It solved the timing problem, and
 also reduces lots of l2pop rpc calls. I'm going to file a blueprint to
 propose the changes.

great, I would be pleased to review this BP.

 On Fri, Jul 18, 2014 at 10:26 PM, Mathieu Rohon mathieu.ro...@gmail.com 
 wrote:
 Hi Zang,

 On Wed, Jul 16, 2014 at 4:43 PM, Zang MingJie zealot0...@gmail.com wrote:
 Hi, all:

 While resolving ovs restart rebuild br-tun flows[1], we have found
 several l2pop problems:

 1. L2pop depends on agent_boot_time to decide whether to send all
 port information or not, but agent_boot_time is unreliable; for
 example, if the service receives a port-up message before the agent
 status report, the agent will never receive any ports on other agents.

 you're right, there is a race condition here: if the agent has more
 than one port on the same network and sends update_device_up() on
 every port before it sends its report_state(), it won't receive fdb
 entries for these networks. Is that the race you are mentioning above?
 Since the report_state is done in a dedicated greenthread, and is
 launched before the greenthread that manages ovsdb_monitor, the state
 of the agent should be updated before the agent gets aware of its
 ports and sends get_device_details()/update_device_up(), am I wrong?
 So, after a restart of an agent, the agent_uptime() should be less
 than the agent_boot_time configured by default in the conf when the
 agent sent its first update_device_up(), the l2pop MD will be aware of
 this restart and trigger the cast of all fdb entries to the restarted
 agent.

 But I agree that it might rely on eventlet thread management and on
 the agent_boot_time value, which can be misconfigured by the provider.
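For readers following along, the heuristic under debate can be sketched roughly like this. The parameter names follow the discussion, but the code is a simplified illustration, not the actual l2pop mechanism driver:

```python
import time

def should_cast_all_fdb(agent_started_at, agent_boot_time=180, now=None):
    # If the agent's reported uptime is below agent_boot_time, l2pop
    # treats it as freshly (re)started and casts the full fdb table to
    # it.  The race discussed above: if update_device_up() is processed
    # before the first report_state(), the uptime the server computes is
    # wrong and the full table is never sent.  A misconfigured
    # agent_boot_time shifts this threshold the same way.
    now = time.time() if now is None else now
    return (now - agent_started_at) < agent_boot_time
```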

 2. If the openvswitch restarted, all flows will be lost, including all
 l2pop flows, the agent is unable to fetch or recreate the l2pop flows.

 To resolve the problems, I'm suggesting some changes:

 1. Because the agent_boot_time is unreliable, the service can't decide
 whether to send the flooding entry or not. But the agent can build up
 the flooding entries from unicast entries; this has already been
 implemented[2]

 2. Create a rpc from agent to service which fetch all fdb entries, the
 agent calls the rpc in `provision_local_vlan`, before setting up any
 port.[3]

 After these changes, the l2pop service part becomes simpler and more
 robust, with mainly two functions: first, return all fdb entries at
 once when requested; second, broadcast a single fdb entry when a port
 goes up or down.

 That's an implementation that we have been thinking about during the
 l2pop implementation.
 Our purpose was to minimize RPC calls. But if this implementation is
 buggy due to uncontrolled thread order and/or bad usage of the
 agent_boot_time parameter, it's worth investigating your proposal [3].
 However, I don't get why [3] depends on [2]. couldn't we have a
 network_sync() sent by the agent during provision_local_vlan() which
 will reconfigure ovs when the agent and/or the ovs restart?

 actually, [3] doesn't strictly depend on [2]. We have encountered
 l2pop problems several times where the unicast entries were correct
 but the broadcast entries failed, so we decided to completely ignore
 the broadcast entries in rpc, deal only with unicast entries, and use
 the unicast entries to build the broadcast rules.

Understood, but it could be interesting to understand why the MD sends
wrong broadcast entries. Do you have any clue?
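The derivation described above, building the flood entries purely from the unicast entries, can be sketched as follows (a simplified illustration, not the actual patch in [2]):

```python
def build_flooding_entries(unicast_fdb):
    # Every remote VTEP that hosts at least one known MAC on a network
    # must receive that network's broadcast/unknown-unicast traffic, so
    # the flood set is simply the set of tunnel endpoints appearing in
    # the unicast entries for that network.
    return {net: sorted(agent_ips) for net, agent_ips in unicast_fdb.items()}

# unicast entries: network -> tunnel endpoint -> [(mac, ip), ...]
fdb = {
    "net-a": {
        "10.0.0.2": [("fa:16:3e:aa:aa:aa", "192.168.0.5")],
        "10.0.0.3": [("fa:16:3e:bb:bb:bb", "192.168.0.6")],
    },
}
flood = build_flooding_entries(fdb)
```

Because the flood set is derived, a wrong broadcast entry from the server can never disagree with the unicast state the agent already holds.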




 [1] https://bugs.launchpad.net/neutron/+bug/1332450
 [2] https://review.openstack.org/#/c/101581/
 [3] https://review.openstack.org/#/c/107409/



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Masakazu Shinohara
Hi,

I'm Masakazu Shinohara of CyberAgent, Inc. in Japan.

I am a representative of our new cloud network project.
We have many services such as online games, blogs, and all kinds of web
services.

Now we have been testing OpenStack and Cisco ACI.
It is really important that they work correctly together.

We have been observing, learning and following very closely the work going
on for Group Based Policy. Our production deployment relies on using it in
Juno. We strongly want to see it complete as proposed in Neutron.

Best regards

--
Masakazu Shinohara
CyberAgent,Inc
Ameba division Ameba Infra. Unit
Architect group
Zip code 150-0045
Shibuya First Place Bldg, 8-16 Shinsen-cho
Shibuya-ku Tokyo
Mobile +81 80-6863-2356
Extension 62478
shinohara_masak...@cyberagent.co.jp
---



From: Mark McClain mmccl...@yahoo-inc.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, August 4, 2014 at 4:27 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] Group Based Policy and the way forward

 All-

 tl;dr

  * Group Based Policy API is the kind of experimentation we should be
  attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.


 Why this email?
 ---
 Our community has been discussing and working on Group Based Policy
 (GBP) for many months.  I think the discussion has reached a point where
 we need to openly discuss a few issues before moving forward.  I
 recognize that this discussion could create frustration for those who
 have invested significant time and energy, but the reality is we need to
 ensure we are making decisions that benefit all members of our community
 (users, operators, developers and vendors).

 Experimentation
 
 I like that as a community we are exploring alternate APIs.  The process
 of exploring via real user experimentation can produce valuable results.
   A good experiment should be designed to fail fast to enable further
 trials via rapid iteration.

 Merging large changes into the master branch is the exact opposite of
 failing fast.

 The master branch deliberately favors small iterative changes over time.
   Releasing a new version of the proposed API every six months limits
 our ability to learn and make adjustments.

 In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental
 APIs.  The results have been very mixed as operators either shy away
 from testing/offering the API or embrace the API with the expectation
 that the community will provide full API support and migration.  In both
 cases, the experiment fails because we either could not get the data we
 need or are unable to make significant changes without accepting a
 non-trivial amount of technical debt via migrations or draft API support.

 Next Steps
 --
 Previously, the GBP subteam used a GitHub account to host the
 development, but the workflows and tooling do not align with OpenStack's
 development model. I’d like to see us create a group based policy
 project in StackForge.  StackForge will host the code and enable us to
 follow the same open review and QA processes we use in the main project
 while we are developing and testing the API. The infrastructure there
 will benefit us as we will have a separate review velocity and can
 frequently publish libraries to PyPI.  From a technical perspective, the
 13 new entities in GBP [1] do not require any changes to internal
 Neutron data structures.  The docs[2] also suggest that an external
 plugin or service would work, which would make it easier to speed up
 development.

 End State
 -
 APIs require time to fully bake and right now it is too early to know
 the final outcome.  Using StackForge will allow the team to retain all
 of its options including: merging the code into Neutron, adopting the
 repository as a sub-project of the Network Program, leaving the project
 in StackForge, or learning that users want something completely
 different.  I would expect that we'll revisit the status of the repo
 during the L or M cycles since the Kilo development cycle does not leave
 enough time to experiment and iterate.


 mark

 [1]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/
 specs/juno/group-based-policy-abstraction.rst#n370
 [2]
 https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWK
 WY2ckU7OYAVNpo/edit#slide=id.g12c5a79d7_4078



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Sumit Naiksatam
On Tue, Aug 5, 2014 at 11:46 PM, Gary Kotton gkot...@vmware.com wrote:
 Correct, this work is orthogonal to the parity work, which I understand is
 coming along very nicely.

Agree Gary and Kevin. I think the topic of Nova integration has
created confusion in people’s minds (at least among the non-Neutron folks)
with regards to what is being proposed in the Group-based Policy (GBP)
feature. So to clarify - GBP is an optional extension, like many other
existing Neutron extensions. It is not meant to replace the Neutron
core API and/or the current Nova-Neutron interaction in Juno.

 Do new features in Nova also require parity? For example,
 https://blueprints.launchpad.net/nova/+spec/better-support-for-multiple-networks
 enables the MTU to be configured directly instead of via a configuration
 variable.
 At the moment it seems like a moving target.

 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 9:12 AM

 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward

 Are there any parity features you are aware of that aren't receiving
 adequate developer/reviewer time? I'm not aware of any parity features that
 are in a place where throwing more engineers at them is going to speed
 anything up. Maybe Mark McClain (Nova parity leader) can provide some better
 insight here, but that is the impression I've gotten as an active Neutron
 contributor observing the ongoing parity work.

 Given that, pointing to the Nova parity work seems a bit like a red herring.
 This new API is being developed orthogonally to the existing API endpoints
 and I don't think it was ever the expectation that Nova would switch to this
 during the Juno timeframe anyway. The new API will not be used during normal
 operation and should not impact the existing API at all.


 On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:

 On 08/05/2014 07:28 PM, Joe Gordon wrote:
 
 
 
  On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura kuk...@noironetworks.com wrote:
 
  On 8/4/14, 4:27 PM, Mark McClain wrote:
  All-
 
  tl;dr
 
  * Group Based Policy API is the kind of experimentation we should
  be attempting.
  * Experiments should be able to fail fast.
  * The master branch does not fail fast.
  * StackForge is the proper home to conduct this experiment.
  The disconnect here is that the Neutron group-based policy sub-team
  that has been implementing this feature for Juno does not see this
  work as an experiment to gather data, but rather as an important
  innovative feature to put in the hands of early adopters in Juno and
  into widespread deployment with a stable API as early as Kilo.
 
 
  The group-based policy BP approved for Juno addresses the critical
  need for a more usable, declarative, intent-based interface for
  cloud application developers and deployers, that can co-exist with
  Neutron's current networking-hardware-oriented API and work nicely
  with all existing core plugins. Additionally, we believe that this
  declarative approach is what is needed to properly integrate
  advanced services into Neutron, and will go a long way towards
  resolving the difficulties so far trying to integrate LBaaS, FWaaS,
  and VPNaaS APIs into the current Neutron model.
 
  Like any new service API in Neutron, the initial group policy API
  release will be subject to incompatible changes before being
  declared stable, and hence would be labeled "experimental" in
  Juno. This does not mean that it is an experiment where failing
  fast is an acceptable outcome. The sub-team's goal is to stabilize
  the group policy API as quickly as possible, making any needed
  changes based on early user and operator experience.
 
  The L and M cycles that Mark suggests below to revisit the status
  are a completely different time frame. By the L or M cycle, we
  should be working on a new V3 Neutron API that pulls these APIs
  together into a more cohesive core API. We will not be in a position
  to do this properly without the experience of using the proposed
  group policy extension with the V2 Neutron API in production.
 
 
  If we were failing miserably, or if serious technical issues were
  being identified with the patches, some delay might make sense. But,
  other than Mark's -2 blocking the initial patches from merging, we
  are on track to complete the planned work in Juno.
 
  -Bob
 
 
 
  As a member of nova-core, I find this whole discussion very startling.
  Putting aside the concerns over technical details and the pain of having
  in tree experimental APIs (such as nova v3 API), neutron still isn't the
  de-facto networking solution from nova's perspective and it won't be
  until neutron has feature and 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Gary Kotton


From: Aaron Rosen aaronoro...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 10:09 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:


On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

On 08/05/2014 01:23 PM, Gary Kotton wrote:
 Ok, thanks for the clarification. This means that it will not be done
 automagically as it is today - the tenant will need to create a Neutron
 port and then pass that through.

FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
like to get rid of automatic port creation, but can't do that in the
current stable API.

Can you elaborate on what you mean here? What are the issues with port
creation?


Having nova-compute create ports for neutron is problematic if timeouts occur 
between nova and neutron, as you have to garbage-collect neutron ports in nova 
to clean up (which was the cause of several bugs in the cache handling that 
allowed ports to leak into the info_cache in nova).  Pushing this out to the 
tenant means less orchestration for nova to do.

[gary] my take on this is that we should allocate this via the n-api and not 
via nova-compute (which is far too late in the process). But that is another 
discussion :)



--
Russell Bryant



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Martin Geisler
Ben Nemec openst...@nemebean.com writes:

 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
 When you're developing some big change you'll end up trying
 dozens of different approaches and making thousands of mistakes. For
 reviewers this is just unnecessary noise (commit title: "Scratch my
 last CR, that was bullshit") while for you it's a precious history
 that can provide a basis for future research or bug-hunting.

 So basically keeping a record of how not to do it?  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.
  They're basically unnecessary conflicts waiting to happen.

Yeah, I would never keep broken or unfinished commits around like this.
In my opinion (as a core Mercurial developer), the best workflow is to
work on a feature and make small and large commits as you go along. When
the feature works, you begin squashing/splitting the commits to make
them into logical pieces, if they aren't already in good shape. You then
submit the branch for review and iterate on it until it is accepted.
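The squash step described above can be done with plain Git. Here is a minimal non-interactive sketch in a throwaway repository; `git rebase -i` achieves the same result interactively, with full control over reordering and splitting:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "base"
# three messy work-in-progress commits
for i in 1 2 3; do
  echo "$i" >> feature.txt
  git add feature.txt
  git -c user.email=dev@example.com -c user.name=dev commit -q -m "wip $i"
done
# collapse the last three commits into one reviewable commit; the
# combined changes stay staged, only the history is rewritten
git reset -q --soft HEAD~3
git -c user.email=dev@example.com -c user.name=dev commit -q -m "feature: add feature.txt"
git rev-list --count HEAD   # prints 2: base plus the squashed commit
```

Because the squash rewrites only local, unpublished history, nothing here conflicts with the review workflow; the reviewer only ever sees the final logical commit.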

As a reviewer, it cannot be stressed enough how much small, atomic,
commits help. Squashing things together into large commits make reviews
very tricky and removes the possibility of me accepting a later commit
while still discussing or rejecting earlier commits (cherry-picking).

 FWIW, I have had long-lived patch series, and I don't really see what
 is so difficult about running git rebase master. Other than conflicts,
 of course, which are going to be an issue with any long-running change
 no matter how it's submitted. There isn't a ton of git magic involved.

I agree. The conflicts you talk about are intrinsic to the parallel
development. Doing a rebase is equivalent to doing a series of merges,
so if rebase gives you conflicts, you can be near certain that a plain
merge would give you conflicts too. The same applies the other way around.

 So as you may have guessed by now, I'm opposed to adding this to
 git-review. I think it's going to encourage bad committer behavior
 (monolithic commits) and doesn't address a use case I find compelling
 enough to offset that concern.

I don't understand why this would even be in the domain of git-review. A
submitter can do the puff magic stuff himself using basic Git commands
before he submits the collapsed commit.

-- 
Martin Geisler

http://google.com/+MartinGeisler




Re: [openstack-dev] [Neutron][oslo] Problem installing oslo.config-1.4.0.0a3 from .whl files

2014-08-06 Thread Matthieu Huin
Thank you so much! I had the same problem yesterday and was out of
ideas to solve this.

Matthieu Huin 

m...@enovance.com

- Original Message -
 From: Alexei Kornienko alexei.kornie...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, August 5, 2014 10:08:45 PM
 Subject: Re: [openstack-dev] [Neutron][oslo] Problem installing 
 oslo.config-1.4.0.0a3 from .whl files
 
 Hello Carl,
 
 You should try to update your virtualenv (pip install -U virtualenv).
 It fixed this problem for me.
 
 Regards,
 Alexei
 
 On 05/08/14 23:00, Carl Baldwin wrote:
  Hi,
 
  I noticed this yesterday afternoon.  I tried to run pep8 and unit
  tests on a patch I was going to submit.  It failed with an error that
  no package satisfying oslo.config could be found [1].  I went to pypi
  and saw that the version appears to be available [2] but still
  couldn't install it.
 
  I tried to activate the .tox/pep8 virtual environment and install the
  version explicitly.  Interestingly, that worked in one gerrit repo for
  Neutron [3] but not the other [4].  These two virtual envs are on the
  same machine.  I ran git clean -fdx to start over and now neither
  virtualenv can install it.
 
  Anyone have any idea what is going on?  It seems to be related to the
  fact that oslo.config is now uploaded as .whl files, whatever those
  are.  Why is it that my system cannot handle these?  I noticed that
  oslo.config is now available only as .whl in the 1.4.0.0aN versions
  but used to be available as .tar.gz files.
 
  Carl
 
  [1] http://paste.openstack.org/show/90651/
  [2] https://pypi.python.org/pypi/oslo.config
  [3] http://paste.openstack.org/show/90674/
  [4] http://paste.openstack.org/show/90675/
 
 
 
 



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
On Wed, Aug 6, 2014 at 12:59 AM, Gary Kotton gkot...@vmware.com wrote:



   From: Aaron Rosen aaronoro...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:09 AM

 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward


 On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:



 On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 01:23 PM, Gary Kotton wrote:
  Ok, thanks for the clarification. This means that it will not be done
  automagically as it is today - the tenant will need to create a Neutron
  port and then pass that through.
 
 FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
 like to get rid of automatic port creation, but can't do that in the
 current stable API.

  Can you elaborate on what you mean here? What are the issues with port
 creation?


 Having nova-compute create ports for neutron is problematic if timeouts
 occur between nova and neutron, as you have to garbage-collect neutron ports
 in nova to clean up (which was the cause of several bugs in the cache handling
 that allowed ports to leak into the info_cache in nova).  Pushing this out to
 the tenant is less orchestration nova has to do.

  [gary] my take on this is that we should allocate this via the n-api and
 not via the nova compute (which is far too late in the process). But that is
 another discussion :)


I agree, I had actually proposed this here:
https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
:), though there are some issues we need to solve in neutron first --
allowing the mac_address on the port to be updated in neutron. This is
required for bare metal support, as when the port is created we don't know
which physical mac will need to be mapped to the port.


   
 --
 Russell Bryant
 









Re: [openstack-dev] [Fuel] Blueprints process

2014-08-06 Thread Mike Scherbakov
As we approach the next milestone, let's get back to this excellent (in my
opinion) proposal from Dmitry. The current situation is the following: there
are 121 blueprints targeted for 6.0. Most of them miss a header with
QA/reviewers/developers information; basically, those are incomplete
blueprints to begin with. Many of them have such a cryptic description that it
is impossible to understand the main idea.

My suggestion for immediate actions, strictly following original email from
Dmitry, is the following:

   1. Move all 6.0 blueprints to next milestone and clear up assignee (so
   others know that it's not taken by anyone)
   2. Start adding blueprints to 6.0 only if they satisfy criteria of
   clarity and filled information:
Each blueprint in a milestone should contain information about feature
   lead, design reviewers, developers, qa, acceptance criteria.

Once we know who is going to work on the blueprint and who commits to it, then
the blueprint can be added. We know that behind every engineer is a
company, so the feature lead should negotiate first with the management of the
company to get allocated to the particular feature. The same applies to a team
of developers working on a feature.

Fuelers, any objections?


On Fri, Jul 4, 2014 at 8:26 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Do not cheat. If we need to add functionality after feature freeze, then
 let's add functionality after feature freeze. No reason for additional
 obfuscation. It will make our workflow for blueprints harder, but it will
 help us. We will see what we are really going to do and plan our work
 better.

 Also we can create a beta iso with all features in 'beta available'
 status. It will help to make sure that small improvements do not break
 anything and can be merged without any fear.


 On Tue, Jul 1, 2014 at 3:00 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I have some objections. We are trying to follow a strict development
 workflow with feature freeze stage. In this case we will have to miss small
 enhancements that can emerge after FF date and can bring essential benefits
 along with small risks of breaking anything (e.g. changing some config
 options for galera or other stuff). We maintained such small changes as
 bugs because of this FF rule. As our project is growing, these last minute
 calls for small changes are going to be more and more probable. My
 suggestion is that we somehow modify our workflow to allow these small
 features through the FF stage, or we risk having an endless queue
 of enhancements that users will never see in the release.


 On Thu, Jun 26, 2014 at 8:07 PM, Matthew Mosesohn mmoses...@mirantis.com
  wrote:

 +1

 Keeping features separate as blueprints (even tiny ones with no spec)
 really will let us focus on the volume of real bugs.

 On Tue, Jun 24, 2014 at 5:14 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  Guys,
 
  We have a beautiful contribution guide:
  https://wiki.openstack.org/wiki/Fuel/How_to_contribute
 
  However, I would like to address several issues in our blueprints/bugs
  processes. Let's discuss and vote on my proposals.
 
  1) First of all, the bug counter is an excellent metric for quality. So
  let's use it only for bugs and track all feature requirement as
 blueprints.
  Here is what it means:
 
  1a) If a bug report does not describe a user’s pain, a blueprint
 should be
  created and bug should be closed as invalid
  1b) If a bug report does relate to a user’s pain, a blueprint should be
  created and linked to the bug
  1c) We have an excellent reporting tool, but it needs more metrics:
 count of
  critical/high bugs, count of bugs assigned to each team. It will
 require
  support of team members lists, but it seems that we really need it.
 
 
  2) We have a huge amount of blueprints and it is hard to work with this
  list. A good blueprint needs a fixed scope, spec review and acceptance
  criteria. It is obvious for me that we can not work on blueprints that
 do
  not meet these requirements. Therefore:
 
  2a) Let's copy the nova future series and create a fake milestone
 'next' as
  nova does. All unclear blueprints should be moved there. We will pick
  blueprints from there, add spec and other info and target them to a
  milestone when we are really ready to work on a particular blueprint.
 Our
  release page will look much more close to reality and much more
 readable in
  this case.
  2b) Each blueprint in a milestone should contain information about
 feature
  lead, design reviewers, developers, qa, acceptance criteria. Spec is
  optional for trivial blueprints. If a spec is created, the designated
  reviewer(s) should put (+1) right into the blueprint description.
  2c) Every blueprint spec should be updated before feature freeze with
 the
  latest actual information. Actually, I'm not sure if we care about spec
  after feature development, but it seems to be logical to have correct
  information in specs.
  2d) We should avoid creating 

[openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Stefano Santini
Hi,

In my company (Vodafone), we (DC network architecture) are following very
closely the work happening on Group Based Policy, since we see great value
in the new paradigm of driving network configurations with an advanced logic.

We're working on a new production project for an internal private cloud
deployment targeting the Juno release, where we plan to introduce
capabilities based on Group Policy, and we don't want to see it
delayed. We strongly request/vote to see this completed as proposed, without
such changes, so that we can move forward with the evolution of our network
capabilities.

Thanks and kind regards
Stefano


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
I'd like to stress this to everyone: I DO NOT propose squashing together
commits that should belong to separate change requests. I DO NOT propose to
upload all your changes at once. I DO propose letting developers keep a
local history of all iterations they have with a change request -- a
history that matters to absolutely no one but the developer.

On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net wrote:

 Ben Nemec openst...@nemebean.com writes:

  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up trying
  dozens of different approaches and making thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title "Scratch my
  last CR, that was bullshit") while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.

 Yeah, I would never keep broken or unfinished commits around like this.
 In my opinion (as a core Mercurial developer), the best workflow is to
 work on a feature and make small and large commits as you go along. When
 the feature works, you begin squashing/splitting the commits to make
 them into logical pieces, if they aren't already in good shape. You then
 submit the branch for review and iterate on it until it is accepted.


Absolutely true. And it's mostly the same workflow that happens in
OpenStack: you do your cool feature, you carve meaningful small
self-contained pieces out of it, you submit a series of change requests.
And nothing in my proposal conflicts with it. It just provides a way to
make the developer's side of this simpler (which is the intent of git-review,
isn't it?) while not changing the external artifacts of one's work: the same
change requests, with the same granularity.


 As a reviewer, it cannot be stressed enough how much small, atomic
 commits help. Squashing things together into large commits makes reviews
 very tricky and removes the possibility of me accepting a later commit
 while still discussing or rejecting earlier commits (cherry-picking).


That's true, too. But please don't think I'm proposing to squash everything
together and push 10k-loc patches. I hate that, too. I'm proposing to let
developers use their tools (Git) in a simpler way.
And the simpler way (for some of us) would be to have one local branch for
every change request, not one branch for the whole series. Switching
between branches is very well supported by Git and doesn't require extra
thinking. Jumping around in detached HEAD state and editing commits during
rebase requires remembering all those small details.

 FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.

 I agree. The conflicts you talk about are intrinsic to the parallel
 development. Doing a rebase is equivalent to doing a series of merges,
 so if rebase gives you conflicts, you can be near certain that a plain
 merge would give you conflicts too. The same applies the other way around.


You disregard other issues that can happen with patch series. You might
need something more than rebase. You might need to fix something. You might
need to focus on one commit in the middle and do a huge bunch of changes
in it alone. And I propose to just allow developers to keep track of what
they have been doing instead of forcing them to remember all of this.

 So as you may have guessed by now, I'm opposed to adding this to
  git-review. I think it's going to encourage bad committer behavior
  (monolithic commits) and doesn't address a use case I find compelling
  enough to offset that concern.

 I don't understand why this would even be in the domain of git-review. A
 submitter can do the puff magic stuff himself using basic Git commands
 before he submits the collapsed commit.


Isn't it the domain of git-review - puff magic? You can upload your
changes with 'git push HEAD:refs/for/master', you can do all your rebasing
by yourself, but somehow we ended up with this tool that simplifies common
tasks related to uploading changes to Gerrit.
And (at least for some) such a change would simplify their day-to-day
workflow with regards to uploading changes to Gerrit.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Fuel] Blueprints process

2014-08-06 Thread Sergii Golovatiuk
Hi,

I really like what Mike proposed. It will help us to keep the milestone clean
and accurate.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser


On Wed, Aug 6, 2014 at 11:26 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 As we approach the next milestone, let's get back to this excellent (in my
 opinion) proposal from Dmitry. The current situation is the following: there
 are 121 blueprints targeted for 6.0. Most of them miss a header with
 QA/reviewers/developers information; basically, those are incomplete
 blueprints to begin with. Many of them have such a cryptic description that it
 is impossible to understand the main idea.

 My suggestion for immediate actions, strictly following original email
 from Dmitry, is the following:

1. Move all 6.0 blueprints to next milestone and clear up assignee
(so others know that it's not taken by anyone)
2. Start adding blueprints to 6.0 only if they satisfy criteria of
clarity and filled information:

 Each blueprint in a milestone should contain information about
feature lead, design reviewers, developers, qa, acceptance criteria.

 Once we know who is going to work on the blueprint and who commits to it,
 then the blueprint can be added. We know that behind every engineer is a
 company, so the feature lead should negotiate first with the management of the
 company to get allocated to the particular feature. The same applies to a team
 of developers working on a feature.

 Fuelers, any objections?


 On Fri, Jul 4, 2014 at 8:26 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Do not cheat. If we need to add functionality after feature freeze, then
 let's add functionality after feature freeze. No reason for additional
 obfuscation. It will make our workflow for blueprints harder, but it will
 help us. We will see what we are really going to do and plan our work
 better.

 Also we can create a beta iso with all features in 'beta available'
 status. It will help to make sure that small improvements do not break
 anything and can be merged without any fear.


 On Tue, Jul 1, 2014 at 3:00 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I have some objections. We are trying to follow a strict development
 workflow with feature freeze stage. In this case we will have to miss small
 enhancements that can emerge after FF date and can bring essential benefits
 along with small risks of breaking anything (e.g. changing some config
 options for galera or other stuff). We maintained such small changes as
 bugs because of this FF rule. As our project is growing, these last minute
 calls for small changes are going to be more and more probable. My
 suggestion is that we somehow modify our workflow to allow these small
 features through the FF stage, or we risk having an endless queue
 of enhancements that users will never see in the release.


 On Thu, Jun 26, 2014 at 8:07 PM, Matthew Mosesohn 
 mmoses...@mirantis.com wrote:

 +1

 Keeping features separate as blueprints (even tiny ones with no spec)
 really will let us focus on the volume of real bugs.

 On Tue, Jun 24, 2014 at 5:14 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  Guys,
 
  We have a beautiful contribution guide:
  https://wiki.openstack.org/wiki/Fuel/How_to_contribute
 
  However, I would like to address several issues in our blueprints/bugs
  processes. Let's discuss and vote on my proposals.
 
  1) First of all, the bug counter is an excellent metric for quality.
 So
  let's use it only for bugs and track all feature requirement as
 blueprints.
  Here is what it means:
 
  1a) If a bug report does not describe a user’s pain, a blueprint
 should be
  created and bug should be closed as invalid
  1b) If a bug report does relate to a user’s pain, a blueprint should
 be
  created and linked to the bug
  1c) We have an excellent reporting tool, but it needs more metrics:
 count of
  critical/high bugs, count of bugs assigned to each team. It will
 require
  support of team members lists, but it seems that we really need it.
 
 
  2) We have a huge amount of blueprints and it is hard to work with
 this
  list. A good blueprint needs a fixed scope, spec review and acceptance
  criteria. It is obvious for me that we can not work on blueprints
 that do
  not meet these requirements. Therefore:
 
  2a) Let's copy the nova future series and create a fake milestone
 'next' as
  nova does. All unclear blueprints should be moved there. We will pick
  blueprints from there, add spec and other info and target them to a
  milestone when we are really ready to work on a particular blueprint.
 Our
  release page will look much more close to reality and much more
 readable in
  this case.
  2b) Each blueprint in a milestone should contain information about
 feature
  lead, design reviewers, developers, qa, acceptance criteria. Spec is
  optional for trivial blueprints. If a spec is created, the designated
  reviewer(s) should put (+1) right into the blueprint description.
  2c) Every 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Gary Kotton


From: Aaron Rosen aaronoro...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 11:11 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward




On Wed, Aug 6, 2014 at 12:59 AM, Gary Kotton gkot...@vmware.com wrote:


From: Aaron Rosen aaronoro...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 10:09 AM

To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:


On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

On 08/05/2014 01:23 PM, Gary Kotton wrote:
 Ok, thanks for the clarification. This means that it will not be done
 automagically as it is today - the tenant will need to create a Neutron
 port and then pass that through.

FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
like to get rid of automatic port creation, but can't do that in the
current stable API.

Can you elaborate on what you mean here? What are the issues with port
creation?


Having nova-compute create ports for neutron is problematic if timeouts occur
between nova and neutron, as you have to garbage-collect neutron ports in nova
to clean up (which was the cause of several bugs in the cache handling that
allowed ports to leak into the info_cache in nova).  Pushing this out to the
tenant is less orchestration nova has to do.

[gary] my take on this is that we should allocate this via the n-api and not
via the nova compute (which is far too late in the process). But that is another
discussion :)

I agree, I had actually proposed this here:
https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
:), though there are some issues we need to solve in neutron first --
allowing the mac_address on the port to be updated in neutron. This is required
for bare metal support, as when the port is created we don't know which physical
mac will need to be mapped to the port.

[gary] agreed


--
Russell Bryant









Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Sylvain Bauza


On 06/08/2014 10:35, Yuriy Taraday wrote:
I'd like to stress this to everyone: I DO NOT propose squashing 
together commits that should belong to separate change requests. I DO 
NOT propose to upload all your changes at once. I DO propose letting 
developers keep a local history of all iterations they have with a 
change request -- a history that matters to absolutely no one but the 
developer.




Well, I can understand that for ease, we could propose it as an option 
in git-review, but I'm just thinking that if you consider your local Git 
repo as your single source of truth (and not Gerrit), then you just have 
to make another branch and squash your intermediate commits for Gerrit 
upload only.


If you need to modify it (because of another iteration), you just need to 
amend the commit message on each top-squasher commit by adding the 
Change-Id on your local branch, and redo the process (make a branch, 
squash, upload) each time you need it.



Gerrit is cool: it doesn't care about SHA-1s but only about the Change-Id, so 
cherry-picking and rebasing still work (hurrah)


tl;dr: make as many intermediate commits as you want, but generate a 
Change-Id only on the commit you consider the patch, then squash the 
intermediate commits on a separate branch copy for Gerrit use only 
(one-way).
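To illustrate, here is that one-way flow on a toy repo (branch names are illustrative; 'git merge --squash' collapses the intermediate commits onto the review-only branch):

```shell
# Throwaway demo of the "separate squashed branch for Gerrit" flow.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name "Demo Dev"
echo base > f.txt; git add f.txt; git commit -qm "base"
trunk=$(git symbolic-ref --short HEAD)     # master (or main)

# Local branch keeps every intermediate commit
git checkout -q -b wip/feature
echo step1 >> f.txt; git commit -qam "try approach A"
echo step2 >> f.txt; git commit -qam "scrap A, do B instead"

# Review-only copy: one squashed commit on top of trunk, carrying the Change-Id
git checkout -q -b review/feature "$trunk"
git merge -q --squash wip/feature
git commit -qm "Add feature X

Change-Id: I0000000000000000000000000000000000000000"
git log --oneline      # trunk commit + one squashed commit
# then: git review     (wip/feature keeps the full local history)
```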


Again, I can understand that the above is hacky, so I'm not against your 
change, just emphasizing that it's not strictly necessary (but anyway, 
everything can be done without git-review, even the magical -m option :-) )


-Sylvain



Re: [openstack-dev] backport fixes to old branches

2014-08-06 Thread Osanai, Hisashi

On Tuesday, August 05, 2014 8:57 PM, Ihar Hrachyshka wrote:
 
 Thanks. To facilitate quicker backport, you may also propose the patch
 for review yourself. It may take time before stable maintainers or
 other interested parties get to the bug and do cherry-pick.

Thank you for your advice.
I would like to confirm the procedure for backporting. The procedure is just
using the same Change-Id (as in your last paragraph) in addition to the normal
workflow, right?
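To make sure I understand, the mechanics would look like this (toy repo; the branch name, file, and Change-Id are made up for illustration):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name "Demo Dev"
echo base > f.txt; git add f.txt; git commit -qm "base"
git branch stable/icehouse      # stable branch forked at "base"

# The fix lands on master first, with its Change-Id footer
echo fix >> f.txt
git commit -qam "Fix the bug

Change-Id: I1234567890abcdef1234567890abcdef12345678"
fix_sha=$(git rev-parse HEAD)

# Backport: cherry-pick onto the stable branch.
# -x appends "(cherry picked from commit ...)"; the Change-Id footer is
# copied along with the message, which is what links the two in Gerrit.
git checkout -q stable/icehouse
git cherry-pick -x "$fix_sha"
git log -1 --format=%B
# then: git review stable/icehouse
```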

Are there any other points that I should take care of?

Thanks in advance,
Hisashi Osanai




Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-06 Thread Lucas Alvares Gomes
Already agreed with the idea at the midcycle, but just making it public: +1

On Tue, Aug 5, 2014 at 8:54 PM, Roman Prykhodchenko
rprikhodche...@mirantis.com wrote:
 Hi!

 I think this is a nice idea indeed. Do you plan to use this process starting
 from Juno or as soon as possible?

It will start in Kilo



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 12:55 PM, Sylvain Bauza sba...@redhat.com wrote:


 Le 06/08/2014 10:35, Yuriy Taraday a écrit :

  I'd like to stress this to everyone: I DO NOT propose squashing together
 commits that should belong to separate change requests. I DO NOT propose to
 upload all your changes at once. I DO propose letting developers keep a
 local history of all iterations they have with a change request. The
 history that absolutely doesn't matter to anyone but this developer.


 Well, I can understand that for ease, we could propose it as an option in
 git-review, but I'm just thinking that if you consider your local Git repo
 as your single source of truth (and not Gerrit), then you just have to make
 another branch and squash your intermediate commits for Gerrit upload only.


That's my proposal - generate such branches automatically. And
from this thread it looks like some people already create them by hand.


  If you need to modify it (because of another iteration), you just need to
 amend the commit message on each top-squasher commit by adding the
 Change-Id on your local branch, and redo the process (make a branch,
 squash, upload) each time you need it.


I don't quite understand the "top-squasher commit" part, but what I'm
suggesting is to automate this process to make users, including myself,
happier.


 Gerrit is cool, it doesn't care about SHA-1s but only Change-Id, so
 cherry-picking and rebasing still works (hurrah)


Yes, and that's the only stable part of those another branches.


 tl;dr: make as many intermediate commits as you want, but just generate a
 Change-ID on the commit you consider as patch, so you just squash the
 intermediate commits on a separate branch copy for Gerrit use only
 (one-way).

 Again, I can understand the above as hacky, so I'm not against your
 change, just emphasizing it as non-necessary (but anyway, everything can be
 done without git-review, even the magical -m option :-) )


I'd even prefer to leave it to a git config option so that it won't get
accidentally enabled unless the user knows what they're doing.

-- 

Kind regards, Yuriy.


[openstack-dev] Tox run failure during installation of dependencies in requirements

2014-08-06 Thread Narasimhan, Vivekanandan
Hi,



Recently, the tox runs started to fail in my workspace.

They fail consistently while installing dependencies, with the following:



Downloading/unpacking PrettyTable>=0.7,<0.8 (from
python-keystoneclient>=0.10.0->-r
/home/narasimv/dev/bug1350485/neutron/requirements.txt (line 22))

Cleaning up...

Exception:

Traceback (most recent call last):
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
    do_download,
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
    self.session,
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py", line 546, in unpack_http_url
    resp = session.get(target_url, stream=True)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py", line 237, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
  File "/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 384, in send
    raise Timeout(e, request=request)
Timeout: (<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x37e4790>, 'Connection to pypi.python.org timed out. (connect timeout=15)')



Tox was earlier running successfully in the same workspace on this machine,

even though we were behind the proxy.



Could you please advise on how to resolve this problem?
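
For what it's worth, this symptom usually means the proxy settings are no longer visible to pip: pip (through requests) resolves proxies from the environment, and tox environments need those variables passed through (e.g. `passenv = http_proxy https_proxy no_proxy` in tox.ini). The lookup can be checked offline; the proxy URL below is a placeholder, not a real endpoint:

```python
import os
import urllib.request

# Placeholder proxy URL -- substitute the real corporate proxy.
os.environ["https_proxy"] = "http://proxy.example.com:8080"

# pip (via requests) picks proxies out of the environment like this:
proxies = urllib.request.getproxies()
print(proxies.get("https"))  # -> http://proxy.example.com:8080
```

If this prints `None` in the shell where tox runs, the variable was lost somewhere between the shell and the pip invocation.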



--

Thanks,



Vivek



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Christopher Yeoh
On Wed, Aug 6, 2014 at 5:41 PM, Aaron Rosen aaronoro...@gmail.com wrote:




 On Wed, Aug 6, 2014 at 12:59 AM, Gary Kotton gkot...@vmware.com wrote:



   From: Aaron Rosen aaronoro...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:09 AM

 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward


 On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:



 On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 01:23 PM, Gary Kotton wrote:
  Ok, thanks for the clarification. This means that it will not be done
  automagically as it is today -- the tenant will need to create a
 Neutron
  port and then pass that through.
 
 FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
 like to get rid of automatic port creation, but can't do that in the
 current stable API.

  Can you elaborate on what you mean here? What are the issues with port
 creation?


 Having nova-compute create ports for neutron is problematic if timeouts
 occur between nova and neutron as you have to garbage collect neutron ports
 in nova to clean up (which was the cause of several bugs in the cache handling
 allowing ports to leak into the info_cache in nova).  Pushing this out to
 the tenant is less orchestration nova has to do.

  [gary] my take on this is that we should allocate this via the n-api
 and not via the nova compute (which is far too late in the process). But
 that is another discussion :)


 I agree, I had actually proposed this here:
 https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
  :),   though there are some issues we need to solve in neutron first --
 allowing the mac_address on the port to be updated in neutron. This is
 required for bare metal support as when the port is created we don't know
 which physical mac will need to be mapped to the port.




I think that in the long term (when we can do an API rev) we should just be
getting rid of the automatic port creation completely with the updated Nova
API. I don't see why the Nova API needs to do proxying work to neutron to
create the port when the client can do it directly with neutron (perhaps
via some convenience client code if desired). It removes the complexity of
the garbage collection on failure issues in Nova that we currently have.
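
The garbage-collection point can be illustrated with stub clients (hypothetical classes, not the real novaclient/neutronclient APIs): when the tenant creates the port first and passes its id to boot, the tenant also owns cleanup on a timeout, so Nova never has to track leaked ports.

```python
class FakeNeutron:
    """Stand-in for a Neutron client; tracks ports it has created."""
    def __init__(self):
        self.ports = {}
        self._seq = 0

    def create_port(self, network_id):
        self._seq += 1
        port = {"id": "port-%d" % self._seq, "network_id": network_id}
        self.ports[port["id"]] = port
        return port

    def delete_port(self, port_id):
        del self.ports[port_id]


class FakeNova:
    """Stand-in for a Nova client whose boot call can time out."""
    def boot(self, name, port_id, fail=False):
        if fail:
            raise TimeoutError("timed out talking to nova")
        return {"name": name, "port_id": port_id}


neutron, nova = FakeNeutron(), FakeNova()
port = neutron.create_port("net-1")
try:
    nova.boot("vm1", port["id"], fail=True)   # simulate the timeout
except TimeoutError:
    neutron.delete_port(port["id"])           # the caller cleans up its own port

print(port["id"] in neutron.ports)  # -> False: nothing leaked
```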

Chris



   
 --
 Russell Bryant
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tox run failure during installation of dependencies in requirements

2014-08-06 Thread Matthieu Huin
Hi,

Can you reach pypi.python.org ?

Matthieu Huin 

m...@enovance.com

- Original Message -
 From: Vivekanandan Narasimhan vivekanandan.narasim...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, August 6, 2014 12:19:57 PM
 Subject: [openstack-dev] Tox run failure during installation of dependencies  
 in requirements
 
 
 
 Hi,
 
 
 
 Recently , the Tox runs started to fail in my workspace.
 
 It fails consistently during installing dependencies with the following.
 
 
 
 Downloading/unpacking PrettyTable>=0.7,<0.8 (from
 python-keystoneclient>=0.10.0->-r
 /home/narasimv/dev/bug1350485/neutron/requirements.txt (line 22))
 
 Cleaning up...
 
 Exception:
 
 Traceback (most recent call last):
   [...]
 Timeout:
 (<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection
 object at 0x37e4790>, 'Connection to pypi.python.org timed out. (connect
 timeout=15)')
 
 
 
 The TOX was earlier running in the same workspace successfully on the machine
 
 even though we were behind the proxy.
 
 
 
 Could you please advise on how to resolve this problem?
 
 
 
 --
 
 Thanks,
 
 
 
 Vivek
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tox run failure during installation of dependencies in requirements

2014-08-06 Thread Igor Degtiarov
Hi,

Actually, I have the same question: what is wrong with tox now?

-- Igor


On Wed, Aug 6, 2014 at 1:19 PM, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:

  Hi,



 Recently , the Tox runs started to fail in my workspace.

 It fails consistently during installing dependencies with the following.



 Downloading/unpacking PrettyTable>=0.7,<0.8 (from
 python-keystoneclient>=0.10.0->-r
 /home/narasimv/dev/bug1350485/neutron/requirements.txt (line 22))

 Cleaning up...

 Exception:
 
 Traceback (most recent call last):
   [...]
 Timeout:
 (<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection
 object at 0x37e4790>, 'Connection to pypi.python.org timed out. (connect
 timeout=15)')



 The TOX was earlier running in the same workspace successfully on the
 machine

 even though we were behind the proxy.



 Could you please advise on how to resolve this problem?



 --

 Thanks,



 Vivek





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] OpenStack Heat installation guide and Heat utilisation manual

2014-08-06 Thread marwen mechtri
Thank you for your suggestion.

I will investigate it and see how to integrate the trust model in the heat
template (if possible).

Regards,
Marouen Mechtri


2014-08-06 5:23 GMT+02:00 Don Waterloo don.water...@gmail.com:

 On 5 August 2014 18:10, marwen mechtri mechtri.mar...@gmail.com wrote:
  Hi all,
 
  I want to present you our OpenStack Heat installation guide for Icehouse
  release.
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
  A well described manual with illustrative pictures for Heat utilisation
 and
  HOT template creation is available here:
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Create-your-first-stack-with-Heat.rst
 
  Please let us know your opinion about it.
 
  Enjoy!
 
  Marouen Mechtri


 thanks for this.

 I have been struggling with the delegated trust model in heat, I
 notice you do not include this in your recipe. Have you considered
 adding it?
 Without it, one needs to resupply their password.
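
For reference, the Icehouse-era knobs for the delegated trust model live in heat.conf rather than in the templates themselves; something along these lines (option values are illustrative, so verify against your release's documentation):

```ini
[DEFAULT]
# Create a Keystone trust and use it for deferred operations,
# instead of storing (or re-asking for) the user's password.
deferred_auth_method = trusts
# Roles the stack owner delegates to the heat service user.
trusts_delegated_roles = heat_stack_owner
```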

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tox run failure during installation of dependencies in requirements

2014-08-06 Thread Chmouel Boudjnah
On Wed, Aug 6, 2014 at 12:19 PM, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:

 Timeout:
 (<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection
 object at 0x37e4790>, 'Connection to pypi.python.org timed out. (connect
 timeout=15)')



I think this error message is pretty self-explanatory.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] OpenStack Heat installation guide and Heat utilisation manual

2014-08-06 Thread Andreas Jaeger
On 08/06/2014 05:23 AM, Don Waterloo wrote:
 On 5 August 2014 18:10, marwen mechtri mechtri.mar...@gmail.com wrote:
 Hi all,

 I want to present you our OpenStack Heat installation guide for Icehouse
 release.

 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst

 A well described manual with illustrative pictures for Heat utilisation and
 HOT template creation is available here:

 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Create-your-first-stack-with-Heat.rst

 Please let us know your opinion about it.

 Enjoy!

 Marouen Mechtri
 
 
 thanks for this.
 
 I have been struggling with the delegated trust model in heat, I
 notice you do not include this in your recipe. Have you considered
 adding it?
 Without it, one needs to resupply their password.

Let's work together to enhance the OpenStack documentation to really
describe heat templates. The documentation team has started work on:

http://specs.openstack.org/openstack/docs-specs/specs/juno/heat-templates.html

Help is welcome!

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] status in entities

2014-08-06 Thread Vijay Venkatachalam
I agree. The current status can reflect the deployment status, and we can add a
new attribute to reflect operational status.

I also agree that admin_state_up should definitely affect operational status.
But a driver could choose to unprovision when the admin state is set to false,
in which case the status will also change.

If the agenda permits, can we discuss this in the upcoming weekly meeting?
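
The provisioning/operational split being discussed can be sketched like this (field and state names are illustrative, not the eventual API):

```python
class Member:
    """Toy LBaaS member with separate provisioning and operating status."""
    def __init__(self):
        self.admin_state_up = True
        self.provisioning_status = "PENDING_CREATE"  # owned by deployment
        self.operating_status = "OFFLINE"            # owned by health checks

    def provisioned(self):
        # The backend finished deploying the member.
        self.provisioning_status = "ACTIVE"

    def health_check(self, healthy):
        if not self.admin_state_up:
            self.operating_status = "ADMIN_DOWN"
        else:
            self.operating_status = "ONLINE" if healthy else "INACTIVE"


m = Member()
m.provisioned()
m.health_check(healthy=False)
# Provisioning stays ACTIVE even though the member failed its check:
print(m.provisioning_status, m.operating_status)  # -> ACTIVE INACTIVE

m.admin_state_up = False
m.health_check(healthy=True)
print(m.operating_status)  # -> ADMIN_DOWN
```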


On Wed, Aug 06, 2014 at 2:46 AM, Stephen Balukoff
sbaluk...@bluebox.net wrote:

Hi guys,

I understood that admin_state_up was a manipulable field which (when working 
correctly) should change the entity to an operational status of ADMIN_DOWN or 
something similar to that. In any case, +1 on the deeper discussion of status.

How urgent is it to resolve the discussion around status? We could potentially 
bring the interested parties together via google hangout or webex (to 
facilitate the high bandwidth).

Stephen


On Tue, Aug 5, 2014 at 9:05 AM, Brandon Logan
brandon.lo...@rackspace.com wrote:
Isn't that what admin_state_up is for?

But yes we do need a deeper discussion on this and many other things.

On Tue, 2014-08-05 at 15:42 +, Eichberger, German wrote:
 There was also talk about a third administrative status like ON/OFF...

 We really need a deeper status discussion - likely high bandwith to work all 
 of that out.

 German

 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Tuesday, August 05, 2014 8:27 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] status in entities


 Hello Vijay!

 Well, this is a holdover from v1, but the status is a provisioning status.
 So yes, when something is deployed successfully it should be ACTIVE.  The
 exception to this is the member status, in that its status can be INACTIVE
 if a health check fails.  Now this will probably cause edge cases when health
 checks and updates are happening to the same member.  It's been talked about
 before, but we need to really have two types of status fields, provisioning
 and operational.  IMHO, that should be something we try to get into K.

 Thanks,
 Brandon

 On Tue, 2014-08-05 at 09:28 +, Vijay Venkatachalam wrote:
  Hi:
 
 I think we had some discussions around ‘status’
  attribute earlier, I don’t recollect the conclusion.
 
  Does it reflect the deployment status?
 
 Meaning, if the status of an entity is ACTIVE, the user
  has to infer that the entity is deployed successfully in the
  backend/loadbalancer.
 
  Thanks,
 
  Vijay V.
 
 



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Package python-django-pyscss dependencies on CentOS

2014-08-06 Thread Timur Sufiev
Hi!

Here is the link: http://koji.fedoraproject.org/koji/rpminfo?rpmID=5239113

The question is whether the python-pillow package is really needed for
properly compiling CSS from SCSS in Horizon, or whether it is an optional
requirement that can be safely dropped. The problem with
python-pillow is that it pulls in a lot of unneeded deps (like tk, qt,
etc.), which are better avoided.

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kyle Mestery
On Wed, Aug 6, 2014 at 3:11 AM, Aaron Rosen aaronoro...@gmail.com wrote:



 On Wed, Aug 6, 2014 at 12:59 AM, Gary Kotton gkot...@vmware.com wrote:



 From: Aaron Rosen aaronoro...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:09 AM

 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward


 On Tue, Aug 5, 2014 at 11:18 PM, Gary Kotton gkot...@vmware.com wrote:



 On 8/5/14, 8:53 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 01:23 PM, Gary Kotton wrote:
  Ok, thanks for the clarification. This means that it will not be done
  automagically as it is today -- the tenant will need to create a
  Neutron
  port and then pass that through.
 
 FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
 like to get rid of automatic port creation, but can't do that in the
 current stable API.

 Can you elaborate on what you mean here? What are the issues with port
 creation?


 Having nova-compute create ports for neutron is problematic if timeouts
 occur between nova and neutron as you have to garbage collect neutron ports
 in nova to clean up (which was the cause of several bugs in the cache handling
 allowing ports to leak into the info_cache in nova).  Pushing this out to
 the tenant is less orchestration nova has to do.

 [gary] my take on this is that we should allocate this via the n-api and
 not via the nova compute (which is far too late in the process). But that is
 another discussion :)


 I agree, I had actually proposed this here:
 https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
 :),   though there are some issues we need to solve in neutron first --
 allowing the mac_address on the port to be updated in neutron. This is
 required for bare metal support as when the port is created we don't know
 which physical mac will need to be mapped to the port.

Looks like someone has proposed a patch which does just that; please
have a look below:

https://review.openstack.org/#/c/112129/


 
 --
 Russell Bryant
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-06 Thread chayma ghribi
Hi Stefano!

Yes, it's a pleasure to contribute and to join the documentation team.

Ok! I will send announcements to the other mailing list ;)

Regards,

Chaima


2014-08-06 1:15 GMT+02:00 Stefano Maffulli stef...@openstack.org:

 On 08/03/2014 03:49 AM, chayma ghribi wrote:
  I want to share with you our OpenStack Icehouse Installation Guide for
  Ubuntu 14.04.
 [...]

 Thanks for the effort. Would you please consider working with the
 existing Documentation team to add/improve current manuals?

 Also, please use this list only to discuss *future* technical features
 and not to send generic announcements. A better place for this
 conversation is openst...@lists.openstack.org

 Regards,
 Stef


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Nova][Neutron] multiple hypervisors on one compute host - neutron agent and compute hostnames

2014-08-06 Thread Kyle Mestery
On Tue, Aug 5, 2014 at 6:17 PM, Robert Collins
robe...@robertcollins.net wrote:
 Hi!

 James has run into an issue implementing the multi-hypervisor spec
 (http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/juno/tripleo-juno-deploy-cloud-hypervisor-type.rst)
 which we're hoping to use to reduce infrastructure overheads by
 deploying OpenStack control plane services in VMs, without requiring
 dedicated VM hypervisor machines in the deploy cloud.

 The issue we've hit is that the Neutron messages for VIF plugging are
 sent to the Neutron agent with an exactly matching hostname to the
 Nova-compute process. However, we have unique hostnames for the
 nova-compute processes on one machine (one for -kvm, one for -docker,
 one for -ironic etc) for a variety of reasons: so we can see if all
 the processes are up, so that we don't get messages for the wrong
 process from nova-api etc.

 I think a reasonable step might be to allow the agent host option to
 be a list - e.g.

  [DEFAULT]
  hosts={{nova.compute_hostname}}-libvirt,{{nova.compute_hostname}}-docker

 we'd just make it listen to all the nova-compute hostnames we may have
 on the machine.
 That seems like a fairly shallow patch to me: add a new hosts option
 with no default, change the code to listen to N queues when hosts is
 set, and to report state N times as well (for consistency).
 Alternatively, with a DB migration, we could record N hosts against
 one agent status.

 Alternatively we could run N ovs-agents on one machine (with a
 separate integration bridge each), but I worry that we'd encounter
 unexpected cross-chatter between them on things like external bridge
 flows.

 Thoughts?

I don't like the idea of running multiple agents on the host, so your
first idea to me seems like a better path forward. As you say, the
change seems simple enough. Let me know once you have a patch and I'll
take a look at it.

Thanks!
Kyle
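
Robert's first idea — a `hosts` list option with one listener per configured name — might look like the following sketch (the option parsing only; the RPC wiring is omitted, and the topic format is an assumption, not the real Neutron code):

```python
# Parse a comma-separated "hosts" value and derive one notification
# queue per co-located nova-compute hostname, so messages addressed to
# any of them reach this single agent.
def parse_hosts(raw):
    return [h.strip() for h in raw.split(",") if h.strip()]

def agent_topics(hosts, base="q-agent-notifier-port-update"):
    # One queue per hostname (topic format is illustrative).
    return ["%s.%s" % (base, h) for h in hosts]

hosts = parse_hosts("compute0-libvirt, compute0-docker")
print(agent_topics(hosts))
# -> ['q-agent-notifier-port-update.compute0-libvirt',
#     'q-agent-notifier-port-update.compute0-docker']
```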

 For now, we're going to have to run with a limitation of only one
 vif-plugging hypervisor type per machine - we'll make the agent
 hostname match that of the nova compute that needs VIFs plugged ;)

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Russell Bryant
On 08/05/2014 05:24 PM, Sumit Naiksatam wrote:
 On Tue, Aug 5, 2014 at 1:41 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/05/2014 04:26 PM, Stephen Wong wrote:

 Agreed with Kevin and Sumit here. As a subgroup we talked about Nova
 integration, and the preliminary idea, as Bob alluded to, is to add
 endpoint as an option in place of Neutron port. But if we can make
 Nova EPG-aware, it would be great.


 Is anyone listening to what I'm saying? The term endpoint is obtuse and
 completely disregards the existing denotation of the word endpoint in use
 in OpenStack today.

 
 Yes, listening, absolutely. I acknowledged your point in this thread
 as well as on the review. Your suggestion on the thread seemed to be
 to document this better and clarify. Is that sufficient for moving
 forward, or are you thinking something else?

I agree with Jay's concern here.  I think using the term endpoint at
all is problematic and should be renamed to something else.  As Jay has
stated, endpoint is already a well defined and completely different
concept in OpenStack.  What's being created here should be called
something else.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Russell Bryant
On 08/05/2014 06:13 PM, Kevin Benton wrote:
 That makes sense. It's not quite a fair analogy though to compare to
 reintroducing projects or tenants because Keystone endpoints aren't
 'user-facing' so to speak. i.e. a regular user (application deployer,
 instance operator, etc) should never have to see or understand the
 purpose of a Keystone endpoint.

An end user that is consuming any OpenStack API absolutely must
understand endpoints in the service catalog.  The entire purpose of the
catalog is so that an application only needs to know the API endpoint to
keystone and is then able to discover where the rest of the APIs are
located.  They are very much user facing, IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] OpenStack Heat installation guide and Heat utilisation manual

2014-08-06 Thread marwen mechtri
Hi Andreas,

It's pleasure to work together on the OpenStack heat templates
documentation.
In our manual, we provide two templates with the associated descriptions
(and pictures).
The first one is useful when we deploy 2 interconnected VMs and the second
one can be used to update the template (keep one VM instance and delete the
other one).
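
For readers who want a taste before opening the guide, a minimal HOT template of the kind it walks through looks roughly like this (parameter values and the flavor name are placeholders):

```yaml
heat_template_version: 2013-05-23

description: Minimal example -- one server attached to an existing network

parameters:
  image:
    type: string
    description: Glance image name or id
  net_id:
    type: string
    description: Neutron network id to attach to

resources:
  server1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
      networks:
        - network: { get_param: net_id }
```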

Regards,
Marouen


2014-08-06 13:20 GMT+02:00 Andreas Jaeger a...@suse.com:

 On 08/06/2014 05:23 AM, Don Waterloo wrote:
  On 5 August 2014 18:10, marwen mechtri mechtri.mar...@gmail.com wrote:
  Hi all,
 
  I want to present you our OpenStack Heat installation guide for Icehouse
  release.
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
  A well described manual with illustrative pictures for Heat utilisation
 and
  HOT template creation is available here:
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Create-your-first-stack-with-Heat.rst
 
  Please let us know your opinion about it.
 
  Enjoy!
 
  Marouen Mechtri
 
 
  thanks for this.
 
  I have been struggling with the delegated trust model in heat, I
  notice you do not include this in your recipe. Have you considered
  adding it?
  Without it, one needs to resupply their password.

 Let's work together to enhance the OpenStack documentation to really
 describe heat templates. The documentation team has started work on:


 http://specs.openstack.org/openstack/docs-specs/specs/juno/heat-templates.html

 Help is welcome!

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Russell Bryant
On 08/06/2014 06:30 AM, Thierry Carrez wrote:
 Hi everyone,
 
 At the TC meeting yesterday we discussed Rally program request and
 incubation request. We quickly dismissed the incubation request, as
 Rally appears to be able to live happily on top of OpenStack and would
 benefit from having a release cycle decoupled from the OpenStack
 integrated release.
 
 That leaves the question of the program. OpenStack programs are created
 by the Technical Committee, to bless existing efforts and teams that are
 considered *essential* to the production of the OpenStack integrated
 release and the completion of the OpenStack project mission. There are 3
 ways to look at Rally and official programs at this point:
 
 1. Rally as an essential QA tool
 Performance testing (and especially performance regression testing) is
 an essential QA function, and a feature that Rally provides. If the QA
 team is happy to use Rally to fill that function, then Rally can
 obviously be adopted by the (already-existing) QA program. That said,
 that would put Rally under the authority of the QA PTL, and that raises
 a few questions due to the current architecture of Rally, which is more
 product-oriented. There needs to be further discussion between the QA
 core team and the Rally team to see how that could work and if that
 option would be acceptable for both sides.
 
 2. Rally as an essential operator tool
 Regular benchmarking of OpenStack deployments is a best practice for
 cloud operators, and a feature that Rally provides. With a bit of a
 stretch, we could consider that benchmarking is essential to the
 completion of the OpenStack project mission. That program could one day
 evolve to include more such operations best practices tools. In
 addition to the slight stretch already mentioned, one concern here is
 that we still want to have performance testing in QA (which is clearly
 essential to the production of OpenStack). Letting Rally primarily be
 an operational tool might make that outcome more difficult.
 
 3. Let Rally be a product on top of OpenStack
 The last option is to not have Rally in any program, and not consider it
 *essential* to the production of the OpenStack integrated release or
 the completion of the OpenStack project mission. Rally can happily exist
 as an operator tool on top of OpenStack. It is built as a monolithic
 product: that approach works very well for external complementary
 solutions. Also, being more integrated in OpenStack, or part of the
 OpenStack programs, might come at a cost (slicing some functionality out
 of rally to make it more a framework and less a product) that might not
 be what its authors want.
 
 Let's explore each option to see which ones are viable, and the pros and
 cons of each.

My feeling right now is that Rally is trying to accomplish too much at
the start (both #1 and #2).  I would rather see the project focus on
doing one of them as best as it can before increasing scope.

It's my opinion that #1 is the most important thing that Rally can be
doing to help ensure the success of OpenStack, so I'd like to explore
the Rally as a QA tool in more detail to start with.

From the TC meeting, it seems that the QA group (via sdague, at least)
has provided some feedback to Rally over the last several months.  I
would really like to see an analysis and write-up from the QA group on
the current state of Rally and how it may (or may not) be able to serve
the performance QA needs.

-- 
Russell Bryant



Re: [openstack-dev] Tox run failure during installation of dependencies in requirements

2014-08-06 Thread Narasimhan, Vivekanandan
Hi Igor /Matthieu,



I am getting random connection timeouts to pypi.python.org with my machine 
behind the proxy when trying to run

tox -e pep8

Dependency installation fails with the stack trace posted below.



However, similar issue does not occur when I use run_tests.sh to run the same 
set of tests.



My machine is able to reach pypi.python.org. Some packages get installed, 
but while installing some of the others it fails as below.



It looks like I am hitting the problem described here, but the solution 
proposed there isn't working for me.

https://github.com/pypa/pip/issues/1805



--

Thanks,



Vivek





From: Igor Degtiarov [mailto:idegtia...@mirantis.com]
Sent: Wednesday, August 06, 2014 3:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Tox run failure during installation of 
dependencies in requirements



Hi,

Actually, the same question: what is wrong with tox now?




-- Igor



On Wed, Aug 6, 2014 at 1:19 PM, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:

Hi,



Recently, the tox runs started to fail in my workspace.

It fails consistently during installing dependencies with the following.



Downloading/unpacking PrettyTable>=0.7,<0.8 (from 
python-keystoneclient>=0.10.0->-r 
/home/narasimv/dev/bug1350485/neutron/requirements.txt (line 22))

Cleaning up...

Exception:

Traceback (most recent call last):

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py,
 line 122, in main

status = self.run(options, args)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/commands/install.py,
 line 278, in run

requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, 
bundle=self.bundle)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py,
 line 1197, in prepare_files

do_download,

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py,
 line 1375, in unpack_url

self.session,

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py,
 line 546, in unpack_http_url

resp = session.get(target_url, stream=True)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py,
 line 468, in get

return self.request('GET', url, **kwargs)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py,
 line 237, in request

return super(PipSession, self).request(method, url, *args, **kwargs)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py,
 line 456, in request

resp = self.send(prep, **send_kwargs)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py,
 line 559, in send

r = adapter.send(request, **kwargs)

  File 
/home/narasimv/dev/bug1350485/neutron/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py,
 line 384, in send

raise Timeout(e, request=request)

Timeout: 
(pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection 
object at 0x37e4790, 'Connection to pypi.python.org 
timed out. (connect timeout=15)')



tox was earlier running successfully in the same workspace on this machine,
even though we were behind the proxy.



Could you please advise on how to resolve this problem?
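For readers hitting the same failure: tox builds its virtualenvs in a clean
environment and, depending on version, may not pass proxy settings through,
so pip inside the venv can time out even though the shell itself can reach
pypi.python.org. A hedged sketch of things to try before invoking tox -- the
proxy host below is a placeholder, not a real setting:

```shell
# Sketch only: make proxy settings visible to pip inside the tox venv and
# raise pip's connect timeout (the traceback shows the 15-second default).
# proxy.example.com:8080 stands in for your actual corporate proxy.
export http_proxy=http://proxy.example.com:8080
export https_proxy=$http_proxy
export PIP_TIMEOUT=60
tox -e pep8
```

If the tox.ini in use supports it, listing the proxy variables in a passenv
setting achieves the same thing without exporting per shell.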



--

Thanks,



Vivek






Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Sean Dague
On 08/06/2014 09:11 AM, Russell Bryant wrote:
 On 08/06/2014 06:30 AM, Thierry Carrez wrote:
 Hi everyone,

 At the TC meeting yesterday we discussed Rally program request and
 incubation request. We quickly dismissed the incubation request, as
 Rally appears to be able to live happily on top of OpenStack and would
 benefit from having a release cycle decoupled from the OpenStack
 integrated release.

 That leaves the question of the program. OpenStack programs are created
 by the Technical Committee, to bless existing efforts and teams that are
 considered *essential* to the production of the OpenStack integrated
 release and the completion of the OpenStack project mission. There are 3
 ways to look at Rally and official programs at this point:

 1. Rally as an essential QA tool
 Performance testing (and especially performance regression testing) is
 an essential QA function, and a feature that Rally provides. If the QA
 team is happy to use Rally to fill that function, then Rally can
 obviously be adopted by the (already-existing) QA program. That said,
 that would put Rally under the authority of the QA PTL, and that raises
 a few questions due to the current architecture of Rally, which is more
 product-oriented. There needs to be further discussion between the QA
 core team and the Rally team to see how that could work and if that
 option would be acceptable for both sides.

 2. Rally as an essential operator tool
 Regular benchmarking of OpenStack deployments is a best practice for
 cloud operators, and a feature that Rally provides. With a bit of a
 stretch, we could consider that benchmarking is essential to the
 completion of the OpenStack project mission. That program could one day
 evolve to include more such operations best practices tools. In
 addition to the slight stretch already mentioned, one concern here is
 that we still want to have performance testing in QA (which is clearly
 essential to the production of OpenStack). Letting Rally primarily be
 an operational tool might make that outcome more difficult.

 3. Let Rally be a product on top of OpenStack
 The last option is to not have Rally in any program, and not consider it
 *essential* to the production of the OpenStack integrated release or
 the completion of the OpenStack project mission. Rally can happily exist
 as an operator tool on top of OpenStack. It is built as a monolithic
 product: that approach works very well for external complementary
 solutions. Also, being more integrated in OpenStack, or part of the
 OpenStack programs, might come at a cost (slicing some functionality out
 of rally to make it more a framework and less a product) that might not
 be what its authors want.

 Let's explore each option to see which ones are viable, and the pros and
 cons of each.
 
 My feeling right now is that Rally is trying to accomplish too much at
 the start (both #1 and #2).  I would rather see the project focus on
 doing one of them as best as it can before increasing scope.
 
 It's my opinion that #1 is the most important thing that Rally can be
 doing to help ensure the success of OpenStack, so I'd like to explore
 the Rally as a QA tool in more detail to start with.

I want to clarify some things. I don't think that rally in its current
form belongs in any OpenStack project. It's a giant monolithic tool,
which is apparently a design point. That's the wrong design point for an
OpenStack project.

For instance:

https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios should
all be tests in Tempest (and actually today mostly are via API tests).
There is an existing stress framework in Tempest which does the
repetitive looping that rally does on these already. This fact has been
brought up before.

https://github.com/stackforge/rally/tree/master/rally/verification/verifiers
- should be baked back into Tempest (at least on the results side,
though diving in there now it looks largely duplicative from existing
subunit to html code).

https://github.com/stackforge/rally/blob/master/rally/db/api.py - is
largely (not entirely) what we'd like from a long term trending piece
that subunit2sql is working on. Again this was just all thrown into the
Rally db instead of thinking about how to split it off. Also notable
here: there are some fundamental testr bugs (like worker
misallocation) which mean the data is massively dirty today. It would be
good for people to actually work on fixing those things.

The parts that should stay outside of Tempest are the setup tool
(separation of concerns is that Tempest is the load runner, not the
setup environment) and any of the SLA portions.

I think rally brings forward a good point about making Tempest easier to
run. But I think that shouldn't be done outside Tempest. Making the test
tool easier to use should be done in the tool itself. If that means
adding a tempest cmd or such, so be it. Note this was a topic for
discussion at last summit:

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Carlino, Chuck (OpenStack TripleO, Neutron)
On Aug 6, 2014, at 1:11 AM, Aaron Rosen 
aaronoro...@gmail.com wrote:

I agree, I had actually proposed this here: 
https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port  :),   
though there are some issues we need to solve in neutron first -- allowing the 
mac_address on the port to be updated in neutron. This is required for bare 
metal support as when the port is created we don't know which physical mac will 
need to be mapped to the port.


FYI, this is work in progress (see 
https://bugs.launchpad.net/neutron/+bug/1341268 and 
https://review.openstack.org/#/c/112129/).

Chuck Carlino



[openstack-dev] [Murano] New keyword to recheck commits

2014-08-06 Thread Dmitry Teselkin
Hi all,

Please note that we've changed keywords used to trigger recheck action in
murano-ci to 'retrigger murano-ci'.

We decided to do this in order to prevent OpenStack CI from triggering a
build each time a comment 'recheck murano-ci' is added. The main cause of
this is the regexp in OpenStack's zuul, which triggers each time it sees
the word 'recheck', so the only fast way to fix it is to get rid of that
ambiguous keyword.
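The collision is easy to see with a toy version of the trigger (the patterns
here are illustrative, not Zuul's actual configuration):

```python
import re

# A loose trigger fires on any comment containing the bare keyword...
loose = re.compile(r"\brecheck\b", re.IGNORECASE)
# ...so a project-scoped comment still retriggers OpenStack CI:
assert loose.search("recheck murano-ci")

# Switching the project keyword removes the ambiguity entirely:
scoped = re.compile(r"\bretrigger murano-ci\b", re.IGNORECASE)
assert scoped.search("retrigger murano-ci")
assert not loose.search("retrigger murano-ci")
```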

-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com


Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Russell Bryant
On 08/06/2014 09:44 AM, Sean Dague wrote:
 Something that we need to figure out is given where we are in the
 release cycle do we want to ask the QA team to go off and do Rally deep
 dive now to try to pull it apart into the parts that make sense for
 other programs to take in. There are always trade offs.
 
 Like the fact that right now the rally team is proposing gate jobs which
 have some overlap with the existing largeops jobs. Did they start a
 conversation about it? Nope. They just went off to do their thing
 instead. https://review.openstack.org/#/c/112251/
 
 So now we're going to run 2 jobs that do very similar things, with
 different teams adjusting the test loads. Which I think is basically
 madness.

You make a great point about the time needed to do this.  I think the
feedback you've provided in this post is a great start.  Perhaps the
burden should be squarely on the Rally team.  Using the feedback you've
provided thus far, they could go off and work on splitting things up and
making a better integration plan, and we could revisit post-Juno.

-- 
Russell Bryant



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Ben Nemec
On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
 I'd like to stress this to everyone: I DO NOT propose squashing together
 commits that should belong to separate change requests. I DO NOT propose
 uploading all your changes at once. I DO propose letting developers keep
 a local history of all iterations they have with a change request -- the
 history that absolutely doesn't matter to anyone but that developer.

Right, I understand that may not be the intent, but it's almost
certainly going to be the end result.  You can't control how people are
going to use this feature, and history suggests if it can be abused, it
will be.

 
 On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net wrote:
 
 Ben Nemec openst...@nemebean.com writes:

 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:

 When you're developing some big change you'll end up trying
 dozens of different approaches and making thousands of mistakes. For
 reviewers this is just unnecessary noise (commit title "Scratch my
 last CR, that was bullshit") while for you it's a precious history
 that can provide a basis for future research or bug-hunting.

 So basically keeping a record of how not to do it?  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.
  They're basically unnecessary conflicts waiting to happen.

 Yeah, I would never keep broken or unfinished commits around like this.
 In my opinion (as a core Mercurial developer), the best workflow is to
 work on a feature and make small and large commits as you go along. When
 the feature works, you begin squashing/splitting the commits to make
 them into logical pieces, if they aren't already in good shape. You then
 submit the branch for review and iterate on it until it is accepted.

 
 Absolutely true. And it's mostly the same workflow that happens in
 OpenStack: you do your cool feature, you carve meaningful small
 self-contained pieces out of it, you submit series of change requests.
 And nothing in my proposal conflicts with it. It just provides a way to
 make the developer's side of this simpler (which is the intent of git-review,
 isn't it?) while not changing the external artifacts of one's work: the same
 change requests, with the same granularity.
 
 
 As a reviewer, it cannot be stressed enough how much small, atomic,
 commits help. Squashing things together into large commits make reviews
 very tricky and removes the possibility of me accepting a later commit
 while still discussing or rejecting earlier commits (cherry-picking).

 
 That's true, too. But please don't think I'm proposing to squash everything
 together and push 10k-loc patches. I hate that, too. I'm proposing to let
 developer use one's tools (Git) in a simpler way.
 And the simpler way (for some of us) would be to have one local branch for
 every change request, not one branch for the whole series. Switching
 between branches is very well supported by Git and doesn't require extra
 thinking. Jumping around in detached HEAD state and editing commits during
 rebase requires remembering all those small details.
 
 FWIW, I have had long-lived patch series, and I don't really see what
 is so difficult about running git rebase master. Other than conflicts,
 of course, which are going to be an issue with any long-running change
 no matter how it's submitted. There isn't a ton of git magic involved.

 I agree. The conflicts you talk about are intrinsic to the parallel
 development. Doing a rebase is equivalent to doing a series of merges,
 so if rebase gives you conflicts, you can be near certain that a plain
 merge would give you conflicts too. The same applies other way around.

 
 You disregard other issues that can happen with patch series. You might
 need something more than rebase. You might need to fix something. You might
 need to focus on one commit in the middle and do a huge bunch of changes
 in it alone. And I propose to just allow the developer to keep track of
 what one has been doing instead of forcing one to remember all of this.

This is a separate issue though.  Editing a commit in the middle of a
series doesn't have to be done at the same time as a rebase to master.

In fact, not having a bunch of small broken commits that can't be
submitted individually in your history makes it _easier_ to deal with
follow-up changes.  Then you know that the unit tests pass on every
commit, so you can work on it in isolation without constantly having to
rebase through your entire commit history.  This workflow seems to
encourage the painful rebases you're trying to avoid.
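For what it's worth, the mid-series edit Ben describes needs nothing beyond
stock git. A sketch in a throwaway repo -- GIT_SEQUENCE_EDITOR is used only
so the interactive rebase runs non-interactively here; normally you would
just change "pick" to "edit" in the editor:

```shell
set -e
# Build a throwaway repo with a base commit plus a three-commit series.
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
git commit -qm base --allow-empty
for n in 1 2 3; do echo "$n" > "f$n"; git add "f$n"; git commit -qm "commit $n"; done

# Stop at the middle commit: rewrite its todo line from "pick" to "edit".
GIT_SEQUENCE_EDITOR="sed -i 's/^pick \(.*commit 2\)$/edit \1/'" git rebase -i HEAD~3

echo fix >> f2 && git add f2
git commit -q --amend --no-edit   # fold the fix into the middle commit
git rebase --continue             # replay "commit 3"; no rebase onto master involved
git log --oneline                 # same series, middle commit now fixed
```

This is exactly "editing a commit in the middle of a series" as a standalone
operation, decoupled from any rebase to master.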

 
 So as you may have guessed by now, I'm opposed to adding this to
 git-review. I think it's going to encourage bad committer behavior
 (monolithic commits) and doesn't address a use case I find compelling
 enough to offset that concern.

 I don't understand why this would even be in 

Re: [openstack-dev] [heat] Stack update and raw_template backup

2014-08-06 Thread Anant Patil
On 30-Jul-14 23:24, Zane Bitter wrote:
 On 30/07/14 02:21, Anant Patil wrote:
 On 28-Jul-14 22:37, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-07-28 07:25:24 -0700:
 On 26/07/14 00:04, Anant Patil wrote:
 When the stack is updated, a diff of updated template and current
 template can be stored to optimize database.  And perhaps Heat should
 have an API to retrieve this history of templates for inspection etc.
 when the stack admin needs it.

 If there's a demand for that feature we could implement it, but it
 doesn't easily fall out of the current implementation any more.

 We are never going to do it even 1/10th as well as git. In fact we won't
 even do it 1/10th as well as CVS.



 Zane,
 I am working the defect you had filed, which would clean up backup stack
 along with the resources, templates and other data.

 However, I simply don't want to delete the templates for the same reason
 as we don't hard-delete the stack. Anyone who deploys a stack and
 updates it over time would want to view the the updates in the templates
 for debugging or auditing reasons.
 
 As I mentioned, and as Ton mentioned in another thread, the old 
 templates are useless for auditing (and to a large extent debugging) 
 because the update process turns them into Frankentemplates that don't 
 reflect either the old or the new template supplied by the user (though 
 they're much closer to the new than the old).
 
 So if you don't delete them, then you don't end up with a copy of the 
 old template, you end up with a broken copy of the new template.
 
 It is not fair to assume that every
 user has a VCS with him to store the templates.
 
 It most certainly is fair to assume that.
 
 In addition, Glance has an artifact repository project already underway 
 with the goal of providing versioned access to templates as part of 
 OpenStack.
 
 It's likely that not all users are making use of a VCS, but if they're 
 not then I don't know why they bother using Heat. The whole point of the 
 project is to provide a way for people to describe their infrastructure 
 in a way that _can_ be managed in a VCS. Whenever we add new features, 
 we always try to do so in a way that _encourages_ users to store their 
 templates in a VCS and _discourages_ them from managing them in an ad 
 hoc manner.
 
 It is kind of an inconvenience for me not to have the ability to view my
 updates in templates.
 
 I tend to agree that it would be kind-of nice to have, but you're 
 talking about it as if it's a simple matter of just not deleting the old 
 template and sticking an API in front of it, rather than the major new 
 development that it actually is.
 
 We need not go as far as git or any VCS. Any library which can do a diff
 and patch of text files can be used, like the google-diff-match-patch.
 
 We don't store the original text of templates - in heat-engine we only 
 get the object tree obtained by parsing the JSON or YAML. So the 
 templates we store or could store currently are of not much use to 
 (human) users.
 
 cheers,
 Zane.
 
 

Sorry for coming late on this.

Thanks Zane for clarifications.

However,
 I tend to agree that it would be kind-of nice to have, but you're
 talking about it as if it's a simple matter of just not deleting the
 old template and sticking an API in front of it, rather than the major
 new development that it actually is.
I am not really saying that it would be that easy. The how part will
come later. I am discussing this in the context of the bug I am fixing, but
certainly it's not that simple.

The templates, resources, events etc. are all integral parts of a stack. I
would like to be able to use the heat command to view the whole stack, its
updates and its templates. Maybe I should put this in a blueprint so
that the idea is clear and we can discuss it.

- Anant
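The diff-and-patch idea raised earlier in the thread (google-diff-match-patch)
can be sketched with the stdlib alone -- difflib stands in for the diff
library here, and the templates are made-up examples, not Heat code:

```python
import difflib

old = "resources:\n  server:\n    type: OS::Nova::Server\n"
new = (
    "resources:\n  server:\n    type: OS::Nova::Server\n"
    "  volume:\n    type: OS::Cinder::Volume\n"
)

# Store only the delta between template versions instead of full copies...
delta = list(difflib.ndiff(old.splitlines(True), new.splitlines(True)))

# ...and reconstruct either version on demand for inspection or auditing.
restored_old = "".join(difflib.restore(delta, 1))
restored_new = "".join(difflib.restore(delta, 2))
assert restored_old == old and restored_new == new
```

Whether storing deltas of the parsed-and-re-serialized template is useful to
human users is, of course, the open question Zane raises above.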



[openstack-dev] Too long stack delete method.

2014-08-06 Thread Anant Patil
Hi,

I see that the stack delete method is too long to comprehend easily
without going to and fro a few times. I think we should refactor it and
move the UpdateReplace-related logic for the backup stack out to another
method. We can also move the user-credentials deletion logic to another
method. With this we can have more testable (each method can be tested
independently) and lucid code.

Don't know if someone has already taken it. I am sure the refactor will
be helpful.

- Anant



[openstack-dev] [heat] Too long stack delete method

2014-08-06 Thread Anant Patil
Hi,

I see that the stack delete method is too long to comprehend easily
without going to and fro a few times. I think we should refactor it and
move the UpdateReplace-related logic for the backup stack out to another
method. We can also move the user-credentials deletion logic to another
method. With this we can have more testable (each method can be tested
independently) and lucid code.

Don't know if someone has already taken it. I am sure the refactor will
be helpful.

- Anant
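For concreteness, the shape of the refactor being proposed might look roughly
like this -- a toy model with illustrative names only, not Heat's actual code:

```python
class Stack:
    """Toy model: delete() split into small, independently testable steps."""

    def __init__(self):
        self.actions = []

    def delete(self):
        # Each concern gets its own method instead of one long body.
        self._delete_backup_stack()
        self._delete_resources()
        self._delete_user_credentials()

    def _delete_backup_stack(self):
        # The UpdateReplace-related backup-stack cleanup would live here,
        # where it can be unit-tested in isolation.
        self.actions.append("backup_stack_deleted")

    def _delete_resources(self):
        self.actions.append("resources_deleted")

    def _delete_user_credentials(self):
        # Trust/credential cleanup, likewise testable on its own.
        self.actions.append("credentials_deleted")
```

Each helper can then get its own unit test, which is the testability benefit
being argued for.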



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Joe Gordon
On Wed, Aug 6, 2014 at 4:12 PM, Kevin Benton blak...@gmail.com wrote:

 Are there any parity features you are aware of that aren't receiving
 adequate developer/reviewer time? I'm not aware of any parity features that
 are in a place where throwing more engineers at them is going to speed
 anything up. Maybe Mark McClain (Nova parity leader) can provide some
 better insight here, but that is the impression I've gotten as an active
 Neutron contributor observing the ongoing parity work.


I cannot speak to which parts of nova-parity are short-staffed, if any,
but from an outsider's perspective I don't think neutron will hit full
parity in Juno. And I would be very surprised to hear that more developers
working on parity won't help. For example, we are already in Juno-3 and the
following work is yet to be completed (as per the neutron gap wiki):

* Make check-tempest-dsvm-neutron-full stable enough to vote
* Grenade testing
* DVR (Neutron replacement for Nova multi-host)
* Document Open Source Options
* Real world (not in gate) performance, stability and scalability testing
(performance parity with nova-networking).




 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing API
 endpoints and I don't think it was ever the expectation that Nova would
 switch to this during the Juno timeframe anyway. The new API will not be
 used during normal operation and should not impact the existing API at all.






 On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:

 On 08/05/2014 07:28 PM, Joe Gordon wrote:
 
 
 
  On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura 
 kuk...@noironetworks.com
  mailto:kuk...@noironetworks.com wrote:
 
  On 8/4/14, 4:27 PM, Mark McClain wrote:
  All-
 
  tl;dr
 
  * Group Based Policy API is the kind of experimentation we be
  should attempting.
  * Experiments should be able to fail fast.
  * The master branch does not fail fast.
  * StackForge is the proper home to conduct this experiment.
  The disconnect here is that the Neutron group-based policy sub-team
  that has been implementing this feature for Juno does not see this
  work as an experiment to gather data, but rather as an important
  innovative feature to put in the hands of early adopters in Juno and
  into widespread deployment with a stable API as early as Kilo.
 
 
  The group-based policy BP approved for Juno addresses the critical
  need for a more usable, declarative, intent-based interface for
  cloud application developers and deployers, that can co-exist with
  Neutron's current networking-hardware-oriented API and work nicely
  with all existing core plugins. Additionally, we believe that this
  declarative approach is what is needed to properly integrate
  advanced services into Neutron, and will go a long way towards
  resolving the difficulties so far trying to integrate LBaaS, FWaaS,
  and VPNaaS APIs into the current Neutron model.
 
  Like any new service API in Neutron, the initial group policy API
  release will be subject to incompatible changes before being
  declared stable, and hence would be labeled experimental in
  Juno. This does not mean that it is an experiment where to fail
  fast is an acceptable outcome. The sub-team's goal is to stabilize
  the group policy API as quickly as possible,  making any needed
  changes based on early user and operator experience.
 
  The L and M cycles that Mark suggests below to revisit the status
  are a completely different time frame. By the L or M cycle, we
  should be working on a new V3 Neutron API that pulls these APIs
  together into a more cohesive core API. We will not be in a position
  to do this properly without the experience of using the proposed
  group policy extension with the V2 Neutron API in production.
 
 
  If we were failing miserably, or if serious technical issues were
  being identified with the patches, some delay might make sense. But,
  other than Mark's -2 blocking the initial patches from merging, we
  are on track to complete the planned work in Juno.
 
  -Bob
 
 
 
  As a member of nova-core, I find this whole discussion very startling.
  Putting aside the concerns over technical details and the pain of having
  in tree experimental APIs (such as nova v3 API), neutron still isn't the
  de-facto networking solution from nova's perspective and it won't be
  until neutron has feature and performance parity with nova-network. In
  fact due to the slow maturation of neutron, nova has moved nova-network
  from 'frozen' to open for development (with a few caveats).  So unless
  this new API directly solves some of the gaps in [0], I see no reason to
  push this into Juno. Juno hardly seems to be the appropriate time to
  introduce a new not-so-stable API; Juno is the time to address all 

Re: [openstack-dev] [Heat] Too long stack delete method.

2014-08-06 Thread Zane Bitter

On 06/08/14 10:37, Anant Patil wrote:

Hi,

I see that the stack delete method is too long to comprehend easily
without going to and fro a few times. I think we should refactor it and
move the UpdateReplace-related logic for the backup stack out to another
method. We can also move the user-credentials deletion logic to another
method. With this we can have more testable (each method can be tested
independently) and lucid code.

Don't know if someone has already taken it. I am sure the refactor will
be helpful.


When the current update-failure-recovery stuff I am working on is done 
we're going to have the same bug with updates that this code is solving 
for deletes. So I was expecting to do some refactoring along these lines 
anyway.


BTW, I suggest you put [Heat] in the subject line when posting to 
openstack-dev if you want Heat developers to see it.


cheers,
Zane.



[openstack-dev] [Tripleo] Release report

2014-08-06 Thread mar...@redhat.com
This was my first run so if I missed something please ping me, esp if
you are in need of a stable branch (for those projects we do that for),

1. os-apply-config: no changes, 0.1.19

2. os-refresh-config:   no changes, 0.1.7

3. os-collect-config:   no changes, 0.1.25

4. os-cloud-config: release: 0.1.4 -- 0.1.5
-- https://pypi.python.org/pypi/os-cloud-config/0.1.5
--
http://tarballs.openstack.org/os-cloud-config/os-cloud-config-0.1.5.tar.gz

5. diskimage-builder:   no changes, 0.1.26

6. dib-utils:   no changes, 0.0.4

7. tripleo-heat-templates:  release: 0.7.1 -- 0.7.2
-- https://pypi.python.org/pypi/tripleo-heat-templates/0.7.2
--
http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-0.7.2.tar.gz

8: tripleo-image-elements:  release: 0.8.1 -- 0.8.2
-- https://pypi.python.org/pypi/tripleo-image-elements/0.8.2
--
http://tarballs.openstack.org/tripleo-image-elements/tripleo-image-elements-0.8.2.tar.gz

9: tuskar:  release 0.4.6 -- 0.4.7
-- https://pypi.python.org/pypi/tuskar/0.4.7
-- http://tarballs.openstack.org/tuskar/tuskar-0.4.7.tar.gz

10. python-tuskarclient:    no changes, 0.1.8


I'll deal with the process_bugs thing before end of week,

thanks! marios



[openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-06 Thread Eoghan Glynn

Folks,

It's come to our attention that some key individuals are not
fully up-to-date on gnocchi activities. Since it's a good and
healthy thing to ensure we're as communicative as possible about
our roadmap, I've provided a high-level overview here of our
thinking. This is intended as a precursor to further discussion
with the TC.

Cheers,
Eoghan


What gnocchi is:
================

Gnocchi is a separate, but related, project spun up on stackforge
by Julien Danjou, with the objective of providing efficient
storage and retrieval of timeseries-oriented data and resource
representations.

The goal is to experiment with a potential approach to addressing
an architectural misstep made in the very earliest days of
ceilometer, specifically the decision to store snapshots of some
resource metadata alongside each metric datapoint. The core idea
is to move to storing datapoints shorn of metadata, and instead
allow the resource-state timeline to be reconstructed more cheaply
from much less frequently occurring events (e.g. instance resizes
or migrations).
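As a toy illustration of that idea (not gnocchi's actual schema or API), datapoints can be stored as bare (timestamp, value) pairs while resource state is kept as a sparse list of change events, from which the state at any datapoint's timestamp is reconstructed on demand:

```python
import bisect

# Toy sketch of the storage split described above. Datapoints carry only
# (timestamp, value); resource state is recorded separately as infrequent
# change events (e.g. resize, migration) and looked up per timestamp.

class ResourceTimeline:
    def __init__(self):
        self.datapoints = []    # (timestamp, value) only -- no metadata snapshots
        self.event_times = []   # sparse, sorted timestamps of state changes
        self.event_states = []  # state recorded at each change

    def record_event(self, ts, state):
        # Insert keeping event_times sorted.
        i = bisect.bisect_right(self.event_times, ts)
        self.event_times.insert(i, ts)
        self.event_states.insert(i, state)

    def record_datapoint(self, ts, value):
        self.datapoints.append((ts, value))

    def state_at(self, ts):
        # Reconstruct the resource state from the most recent event <= ts,
        # instead of storing a metadata snapshot alongside every datapoint.
        i = bisect.bisect_right(self.event_times, ts) - 1
        return self.event_states[i] if i >= 0 else None
```

The storage saving comes from `record_event` firing far less often than `record_datapoint`.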


What gnocchi isn't:
===================

Gnocchi is not a large-scale under-the-radar rewrite of a core
OpenStack component along the lines of keystone-lite.

The change is concentrated on the final data-storage phase of
the ceilometer pipeline, so will have little initial impact on the
data-acquiring agents, or on the transformation phase.

We've been totally open at the Atlanta summit and other forums
about this approach being a multi-cycle effort.


Why we decided to do it this way:
=================================

The intent behind spinning up a separate project on stackforge
was to allow the work to progress at arm's length from ceilometer,
allowing normalcy to be maintained on the core project and a
rapid rate of innovation on gnocchi.

Note that the developers primarily contributing to gnocchi
represent a cross-section of the core team, and there's a regular
feedback loop in the form of a recurring agenda item at the
weekly team meeting to avoid the effort becoming silo'd.


But isn't re-architecting frowned upon?
=======================================

Well, the architectures of other OpenStack projects have also
undergone change as the community understanding of the
implications of prior design decisions has evolved.

Take for example the move towards nova no-db-compute and the
unified-object-model in order to address issues in the nova
architecture that made progress towards rolling upgrades
unnecessarily difficult.

The point, in my understanding, is not to avoid doing the
course-correction where it's deemed necessary. Rather, the
principle is more that these corrections happen in an open
and planned way.


The path forward:
=================

A subset of the ceilometer community will continue to work on
gnocchi in parallel with the ceilometer core over the remainder
of the Juno cycle and into the Kilo timeframe. The goal is to
have an initial implementation of gnocchi ready for tech preview
by the end of Juno, and to have the integration/migration/
co-existence questions addressed in Kilo.

Moving the ceilometer core to using gnocchi will be contingent
on it demonstrating the required performance characteristics and
providing the semantics needed to support a v3 ceilometer API
that's fit-for-purpose.



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Jay Pipes

On 08/06/2014 02:12 AM, Kevin Benton wrote:

Given that, pointing to the Nova parity work seems a bit like a red
herring. This new API is being developed orthogonally to the existing
API endpoints


You see how you used the term endpoints there? :P

-jay



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Ben Nemec
On 08/06/2014 12:41 AM, Yuriy Taraday wrote:
 On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com wrote:
 
 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
 wrote:

 On 08/05/2014 10:51 AM, ZZelle wrote:
 Hi,


 I like the idea... with a complex change, it could be useful for
 understanding to split it into smaller changes during development.

 I don't understand this.  If it's a complex change that you need
 multiple commits to keep track of locally, why wouldn't reviewers want
 the same thing?  Squashing a bunch of commits together solely so you
 have one review for Gerrit isn't a good thing.  Is it just the warning
 message that git-review prints when you try to push multiple commits
 that is the problem here?


 When you're developing some big change you'll end up with trying
 dozens of different approaches and make thousands of mistakes. For
 reviewers this is just unnecessary noise (commit title "Scratch my
 last CR, that was bullshit") while for you it's a precious history
 that can provide a basis for future research or bug-hunting.

 So basically keeping a record of how not to do it?
 
 
 Well, yes, you can call version control system a history of failures.
 Because if there were no failures there would've been one omnipotent commit
 that does everything you want it to.

Ideally, no.  In a perfect world every commit would work, so the version
history would be a number of small changes that add up to this great
application.  In reality it's a combination of new features, oopses, and
fixes for those oopses.  I certainly wouldn't describe it as a history
of failures though.  I would hope the majority of commits to our
projects are _not_ failures. :-)

 
 
  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.

 
 The commits themselves are never going to merge to master but that's not
 the only meaning of their life. With current tooling the working branch
 ends up as a patch series that is constantly rewritten with no proper
 history of when that happened and why. As I said, you can't find the
 roots of bugs in
 your code, you can't dig into old versions of your code (what if you need a
 method that you've already created but removed because of some wrong
 suggestion?).

You're not going to find the root of a bug in your code by looking at an
old commit that was replaced by some other implementation.  If anything,
I see that as more confusing.  And if you want to keep old versions of
your code, either push it to Gerrit or create a new branch before
changing it further.

 
  They're basically unnecessary conflicts waiting to happen.

 
 No. They are your local history. They don't need to be rebased on top of
 master - you can just merge master into your branch and resolve conflicts
 once. After that your autosquashed commit will merge cleanly back to
 master.

Then don't rebase them.  git checkout -b dead-end and move on. :-)

 
 
 Merges are one of the strong sides of Git itself (and keeping them very
 easy is one of the founding principles behind it). With current workflow
 we
 don't use them at all. master went too far forward? You have to do rebase
 and screw all your local history and most likely squash everything anyway
 because you don't want to fix commits with known bugs in them. With
 proposed feature you can just do merge once and let 'git review' add some
 magic without ever hurting your code.

 How do rebases screw up your local history?  All your commits are still
 there after a rebase, they just have a different parent.  I also don't
 see how rebases are all that much worse than merges.  If there are no
 conflicts, rebases are trivial.  If there are conflicts, you'd have to
 resolve them either way.

 
 Merge is a new commit, new recorded point in history. Rebase is rewriting
 your commit, replacing it with a new one, without any record in history (of
 course there will be a record in reflog but there's not much tooling to
 work with it). Yes, you just apply your patch to a different version of
 master branch. And then fix some conflicts. And then fix some tests. And
 then you end up with totally different commit.

And with merge commits you end up with a tree that is meaningless except
at the very tail end of the commit series, which I think is the root of
your problems with rebasing.  I imagine it would be very painful to work
in a way where the only commit that you can test against is the last one.

 I totally agree that life's very easy when there are no conflicts and you've
 written your whole feature in one go. But that's almost never true.
 
 
 I also reiterate my point about not keeping broken commits on your
 working branch.  You know at some point they're going to get
 accidentally submitted. :-)

 
 Well... As long as you use 'git 

Re: [openstack-dev] [Heat] Too long stack delete method.

2014-08-06 Thread Anant Patil
On 06-Aug-14 20:20, Zane Bitter wrote:
 On 06/08/14 10:37, Anant Patil wrote:
 Hi,

 I see that the stack delete method is too long to comprehend easily
 without going to-and-fro few times. I think we should refactor it and
 move out the UpdateReplace related logic for backup stack to another
 method. We can also move the user credentials deletion related logic to
 another method. With this we can have more testable (each method
 can be tested independently) and lucid code.

 Don't know if someone has already taken this up. I am sure the refactor will
 be helpful.
 
 When the current update-failure-recovery stuff I am working on is done 
 we're going to have the same bug with updates that this code is solving 
 for deletes. So I was expecting to do some refactoring along these lines 
 anyway.
 
 BTW, I suggest you put [Heat] in the subject line when posting to 
 openstack-dev if you want Heat developers to see it.
 
 cheers,
 Zane.
 
 
 

Sure. I think it's critical we do that.

 BTW, I suggest you put [Heat] in the subject line when posting to
 openstack-dev if you want Heat developers to see it.
I re-sent it with the appropriate subject after realizing that later.



[openstack-dev] [sahara] team meeting Aug 7 1800 UTC

2014-08-06 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140807T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-06 Thread Eoghan Glynn


 Hi everyone,
 
 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.
 
 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.
 
 It all boils down to an imbalance between strategic and tactical
 contributions. At the beginning of this project, we had a strong inner
 group of people dedicated to fixing all loose ends. Then a lot of
 companies got interested in OpenStack and there was a surge in tactical,
 short-term contributions. We put on a call for more resources to be
 dedicated to strategic contributions like critical bugfixing,
 vulnerability management, QA, infrastructure... and that call was
 answered by a lot of companies that are now key members of the OpenStack
 Foundation, and all was fine again. But OpenStack contributors kept on
 growing, and we grew the narrowly-focused population way faster than the
 cross-project population.
 
 At the same time, we kept on adding new projects to incubation and to
 the integrated release, which is great... but the new developers you get
 on board with this are much more likely to be tactical than strategic
 contributors. This also contributed to the imbalance. The penalty for
 that imbalance is twofold: we don't have enough resources available to
 solve old, known OpenStack-wide issues; but we also don't have enough
 resources to identify and fix new issues.
 
 We have several efforts under way, like calling for new strategic
 contributors, driving towards in-project functional testing, making
 solving rare issues a more attractive endeavor, or hiring resources
 directly at the Foundation level to help address those. But there is a
 topic we haven't raised yet: should we concentrate on fixing what is
 currently in the integrated release rather than adding new projects ?
 
 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?
 
 On the integrated release side, more projects means stretching our
 limited strategic resources more. Is it time for the Technical Committee
 to more aggressively define what is in and what is out ? If we go
 through such a redefinition, shall we push currently-integrated projects
 that fail to match that definition out of the integrated release inner
 circle ?
 
 The TC discussion on what the integrated release should or should not
 include has always been informally going on. Some people would like to
 strictly limit to end-user-facing projects. Some others suggest that
 OpenStack should just be about integrating/exposing/scaling smart
 functionality that lives in specialized external projects, rather than
 trying to outsmart those by writing our own implementation. Some others
 are advocates of carefully moving up the stack, and to resist from
 further addressing IaaS+ services until we complete the pure IaaS
 space in a satisfactory manner. Some others would like to build a
 roadmap based on AWS services. Some others would just add anything that
 fits the incubation/integration requirements.
 
 On one side this is a long-term discussion, but on the other we also
 need to make quick decisions. With 4 incubated projects, and 2 new ones
 currently being proposed, there are a lot of people knocking at the door.
 
 Thanks for reading this braindump this far. I hope this will trigger the
 open discussions we need to have, as an open source project, to reach
 the next level.


Thanks Thierry, for this timely post.

You've touched on multiple trains-of-thought that could indeed
justify separate threads of their own.

I agree with your read on the diverging growth rates in the
strategic versus the tactical elements of the community.

I would also be supportive of the notion of taking a cycle out to
fully concentrate on solving existing quality/scaling/performance
issues, if that's what you meant by pausing to define key cycle
goals while deferring everything else.

Though FWIW I think scaling back the set of currently integrated
projects is not the appropriate solution to the problem of over-
stretched strategic resources on the QA/infra side of the house.

Rather, I think the proposed move to in-project functional
testing, in place of throwing the kitchen sink into Tempest,
is far more likely to pay dividends in terms of making the job
facing the QA Trojans more tractable and sustainable.

Just my $0.02 ...

Cheers,
Eoghan


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ivar Lazzaro
Hi Joe,

Are you suggesting to stop/remove everything that is not related to Nova
Parity for the Juno release? Because then I fail to see why this and Mark's
proposal are targeted only at GBP.

In my humble opinion, these kind of concerns should be addressed at BP
approval time. Otherwise the whole purpose of the BP process feels void.

If we really feel like proposing a new way of addressing new features in
Neutron (which basically is a workflow change), we should discuss all of it
for the next release without blocking patches which went through the whole
approval process and are ready to be merged after community effort (BP
process, weekly meetings, PoC, reviews), just as has been done in other
similar cases (e.g. 3rd-party CI). This of course is IMHO.

Ivar.
On Aug 6, 2014 4:55 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Aug 6, 2014 at 4:12 PM, Kevin Benton blak...@gmail.com wrote:

 Are there any parity features you are aware of that aren't receiving
 adequate developer/reviewer time? I'm not aware of any parity features that
 are in a place where throwing more engineers at them is going to speed
 anything up. Maybe Mark McClain (Nova parity leader) can provide some
 better insight here, but that is the impression I've gotten as an active
 Neutron contributor observing the ongoing parity work.


 I cannot speak for which parts of nova-parity are short staffed, if any,
 but from an outsider's perspective I don't think neutron will hit full
 parity in Juno. And I would be very surprised to hear that more developers
 working on parity won't help. For example we are already in Juno-3 and the
 following work is yet to be completed (as per the neutron gap wiki):

 * Make check-tempest-dsvm-neutron-full stable enough to vote
 * Grenade testing
 * DVR (Neutron replacement for Nova multi-host)
 * Document Open Source Options
 * Real world (not in gate) performance, stability and scalability testing
 (performance parity with nova-networking).




 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing API
 endpoints and I don't think it was ever the expectation that Nova would
 switch to this during the Juno timeframe anyway. The new API will not be
 used during normal operation and should not impact the existing API at all.






 On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:

 On 08/05/2014 07:28 PM, Joe Gordon wrote:
 
 
 
  On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura 
 kuk...@noironetworks.com
  mailto:kuk...@noironetworks.com wrote:
 
  On 8/4/14, 4:27 PM, Mark McClain wrote:
  All-
 
  tl;dr
 
  * Group Based Policy API is the kind of experimentation we
  should be attempting.
  * Experiments should be able to fail fast.
  * The master branch does not fail fast.
  * StackForge is the proper home to conduct this experiment.
  The disconnect here is that the Neutron group-based policy sub-team
  that has been implementing this feature for Juno does not see this
  work as an experiment to gather data, but rather as an important
  innovative feature to put in the hands of early adopters in Juno
 and
  into widespread deployment with a stable API as early as Kilo.
 
 
  The group-based policy BP approved for Juno addresses the critical
  need for a more usable, declarative, intent-based interface for
  cloud application developers and deployers, that can co-exist with
  Neutron's current networking-hardware-oriented API and work nicely
  with all existing core plugins. Additionally, we believe that this
  declarative approach is what is needed to properly integrate
  advanced services into Neutron, and will go a long way towards
  resolving the difficulties so far trying to integrate LBaaS, FWaaS,
  and VPNaaS APIs into the current Neutron model.
 
  Like any new service API in Neutron, the initial group policy API
  release will be subject to incompatible changes before being
  declared stable, and hence would be labeled experimental in
  Juno. This does not mean that it is an experiment where to fail
  fast is an acceptable outcome. The sub-team's goal is to stabilize
  the group policy API as quickly as possible,  making any needed
  changes based on early user and operator experience.
 
  The L and M cycles that Mark suggests below to revisit the status
  are a completely different time frame. By the L or M cycle, we
  should be working on a new V3 Neutron API that pulls these APIs
  together into a more cohesive core API. We will not be in a
 position
  to do this properly without the experience of using the proposed
  group policy extension with the V2 Neutron API in production.
 
 
  If we were failing miserably, or if serious technical issues were
  being identified with the patches, some delay might make sense.
 But,
  other than 

Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-06 Thread Jay Faulkner
Similarly, I appreciated this idea when we discussed it at the mid-cycle,
and I appreciate it being made public here.

+1

-Jay Faulkner


From: Lucas Alvares Gomes lucasago...@gmail.com
Sent: Wednesday, August 06, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Proposal for slight change in our spec
process

Already agreed with the idea at the midcycle, but just making it public: +1

On Tue, Aug 5, 2014 at 8:54 PM, Roman Prykhodchenko
rprikhodche...@mirantis.com wrote:
 Hi!

 I think this is a nice idea indeed. Do you plan to use this process starting
 from Juno or as soon as possible?

It will start in Kilo



Re: [openstack-dev] [all] The future of the integrated release

2014-08-06 Thread Matt Joyce
On Wed, Aug 06, 2014 at 11:51:27AM -0400, Eoghan Glynn wrote:
 
 
 You've touched on multiple trains-of-thought that could indeed
 justify separate threads of their own.
 
 I agree with your read on the diverging growth rates in the
 strategic versus the tactical elements of the community.
 
 I would also be supportive of the notion of taking a cycle out to
 fully concentrate on solving existing quality/scaling/performance
 issues, if that's what you meant by pausing to define key cycle
 goals while deferring everything else.
 
 Though FWIW I think scaling back the set of currently integrated
 projects is not the appropriate solution to the problem of over-
 stretched strategic resources on the QA/infra side of the house.
 
 Rather, I think the proposed move to in-project functional
 testing, in place of throwing the kitchen sink into Tempest,
 is far more likely to pay dividends in terms of making the job
 facing the QA Trojans more tractable and sustainable.
 
 Just my $0.02 ...
 
 Cheers,
 Eoghan
 

This is where complexity theory trumps automation.

And frankly the problem we're facing here is a monumental one that
has plagued developers for decades.  I think Dennis Ritchie got closest
to getting it right.  Small tools that do one job well.  Really well.

Make those tools obey some common rules of behavior and interaction.

But don't let a tool do more than one thing well.  When you do, you
end up with infighting among developers and complexity that's
unsustainable.

I think glance is a great example of going one step too far with
artifacts.  This isn't in scope for what glance is.  It's really
enough of a deviation from glance's core function to merit the spin
up of a new tool.  And we shouldn't be afraid of letting people
spin up tools.  Hell they will anyways.  If there's a road block
in front of a developer they'll route around it.

What we need to do is make sure that developers have a list of
requirements they can meet so as not to end up routing too far
around and ending up in some dangerous territory.  We have some
of that already, but it could be cleaned up and codified better.

That's not to say centralized clearing houses don't work.  Oslo
works because it's just a clearing house of small bits of 
functionality that people can commonly focus on.  Kind of like GNU 
tools.  That's n-scaling nested in an umbrella project.

Tempest is a vertical.  Glance artifacts are the first step
in turning glance into a vertical.  These are things we need
to try to avoid.

That's my USD$.02

-Matt



[openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-06 Thread Luke Gorrie
Howdy!

Rumor has it that it's easy to distribute ML2 mech drivers as out-of-tree
add-on modules.

Is this true? Has it been done? Where would one find an example?
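For what it's worth, ML2 loads mechanism drivers through stevedore entry points, so an out-of-tree package should only need to register its driver class under the right namespace. A hedged sketch follows: the entry-point namespace `neutron.ml2.mechanism_drivers` comes from neutron's own setup.cfg, but the package and class names are made up, and the stand-in base class replaces neutron's real `MechanismDriver` so the snippet runs without neutron installed:

```python
# Sketch of an out-of-tree ML2 mechanism driver. In a real package this
# class would subclass neutron.plugins.ml2.driver_api.MechanismDriver.
#
# The add-on package's setup.cfg would register the driver via an entry
# point (names here are illustrative):
#
#   [entry_points]
#   neutron.ml2.mechanism_drivers =
#       mydriver = mypackage.mech_driver:MyMechanismDriver
#
# after which the driver is enabled in ml2_conf.ini with e.g.:
#   mechanism_drivers = openvswitch,mydriver

class MechanismDriver:
    """Stand-in for neutron's abstract MechanismDriver base class."""
    def initialize(self):
        raise NotImplementedError

class MyMechanismDriver(MechanismDriver):
    def initialize(self):
        # Called once by ML2 after the driver is loaded.
        self.initialized = True

    def create_port_postcommit(self, context):
        # React to a port being created; in the real API 'context'
        # wraps the port dict and network information.
        return ('port-created', context)
```

Since discovery is by entry point, nothing about the driver needs to live in the neutron tree.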

Cheers!
-Luke


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
It sounds to me like you are describing how a developer uses Keystone, not
a user. My reference to 'application deployer' was to someone trying to run
something like a mail server on an openstack cloud.
On Aug 6, 2014 7:07 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 06:13 PM, Kevin Benton wrote:
  That makes sense. It's not quite a fair analogy though to compare to
  reintroducing projects or tenants because Keystone endpoints aren't
  'user-facing' so to speak. i.e. a regular user (application deployer,
  instance operator, etc) should never have to see or understand the
  purpose of a Keystone endpoint.

 An end user that is consuming any OpenStack API absolutely must
 understand endpoints in the service catalog.  The entire purpose of the
 catalog is so that an application only needs to know the API endpoint to
 keystone and is then able to discover where the rest of the APIs are
 located.  They are very much user facing, IMO.

 --
 Russell Bryant



Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-06 Thread Matt Riedemann



On 8/5/2014 12:39 PM, Solly Ross wrote:

Just to add my two cents, while I get that people need to run on older versions 
of software,
at a certain point you have to bump the minimum version.  Even libvirt 0.9.11 
is from April 3rd 2012.
That's two and a third years old at this point.  I think at a certain point we need 
to say if you want
to run OpenStack on an older platform, then you'll need to run an older 
OpenStack or backport the required
packages.

Best Regards,
Solly Ross

- Original Message -

From: Joe Gordon joe.gord...@gmail.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, July 30, 2014 7:07:13 PM
Subject: Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm 
on precise?




On Jul 30, 2014 3:36 PM, Clark Boylan  cboy...@sapwetik.org  wrote:


On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:

On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:

While forcing people to move to a newer version of libvirt is
doable on most environments, do we want to do that now? What is
the benefit of doing so?

[...]

The only dog I have in this fight is that using the split-out
libvirt-python on PyPI means we finally get to run Nova unit tests
in virtualenvs which aren't built with system-site-packages enabled.
It's been a long-running headache which I'd like to see eradicated
everywhere we can. I understand though if we have to go about it
more slowly, I'm just excited to see it finally within our grasp.
--
Jeremy Stanley


We aren't quite forcing people to move to newer versions. Only those
installing nova test-requirements need newer libvirt. This does not
include people using eg devstack. I think it is reasonable to expect
people testing tip of nova master to have a reasonably newish test bed
to test it (its not like the Infra team moves at a really fast pace :)
).


Based on
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html
this patch is breaking people, which is the basis for my concerns. Perhaps
we should get some further details from Salvatore.



Avoiding system site packages in virtualenvs is a huge win particularly
for consistency of test results. It avoids pollution of site packages
that can happen differently across test machines. This particular type
of inconsistency has been the cause of the previously mentioned
headaches.


I agree this is a huge win, but I am just concerned we don't have any
deprecation cycle and just roll out a new requirement without a heads up.



Clark




Yeah, I agree, I'm just, you know, a curmudgeon.  I was doing a 
stable/havana backport though on my ubuntu precise + libvirt 1.2.2 from 
cloud-archive:icehouse and hit this bug:


https://bugs.launchpad.net/nova/+bug/1266711

I guess I should just get off my ass and set up a Trusty VM for Juno+
development and leave my Precise one alone for stable branch work.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
In the weekly neutron meetings it hasn't been mentioned that any of these
items are at risk due to developer shortage. That's why I wanted Mark
McClain to reply here because he has been leading the parity effort.
On Aug 6, 2014 8:56 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Aug 6, 2014 at 4:12 PM, Kevin Benton blak...@gmail.com wrote:

 Are there any parity features you are aware of that aren't receiving
 adequate developer/reviewer time? I'm not aware of any parity features that
 are in a place where throwing more engineers at them is going to speed
 anything up. Maybe Mark McClain (Nova parity leader) can provide some
 better insight here, but that is the impression I've gotten as an active
 Neutron contributor observing the ongoing parity work.


 I cannot speak for which parts of nova-parity are short staffed, if any,
 but from an outsider's perspective I don't think neutron will hit full
 parity in Juno. And I would be very surprised to hear that more developers
 working on parity won't help. For example we are already in Juno-3 and the
 following work is yet to be completed (as per the neutron gap wiki):

 * Make check-tempest-dsvm-neutron-full stable enough to vote
 * Grenade testing
 * DVR (Neutron replacement for Nova multi-host)
 * Document Open Source Options
 * Real world (not in gate) performance, stability and scalability testing
 (performance parity with nova-networking).




 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing API
 endpoints and I don't think it was ever the expectation that Nova would
 switch to this during the Juno timeframe anyway. The new API will not be
 used during normal operation and should not impact the existing API at all.






 On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:

 On 08/05/2014 07:28 PM, Joe Gordon wrote:
 
 
 
  On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura
  kuk...@noironetworks.com wrote:
 
  On 8/4/14, 4:27 PM, Mark McClain wrote:
  All-
 
  tl;dr
 
  * Group Based Policy API is the kind of experimentation we
  should be attempting.
  * Experiments should be able to fail fast.
  * The master branch does not fail fast.
  * StackForge is the proper home to conduct this experiment.
  The disconnect here is that the Neutron group-based policy sub-team
  that has been implementing this feature for Juno does not see this
  work as an experiment to gather data, but rather as an important
  innovative feature to put in the hands of early adopters in Juno
 and
  into widespread deployment with a stable API as early as Kilo.
 
 
  The group-based policy BP approved for Juno addresses the critical
  need for a more usable, declarative, intent-based interface for
  cloud application developers and deployers, that can co-exist with
  Neutron's current networking-hardware-oriented API and work nicely
  with all existing core plugins. Additionally, we believe that this
  declarative approach is what is needed to properly integrate
  advanced services into Neutron, and will go a long way towards
  resolving the difficulties so far trying to integrate LBaaS, FWaaS,
  and VPNaaS APIs into the current Neutron model.
 
  Like any new service API in Neutron, the initial group policy API
  release will be subject to incompatible changes before being
  declared stable, and hence would be labeled experimental in
  Juno. This does not mean that it is an experiment where to fail
  fast is an acceptable outcome. The sub-team's goal is to stabilize
  the group policy API as quickly as possible,  making any needed
  changes based on early user and operator experience.
 
  The L and M cycles that Mark suggests below to revisit the status
  are a completely different time frame. By the L or M cycle, we
  should be working on a new V3 Neutron API that pulls these APIs
  together into a more cohesive core API. We will not be in a
 position
  to do this properly without the experience of using the proposed
  group policy extension with the V2 Neutron API in production.
 
 
  If we were failing miserably, or if serious technical issues were
  being identified with the patches, some delay might make sense.
 But,
  other than Mark's -2 blocking the initial patches from merging, we
  are on track to complete the planned work in Juno.
 
  -Bob
 
 
 
  As a member of nova-core, I find this whole discussion very startling.
  Putting aside the concerns over technical details and the pain of
 having
  in tree experimental APIs (such as nova v3 API), neutron still isn't
 the
  de-facto networking solution from nova's perspective and it won't be
  until neutron has feature and performance parity with nova-network. In
  fact due to the slow maturation of neutron, nova has moved 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
As a cloud admin one needs to make sure the endpoints in keystone
publicurl, internalurl and adminurl all map to the right places in the
infrastructure. As a cloud user (for example when using the HP/RAX public
cloud that has multiple regions/endpoints) a user needs to be aware of
which region maps to which endpoint.
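As a rough illustration of that region-to-endpoint mapping, the lookup a client has to perform against the catalog might look like the sketch below. The catalog literal is a simplified, made-up Keystone-v2-style structure and `find_endpoint` is a hypothetical helper, not part of any client library:

```python
# Simplified, made-up service catalog in the shape of a Keystone v2
# token response. Real responses carry more fields per endpoint.
catalog = [
    {
        "type": "compute",
        "name": "nova",
        "endpoints": [
            {"region": "RegionOne",
             "publicURL": "https://compute.one.example.com:8774/v2",
             "internalURL": "http://10.0.0.5:8774/v2"},
            {"region": "RegionTwo",
             "publicURL": "https://compute.two.example.com:8774/v2",
             "internalURL": "http://10.1.0.5:8774/v2"},
        ],
    },
]


def find_endpoint(catalog, service_type, region, interface="publicURL"):
    """Hypothetical helper: pick the endpoint URL for a service/region.

    This is the lookup every catalog consumer does one way or another:
    the cloud admin decides what these URLs point at, the user decides
    which region (and hence which endpoint) to talk to.
    """
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["region"] == region:
                return ep[interface]
    raise LookupError("no %s endpoint in %s" % (service_type, region))


print(find_endpoint(catalog, "compute", "RegionTwo"))
```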


On Wed, Aug 6, 2014 at 9:56 AM, Kevin Benton blak...@gmail.com wrote:

 It sounds to me like you are describing how a developer uses Keystone, not
 a user. My reference to 'application deployer' was to someone trying to run
 something like a mail server on an openstack cloud.
 On Aug 6, 2014 7:07 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/05/2014 06:13 PM, Kevin Benton wrote:
  That makes sense. It's not quite a fair analogy though to compare to
  reintroducing projects or tenants because Keystone endpoints aren't
  'user-facing' so to speak. i.e. a regular user (application deployer,
  instance operator, etc) should never have to see or understand the
  purpose of a Keystone endpoint.

 An end user that is consuming any OpenStack API absolutely must
 understand endpoints in the service catalog.  The entire purpose of the
 catalog is so that an application only needs to know the API endpoint to
 keystone and is then able to discover where the rest of the APIs are
 located.  They are very much user facing, IMO.

 --
 Russell Bryant



Re: [openstack-dev] [Fuel] Blueprints process

2014-08-06 Thread Tomasz Napierala

On 06 Aug 2014, at 10:41, Sergii Golovatiuk sgolovat...@mirantis.com wrote:

 Hi,
 
 I really like what Mike proposed. It will help us to keep milestone clean and 
 accurate.

+1 for Mike’s proposal. New members will also benefit from that move: a clean 
picture will make it easier to pick up features to work on.

Regards,
-- 
Tomasz Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com








Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ronak Shah
We have diverged our attention towards nova-network -> Neutron parity on
this thread unnecessarily.

Can we discuss and collectively decide on what is the way forward for GBP
in Juno release?

Efforts have been made by the subteam starting from throwing PoC at last
summit to spec approval to code review.

This feature is useful, and I think everyone is on the same page there.

Let us not discourage the effort by bringing existing Neutron issues into
play.
Yes, we as a Neutron community need to fix those with the highest priority,
but this is an orthogonal effort.
If 'endpoint' is not a likeable name, then let's propose a more meaningful
alternative.
Let us try to find a middle ground on how this feature can be made
generally available.

Thanks,
Ronak


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-06 Thread Joe Gordon
On Tue, Aug 5, 2014 at 8:25 PM, Tom Fifield t...@openstack.org wrote:

 On 06/08/14 03:54, Jay Pipes wrote:

 On 08/05/2014 03:23 PM, Collins, Sean wrote:

 On Tue, Aug 05, 2014 at 12:50:45PM EDT, Monty Taylor wrote:

 However, I think the cost to providing that path far outweighs
 the benefit in the face of other things on our plate.


 Perhaps those large operators that are hoping for a
 Nova-Network-Neutron zero-downtime live migration, could dedicate
 resources to this requirement? It is my direct experience that features
 that are important to a large organization will require resources
 from that very organization to be completed.


 Indeed, that's partly why I called out Metacloud in the original post,
 as they were brought up as a deployer with this potential need. Please,
 if there are any other shops that:

 * Currently deploy nova-network
 * Need to move to Neutron
 * Their tenants cannot tolerate any downtime due to a cold migration

 Please do comment on this thread and speak up.


 Just to chip in for the dozens of users I have personally spoken to that
 do have the requirement for nova-network to neutron migration, and would be
 adversely affected if it was not implemented prior to deprecating
 nova-network: raising this concept only on a development mailing list is a
 bad idea :)


The way I see it, a migration strategy shouldn't be required to make
neutron the recommended networking model in OpenStack (something that the
nova team is not comfortable saying today). But a migration strategy is
required for deprecating nova-network (starting the clock for a possible
removal date).  For deployers this comes down to two different questions:

* What networking model should be used in greenfield deployments?
* How do I migrate to the new networking solution?



 If anyone is serious about not providing a proper migration path for these
 users that need it, there is a need to be yelling this for probably a few
 summits in a row and at every OpenStack event we have in between, as well
 as the full gamut of periodic surveys, blogs, twitters, weibos, linkedins,
 facebooks, etc.


 Regards,


 Tom




Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
What I was referring to was also not Keystone's definition of an endpoint.
It's almost as if the term has many uses and was not invented for Keystone.
:-)

http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

Did a similar discussion occur when Heat wanted to use the word 'template'
since this was clearly already in use by Horizon?
On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/06/2014 02:12 AM, Kevin Benton wrote:

 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing
 API endpoints


 You see how you used the term endpoints there? :P

 -jay



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Zane Bitter

On 04/08/14 19:18, Yuriy Taraday wrote:

Hello, git-review users!

I'd like to gather feedback on a feature I want to implement that might
turn out useful for you.

I like using Git for development. It allows me to keep track of current
development process, it remembers everything I ever did with the code
(and more).


_CVS_ allowed you to remember everything you ever did; Git is _much_ 
more useful.



I also really like using Gerrit for code review. It provides clean
interfaces, forces clean histories (who needs to know that I changed one
line of code in 3am on Monday?) and allows productive collaboration.


+1


What I really hate is having to throw away my (local, precious for me)
history for all change requests because I need to upload a change to Gerrit.


IMO Ben is 100% correct and, to be blunt, the problem here is your workflow.

Don't get me wrong, I sympathise - really, I do. Nobody likes to change 
their workflow. I *hate* it when I have to change mine. However what 
you're proposing is to modify the tools to make it easy for other people 
- perhaps new developers - to use a bad workflow instead of to learn a 
good one from the beginning, and that would be a colossal mistake. All 
of the things you want to be made easy are currently hard because doing 
them makes the world a worse place.


The big advantage of Git is that you no longer have to choose between 
having no version control while you work or having a history full of 
broken commits, like you did back in the bad old days of CVS. The fact 
that local history is editable is incredibly powerful, and you should 
start making use of it.


A history of small, incremental, *working* changes is the artifact we 
want to produce. For me, trying to reconstruct that from a set of broken 
changes, or changes that only worked with now-obsolete versions of 
master, is highly counterproductive. I work with patches in the form 
that I intend to submit them (which changes as I work) because that 
means I don't have to maintain that form in my head, instead Git can do 
it for me. Of course while I'm working I might need to retain a small 
amount of history - the most basic level of that just the working copy, 
but it often extends to some temporary patches that will later be 
squashed with others or dropped altogether. Don't forget about git add 
-p either - that makes it easy to split changes in a single file into 
separate commits, which is *much* more effective than using a purely 
time-based history.


When master changes I rebase my work, because I need to test all of the 
patches that I will propose in a series against the current master into 
which they will be submitted. Retaining a change that did once work in a 
world that no longer exists is rarely interesting to me, but on those 
occasions where it is I would just create a local branch to hold on to it.
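The rebase-and-retest loop described above can be sketched with stock Git alone. The sketch below runs entirely in a throwaway repository it creates itself, and `true` stands in for the project's real test command (tox, make test, ...):

```shell
# Throwaway-repo demo of "rebase the series and re-test every patch".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you
echo one > file && git add file && git commit -qm 'base'
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', per config

# A two-patch feature series on its own branch.
git checkout -qb feature
echo two >> file && git commit -qam 'patch 1'
echo three >> file && git commit -qam 'patch 2'

# Meanwhile the trunk moves on.
git checkout -q "$trunk"
echo other > other && git add other && git commit -qm 'trunk advances'

# Replay the series onto the new trunk, re-running the test command
# after every rewritten commit (rebase -x). Each patch is thus verified
# against today's trunk, not the trunk it was written on. Here 'true'
# is a placeholder for the real test suite.
git checkout -q feature
git rebase -q -x 'true' "$trunk"
git log --oneline
```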


You are struggling because you think of history as a linear set of 
temporal changes. If you accept that history can instead contain an 
arbitrarily-ordered set of logical changes then your life will be much 
easier, since that corresponds exactly to what you need to deliver for 
review. As soon as you stop fighting against Gerrit and learn how to 
work with it, all of your problems evaporate. Tools that make it easier 
to fight it will ultimately only add complexity without solving the 
underlying problems.


(BTW a tool I use to help with this is Stacked Git. It makes things like 
editing an early patch in a series much easier than rebase -i... I just 
do `stg refresh -p patch_name` and the right patch gets the changes. 
For dead-end ideas I just do `stg pop` and the patch stays around for 
reference but isn't part of the history. I usually don't recommend this 
tool to the community, however, because StGit doesn't run the commit 
hook that is needed to insert the ChangeId for Gerrit. I'd be happy to 
send you my patches that fix it if you want to try it out though.)



That's why I want to propose making git-review to support the workflow
that will make me happy. Imagine you could do smth like this:

0. create new local branch;

master: M--
  \
feature:  *

1. start hacking, doing small local meaningful (to you) commits;

master: M--
  \
feature:  A-B-...-C

2. since hacking takes tremendous amount of time (you're doing a Cool
Feature (tm), nothing less) you need to update some code from master, so
you're just merging master in to your branch (i.e. using Git as you'd
use it normally);


This is not how I'd use Git normally.


master: M---N-O-...
  \\\
feature:  A-B-...-C-D-...

3. and now you get the first version that deserves to be seen by
community, so you run 'git review', it asks you for desired commit
message, and poof, magic-magic all changes from your branch is
uploaded to Gerrit as _one_ change request;
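For reference, the squash in step 3 of the quoted proposal can already be approximated by hand with stock Git. A self-contained sketch mirroring the branch diagrams above (repository layout and commit names are invented):

```shell
# Throwaway-repo demo: collapsing a messy local branch (including a
# merge from the trunk) into ONE commit, the shape Gerrit expects
# for a change request.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you
echo M > m && git add m && git commit -qm 'M'
trunk=$(git symbolic-ref --short HEAD)

# Step 1: small, meaningful-to-you commits on a feature branch.
git checkout -qb feature
echo A > a && git add a && git commit -qm 'A'
echo B > b && git add b && git commit -qm 'B'

# Step 2: the trunk moves, and we merge it in (using Git "normally").
git checkout -q "$trunk"
echo N > n && git add n && git commit -qm 'N'
git checkout -q feature
git merge -q -m 'merge trunk' "$trunk"

# Step 3 by hand: squash the whole branch into one commit on top of
# the trunk -- the single change request Gerrit would see.
git checkout -q "$trunk"
git checkout -qb for-review
git merge -q --squash feature
git commit -qm 'Cool Feature (tm), as one reviewable change'
git log --oneline
```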


You just said that this was a Cool Feature (tm) taking a tremendous 
amount of time. Yet your solution is to squash everything 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Joe Gordon
On Aug 6, 2014 10:21 AM, Ronak Shah ronak.malav.s...@gmail.com wrote:

 We have diverged our attention towards nova-network- neutron parity on
this thread unnecessarily.

 Can we discuss and collectively decide on what is the way forward for GBP
in Juno release?

 Efforts have been made by the subteam starting from throwing PoC at last
summit to spec approval to code review.

 There are usefulness to this feature and I think everyone is on the same
page there.

 Let us not discourage the effort by bringing in existing neutron issue in
play.

 Yes, we has a neutorn community needs to fix that with highest priority.
 But this is orthogonal effort.

The efforts may be orthogonal, but the review team and the bandwidth of said
team are one and the same. Making nova-network parity the highest priority
means pushing other blueprints back as needed. And since there is still so
much uncertainty around GBP this late in the cycle, IMHO it's a good
candidate for getting deferred.

 If endpoint is not a likeable preferred name than lets propose more
meaningful alternative.
 Let us try to find a middle ground on how this feature can be made
generally available.

 Thanks,
 Ronak



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Sridar Kandaswamy (skandasw)
Hi All:

+1 Ivar. Yes, the timing of the alternate proposal does make spec reviews seem 
like a mere process tick mark with no real benefit. It is indeed unfair to the 
folks who have put in a lot of effort with an approved spec to have a workflow 
change pulled on them so late in the cycle.

Thanks

Sridar

From: Ivar Lazzaro ivarlazz...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 12:01 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


Hi Joe,

Are you suggesting we stop/remove everything that is not related to Nova parity 
for the Juno release? Because then I fail to see why this and Mark's proposal 
are targeted only at GBP.

In my humble opinion, these kind of concerns should be addressed at BP approval 
time. Otherwise the whole purpose of the BP process feels void.

If we really feel like proposing a new way of addressing new features in 
Neutron (which basically is a workflow change), we should discuss all of it for 
the next release without blocking patches which went through the whole approval 
process and are ready to be merged after community effort (BP process, weekly 
meetings, PoC, reviews), just as has been done in other similar cases (e.g. 
3rd Party CI). This of course is IMHO.

Ivar.

On Aug 6, 2014 4:55 PM, Joe Gordon joe.gord...@gmail.com wrote:



On Wed, Aug 6, 2014 at 4:12 PM, Kevin Benton blak...@gmail.com wrote:
Are there any parity features you are aware of that aren't receiving adequate 
developer/reviewer time? I'm not aware of any parity features that are in a 
place where throwing more engineers at them is going to speed anything up. 
Maybe Mark McClain (Nova parity leader) can provide some better insight here, 
but that is the impression I've gotten as an active Neutron contributor 
observing the ongoing parity work.

I cannot speak for which parts of nova-parity are short staffed, if any, but 
from an outsiders perspective I don't think neutron will hit full parity in 
Juno. And I would be very surprised to hear that more developers working on 
parity won't help. For example we are already in Juno-3 and the following work 
is yet to be completed (as per the neutron gap wiki):

* Make check-tempest-dsvm-neutron-full stable enough to vote
* Grenade testing
* DVR (Neutron replacement for Nova multi-host)
* Document Open Source Options
* Real world (not in gate) performance, stability and scalability testing 
(performance parity with nova-networking).



Given that, pointing to the Nova parity work seems a bit like a red herring. 
This new API is being developed orthogonally to the existing API endpoints and 
I don't think it was ever the expectation that Nova would switch to this during 
the Juno timeframe anyway. The new API will not be used during normal operation 
and should not impact the existing API at all.



On Tue, Aug 5, 2014 at 5:51 PM, Sean Dague s...@dague.net wrote:
On 08/05/2014 07:28 PM, Joe Gordon wrote:



 On Wed, Aug 6, 2014 at 12:20 AM, Robert Kukura
 kuk...@noironetworks.com wrote:

 On 8/4/14, 4:27 PM, Mark McClain wrote:
 All-

 tl;dr

 * Group Based Policy API is the kind of experimentation we
 should be attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.
 The disconnect here is that the Neutron group-based policy sub-team
 that has been implementing this feature for Juno does not see this
 work as an experiment to gather data, but rather as an important
 innovative feature to put in the hands of early adopters in Juno and
 into widespread deployment with a stable API as early as Kilo.


 The group-based policy BP approved for Juno addresses the critical
 need for a more usable, declarative, intent-based interface for
 cloud application developers and deployers, that can co-exist with
 Neutron's current networking-hardware-oriented API and work nicely
 with all existing core plugins. Additionally, we believe that this
 declarative approach is what is needed to properly integrate
 advanced services into Neutron, and will go a long way towards
 resolving the difficulties so far trying to integrate LBaaS, FWaaS,
 and VPNaaS APIs into the current Neutron model.

 Like any new service API in Neutron, the initial group policy API
 release will be subject to incompatible changes before being
 declared stable, and hence would be labeled experimental in
 Juno. This does not mean that it is an experiment where to fail
 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Edgar Magana
This is the consequence of a proposal that is not following the standardized 
terminology (IETF - RFC) for any Policy-based System:
http://tools.ietf.org/html/rfc3198

Well, I did bring this point up during the Hong Kong Summit but as you can see 
my comments were totally ignored:
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

I clearly saw this kind of issue coming. Let me quote what I suggested: 
For instance: endpoints should be enforcement points

I do not understand why GBP did not include this suggestion...

Edgar

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 10:22 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


What I was referring to was also not Keystone's definition of an endpoint. It's 
almost as if the term has many uses and was not invented for Keystone. :-)

http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

Did a similar discussion occur when Heat wanted to use the word 'template' 
since this was clearly already in use by Horizon?

On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:
On 08/06/2014 02:12 AM, Kevin Benton wrote:
Given that, pointing to the Nova parity work seems a bit like a red
herring. This new API is being developed orthogonally to the existing
API endpoints

You see how you used the term endpoints there? :P

-jay



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ivar Lazzaro
Which kind of uncertainty are you referring to?

Given that the blueprint was approved long ago, and the code has been ready
and under review following those specs... I think GBP is probably the patch
requiring the least effort to merge right now.

Ivar.


On Wed, Aug 6, 2014 at 7:34 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Aug 6, 2014 10:21 AM, Ronak Shah ronak.malav.s...@gmail.com wrote:
 
  We have diverged our attention towards nova-network- neutron parity on
 this thread unnecessarily.
 
  Can we discuss and collectively decide on what is the way forward for
 GBP in Juno release?
 
  Efforts have been made by the subteam starting from throwing PoC at last
 summit to spec approval to code review.
 
  There are usefulness to this feature and I think everyone is on the same
 page there.
 
  Let us not discourage the effort by bringing in existing neutron issue
 in play.

  Yes, we has a neutorn community needs to fix that with highest priority.
  But this is orthogonal effort.

 The efforts may be orthogonal, but the review team and bandwidth of said
 team is one and the same. Making nova-network the highest priority means
 pushing other blueprints back as needed.  And since there is still so much
 uncertainty around GPB this late in the cycle, IMHO it's a good candidate
 for getting deferred.

  If endpoint is not a likeable preferred name than lets propose more
 meaningful alternative.
  Let us try to find a middle ground on how this feature can be made
 generally available.
 
  Thanks,
  Ronak
 


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-06 Thread Jay Pipes

On 08/06/2014 01:40 AM, Tom Fifield wrote:

On 06/08/14 13:30, Robert Collins wrote:

On 6 August 2014 17:27, Tom Fifield t...@openstack.org wrote:

On 06/08/14 13:24, Robert Collins wrote:



What happened to your DB migrations then? :)



Sorry if I misunderstood, I thought we were talking about running VM
downtime here?


While DB migrations are running things like the nova metadata service
can/will misbehave - and user code within instances will be affected.
Thats arguably VM downtime.

OTOH you could define it more narrowly as 'VMs are not powered off' or
'VMs are not stalled for more than 2s without a time slice' etc etc -
my sense is that most users are going to be particularly concerned
about things for which they have to *do something* - e.g. VMs being
powered off or rebooted - but having no network for a short period
while vifs are replugged and the overlay network re-establishes itself
would be much less concerning.


I think you've got it there, Rob - nicely put :)

In many cases the users I've spoken to who are looking for a live path
out of nova-network on to neutron are actually completely OK with some
API service downtime (metadata service is an API service by their
definition). A little 'glitch' in the network is also OK for many of them.

Contrast that with the original proposal in this thread (snapshot VMs
in old nova-network deployment, store in Swift or something, then launch
VM from a snapshot in new Neutron deployment) - it is completely
unacceptable and is not considered a migration path for these users.


Who are these users? Can we speak with them? Would they be interested in 
participating in the documentation and migration feature process?


Best,
-jay



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Salvatore Orlando
As Ronak said, this thread is starting to move in a lot of different
directions, ranging from correctness of the blueprint approval process to
nova/neutron integration, which are rather off topic.

In particular it seems things are being skewed towards a discussion around
nova parity, whereas actually some people have just chimed in with their
honest opinion that with all the stuff still needed to finally be able to
make neutron THE openstack networking solution, the effort towards adding
a new tenant facing API appears to have a lesser priority.

I just want to reassure everybody that the majority of the core team and a
large part of the community have actually made this their first priority.
For what is worth, some of them have even delayed plugin/driver specific
development to this aim.

So I would invite to go back to the original subject of the discussion,
that is to say decide as a community what would the best way forward for
this effort.
I see so far the following options:
- merge the outstanding patches, assuming there are no further technical
concerns, and include GBP in Juno.
- consider GBP an 'experimental' V3 tenant API (this was mentioned
somewhere in this thread) and treat it accordingly
- delay to the next release
- move the development of the service plugin to StackForge, as suggested in
this thread.

More options are obviously welcome!

Regards,
Salvatore


On 6 August 2014 19:40, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Which kind of uncertainty are you referring to?

 Given that the blueprint was approved long ago, and the code has been
 ready and under review following those specs... I think GBP is probably the
 patch with the least effort to be merged right now.

 Ivar.


 On Wed, Aug 6, 2014 at 7:34 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Aug 6, 2014 10:21 AM, Ronak Shah ronak.malav.s...@gmail.com wrote:
 
  We have diverged our attention towards nova-network- neutron parity on
 this thread unnecessarily.
 
  Can we discuss and collectively decide on what is the way forward for
 GBP in Juno release?
 
  Efforts have been made by the subteam starting from throwing PoC at
 last summit to spec approval to code review.
 
  There are usefulness to this feature and I think everyone is on the
 same page there.
 
  Let us not discourage the effort by bringing in existing neutron issue
 in play.

  Yes, we has a neutorn community needs to fix that with highest
 priority.
  But this is orthogonal effort.

 The efforts may be orthogonal, but the review team and bandwidth of said
 team is one and the same. Making nova-network the highest priority means
 pushing other blueprints back as needed.  And since there is still so much
 uncertainty around GPB this late in the cycle, IMHO it's a good candidate
 for getting deferred.

  If endpoint is not a likeable preferred name than lets propose more
 meaningful alternative.
  Let us try to find a middle ground on how this feature can be made
 generally available.
 
  Thanks,
  Ronak
 


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Jay Pipes

On 08/06/2014 01:22 PM, Kevin Benton wrote:

What I was referring to was also not Keystone's definition of an
endpoint. It's almost as if the term has many uses and was not invented
for Keystone. :-)

http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

Did a similar discussion occur when Heat wanted to use the word
'template' since this was clearly already in use by Horizon?


Not sure. But I do know that conversations around resource names in REST 
API endpoints (yes, that is how the term has been used in OpenStack) 
have been numerous. Just search for various conversations around project 
or tenant, or any of the Tuskar API modeling conversations. These kinds 
of discussions do come up often, and for good reason... these are the 
public APIs of OpenStack and are our first impression to the user, so 
to speak. It's a lot easier to try and get things right first than go 
through endless deprecation and backwards compatibility discussions. :)


Best,
-jay



[openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-06 Thread Jay Pipes

Hi Stackers!

So, Liyi Meng has an interesting patch up for Nova:

https://review.openstack.org/#/c/104876

that changes the way that the interval and number of retries is 
calculated for a piece of code that waits around for a block device 
mapping to become active.


Namely, the patch discards the value of the following configuration 
options *if the volume size is not 0* (which is a typical case):


* CONF.block_device_allocate_retries_interval
* CONF.block_device_allocate_retries

and in their place, instead uses a hard-coded 60 max number of retries 
and calculates a more appropriate interval by looking at the size of 
the volume to be created. The algorithm uses the sensible idea that it 
will take longer to allocate larger volumes than smaller volumes, and 
therefore the interval time for larger volumes should be longer than 
smaller ones.
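
For illustration, here is a minimal sketch of such a size-based interval. The constants and scaling below are my own assumptions for demonstration, not the exact formula from Liyi's patch:

```python
MAX_RETRIES = 60  # hard-coded cap, per the patch description above

def retry_interval(volume_size_gb, seconds_per_gb=3, minimum=5):
    """Poll less often for larger volumes, which take longer to allocate."""
    if volume_size_gb <= 0:
        return minimum  # fall back to a floor value for the size-0 case
    return max(minimum, volume_size_gb * seconds_per_gb)
```

Under these assumed constants, a 20 GB volume would be polled every 60 seconds, up to MAX_RETRIES times.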


So... here's the question: since this code essentially makes the above 
two configuration options obsolete for the majority of cases (where 
volume size is not 0), should we do one of the following?


1) We should just deprecate both the options, with a note in the option 
help text that these options are not used when volume size is not 0, and 
that the interval is calculated based on volume size


or

2) We should deprecate the CONF.block_device_allocate_retries_interval 
option only, and keep the CONF.block_device_allocate_retries 
configuration option as-is, changing the help text to read something 
like Max number of retries. We calculate the interval of the retry 
based on the size of the volume.


I bring this up on the mailing list because I think Liyi's patch offers 
an interesting future direction to the way that we think about our retry 
approach in Nova. Instead of having hard-coded or configurable interval 
times, I think Liyi's approach of calculating the interval length based 
on some input values is a good direction to take.


Thoughts?

-jay



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Sumit Naiksatam
Edgar, you seem to have +2'ed this patch on July 2nd [1]:


Edgar Magana
Jul 2 8:42 AM

Patch Set 13: Code-Review+2

All looks good to me! I am not approving yet because Nachi was also
reviewing this code and I would like to see his opinion as well.


That would suggest that you were happy with what was in it. I don't
see anything in the review comments that suggests otherwise.

[1]  https://review.openstack.org/#/c/95900/

On Wed, Aug 6, 2014 at 10:39 AM, Edgar Magana edgar.mag...@workday.com wrote:
 This is the consequence of a proposal that is not following the standardized
 terminology (IETF - RFC) for any Policy-based System:
 http://tools.ietf.org/html/rfc3198

 Well, I did bring this point up during the Hong Kong Summit, but as you can see
 my comments were totally ignored:
 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 I clearly saw this kind of issues coming. Let me quote myself what I
 suggested: For instance: endpoints should be enforcement point

 I do not understand why GBP did not include this suggestion…

 Edgar

 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:22 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward

 What I was referring to was also not Keystone's definition of an endpoint.
 It's almost as if the term has many uses and was not invented for Keystone.
 :-)

 http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

 Did a similar discussion occur when Heat wanted to use the word 'template'
 since this was clearly already in use by Horizon?

 On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/06/2014 02:12 AM, Kevin Benton wrote:

 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing
 API endpoints


 You see how you used the term endpoints there? :P

 -jay







Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ivar Lazzaro
Salvatore,

Can you expand on point 2? I'm not sure what it means in this case to 'treat it
accordingly'.

Thanks,
Ivar.


On Wed, Aug 6, 2014 at 7:44 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 As Ronak said, this thread is starting to move in a lot of different
 directions, ranging from correctness of the blueprint approval process to
 nova/neutron integration, which are rather off topic.

 In particular it seems things are being skewed towards a discussion around
 nova parity, whereas actually some people have just chimed in with their
 honest opinion that with all the stuff still needed to finally be able to
 make neutron THE openstack networking solution, the effort towards adding
 a new tenant facing API appears to have a lesser priority.

 I just want to reassure everybody that the majority of the core team and a
 large part of the community have actually made this their first priority.
 For what is worth, some of them have even delayed plugin/driver specific
 development to this aim.

  So I would invite everyone to go back to the original subject of the discussion,
  that is to say, to decide as a community what would be the best way forward for
 this effort.
 I see so far the following options:
 - merge the outstanding patches, assuming there are no further technical
 concerns, and include GBP in Juno.
 - consider GBP an 'experimental' V3 tenant API (this was mentioned
 somewhere in this thread) and treat it accordingly
 - delay to the next release
  - move the development of the service plugin to stackforge, as suggested in
 this thread.

 More options are obviously welcome!

 Regards,
 Salvatore


 On 6 August 2014 19:40, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Which kind of uncertainty are you referring to?

 Given that the blueprint was approved long ago, and the code has been
 ready and under review following those specs... I think GBP is probably the
 patch requiring the least effort to merge right now.

 Ivar.


 On Wed, Aug 6, 2014 at 7:34 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Aug 6, 2014 10:21 AM, Ronak Shah ronak.malav.s...@gmail.com
 wrote:
 
  We have diverged our attention towards nova-network-to-neutron parity
 on this thread unnecessarily.
 
  Can we discuss and collectively decide on what is the way forward for
 GBP in Juno release?
 
  Efforts have been made by the subteam starting from throwing PoC at
 last summit to spec approval to code review.
 
  There is usefulness to this feature, and I think everyone is on the
 same page there.
 
  Let us not discourage the effort by bringing existing Neutron issues
 into play.

  Yes, we as a Neutron community need to fix that with the highest
 priority.
  But this is orthogonal effort.

 The efforts may be orthogonal, but the review team and bandwidth of said
 team is one and the same. Making nova-network the highest priority means
 pushing other blueprints back as needed.  And since there is still so much
 uncertainty around GBP this late in the cycle, IMHO it's a good candidate
 for getting deferred.

  If endpoint is not a likeable preferred name, then let's propose a more
 meaningful alternative.
  Let us try to find a middle ground on how this feature can be made
 generally available.
 
  Thanks,
  Ronak
 
 












Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Jay Pipes

On 08/06/2014 04:30 AM, Stefano Santini wrote:

Hi,

In my company (Vodafone), we (DC network architecture) are following
very closely the work happening on Group Based Policy since we see a
great value on the new paradigm to drive network configurations with an
advanced logic.

We're working on a new production project for an internal private cloud
deployment targeting Juno release where we plan to introduce the
capabilities based on using Group Policy and we don't want to see it
delayed.
We strongly request/vote to see this complete as proposed without such
changes to allow to move forward with the evolution of the network
capabilities


Hi Stefano,

AFAICT, there is nothing that can be done with the GBP API that cannot 
be done with the low-level regular Neutron API.


Further, if the Nova integration of the GBP API does not occur in the 
Juno timeframe, what benefit will GBP in Neutron give you? Specifics on 
the individual API calls that you would change would be most appreciated.


Thanks in advance for your input!
-jay



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Edgar Magana
That is the beauty of open source projects: there is always a smarter
reviewer catching the facts that you don't.

Edgar

On 8/6/14, 10:55 AM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

Edgar, you seem to have +2'ed this patch on July 2nd [1]:


Edgar Magana
Jul 2 8:42 AM

Patch Set 13: Code-Review+2

All looks good to me! I am not approving yet because Nachi was also
reviewing this code and I would like to see his opinion as well.


That would suggest that you were happy with what was in it. I don't
see anything in the review comments that suggests otherwise.

[1]  https://review.openstack.org/#/c/95900/

On Wed, Aug 6, 2014 at 10:39 AM, Edgar Magana edgar.mag...@workday.com
wrote:
 This is the consequence of a proposal that is not following the
standardized
 terminology (IETF - RFC) for any Policy-based System:
 http://tools.ietf.org/html/rfc3198

 Well, I did bring this point up during the Hong Kong Summit, but as you can see
 my comments were totally ignored:
 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 I clearly saw this kind of issues coming. Let me quote myself what I
 suggested: For instance: endpoints should be enforcement point

 I do not understand why GBP did not include this suggestion…

 Edgar

 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:22 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward

 What I was referring to was also not Keystone's definition of an
endpoint.
 It's almost as if the term has many uses and was not invented for
Keystone.
 :-)

 http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

 Did a similar discussion occur when Heat wanted to use the word
'template'
 since this was clearly already in use by Horizon?

 On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/06/2014 02:12 AM, Kevin Benton wrote:

 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing
 API endpoints


 You see how you used the term endpoints there? :P

 -jay









Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Duncan Thomas
I'm not following here - you complain about rally being monolithic,
then suggest that parts of it should be baked into tempest - a tool
that is already huge and difficult to get into. I'd rather see tools
that do one thing well and some overlap than one tool to rule them
all.

On 6 August 2014 14:44, Sean Dague s...@dague.net wrote:
 On 08/06/2014 09:11 AM, Russell Bryant wrote:
 On 08/06/2014 06:30 AM, Thierry Carrez wrote:
 Hi everyone,

 At the TC meeting yesterday we discussed Rally program request and
 incubation request. We quickly dismissed the incubation request, as
 Rally appears to be able to live happily on top of OpenStack and would
 benefit from having a release cycle decoupled from the OpenStack
 integrated release.

 That leaves the question of the program. OpenStack programs are created
 by the Technical Committee, to bless existing efforts and teams that are
 considered *essential* to the production of the OpenStack integrated
 release and the completion of the OpenStack project mission. There are 3
 ways to look at Rally and official programs at this point:

 1. Rally as an essential QA tool
 Performance testing (and especially performance regression testing) is
 an essential QA function, and a feature that Rally provides. If the QA
 team is happy to use Rally to fill that function, then Rally can
 obviously be adopted by the (already-existing) QA program. That said,
 that would put Rally under the authority of the QA PTL, and that raises
 a few questions due to the current architecture of Rally, which is more
 product-oriented. There needs to be further discussion between the QA
 core team and the Rally team to see how that could work and if that
 option would be acceptable for both sides.

 2. Rally as an essential operator tool
 Regular benchmarking of OpenStack deployments is a best practice for
 cloud operators, and a feature that Rally provides. With a bit of a
 stretch, we could consider that benchmarking is essential to the
 completion of the OpenStack project mission. That program could one day
 evolve to include more such operations best practices tools. In
 addition to the slight stretch already mentioned, one concern here is
 that we still want to have performance testing in QA (which is clearly
 essential to the production of OpenStack). Letting Rally primarily be
 an operational tool might make that outcome more difficult.

 3. Let Rally be a product on top of OpenStack
 The last option is to not have Rally in any program, and not consider it
 *essential* to the production of the OpenStack integrated release or
 the completion of the OpenStack project mission. Rally can happily exist
 as an operator tool on top of OpenStack. It is built as a monolithic
 product: that approach works very well for external complementary
 solutions... Also be more integrated in OpenStack or part of the
 OpenStack programs might come at a cost (slicing some functionality out
 of rally to make it more a framework and less a product) that might not
 be what its authors want.

 Let's explore each option to see which ones are viable, and the pros and
 cons of each.

 My feeling right now is that Rally is trying to accomplish too much at
 the start (both #1 and #2).  I would rather see the project focus on
 doing one of them as best as it can before increasing scope.

 It's my opinion that #1 is the most important thing that Rally can be
 doing to help ensure the success of OpenStack, so I'd like to explore
 the Rally as a QA tool in more detail to start with.

 I want to clarify some things. I don't think that rally in its current
 form belongs in any OpenStack project. It's a giant monolithic tool,
 which is apparently a design point. That's the wrong design point for an
 OpenStack project.

 For instance:

 https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios 
 should
 all be tests in Tempest (and actually today mostly are via API tests).
 There is an existing stress framework in Tempest which does the
 repetitive looping that rally does on these already. This fact has been
 brought up before.

 https://github.com/stackforge/rally/tree/master/rally/verification/verifiers
 - should be baked back into Tempest (at least on the results side,
 though diving in there now it looks largely duplicative from existing
 subunit to html code).

 https://github.com/stackforge/rally/blob/master/rally/db/api.py - is
 largely (not entirely) what we'd like from a long term trending piece
 that subunit2sql is working on. Again this was just all thrown into the
 Rally db instead of thinking about how to split it off. Also, notable
 here is there are some fundamental testr bugs (like worker
 misallocation) which mean the data is massively dirty today. It would be
 good for people to actually work on fixing those things.

 The parts that should stay outside of Tempest are the setup tool
 (separation of concerns is that Tempest is the load runner, not the
 setup environment) and any of the SLA 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Sumit Naiksatam
Not sure what you are talking about. You now claim that you had a
suggestion which was not considered, yet you +2'ed the patch, stating
that All looks good to me!

On Wed, Aug 6, 2014 at 11:19 AM, Edgar Magana edgar.mag...@workday.com wrote:
 That is the beauty of open source projects: there is always a smarter
 reviewer catching the facts that you don't.

 Edgar

 On 8/6/14, 10:55 AM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

Edgar, you seem to have +2'ed this patch on July 2nd [1]:


Edgar Magana
Jul 2 8:42 AM

Patch Set 13: Code-Review+2

All looks good to me! I am not approving yet because Nachi was also
reviewing this code and I would like to see his opinion as well.


That would suggest that you were happy with what was in it. I don't
see anything in the review comments that suggests otherwise.

[1]  https://review.openstack.org/#/c/95900/

On Wed, Aug 6, 2014 at 10:39 AM, Edgar Magana edgar.mag...@workday.com
wrote:
 This is the consequence of a proposal that is not following the
standardized
 terminology (IETF - RFC) for any Policy-based System:
 http://tools.ietf.org/html/rfc3198

 Well, I did bring this point up during the Hong Kong Summit, but as you can see
 my comments were totally ignored:

https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 I clearly saw this kind of issues coming. Let me quote myself what I
 suggested: For instance: endpoints should be enforcement point

 I do not understand why GBP did not include this suggestion…

 Edgar

 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 10:22 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way
 forward

 What I was referring to was also not Keystone's definition of an
endpoint.
 It's almost as if the term has many uses and was not invented for
Keystone.
 :-)

 http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

 Did a similar discussion occur when Heat wanted to use the word
'template'
 since this was clearly already in use by Horizon?

 On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/06/2014 02:12 AM, Kevin Benton wrote:

 Given that, pointing to the Nova parity work seems a bit like a red
 herring. This new API is being developed orthogonally to the existing
 API endpoints


 You see how you used the term endpoints there? :P

 -jay










Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Yapeng Wu
Hi, Salvatore,

Thanks for listing out the options.

Can you elaborate more on your 2nd option? Do you mean merge the patches and 
mark the API as ‘experimental’?

Yapeng


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, August 06, 2014 1:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

As Ronak said, this thread is starting to move in a lot of different 
directions, ranging from correctness of the blueprint approval process to 
nova/neutron integration, which are rather off topic.

In particular it seems things are being skewed towards a discussion around nova 
parity, whereas actually some people have just chimed in with their honest 
opinion that with all the stuff still needed to finally be able to make neutron 
THE openstack networking solution, the effort towards adding a new tenant 
facing API appears to have a lesser priority.

I just want to reassure everybody that the majority of the core team and a 
large part of the community have actually made this their first priority. For 
what is worth, some of them have even delayed plugin/driver specific 
development to this aim.

So I would invite everyone to go back to the original subject of the discussion,
that is to say, to decide as a community what would be the best way forward for this effort.
I see so far the following options:
- merge the outstanding patches, assuming there are no further technical 
concerns, and include GBP in Juno.
- consider GBP an 'experimental' V3 tenant API (this was mentioned somewhere in 
this thread) and treat it accordingly
- delay to the next release
- move the development of the service plugin to stackforge, as suggested in this 
thread.

More options are obviously welcome!

Regards,
Salvatore

On 6 August 2014 19:40, Ivar Lazzaro ivarlazz...@gmail.com wrote:
Which kind of uncertainty are you referring to?

Given that the blueprint was approved long ago, and the code has been ready and 
under review following those specs... I think GBP is probably the patch
requiring the least effort to merge right now.

Ivar.

On Wed, Aug 6, 2014 at 7:34 PM, Joe Gordon joe.gord...@gmail.com wrote:

On Aug 6, 2014 10:21 AM, Ronak Shah ronak.malav.s...@gmail.com wrote:

 We have diverged our attention towards nova-network-to-neutron parity on this 
 thread unnecessarily.

 Can we discuss and collectively decide on what is the way forward for GBP in 
 Juno release?

 Efforts have been made by the subteam starting from throwing PoC at last 
 summit to spec approval to code review.

 There is usefulness to this feature, and I think everyone is on the same page 
 there.

 Let us not discourage the effort by bringing existing Neutron issues into 
 play.

 Yes, we as a Neutron community need to fix that with the highest priority.
 But this is orthogonal effort.

The efforts may be orthogonal, but the review team and bandwidth of said team 
is one and the same. Making nova-network the highest priority means pushing 
other blueprints back as needed.  And since there is still so much uncertainty 
around GBP this late in the cycle, IMHO it's a good candidate for getting 
deferred.
 If endpoint is not a likeable preferred name, then let's propose a more 
 meaningful alternative.
 Let us try to find a middle ground on how this feature can be made generally 
 available.

 Thanks,
 Ronak








Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
Hi,

I've made my way through the group based policy code and blueprints and I'd
like to ask several questions about it. My first question really is: what is
the advantage that the new proposed group based policy model buys us?


Bob says, The group-based policy BP approved for Juno addresses the
 critical need for a more usable, declarative, intent-based interface for
 cloud application developers and deployers, that can co-exist with
 Neutron's current networking-hardware-oriented API and work nicely with all
 existing core plugins. Additionally, we believe that this declarative
 approach is what is needed to properly integrate advanced services into
 Neutron, and will go a long way towards resolving the difficulties so far
 trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current Neutron
 model.

My problem with the current blueprint and that comment above is it does not
provide any evidence or data of where the current neutron abstractions
(ports/networks/subnets/routers) provide difficulties and what benefit this
new model will provide.

In the currently proposed implementation of group policy, the implementation
maps onto the existing neutron primitives and the neutron back end(s)
remain unchanged. Because of this, one can map the new abstractions onto
the previous ones, so I'm curious why we want to move this complexity into
neutron rather than have it done externally, similarly to how heat works, or in a
client that abstracts this complexity on its own end.

From the group-based policy blueprint that was submitted [1]:


The current Neutron model of networks, ports, subnets, routers, and security
 groups provides the necessary building blocks to build a logical network
 topology for connectivity. However, it does not provide the right level
 of abstraction for an application administrator who understands the
 application's details (like application port numbers), but not the
 infrastructure details like networks and routes.

It looks to me that application administrators still need to understand
network primitives as the concept of networks/ports/routers are still
present though just carrying a different name. For example, in
ENDPOINT_GROUPS there is an attribute l2_policy_id which maps to something
that you use to describe a l2_network and contains an attribute
l3_policy_id which is used to describe an L3 network. This looks similar to
the abstraction we have today where a l2_policy (network) then can have
multiple l3_policies (subnets) mapping to it.  Because of this I'm curious
how the GBP abstraction really provides a different level of abstraction
for application administrators.
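
To make that analogy concrete, here is a rough sketch of the correspondence as characterized in this message. This is my reading of the argument above, not an official GBP mapping:

```python
# Rough GBP -> Neutron correspondence, as described in the text above.
GBP_TO_NEUTRON = {
    "endpoint": "port",
    "endpoint_group": "group of ports on a network",
    "l2_policy": "network",
    "l3_policy": "subnet(s)",
}

def neutron_equivalent(gbp_resource):
    """Look up the rough Neutron analogue of a GBP abstraction."""
    return GBP_TO_NEUTRON.get(gbp_resource, "no direct analogue")
```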


 Not only that, the current
 abstraction puts the burden of maintaining the consistency of the network
 topology on the user. The lack of application developer/administrator
 focussed
 abstractions supported by a declarative model make it hard for those users
 to consume Neutron as a connectivity layer.

What is the problem in the current abstraction that puts a burden of
maintaining the consistency of networking topology on users? It seems to me
that the complexity of having to know about topology should be abstracted
at the client layer if desired (and neutron should expose the basic
building blocks for networking). For example, Horizon/Heat or the CLI could
hide the requirement of topology by automatically creating a GROUP  (which
is a network+subnet on a router uplinked to an external network)
simplifying this need for the tenant to understand topology. In addition,
topology still seems to be present in the group policy model proposed just
in a different way as I see it.
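
A sketch of that client-side approach: a helper that composes the basic Neutron building blocks into a "group" so the tenant never deals with topology directly. The method names assume python-neutronclient's v2 client interface, and the helper itself is hypothetical:

```python
def create_group(neutron, name, cidr, external_net_id):
    """Create a 'group': a network + subnet attached to a router uplinked
    to an external network. `neutron` is any client object exposing the
    python-neutronclient v2 method names (an assumption for this sketch)."""
    net = neutron.create_network({'network': {'name': name}})['network']
    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['id'], 'cidr': cidr, 'ip_version': 4}})['subnet']
    router = neutron.create_router({'router': {
        'name': name + '-router',
        'external_gateway_info': {'network_id': external_net_id}}})['router']
    neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})
    return net, subnet, router
```

Whether such a helper lives in Horizon, Heat, the CLI, or a library is exactly the question being raised here.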

From the proposed change section the following is stated:


This proposal suggests a model that allows application administrators to
 express their networking requirements using group and policy abstractions,
 with
 the specifics of policy enforcement and implementation left to the
 underlying
 policy driver. The main advantage of the extensions described in this
 blueprint
 is that they allow for an application-centric interface to Neutron that
 complements the existing network-centric interface.


How is the application-centric interface complementary to the
network-centric interface? Is the intention that one would use both
interfaces at once?

More specifically the new abstractions will achieve the following:
 * Show clear separation of concerns between application and infrastructure
 administrator.


I'm not quite sure I understand this point, how is this different than what
we have today?


 - The application administrator can then deal with a higher level
 abstraction
 that does not concern itself with networking specifics like
 networks/routers/etc.


It seems like the proposed abstraction still requires one to concern
themselves with networking specifics (l2_policies, l3_policies).  I'd
really like to see more evidence backing this. Now they have to deal with
specifics like: Endpoint, Endpoint Group, Contract, Policy Rule,

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
  I'd like to stress this to everyone: I DO NOT propose squashing together
  commits that should belong to separate change requests. I DO NOT propose
 to
  upload all your changes at once. I DO propose letting developers to keep
  local history of all iterations they have with a change request. The
  history that absolutely doesn't matter to anyone but this developer.

 Right, I understand that may not be the intent, but it's almost
 certainly going to be the end result.  You can't control how people are
 going to use this feature, and history suggests if it can be abused, it
 will be.


Can you please outline the abuse scenario that isn't present nowadays?
People upload huge changes and are encouraged to split them during review.
The same will happen within proposed workflow. More experienced developers
split their change into a set of change requests. The very same will happen
within proposed workflow.


  On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
 wrote:
 
  Ben Nemec openst...@nemebean.com writes:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title "Scratch my
  last CR, that was bullshit") while for you it's a precious history
  that can provide a basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.
 
  Yeah, I would never keep broken or unfinished commits around like this.
  In my opinion (as a core Mercurial developer), the best workflow is to
  work on a feature and make small and large commits as you go along. When
  the feature works, you begin squashing/splitting the commits to make
  them into logical pieces, if they aren't already in good shape. You then
  submit the branch for review and iterate on it until it is accepted.
 
 
  Absolutely true. And it's mostly the same workflow that happens in
  OpenStack: you do your cool feature, you carve meaningful small
  self-contained pieces out of it, you submit a series of change requests.
  And nothing in my proposal conflicts with it. It just provides a way to
  make the developer's side of this simpler (which is the intent of git-review,
  isn't it?) while not changing external artifacts of one's work: the same
  change requests, with the same granularity.
 
 
  As a reviewer, it cannot be stressed enough how much small, atomic,
  commits help. Squashing things together into large commits makes reviews
  very tricky and removes the possibility of me accepting a later commit
  while still discussing or rejecting earlier commits (cherry-picking).
 
 
  That's true, too. But please don't think I'm proposing to squash
  everything together and push 10k-loc patches. I hate that, too. I'm
  proposing to let developers use their tools (Git) in a simpler way.
  And the simpler way (for some of us) would be to have one local branch
  for every change request, not one branch for the whole series. Switching
  between branches is very well supported by Git and doesn't require extra
  thinking. Jumping around in detached HEAD state and editing commits
  during rebase requires remembering all those small details.
 
  FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.
 
  I agree. The conflicts you talk about are intrinsic to the parallel
  development. Doing a rebase is equivalent to doing a series of merges,
  so if rebase gives you conflicts, you can be near certain that a plain
  merge would give you conflicts too. The same applies the other way around.
 
 
  You disregard other issues that can happen with patch series. You might
  need something more than rebase. You might need to fix something. You
  might need to focus on one commit in the middle and do a huge bunch of
  changes in it alone. And I propose to just allow the developer to keep
  track of what one has been doing instead of forcing one to remember all
  of this.

 This is a separate issue though.  Editing a commit in the middle of a
 series doesn't have to be done at the same time as a rebase to master.


No, this will be done with a separate interactive rebase or that detached
HEAD and reflog dance. I don't see this as something clearer than doing
proper commits in separate branches.
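For what it's worth, the branch-per-change-request idea being discussed here can be sketched with plain Git. This is only an illustration of the workflow, not git-review's actual behavior; the branch names, file contents, and repository are all made up:

```shell
# Sketch: keep the messy local history on a per-change branch while the
# review branch carries a single clean squashed commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git
git commit -q --allow-empty -m "initial"

# Local branch: commit freely and keep every iteration.
git checkout -q -b local/feature-x
echo "draft" > feature.txt
git add feature.txt
git commit -q -m "WIP: first attempt"
echo "final" > feature.txt
git commit -q -am "fixup: working version"

# Review branch: one squashed commit; the local history stays untouched.
git checkout -q -b review/feature-x "$base"
git merge -q --squash local/feature-x
git commit -q -m "Add feature X"

git rev-list --count local/feature-x    # 3: initial + two WIP commits
git rev-list --count review/feature-x   # 2: initial + squashed commit
```

The point of the sketch: the reviewable artifact is the same single commit either way; only the developer's local side keeps the extra history.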

In fact, not having a 

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ryan Moats



Jay Pipes jaypi...@gmail.com wrote on 08/06/2014 01:04:41 PM:

[snip]

 AFAICT, there is nothing that can be done with the GBP API that cannot
 be done with the low-level regular Neutron API.

I'll take you up on that, Jay :)

How exactly do I specify behavior between two collections of ports residing
in the same IP subnet (an example of this is a bump-in-the-wire network
appliance)?

I've looked around regular Neutron and all I've come up with so far is:
(1) use security groups on the ports
(2) set allow_overlapping_ips to true, set up two networks with
identical CIDR block subnets and disjoint allocation pools and put a
vRouter between them.

Now #1 only works for basic allow/deny access and adds the complexity of
needing to specify per-IP address security rules, which means you need the
ports to have IP addresses already and then manually add them into the
security groups, which doesn't seem particularly orchestration
friendly.

Now #2 handles both allow/deny access as well as provides a potential
attachment point for other behaviors, *but* you have to know to set up the
disjoint allocation pools, and you're depending on your drivers to handle the
case of a router that isn't really a router (i.e. it's got two interfaces
in the same subnet, possibly with the same address (unless you thought of
that when you set things up)).
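Workaround #2 above can be sketched with the neutron CLI roughly as follows. All names and addresses here are invented for illustration, and allow_overlapping_ips = True is a neutron.conf setting that must already be in place; this is a sketch of the setup being described, not a tested recipe:

```shell
# Two networks with the same CIDR but disjoint allocation pools and
# distinct gateway addresses (avoiding the duplicate-gateway pitfall
# mentioned above), joined by a router acting as the bump-in-the-wire
# attachment point. Names and addresses are illustrative only.
neutron net-create left
neutron net-create right
neutron subnet-create --name left-sub --gateway 10.0.0.1 \
    --allocation-pool start=10.0.0.2,end=10.0.0.127 left 10.0.0.0/24
neutron subnet-create --name right-sub --gateway 10.0.0.254 \
    --allocation-pool start=10.0.0.128,end=10.0.0.253 right 10.0.0.0/24
neutron router-create bump-in-the-wire
neutron router-interface-add bump-in-the-wire left-sub
neutron router-interface-add bump-in-the-wire right-sub
```

Even spelled out, this carries exactly the burden Ryan describes: the disjoint pools and distinct gateways must be chosen by hand.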

You can say that both of these are *possible*, but they both look more
complex to me than just having two groups of ports and specifying a policy
between them.

Ryan Moats
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Ben Nemec
On 08/06/2014 01:42 PM, Yuriy Taraday wrote:
 On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
 I'd like to stress this to everyone: I DO NOT propose squashing together
 commits that should belong to separate change requests. I DO NOT propose
 to
 upload all your changes at once. I DO propose letting developers to keep
 local history of all iterations they have with a change request. The
 history that absolutely doesn't matter to anyone but this developer.

 Right, I understand that may not be the intent, but it's almost
 certainly going to be the end result.  You can't control how people are
 going to use this feature, and history suggests if it can be abused, it
 will be.

 
 Can you please outline the abuse scenario that isn't present nowadays?
 People upload huge changes and are encouraged to split them during review.
 The same will happen within proposed workflow. More experienced developers
 split their change into a set of change requests. The very same will happen
 within proposed workflow.

There will be a documented option in git-review that automatically
squashes all commits.  People _will_ use that incorrectly because from a
submitter perspective it's easier to deal with one review than multiple,
but from a reviewer perspective it's exactly the opposite.

 

 On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
 wrote:

 Ben Nemec openst...@nemebean.com writes:

 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:

 When you're developing some big change you'll end up with trying
 dozens of different approaches and make thousands of mistakes. For
 reviewers this is just unnecessary noise (commit title "Scratch my
 last CR, that was bullshit") while for you it's a precious history
 that can provide a basis for future research or bug-hunting.

 So basically keeping a record of how not to do it?  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.
  They're basically unnecessary conflicts waiting to happen.

 Yeah, I would never keep broken or unfinished commits around like this.
 In my opinion (as a core Mercurial developer), the best workflow is to
 work on a feature and make small and large commits as you go along. When
 the feature works, you begin squashing/splitting the commits to make
 them into logical pieces, if they aren't already in good shape. You then
 submit the branch for review and iterate on it until it is accepted.


 Absolutely true. And it's mostly the same workflow that happens in
 OpenStack: you do your cool feature, you carve meaningful small
 self-contained pieces out of it, you submit series of change requests.
 And nothing in my proposal conflicts with it. It just provides a way to
 make developer's side of this simpler (which is the intent of git-review,
 isn't it?) while not changing external artifacts of one's work: the same
 change requests, with the same granularity.


 As a reviewer, it cannot be stressed enough how much small, atomic,
 commits help. Squashing things together into large commits makes reviews
 very tricky and removes the possibility of me accepting a later commit
 while still discussing or rejecting earlier commits (cherry-picking).


 That's true, too. But please don't think I'm proposing to squash
 everything together and push 10k-loc patches. I hate that, too. I'm
 proposing to let developers use their tools (Git) in a simpler way.
 And the simpler way (for some of us) would be to have one local branch
 for every change request, not one branch for the whole series. Switching
 between branches is very well supported by Git and doesn't require extra
 thinking. Jumping around in detached HEAD state and editing commits
 during rebase requires remembering all those small details.

 FWIW, I have had long-lived patch series, and I don't really see what
 is so difficult about running git rebase master. Other than conflicts,
 of course, which are going to be an issue with any long-running change
 no matter how it's submitted. There isn't a ton of git magic involved.

 I agree. The conflicts you talk about are intrinsic to the parallel
 development. Doing a rebase is equivalent to doing a series of merges,
 so if rebase gives you conflicts, you can be near certain that a plain
 merge would give you conflicts too. The same applies the other way around.


 You disregard other issues that can happen with patch series. You might
 need something more than rebase. You might need to fix something. You
 might need to focus on one commit in the middle and do a huge bunch of
 changes in it alone. And I propose to just allow the developer to keep
 track of what one has been doing instead of forcing one to remember all
 of this.

 This is a separate issue though.  Editing a commit in the middle of a
 series doesn't have to 

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
Hi Aaron,

These are good questions, but can we move this to a different thread
labeled "what is the point of group policy"?

I don't want to derail this one again and we should stick to Salvatore's
options about the way to move forward with these code changes.
On Aug 6, 2014 12:42 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 I've made my way through the group based policy code and blueprints and
 I'd like to ask several questions about it. My first question really is what
 is the advantage that the new proposed group based policy model buys us?


 Bob says, The group-based policy BP approved for Juno addresses the
 critical need for a more usable, declarative, intent-based interface for
 cloud application developers and deployers, that can co-exist with
 Neutron's current networking-hardware-oriented API and work nicely with all
 existing core plugins. Additionally, we believe that this declarative
 approach is what is needed to properly integrate advanced services into
 Neutron, and will go a long way towards resolving the difficulties so far
 trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current Neutron
 model.

 My problem with the current blueprint and that comment above is that it does
 not provide any evidence or data of where the current neutron abstractions
 (ports/networks/subnets/routers) provide difficulties and what benefit this
 new model will provide.

 In the current proposed implementation of group policy, its
 implementation maps onto the existing neutron primitives and the neutron
 back end(s) remains unchanged. Because of this one can map the new
 abstractions onto the previous ones, so I'm curious why we want to move this
 complexity into neutron and not have it done externally, similarly to how
 heat works or a client that abstracts this complexity on its own end.

 From the group-based policy blueprint that was submitted [1]:


 The current Neutron model of networks, ports, subnets, routers, and
 security
 groups provides the necessary building blocks to build a logical network
 topology for connectivity. However, it does not provide the right level
 of abstraction for an application administrator who understands the
 application's details (like application port numbers), but not the
 infrastructure details like networks and routes.

 It looks to me that application administrators still need to understand
 network primitives, as the concepts of networks/ports/routers are still
 present, though just carrying different names. For example, in
 ENDPOINT_GROUPS there is an attribute l2_policy_id which maps to something
 that you use to describe a l2_network and contains an attribute
 l3_policy_id which is used to describe an L3 network. This looks similar to
 the abstraction we have today where a l2_policy (network) then can have
 multiple l3_policies (subnets) mapping to it.  Because of this I'm curious
 how the GBP abstraction really provides a different level of abstraction
 for application administrators.


  Not only that, the current
 abstraction puts the burden of maintaining the consistency of the network
 topology on the user. The lack of application developer/administrator
 focussed
 abstractions supported by a declarative model make it hard for those users
 to consume Neutron as a connectivity layer.

 What is the problem in the current abstraction that puts a burden of
 maintaining the consistency of networking topology on users? It seems to me
 that the complexity of having to know about topology should be abstracted
 at the client layer if desired (and neutron should expose the basic
 building blocks for networking). For example, Horizon/Heat or the CLI could
 hide the requirement of topology by automatically creating a GROUP (which
 is a network+subnet on a router uplinked to an external network),
 removing the need for the tenant to understand topology. In addition,
 topology still seems to be present in the proposed group policy model,
 just in a different way as I see it.

 From the proposed change section the following is stated:


 This proposal suggests a model that allows application administrators to
 express their networking requirements using group and policy
 abstractions, with
 the specifics of policy enforcement and implementation left to the
 underlying
 policy driver. The main advantage of the extensions described in this
 blueprint
 is that they allow for an application-centric interface to Neutron that
 complements the existing network-centric interface.


 How is the Application-centric interface complementary to the
 network-centric interface?  Is the intention that one would use both
 interfaces at once?

  More specifically the new abstractions will achieve the following:
 * Show clear separation of concerns between application and infrastructure
 administrator.


 I'm not quite sure I understand this point; how is this different from
 what we have today?


 - The application administrator can then deal with a higher level
 

[openstack-dev] How to improve the specs review process (was Re: [Neutron] Group Based Policy and the way forward)

2014-08-06 Thread Stefano Maffulli
On 08/06/2014 11:19 AM, Edgar Magana wrote:
 That is the beauty of the open source projects, there is always a smartest
reviewer catching out the facts that you don't.

And yet, the specification clearly talks about 'endpoints' and nobody
caught it where it was supposed to be caught, so I fear that something failed
badly here:

https://review.openstack.org/#/c/89469/10

What failed, and how do we make sure this doesn't happen again? This to me
is the most important question to answer.  If I remember correctly we
introduced the concept of Specs exactly to discuss on the ideas *before*
the implementation starts. We wanted things like architecture, naming
conventions and other important decisions to be socialized and agreed
upon *before* code was proposed. We wanted to keep developers from spending
time implementing features in ways that are incompatible and likely to
be rejected at code review time. And yet, here we are.

Something failed and I would ask for all core reviewers to sit down and
do an exercise to identify the root cause. If you want we can start from
this specific case, do some simple root cause analysis together and take
GBP as an example. Thoughts?

/stef



Re: [openstack-dev] How to improve the specs review process (was Re: [Neutron] Group Based Policy and the way forward)

2014-08-06 Thread Kyle Mestery
On Wed, Aug 6, 2014 at 2:07 PM, Stefano Maffulli stef...@openstack.org wrote:
 On 08/06/2014 11:19 AM, Edgar Magana wrote:
 That is the beauty of the open source projects, there is always a smartest
 reviewer catching out the facts that you don't.

 And yet, the specification clearly talks about 'endpoints' and nobody
 caught it where it was supposed to be caught, so I fear that something failed
 badly here:

 https://review.openstack.org/#/c/89469/10

 What failed, and how do we make sure this doesn't happen again? This to me
 is the most important question to answer.  If I remember correctly we
 introduced the concept of Specs exactly to discuss on the ideas *before*
 the implementation starts. We wanted things like architecture, naming
 conventions and other important decisions to be socialized and agreed
 upon *before* code was proposed. We wanted to keep developers from spending
 time implementing features in ways that are incompatible and likely to
 be rejected at code review time. And yet, here we are.

 Something failed and I would ask for all core reviewers to sit down and
 do an exercise to identify the root cause. If you want we can start from
 this specific case, do some simple root cause analysis together and take
 GBP as an example. Thoughts?

+100

I'm willing to dedicate part of the Neutron meeting Monday to do a
public post-mortem on GBP here. Stefano, can you attend this meeting
Monday and be there to help guide the conversation as a third party to
the entire process?

Thanks,
Kyle

 /stef



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
Hi Ryan,


On Wed, Aug 6, 2014 at 11:55 AM, Ryan Moats rmo...@us.ibm.com wrote:

 Jay Pipes jaypi...@gmail.com wrote on 08/06/2014 01:04:41 PM:

 [snip]


  AFAICT, there is nothing that can be done with the GBP API that cannot
  be done with the low-level regular Neutron API.

 I'll take you up on that, Jay :)

 How exactly do I specify behavior between two collections of ports
 residing in the same IP subnet (an example of this is a bump-in-the-wire
 network appliance)?

 Would you mind explaining what behavior you want between the two
collections of ports?


 I've looked around regular Neutron and all I've come up with so far is:
  (1) use security groups on the ports
  (2) set allow_overlapping_ips to true, set up two networks with identical
 CIDR block subnets and disjoint allocation pools and put a vRouter between
 them.

 Now #1 only works for basic allow/deny access and adds the complexity of
 needing to specify per-IP address security rules, which means you need the
 ports to have IP addresses already and then manually add them into the
 security groups, which doesn't seem particularly orchestration
 friendly.


I believe the referential security group rules solve this problem (unless
I'm not understanding):

neutron security-group-create group1
neutron security-group-create group2

# allow members of group1 to ssh into group2 (but not the other way around):
neutron security-group-rule-create --direction ingress --port-range-min 22
--port-range-max 22 --protocol TCP --remote-group-id group1 group2

# allow members of group2 to reach TCP port 80 on members of
# group1 (but not the other way around):
neutron security-group-rule-create --direction ingress --port-range-min 80
--port-range-max 80 --protocol TCP --remote-group-id group2 group1

# Now when you create ports just place these in the desired security groups
and neutron will automatically handle this orchestration for you (and you
don't have to deal with ip_addresses and updates).

neutron port-create --security-groups group1 network1
neutron port-create --security-groups group2 network1



 Now #2 handles both allow/deny access as well as provides a potential
 attachment point for other behaviors, *but* you have to know to set up the
 disjoint allocation pools, and you're depending on your drivers to handle the
 case of a router that isn't really a router (i.e. it's got two interfaces
 in the same subnet, possibly with the same address (unless you thought of
 that when you set things up)).


Are you talking about the firewall as a service stuff here?


 You can say that both of these are *possible*, but they both look more
 complex to me than just having two groups of ports and specifying a policy
 between them.


Would you mind proposing how this is done in the Group Policy API? From
what I can tell in the new proposed API you'd need to map both of these
groups to different endpoints, i.e. networks.



 Ryan Moats


 Best,

Aaron



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 12:41 AM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
  On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
  wrote:
 
  On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.
 
  I don't understand this.  If it's a complex change that you need
  multiple commits to keep track of locally, why wouldn't reviewers want
  the same thing?  Squashing a bunch of commits together solely so you
  have one review for Gerrit isn't a good thing.  Is it just the warning
  message that git-review prints when you try to push multiple commits
  that is the problem here?
 
 
   When you're developing some big change you'll end up with trying dozens
   of different approaches and make thousands of mistakes. For reviewers
   this is just unnecessary noise (commit title "Scratch my last CR, that
   was bullshit") while for you it's a precious history that can provide a
   basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?
 
 
   Well, yes, you can call a version control system a history of failures.
   Because if there were no failures there would've been one omnipotent
   commit that does everything you want it to.

 Ideally, no.  In a perfect world every commit would work, so the version
 history would be a number of small changes that add up to this great
 application.  In reality it's a combination of new features, oopses, and
 fixes for those oopses.  I certainly wouldn't describe it as a history
 of failures though.  I would hope the majority of commits to our
 projects are _not_ failures. :-)


Well, new features are merged just to be later fixed and refactored - how
is that not a failure? And we basically do keep a record of how not to do
it in our repositories. Why prevent developers from doing the same on a
smaller scale?

  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
 
 
  The commits themselves are never going to merge to master but that's not
  the only meaning of their life. With current tooling working branch
 ends
  up a patch series that is constantly rewritten with no proper history of
  when did that happen and why. As I said, you can't find roots of bugs in
  your code, you can't dig into old versions of your code (what if you
 need a
  method that you've already created but removed because of some wrong
  suggestion?).

 You're not going to find the root of a bug in your code by looking at an
 old commit that was replaced by some other implementation.  If anything,
 I see that as more confusing.  And if you want to keep old versions of
 your code, either push it to Gerrit or create a new branch before
 changing it further.


So you propose two options:
- store the history of your work within Gerrit's patchsets for each change
request, which doesn't fit the commit-often approach (who'd want to see how
I struggle with fixing some bug or writing a working test?);
- store the history of your work in new branches instead of commits in the
same branch, which... is not how Git is supposed to be used.
And both of these options lack a proper way of searching through this
history.

Have you ever used bisect? Sometimes I find myself wanting to use it
instead of manually digging through patchsets in Gerrit to find out which
change I made broke some use case I didn't put in unit tests yet.

  They're basically unnecessary conflicts waiting to happen.
 
 
  No. They are your local history. They don't need to be rebased on top of
  master - you can just merge master into your branch and resolve conflicts
  once. After that your autosquashed commit will merge cleanly back to
  master.

 Then don't rebase them.  git checkout -b dead-end and move on. :-)


I never proposed to rebase anything. I want to use merge instead of rebase.
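A minimal sketch of that merge-instead-of-rebase flow (repository, branch, and file names are all invented): the base branch advances while a feature branch is in progress, and a single merge, rather than a rebase, brings the branch up to date while keeping its original commits intact:

```shell
# Demo: merge the base branch into a feature branch instead of rebasing.
# The feature branch keeps its original commits; conflicts (none in this
# toy example) would be resolved once, in the merge commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git
echo a > a.txt && git add a.txt && git commit -q -m "initial"

git checkout -q -b feature
echo f > f.txt && git add f.txt && git commit -q -m "feature work"

# Meanwhile the base branch moves forward.
git checkout -q "$base"
echo b > b.txt && git add b.txt && git commit -q -m "unrelated change"

# One merge instead of a history-rewriting rebase.
git checkout -q feature
git merge -q --no-edit "$base"

git rev-list --count --merges feature   # 1: the single merge commit
git log --format=%s feature | grep "feature work"
```

The "feature work" commit survives with its original SHA, which is exactly the local-history preservation being argued for; a rebase would have rewritten it.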

  Merges are one of the strong sides of Git itself (and keeping them very
  easy is one of the founding principles behind it). With the current
  workflow we don't use them at all. Master went too far forward? You have
  to do a rebase and screw up all your local history, and most likely
  squash everything anyway because you don't want to fix commits with
  known bugs in them. With the proposed feature you can just do a merge
  once and let 'git review' add some magic without ever hurting your code.
 
  How do rebases screw up your local history?  All your commits are still
  there after a rebase, they just have a different parent.  I also don't
  see how rebases are all that much worse than merges.  If there are no
  conflicts, rebases are trivial.  If there are 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Henry Fourie
+1

From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Wednesday, August 06, 2014 10:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

This is the consequence of a proposal that is not following the standardized 
terminology (IETF - RFC) for any Policy-based System:
http://tools.ietf.org/html/rfc3198

Well, I did bring up this point during the Hong Kong Summit but as you can see my
comments were totally ignored:
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

I clearly saw these kinds of issues coming. Let me quote what I suggested:
For instance: "endpoints" should be "enforcement points"

I do not understand why GBP did not include this suggestion...

Edgar

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, August 6, 2014 at 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


What I was referring to was also not Keystone's definition of an endpoint. It's 
almost as if the term has many uses and was not invented for Keystone. :-)

http://www.wireshark.org/docs/wsug_html_chunked/ChStatEndpoints.html

Did a similar discussion occur when Heat wanted to use the word 'template' 
since this was clearly already in use by Horizon?
On Aug 6, 2014 9:24 AM, Jay Pipes jaypi...@gmail.com wrote:
On 08/06/2014 02:12 AM, Kevin Benton wrote:
Given that, pointing to the Nova parity work seems a bit like a red
herring. This new API is being developed orthogonally to the existing
API endpoints

You see how you used the term "endpoints" there? :P

-jay



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
Hi Kevin,

I think we should keep these threads together as we need to understand the
benefit collectively before we move forward with what to do.

Aaron


On Wed, Aug 6, 2014 at 12:03 PM, Kevin Benton blak...@gmail.com wrote:

 Hi Aaron,

 These are good questions, but can we move this to a different thread
 labeled "what is the point of group policy"?

 I don't want to derail this one again and we should stick to Salvatore's
 options about the way to move forward with these code changes.
 On Aug 6, 2014 12:42 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 I've made my way through the group based policy code and blueprints and
 I'd like to ask several questions about it.  My first question really is what
 is the advantage that the new proposed group based policy model buys us?


 Bob says, "The group-based policy BP approved for Juno addresses the
 critical need for a more usable, declarative, intent-based interface for
 cloud application developers and deployers, that can co-exist with
 Neutron's current networking-hardware-oriented API and work nicely with all
 existing core plugins. Additionally, we believe that this declarative
 approach is what is needed to properly integrate advanced services into
 Neutron, and will go a long way towards resolving the difficulties so far
 trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current Neutron
 model."

 My problem with the current blueprint and that comment above is it does
 not provide any evidence or data of where the current neutron abstractions
 (ports/networks/subnets/routers) provide difficulties and what benefit this
 new model will provide.

 In the current proposed implementation of group policy, its
 implementation maps onto the existing neutron primitives and the neutron
 back end(s) remains unchanged. Because of this one can map the new
 abstractions onto the previous ones, so I'm curious why we want to move this
 complexity into neutron and not have it done externally, similarly to how
 Heat works or a client that abstracts this complexity on its own end.

 From the group-based policy blueprint that was submitted [1]:


 The current Neutron model of networks, ports, subnets, routers, and
 security
 groups provides the necessary building blocks to build a logical network
 topology for connectivity. However, it does not provide the right level
 of abstraction for an application administrator who understands the
 application's details (like application port numbers), but not the
 infrastructure details like networks and routes.

 It looks to me that application administrators still need to understand
 network primitives, as the concepts of networks/ports/routers are still
 present, though just carrying different names. For example, in
 ENDPOINT_GROUPS there is an attribute l2_policy_id which maps to something
 that you use to describe an l2_network, and contains an attribute
 l3_policy_id which is used to describe an L3 network. This looks similar to
 the abstraction we have today, where an l2_policy (network) can then have
 multiple l3_policies (subnets) mapping to it.  Because of this I'm curious
 how the GBP abstraction really provides a different level of abstraction
 for application administrators.
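To make the correspondence Aaron describes concrete, here is a rough sketch of how a GBP-style endpoint group might map onto today's primitives. The field names are illustrative assumptions, not the actual GBP resource schema or the real mapping driver:

```python
# Illustrative only: a toy mapping from hypothetical GBP-style objects
# onto the Neutron primitives discussed above. Field names are assumed.

def map_endpoint_group(epg):
    """Map an endpoint-group dict onto implicit network/subnet dicts."""
    l2 = epg["l2_policy"]      # roughly corresponds to a network
    l3 = l2["l3_policy"]       # roughly corresponds to subnets/routing
    network = {"name": "net-for-%s" % epg["name"]}
    subnets = [{"cidr": cidr, "network": network["name"]}
               for cidr in l3["cidrs"]]
    return network, subnets

network, subnets = map_endpoint_group(
    {"name": "web", "l2_policy": {"l3_policy": {"cidrs": ["10.0.0.0/24"]}}})
```

The point of the sketch is that each GBP object has a fairly direct analogue in the existing model, which is the similarity Aaron is questioning.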


  Not only that, the current
 abstraction puts the burden of maintaining the consistency of the network
 topology on the user. The lack of application developer/administrator
 focussed
 abstractions supported by a declarative model make it hard for those
 users
 to consume Neutron as a connectivity layer.

 What is the problem in the current abstraction that puts a burden of
 maintaining the consistency of networking topology on users? It seems to me
 that the complexity of having to know about topology should be abstracted
 at the client layer if desired (and neutron should expose the basic
 building blocks for networking). For example, Horizon/Heat or the CLI could
 hide the requirement of topology by automatically creating a GROUP  (which
 is a network+subnet on a router uplinked to an external network)
 sparing the tenant the need to understand topology. In addition,
 topology still seems to be present in the proposed group policy model, just
 in a different way as I see it.
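As a sketch of the client-side alternative suggested above (a hypothetical helper, not an existing Horizon/Heat API): a single create_group() call could hide the network/subnet/router wiring. A fake in-memory client stands in for a real Neutron client here:

```python
# Hypothetical client-side helper that hides topology by creating a
# network + subnet and uplinking a router, as suggested Horizon/Heat or
# the CLI could do. FakeNeutronClient is an in-memory stand-in.

class FakeNeutronClient:
    """Records created resources instead of calling a real Neutron."""
    def __init__(self):
        self.resources = []

    def create(self, kind, **attrs):
        res = dict(attrs, kind=kind, id=len(self.resources))
        self.resources.append(res)
        return res

def create_group(client, name, cidr, external_net_id):
    """One call that wires up network, subnet, and uplinked router."""
    net = client.create("network", name=name)
    subnet = client.create("subnet", network_id=net["id"], cidr=cidr)
    router = client.create("router", name=name + "-rtr",
                           external_gateway=external_net_id)
    client.create("router_interface", router_id=router["id"],
                  subnet_id=subnet["id"])
    return net, subnet, router

client = FakeNeutronClient()
net, subnet, router = create_group(client, "web", "10.0.0.0/24", "ext-net")
```

The tenant sees one "group" operation; the four underlying resources are an implementation detail, which is the abstraction-at-the-client argument in a nutshell.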

 From the proposed change section the following is stated:


 This proposal suggests a model that allows application administrators to
 express their networking requirements using group and policy
 abstractions, with
 the specifics of policy enforcement and implementation left to the
 underlying
 policy driver. The main advantage of the extensions described in this
 blueprint
 is that they allow for an application-centric interface to Neutron that
 complements the existing network-centric interface.


 How is the application-centric interface complementary to the
 network-centric interface?  Is the intention that one would use both
 interfaces at once?

  More specifically the new abstractions will achieve the following:
 * Show clear separation of 

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Salvatore Orlando
As long as the discussion stays focused on how group policies are
beneficial for the user community and how the Neutron developer community
should move forward, I reckon it's fine to keep the discussion in this
thread.

Salvatore
On Aug 6, 2014 at 21:18, Aaron Rosen aaronoro...@gmail.com wrote:

 [snip]

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Ivar Lazzaro
Hi Aaron,

Please note that the user using the current reference implementation
doesn't need to create Networks, Ports, or anything else. As a matter of
fact, the mapping is done implicitly.

Also, I agree with Kevin when he says that this is a whole different
discussion.

Thanks,
Ivar.


On Wed, Aug 6, 2014 at 9:12 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Ryan,


 On Wed, Aug 6, 2014 at 11:55 AM, Ryan Moats rmo...@us.ibm.com wrote:

 Jay Pipes jaypi...@gmail.com wrote on 08/06/2014 01:04:41 PM:

 [snip]


  AFAICT, there is nothing that can be done with the GBP API that cannot
  be done with the low-level regular Neutron API.

 I'll take you up on that, Jay :)

 How exactly do I specify behavior between two collections of ports
 residing in the same IP subnet (an example of this is a bump-in-the-wire
 network appliance)?

 Would you mind explaining what behavior you want between the two
 collections of ports?


  I've looked around regular Neutron and all I've come up with so far is:
  (1) use security groups on the ports
  (2) set allow_overlapping_ips to true, set up two networks with
 identical CIDR block subnets and disjoint allocation pools and put a
 vRouter between them.

 Now #1 only works for basic allow/deny access and adds the complexity of
 needing to specify per-IP address security rules, which means you need the
 ports to have IP addresses already and then manually add them into the
 security groups, which doesn't seem particularly
 orchestration-friendly.


 I believe the referential security group rules solve this problem (unless
 I'm not understanding):

 neutron security-group-create group1
 neutron security-group-create group2

 # allow members of group1 to ssh into group2 (but not the other way
 around):
 neutron security-group-rule-create --direction ingress --port-range-min 22
 --port-range-max 22 --protocol TCP --remote-group-id group1 group2

 # allow members of group2 to be able to access TCP 80 from members of
 group1 (but not the other way around):
 neutron security-group-rule-create --direction ingress --port-range-min 80
 --port-range-max 80 --protocol TCP --remote-group-id group2 group1

 # Now when you create ports just place these in the desired security
 groups and neutron will automatically handle this orchestration for you
 (and you don't have to deal with ip_addresses and updates).

 neutron port-create --security-groups group1 network1
 neutron port-create --security-groups group2 network1



 Now #2 handles both allow/deny access as well as provides a potential
 attachment point for other behaviors, *but* you have to know to set up the
 disjoint allocation pools, and you're depending on your drivers to handle the
 case of a router that isn't really a router (i.e. it's got two interfaces
 in the same subnet, possibly with the same address (unless you thought of
 that when you set things up)).


 Are you talking about the firewall as a service stuff here?


  You can say that both of these are *possible*, but they both look more
 complex to me than just having two groups of ports and specifying a policy
 between them.


 Would you mind proposing how this is done in the Group policy api? From
 what I can tell in the new proposed api you'd need to map both of these
 groups to different endpoints, i.e. networks.



 Ryan Moats


 Best,

 Aaron






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Aaron Rosen
On Wed, Aug 6, 2014 at 12:25 PM, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Hi Aaron,

 Please note that the user using the current reference implementation
 doesn't need to create Networks, Ports, or anything else. As a matter of
 fact, the mapping is done implicitly.



The user still needs to create an endpoint group. What is being done
implicitly here? I fail to see the difference.



 Also, I agree with Kevin when he says that this is a whole different
 discussion.

 Thanks,
 Ivar.


 On Wed, Aug 6, 2014 at 9:12 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Ryan,


 On Wed, Aug 6, 2014 at 11:55 AM, Ryan Moats rmo...@us.ibm.com wrote:

 Jay Pipes jaypi...@gmail.com wrote on 08/06/2014 01:04:41 PM:

 [snip]


  AFAICT, there is nothing that can be done with the GBP API that cannot
  be done with the low-level regular Neutron API.

 I'll take you up on that, Jay :)

 How exactly do I specify behavior between two collections of ports
 residing in the same IP subnet (an example of this is a bump-in-the-wire
 network appliance).

 Would you mind explaining what behavior you want between the two
 collection of ports?


  I've looked around regular Neutron and all I've come up with so far is:
  (1) use security groups on the ports
  (2) set allow_overlapping_ips to true, set up two networks with
 identical CIDR block subnets and disjoint allocation pools and put a
 vRouter between them.

 Now #1 only works for basic allow/deny access and adds the complexity of
 needing to specify per-IP address security rules, which means you need the
 ports to have IP addresses already and then manually add them into the
 security groups, which doesn't seem particularly very orchestration
 friendly.


 I believe the referential security group rules solve this problem (unless
 I'm not understanding):

 neutron security-group-create group1
 neutron security-group-create group2

 # allow members of group1 to ssh into group2 (but not the other way
 around):
 neutron security-group-rule-create --direction ingress --port-range-min
 22 --port-range-max 22 --protocol TCP --remote-group-id group1 group2

 # allow members of group2 to be able to access TCP 80 from members of
 group1 (but not the other way around):
 neutron security-group-rule-create --direction ingress --port-range-min
 80 --port-range-max 80 --protocol TCP --remote-group-id group2 group1

 # Now when you create ports just place these in the desired security
 groups and neutron will automatically handle this orchestration for you
 (and you don't have to deal with ip_addresses and updates).

 neutron port-create --security-groups group1 network1
 neutron port-create --security-groups group2 network1



 Now #2 handles both allow/deny access as well as provides a potential
 attachment point for other behaviors, *but* you have to know to set up the
 disjoint allocation pools, and your depending on your drivers to handle the
 case of a router that isn't really a router (i.e. it's got two interfaces
 in the same subnet, possibly with the same address (unless you thought of
 that when you set things up)).


 Are you talking about the firewall as a service stuff here?


  You can say that both of these are *possible*, but they both look more
 complex to me than just having two groups of ports and specifying a policy
 between them.


  Would you mind proposing how this is done in the Group policy api? From
 what I can tell in the new proposed api you'd need to map both of these
 groups to different endpoints i.e networks.



 Ryan Moats


 Best,

 Aaron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-06 Thread Timur Sufiev
Hi, folks!

Two months ago there was an announcement on the ML about gathering the
requirements for a cross-project UI library for
Heat/Mistral/Murano/Solum [1]. The positive feedback in the related
Google doc [2] and some IRC chats and emails that followed convinced me
that I'm not the only person interested in it :), so I'm happy to make
the next announcement.

The project has finally got its name - 'Merlin' (making complex UIs is
a kind of magic), an Openstack wiki page [3], and all the other stuff like
a stackforge repo, launchpad page, and IRC channel (they are all
referenced in [3]). For those who don't like clicking the links, here
is quick summary.

Merlin aims to provide a convenient client-side framework for building
rich UIs for Openstack projects dealing with complex input data with a
lot of dependencies and constraints (usually encoded in YAML format
via some DSL) - projects like Heat, Murano, Mistral or Solum. The
ultimate goal for such UI is to save users from reading comprehensive
documentation just in order to provide correct input data, thus making
the UI of these projects more user-friendly. If things go well for
Merlin, it could be eventually merged into Horizon library (I’ll spare
another option for the end of this letter).

The framework trying to solve this ambitious task is facing at least 2
challenges:
(1) enabling the proper UX patterns and
(2) dealing with complexities of different projects' DSLs.

Having worked on DSL things in the Murano project before, I'm planning
at first to deal with challenge (2) in the upcoming Merlin PoC. So,
here is the initial plan: design an in-framework object model (OM)
that can be translated back and forth to the target project's DSL. This
OM is meant to be synchronised with visual elements shown on the browser
canvas. The target project is Heat with its HOT templates - it has the
most well-established syntax among these projects and comprehensive
documentation.
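The round-trip idea can be sketched in a few lines. This is purely illustrative: the class names are invented, and real HOT templates are YAML documents with many more sections (parameters, outputs, etc.) than the tiny resources-only shape used here:

```python
# Minimal sketch of the OM <-> HOT round-trip described above.
# Illustrative names; real HOT templates are richer YAML documents.

class Resource:
    """One node of the in-framework object model."""
    def __init__(self, name, rtype, properties=None):
        self.name, self.rtype = name, rtype
        self.properties = properties or {}

def to_hot(resources):
    """Object model -> HOT-shaped dict."""
    return {"heat_template_version": "2013-05-23",
            "resources": {r.name: {"type": r.rtype,
                                   "properties": r.properties}
                          for r in resources}}

def from_hot(template):
    """HOT-shaped dict -> object model."""
    return [Resource(name, body["type"], body.get("properties", {}))
            for name, body in template["resources"].items()]

om = [Resource("server1", "OS::Nova::Server", {"flavor": "m1.small"})]
assert [r.name for r in from_hot(to_hot(om))] == ["server1"]
```

Keeping the translation lossless in both directions is what lets the canvas stay synchronised with the template text.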

Considering challenge (1), not being a dedicated UX engineer, I'm
planning to start with some rough UI concepts [4] and gradually
improve them relying on community feedback, especially from the Openstack
UX group. If anybody from the UX team (or any other team!) is willing
to be involved to a greater degree than just giving some feedback,
you are enormously welcome! Join Merlin, it will be fun :)!

Finally, with this announcement I’d like to start a discussion with
Horizon community. As far as I know, Horizon in its current state
lacks such UI toolkit as Merlin aims to provide. Would it be by any
chance possible for the Merlin project to be developed from the very
beginning as part of Horizon library? This choice has its pros and
cons I’m aware of, but I’d like to hear the opinions of Horizon
developers on that matter.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037054.html
[2] 
https://docs.google.com/a/mirantis.com/document/d/19Q9JwoO77724RyOp7XkpYmALwmdb7JjoQHcDv4ffZ-I/edit#
[3] https://wiki.openstack.org/wiki/Merlin
[4] https://wiki.openstack.org/wiki/Merlin/SampleUI

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-06 Thread Carlino, Chuck (OpenStack TripleO, Neutron)
Yamamoto has reviewed the changes for this, and has raised the following issue 
(among others).


  *   iirc mellanox uses mac address as port identifier. what happens on 
address change?

Can someone who knows mellanox please comment, either here or in the review?

Thanks,
Chuck


On Aug 5, 2014, at 1:22 PM, Carlino, Chuck (OpenStack TripleO, Neutron) 
chuck.carl...@hp.com wrote:

Thanks for the quick responses.

Here's the WIP review:

https://review.openstack.org/112129.

The base plugin doesn't contribute to the notification decision right now, so 
I've modified the actual plugin code.

Chuck


On Aug 5, 2014, at 12:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote:

I agree with Kevin here. Just a note, don't bother with openvswitch and 
linuxbridge plugins as they are marked for deletion this cycle, imminently 
(already deprecated)[0].

Amir

[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-04-21.02.html
 Announcements 2e.

From: Kevin Benton [blak...@gmail.com]
Sent: Tuesday, August 05, 2014 2:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] make mac address updatable: which 
plugins?

How are you implementing the change? It would be good to get to see some code 
in a review to get an idea of what needs to be updated.

If it's just a change in the DB base plugin, just let those changes propagate 
to the plugins that haven't overridden the inherited behavior.


On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino chuckjcarl...@gmail.com wrote:
Hi all,

I need some help regarding a bug [1] I'm working on.

The bug is basically a request to make the mac address of a port updatable.  
The use case is a baremetal (Ironic) node that has a bad NIC which must be 
replaced, resulting in a new mac address.  The bad NIC has an associated 
neutron port which of course holds the NIC's IP address.  The reason to make 
mac_address updatable (as opposed to having the user create a new port and 
delete the old one) is that during the recovery process the IP address must be 
retained and assigned to the new NIC/port, which is not guaranteed in the above 
work-around.

I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge 
plugins, but I'm not sure how to handle the other plugins since I don't know
if the associated backends are prepared to handle such updates.  My first 
thought is to disallow the update in the other plugins, but I would really 
appreciate your advice.
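For context, Neutron's extension framework controls updatability through per-attribute flags, so at the API layer the change Chuck describes roughly amounts to flipping allow_put for mac_address. A simplified sketch of that convention follows; the real RESOURCE_ATTRIBUTE_MAP in Neutron carries many more attributes and validators, and each plugin backend must still be able to honor the update:

```python
# Simplified sketch of Neutron's attribute-map convention; the real map
# has many more fields. Making mac_address updatable is, at the API
# layer, roughly a matter of allow_put=True (backends must still cope).

PORT_ATTRIBUTES = {
    "mac_address": {"allow_post": True, "allow_put": False,  # pre-change
                    "validate": {"type:mac_address": None},
                    "is_visible": True},
}

def enable_mac_update(attr_map):
    """Flip the flag that lets PUT requests modify mac_address."""
    attr_map["mac_address"]["allow_put"] = True

enable_mac_update(PORT_ATTRIBUTES)
```

Whether a given plugin's backend can actually propagate the new MAC (the Mellanox question above) is a separate, per-driver concern that the flag alone does not answer.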

Kind regards,
Chuck Carlino

[1] https://bugs.launchpad.net/neutron/+bug/1341268





--
Kevin Benton




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Kevin Benton
> I believe the referential security group rules solve this problem (unless
> I'm not understanding):

I think the disconnect is that you are comparing the way the current mapping
driver implements things for the reference implementation with the existing
APIs. Under this light, it's not going to look like there is a point to
this code being in Neutron since, as you said, the abstraction could happen
at a client. However, this changes once new mapping drivers can be added
that implement things differently.

Let's take the security groups example. Using the security groups API
directly is imperative (put a firewall rule on this port that blocks this
IP) compared to a higher level declarative abstraction (make sure these
two endpoints cannot communicate). With the former, the ports must support
security groups and there is nowhere except for the firewall rules on that
port to implement it without violating the user's expectation. With the
latter, a mapping driver could determine that communication between these
two hosts can be prevented by using an ACL on a router or a switch, which
doesn't violate the user's intent and buys a performance improvement and
works with ports that don't support security groups.

Group based policy is trying to move the requests into the declarative
abstraction so optimizations like the one above can be made.
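Kevin's argument can be sketched as pseudo-driver logic. This is entirely illustrative (no such driver interface is being proposed verbatim here): given the declarative intent "these two groups must not communicate", the driver is free to pick the cheapest enforcement point the deployment supports:

```python
# Illustrative sketch of the declarative-vs-imperative point above: a
# mapping driver chooses the enforcement mechanism per deployment
# instead of the caller hard-coding per-port firewall rules. All names
# are invented for illustration.

def enforce_isolation(group_a, group_b, backend):
    """Return the enforcement actions a driver might emit."""
    if backend["supports_router_acls"]:
        # Cheaper: a single ACL on the router between the groups,
        # which also works for ports without security group support.
        return [("router_acl", group_a["id"], group_b["id"], "deny")]
    # Fallback: per-port security group rules (the imperative equivalent).
    return [("sg_rule", port, "deny-from-%s" % group_b["id"])
            for port in group_a["ports"]]

actions = enforce_isolation({"id": "web", "ports": ["p1", "p2"]},
                            {"id": "db", "ports": ["p3"]},
                            {"supports_router_acls": True})
```

Because the user expressed intent rather than mechanism, either result satisfies the request; that freedom is exactly what the imperative security-group API does not give a driver.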
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

