Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-12 Thread Akihiro Motoki
+1

(2014/02/11 8:28), Mark McClain wrote:
 All-

 I’d like to nominate Oleg Bondarev to become a Neutron core reviewer. Oleg
 has been a valuable contributor to Neutron by actively reviewing, working on
 bugs, and contributing code.

 Neutron cores please reply back with +1/0/-1 votes.

 mark
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-12 Thread Oleg Bondarev
Hi Gary,

please see my comments on the review.

Thanks,
Oleg


On Wed, Feb 12, 2014 at 5:52 AM, Gary Duan garyd...@gmail.com wrote:

 Hi,

 The patch I submitted for L3 service framework integration fails on the
 jenkins tests, py26 and py27. The console only gives the following error message,

 2014-02-12 00:45:01.710 | FAIL: process-returncode
 2014-02-12 00:45:01.711 | tags: worker-1

 and at the end,

 2014-02-12 00:45:01.916 | ERROR: InvocationError: 
 '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m 
 neutron.openstack.common.lockutils python setup.py testr --slowest 
 --testr-args='
 2014-02-12 00:45:01.917 | ___ summary 
 
 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed

 I wonder what might be the reason for the failure and how to debug this 
 problem?

 The patch is at, https://review.openstack.org/#/c/59242/

 The console output is, 
 http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html

 Thanks,

 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-12 Thread trinath.soman...@freescale.com
+1

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: Akihiro Motoki [mailto:mot...@da.jp.nec.com] 
Sent: Wednesday, February 12, 2014 1:45 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

+1

(2014/02/11 8:28), Mark McClain wrote:
 All-

 I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
 has been valuable contributor to Neutron by actively reviewing, working on 
 bugs, and contributing code.

 Neutron cores please reply back with +1/0/-1 votes.

 mark
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] request for testing new cloud foundation layer on bare metal

2014-02-12 Thread Aryeh Friedman
PetiteCloud is a 100% Free Open Source and Open Knowledge bare metal
capable Cloud Foundation Layer for Unix-like operating systems. It has the
following features:

* Support for bhyve (FreeBSD only) and QEMU
* Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
others via QEMU only) and all supported software (including running
OpenStack on VM's)
* Install, import, start, stop and reboot instances safely (guest OS
needs to be controlled independently)
* Clone, backup/export, delete stopped instances 100% safely
* Keep track of all your instances on one screen
* All transactions that change instance state are password protected at
all critical stages
* Advanced options:
* Ability to use/make bootable bare metal disks for backing stores
* Multiple NIC's and disks
* User settable (vs. auto assigned) backing store locations
* A growing number of general purpose and specialized
instances/applications are available for PetiteCloud

We would like to know if people a) find this useful and b) whether it lives up
to its claims for a wide variety of OpenStack installs.
-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-12 Thread Gary Duan
Oleg,

Thanks for the suggestion. I will give it a try.

Gary


On Wed, Feb 12, 2014 at 12:12 AM, Oleg Bondarev obonda...@mirantis.comwrote:

 Hi Gary,

 please see my comments on the review.

 Thanks,
 Oleg


 On Wed, Feb 12, 2014 at 5:52 AM, Gary Duan garyd...@gmail.com wrote:

 Hi,

 The patch I submitted for L3 service framework integration fails on
 jenkins test, py26 and py27. The console only gives following error message,

 2014-02-12 00:45:01.710 | FAIL: process-returncode
 2014-02-12 00:45:01.711 | tags: worker-1

 and at the end,

 2014-02-12 00:45:01.916 | ERROR: InvocationError: 
 '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m 
 neutron.openstack.common.lockutils python setup.py testr --slowest 
 --testr-args='
 2014-02-12 00:45:01.917 | ___ summary 
 
 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed

 I wonder what might be the reason for the failure and how to debug this 
 problem?

 The patch is at, https://review.openstack.org/#/c/59242/

 The console output is, 
 http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html

 Thanks,

 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-12 Thread Mayur Patil
+1

--
Cheers,
Mayur
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Neutron network in Nova V3 API

2014-02-12 Thread Alex Xu

Hi, guys,

I'm working on neutron network stuff in the nova V3 API. We will only pass port
ids when creating a server, and
Nova won't proxy any neutron call in the future. I plan to add a new v3
network extension that only
accepts port ids as parameters. It will pass those port ids in the
old request_networks parameters.
(line 50 in 
https://review.openstack.org/#/c/36615/13/nova/api/openstack/compute/plugins/v3/networks.py)
Then the rest of the code path is the same as before. In the next release, or when we
remove nova-network, we can add a neutron-specific code path. But the v2 and v3 APIs'
behavior is different, so I need to change some things before adding the new network
API. I want to hear your suggestions first, to make sure I'm going the right way.


1. Disable automatic port allocation when creating a server without any
port in the v3 API.
When a user creates a server without any port, the new server shouldn't be
created with any ports in V3.
But in v2, nova-compute will allocate a port from existing networks. I plan
to pass a parameter down to
nova-compute that tells nova-compute not to allocate ports for the new
server, while keeping the old behavior
for the v2 API.

2. Disable deleting ports from neutron when removing a server in the v3 API.
In the v2 API, after removing a server, the ports attached to that server are
removed by nova-compute.
But in the v3 API, we shouldn't proxy any neutron call. Because some periodic
tasks will also delete servers, just passing a parameter down to nova-compute
from the API isn't enough. So I plan to add a flag to the instance's
metadata when creating the server. When removing the server, nova-compute will
check the metadata first; if the server is marked as
created by the v3 API, nova-compute won't remove the attached neutron ports.
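
To make point 2 concrete, here is a minimal sketch of the kind of check I mean;
the metadata key name and the helper call are made up for illustration, not the
actual Nova code:

    # Sketch only: 'created_by_v3_api' and the network_api helper are
    # illustrative assumptions, not the real Nova implementation.
    def maybe_deallocate_ports(context, instance, network_api):
        """Skip Neutron port deletion for servers created via the v3 API."""
        metadata = instance.get('metadata') or {}
        if metadata.get('created_by_v3_api') == 'True':
            # v3 semantics: Nova does not proxy Neutron calls, so the
            # pre-existing ports are left for the user to manage.
            return
        # v2 semantics: keep the old behaviour and delete attached ports.
        network_api.deallocate_for_instance(context, instance)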

3. Enable passing port ids for multiple server creation.
Currently multiple_create doesn't support passing port ids, and we won't
allocate ports automatically in the v3 API.

So my plan as below:

When the request has max_count=2 and ports=[{'id': 'port_id1'}, {'id':
'port_id2'}, {'id': 'port_id3'}, {'id': 'port_id4'}],
the first server is created with ports 'port_id1' and 'port_id2', and the second
server is created with ports 'port_id3' and 'port_id4'.


When the request has max_count=2 and ports=[{'id': 'port_id1'}],
the request returns a fault.
The request must satisfy len(ports) % max_count == 0.
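
A small, self-contained sketch of the splitting rule described above (not the
actual patch):

    # Illustrative helper: split the requested ports evenly across
    # max_count servers, enforcing len(ports) % max_count == 0.
    def partition_ports(ports, max_count):
        if max_count < 1 or not ports or len(ports) % max_count != 0:
            raise ValueError("len(ports) must be a positive multiple of max_count")
        per_server = len(ports) // max_count
        return [ports[i * per_server:(i + 1) * per_server]
                for i in range(max_count)]

    ports = [{'id': 'port_id1'}, {'id': 'port_id2'},
             {'id': 'port_id3'}, {'id': 'port_id4'}]
    print(partition_ports(ports, max_count=2))
    # [[{'id': 'port_id1'}, {'id': 'port_id2'}],
    #  [{'id': 'port_id3'}, {'id': 'port_id4'}]]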

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-12 Thread Miguel Angel Ajo Pelayo

Could any core developer check/approve this if it does look good?

https://review.openstack.org/#/c/68601/

I'd like to get it in for the new stable/havana release 
if it's possible.


Best regards,
Miguel Ángel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 7 support

2014-02-12 Thread Stephen Balukoff
Hi Samuel!

Per your request, here's some feedback on the layer 7 proposal referenced
below. Please let me know if by "comment if something is missing there" you
meant I should actually comment within the blueprint or wiki system instead
of in this e-mail thread.


   - Are L7Policy, L7Rule, and L7VipPolicyAssociation all new resources
   within the data model, or are any of these simply going to be implemented
   as additional fields to existing objects? (I'll try to produce a new
   diagram for y'all illustrating how these fit in with the existing data
   model, unless one of you has this already.)

   - I see the intent is to support L7 rules of the following types: Hostname,
   Path, File Type, Header, Cookie. Has there been discussion around
   supporting other types of L7 rules?  (I ask mostly out of curiosity-- the
   list there actually covers the 90% use case for our customers just fine.)

   - Since the L7Rule object contains a position indicator, I assume,
   therefore, that a given L7Rule object cannot exist in more than one
   L7Policy, correct?  Also, I assume that the first L7Rule added to an
   L7Policy becomes the rule at position 0 and that subsequent rules are added
   with incrementing positions. This is unless the position is specified, in
   which case, the rule is inserted into the policy list at the position
   specified, and all subsequent rule position indicators get incremented.
   Correct?

   - Shouldn't the L7Rule object also have a l7PolicyID attribute?

   - It is unclear from the proposal whether a given VIP can have
multiple L7VipPolicyAssociation
   objects associated with it. If not, then we've not really solved the
   problem of multiple back-end pools per VIP. If so, then the
   L7VipPolicyAssociation object is missing its own 'position' attribute
   (because order matters here, too!).

   - I assume any given pool can have multiple L7VipPolicyAssociations. If
   this is not the case, then a given single pool can only be associated with
   one VIP.

   - There is currently no way to set a 'default' back-end pool in this
   proposal. Perhaps this could be solved by:
   - Make 'DEFAULT' one of the actions possible for a L7VipPolicyAssociation
  - Any L7VipPolicyAssociation with an action of 'DEFAULT' would have a
  null position and null L7PolicyId.
  - We would need to enforce having only one L7VipPolicyAssociation
  object with a 'DEFAULT' action per VIP.
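
To make the ordering and 'DEFAULT' fallback ideas above concrete, here is a toy
evaluation sketch; the structure and field names are invented for illustration
and are not the proposed implementation:

    # Toy sketch: evaluate position-ordered associations for a VIP and
    # fall back to the single 'DEFAULT' entry. Field names are assumptions.
    def select_pool(vip_associations, request):
        ordered = sorted(
            (a for a in vip_associations if a['action'] != 'DEFAULT'),
            key=lambda a: a['position'])
        for assoc in ordered:
            if all(rule(request) for rule in assoc['rules']):
                return assoc['pool_id']
        defaults = [a for a in vip_associations if a['action'] == 'DEFAULT']
        return defaults[0]['pool_id'] if defaults else None

    associations = [
        {'action': 'FORWARD', 'position': 0, 'pool_id': 'static-pool',
         'rules': [lambda req: req['path'].endswith('.jpg')]},
        {'action': 'DEFAULT', 'position': None, 'pool_id': 'app-pool',
         'rules': []},
    ]
    print(select_pool(associations, {'path': '/img/logo.jpg'}))  # static-pool
    print(select_pool(associations, {'path': '/index.html'}))    # app-pool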


Other than the last three points above, the L7Policy, L7Rule, and
L7VipPolicyAssociation do essentially the same thing as the 'ACL' object in
my proposal, albeit with more granularity in the objects themselves. (In
our BLBv2 implementation, we have pretty loose rules around what can be
specified in a given ACL, and allow haproxy to do syntax checking itself on
the whole generated configuration file, returning an error and refusing to
update a listener's in-production configuration until the error is resolved
in the case where the user made an error on any given ACL.)  I like that in
this proposal, the model seems to enforce compliance with certain rule
formats, which, presumably, could be syntax checked against what haproxy
will allow without having to call haproxy directly.

The down-side of the above is that we start to enforce, at the model level,
very haproxy-specific configuration terminology with this. This is fine, so
long as load balancer vendors that want to write drivers for Neutron LBaaS
are capable of translating haproxy-specific ACL language into whatever
rules make sense for their appliance.

Having said the above, I don't see a way to expose a lot of L7
functionality and still be able to do syntax checking without choosing one
particular configuration format in which rules can be specified (in our
case, haproxy).  I suppose we could invent our own pseudo rule language--
but why bother when haproxy has already done this, eh?

I'll take a look at the SSL stuff next, then the LoadBalancerInstance
stuff...

Thanks,
Stephen

On Tue, Feb 11, 2014 at 5:26 AM, Samuel Bercovici samu...@radware.comwrote:

 Please review the current work in progress and comment if something is
 missing there.

 The logical load balancer API which is already addressed:

 * Multiple pools per VIP (i.e. “layer 7” support) -
 https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules.





-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model

2014-02-12 Thread Samuel Bercovici
Hi,

We plan to address LBaaS in Ceilometer for Juno.
A blueprint was registered:
https://blueprints.launchpad.net/neutron/+spec/lbaas-ceilometer-integration
Please use the following Google document to add requirements and
thoughts:
https://docs.google.com/document/d/1mrrn6DEQkiySwx4eTaKijr0IJkJpUT3WX277aC12YFg/edit?usp=sharing

Regards,
-Sam.


-Original Message-
From: WICKES, ROGER [mailto:rw3...@att.com] 
Sent: Tuesday, February 11, 2014 7:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Proposal for model

[Roger] Hi Stephen! Great job! Obviously your experience is both awesome and 
essential here.

I would ask that we add a historical archive (physically implemented as a log
file, probably) object to your model. When you mentioned sending data off to
Ceilometer, that triggered me to think about one problem I have had to deal
with: "what packet went where?"
in diagnosing errors, usually related to having a bug on 1 out of 5
load-balanced servers, usually because of a deployed version mismatch, but
possibly also due to a virus. When our customer says "hey, every now and then this
image is broken on a web page", that points us to an inconsistent farm, and
having the ability to trace or see which server got that customer's packet
(routed to it by the LB) would really help in pinpointing the errant server.

 Benefits of a new model

 If we were to adopt either of these data models, this would enable us 
 to eventually support the following feature sets, in the following 
 ways (for
 example):

 Automated scaling of load-balancer services

[Roger] Would the Heat module be called on to add more LB's to the farm?

 I talked about horizontal scaling of load balancers above under High 
 Availability, but, at least in the case of a software appliance, 
 vertical scaling should also be possible in an active-standby 
 cluster_model by
**

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-12 Thread Kenichi Oomichi

Hi Chris,

Is it OK to postpone the nova-network v3 APIs until the Juno release?
I am guessing so because some nova-network v3 API patches were abandoned today.
I'd just like to make it clear.


Thanks
Ken'ichi Ohmichi

---

 -Original Message-
 From: Christopher Yeoh [mailto:cbky...@gmail.com]
 Sent: Tuesday, February 04, 2014 8:37 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Putting nova-network support into the V3 
 API
 
 On Tue, 04 Feb 2014 11:37:29 +0100
 Thierry Carrez thie...@openstack.org wrote:
 
  Christopher Yeoh wrote:
   On Tue, Feb 4, 2014 at 12:03 PM, Joe Gordon joe.gord...@gmail.com
   mailto:joe.gord...@gmail.com wrote:
  
   John and I discussed a third possibility:
  
   nova-network v3 should be an extension, so the idea was to: Make
   nova-network API a subset of neturon (instead of them adopting our
   API we adopt theirs). And we could release v3 without nova network
   in Icehouse and add the nova-network extension in Juno.
  
   This would actually be my preferred approach if we can get consensus
   around this. It takes a lot of pressure off this late in the cycle
   and there's less risk around having to live with a nova-network API
   in V3 that still has some rough edges around it. I imagine it will
   be quite a while before we can deprecate the V2 API so IMO going
   one cycle without nova-network support is not a big thing.
 
  So user story would be, in icehouse release (nothing deprecated yet):
  v2 + nova-net: supported
  v2 + neutron: supported
  v3 + nova-net: n/a
  v3 + neutron: supported
 
  And for juno:
  v2 + nova-net: works, v2 could be deprecated
  v2 + neutron: works, v2 could be deprecated
  v3 + nova-net: works through extension, nova-net could be deprecated
 
 So to be clear the idea I think is that nova-net of v3 + nova-net
 would look like the neutron api. Eg nova-net API from v2 would look
 quite different to 'nova-net' API from v3. To minimise the transition
 pain for users on V3 moving to a neutron based cloud. Though those
 moving from v2 + nova-net to v3 + nova-net would have to cope with more 
 changes.
 
  v3 + neutron: supported (encouraged future-proof combo)
 
  That doesn't sound too bad to me. Lets us finalize v3 core in icehouse
  and keeps a lot of simplification / deprecation options open for Juno,
  depending on how the nova-net vs. neutron story pans out then.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 7 support

2014-02-12 Thread Samuel Bercovici
Hi Stephen.

Thank you for reviewing this!
See my comments below.

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Wednesday, February 12, 2014 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 
7 support

Hi Samuel!

Per your request, here's some feedback on the layer 7 proposal referenced 
below. Please let me know if by "comment if something is missing there" you meant
I should actually comment within the blueprint or wiki system instead of in
this e-mail thread.


  *   Are L7Policy, L7Rule, and L7VipPolicyAssociation all new resources within 
the data model, or are any of these simply going to be implemented as 
additional fields to existing objects? (I'll try to produce a new diagram for 
y'all illustrating how these fit in with the existing data model, unless one of 
you has this already.)
Sam Those are new resources within the data model. Please review the Wiki and 
the code for further details.

  *   I see the intent is to support L7 rules of the following types: 
Hostname, Path, File Type, Header, Cookie. Has there been discussion around
supporting other types of L7 rules?  (I ask mostly out of curiosity-- the list
there actually covers the 90% use case for our customers just fine.)
Sam We have reviewed this based on capabilities that we believe could be
supported by haproxy and all commercial vendors.
Sam What is missing?

  *   Since the L7Rule object contains a position indicator, I assume, 
therefore, that a given L7Rule object cannot exist in more than one L7Policy, 
correct?  Also, I assume that the first L7Rule added to an L7Policy becomes the 
rule at position 0 and that subsequent rules are added with incrementing 
positions. This is unless the position is specified, in which case, the rule is 
inserted into the policy list at the position specified, and all subsequent 
rule position indicators get incremented. Correct?
Sam Correct.

  *   Shouldn't the L7Rule object also have a l7PolicyID attribute?
Sam It does.

  *   It is unclear from the proposal whether a given VIP can have multiple 
L7VipPolicyAssociation objects associated with it. If not, then we've not 
really solved the problem of multiple back-end pools per VIP. If so, then the 
L7VipPolicyAssociation object is missing its own 'position' attribute (because 
order matters here, too!).
Sam Correct, the L7VipPolicyAssociation should have a “position” attribute.
How to implement it is under consideration.

  *   I assume any given pool can have multiple L7VipPolicyAssociations. If 
this is not the case, then a given single pool can only be associated with one 
VIP.
Sam Nope, this is any-to-any. Pools can be associated with multiple VIPs.

  *   There is currently no way to set a 'default' back-end pool in this 
proposal. Perhaps this could be solved by:

 *   Make 'DEFAULT' one of the actions possible for a L7VipPolicyAssociation
 *   Any L7VipPolicyAssociation with an action of 'DEFAULT' would have a 
null position and null L7PolicyId.
 *   We would need to enforce having only one L7VipPolicyAssociation object 
with a 'DEFAULT' action per VIP.
Sam The “default” behavior is governed by the current VIP -- Pool
relationship. This is the canonical approach that could also be addressed by
LBaaS drivers that do not support L7 content switching.
Sam We will fix the VIP -- Pool limitation (for Juno) by removing the
Pool -- VIP reference and only leaving the VIP -- Pool reference, thus allowing
the Pool to be used by multiple VIPs. This was originally planned for Icehouse
but will be handled in Juno.

Other than the last three points above, the L7Policy, L7Rule, and 
L7VipPolicyAssociation do essentially the same thing as the 'ACL' object in my 
proposal, albeit with more granularity in the objects themselves. (In our BLBv2 
implementation, we have pretty loose rules around what can be specified in a 
given ACL, and allow haproxy to do syntax checking itself on the whole 
generated configuration file, returning an error and refusing to update a 
listener's in-production configuration until the error is resolved in the case 
where the user made an error on any given ACL.)  I like that in this proposal, 
the model seems to enforce compliance with certain rule formats, which, 
presumably, could be syntax checked against what haproxy will allow without 
having to call haproxy directly.

The down-side of the above is that we start to enforce, at the model level, 
very haproxy-specific configuration terminology with this. This is fine, so 
long as load balancer vendors that want to write drivers for Neutron LBaaS are 
capable of translating haproxy-specific ACL language into whatever rules make 
sense for their appliance.
Sam It is not haproxy-specific, as all commercial implementations can support
this.
Having said the above, I don't see a way to expose a lot of L7 functionality 
and still be able to do 

Re: [openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-12 Thread Alan Pevec
2014-02-12 10:48 GMT+01:00 Miguel Angel Ajo Pelayo mangel...@redhat.com:
 Could any core developer check/approve this if it does look good?
 https://review.openstack.org/#/c/68601/

 I'd like to get it in for the new stable/havana release
 if it's possible.

I'm afraid it's too late for 2013.2.2 (to be released tomorrow after a week's delay).
It would be the same answer as
http://lists.openstack.org/pipermail/openstack-stable-maint/2014-February/002124.html
- both linked bugs are Medium only and have been known for a long time, so
targeting 2013.2.3 is more reasonable.


Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-12 Thread Miguel Angel Ajo Pelayo
Thank you Alan, 

   I wasn't aware of a stable freeze being active at this moment.

   Then we must target it for 2013.2.3 .

   Cheers,
Miguel Ángel

- Original Message -
 From: Alan Pevec ape...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: openstack-stable-maint openstack-stable-ma...@lists.openstack.org
 Sent: Wednesday, February 12, 2014 12:05:29 PM
 Subject: Re: [openstack-dev] [neutron] [stable/havana] cherry backport, 
 multiple external networks, passing tests
 
 2014-02-12 10:48 GMT+01:00 Miguel Angel Ajo Pelayo mangel...@redhat.com:
  Could any core developer check/approve this if it does look good?
  https://review.openstack.org/#/c/68601/
 
  I'd like to get it in for the new stable/havana release
  if it's possible.
 
 I'm afraid it's too late for 2013.2.2 (to be released tomorrow after week
 delay)
 It would be the same answer as
 http://lists.openstack.org/pipermail/openstack-stable-maint/2014-February/002124.html
 - both linked bugs are Medium only and known for long time, so
 targeting 2013.2.3 is more reasonable.
 
 
 Cheers,
 Alan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-12 Thread James Slagle
On Tue, Feb 11, 2014 at 12:22 AM, Clint Byrum cl...@fewbar.com wrote:
 Hi, so in the previous thread about rolling updates it became clear that
 having in-instance control over updates is a more fundamental idea than
 I had previously believed. During an update, Heat does things to servers
 that may interrupt the server's purpose, and that may cause it to fail
 subsequent things in the graph.

 Specifically, in TripleO we have compute nodes that we are managing.
 Before rebooting a machine, we want to have a chance to live-migrate
 workloads if possible, or evacuate in the simpler case, before the node
 is rebooted. Also in the case of a Galera DB where we may even be running
 degraded, we want to ensure that we have quorum before proceeding.

 I've filed a blueprint for this functionality:

 https://blueprints.launchpad.net/heat/+spec/update-hooks

 I've cobbled together a spec here, and I would very much welcome
 edits/comments/etc:

 https://etherpad.openstack.org/p/heat-update-hooks

I like this approach.

Could this work for the non-reboot required incremental update (via
rsync or whatever) idea that's been discussed as well? I think it'd be
nice if we had a model that worked for both the rebuild case and
incremental case.

What if there was an additional type for actions under action_hooks,
called "update" or "incremental" (I'm not sure if there is a term for this
in Heat today), in addition to the "rebuild" and "delete" action choices
that are already there?

When the instance sees that this action type is pending, it can
perform the update, and then use the wait condition handle to indicate
to Heat that the update is complete vs. using the wait condition
handle to indicate to proceed with the rebuild.

I suppose the instance might need some additional data (such as the
new Image Id) in order to perform the incremental update. Could this
be made available somehow in the metadata structure?
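
Roughly what I'm imagining on the instance side -- just a sketch; the metadata
layout ('action_hooks'/'pending_action') and the wait condition payload format
are assumptions on my part, not anything agreed in the spec:

    # Sketch of an in-instance agent: poll metadata for a pending action,
    # apply the incremental update, then signal the wait condition handle.
    # The metadata keys and payload shape here are assumptions.
    import json
    import time
    import urllib.request

    def wait_and_handle(get_metadata, apply_incremental_update, handle_url):
        while True:
            md = get_metadata()  # e.g. fetched from the Heat metadata
            pending = md.get('action_hooks', {}).get('pending_action')
            if pending in ('update', 'incremental'):
                apply_incremental_update(md.get('new_image_id'))
                payload = json.dumps({'Status': 'SUCCESS',
                                      'Reason': 'incremental update applied',
                                      'UniqueId': md.get('server_id', 'node-0'),
                                      'Data': 'done'}).encode('utf-8')
                req = urllib.request.Request(
                    handle_url, data=payload,
                    headers={'Content-Type': 'application/json'})
                urllib.request.urlopen(req)
                return
            time.sleep(30)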

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Julie Pichon
Hi folks,

Stefano's post on how to make contributions to OpenStack easier [1]
finally stirred me into writing about something that vkmc and myself
have been doing on the side for a few months to help new contributors
to get involved.

Some of you may be aware of OpenHatch [2], a non-profit dedicated to
helping newcomers get started in open-source. About 6 months ago we
created a project page for Horizon [3], filled in a few high level
details, set ourselves up as mentors. Since then people have been
expressing interest in the project and a number of them got a patch
submitted and approved, a couple are sticking around (often helping out
with bug triaging, as confirming new bugs is one of the few tasks one
can help out with when only having limited time).

I can definitely sympathise with the comment in Stefano's article that
there are not enough easy tasks / simple issues for newcomers. There's
a lot to learn already when you're starting out (git, gerrit, python,
devstack, ...) and simple bugs are so hard to find - something that
will take a few minutes to an existing contributor will take much
longer for someone who's still figuring out where to get the code
from. Unfortunately it's not uncommon for existing contributors to take
on tasks marked as low-hanging-fruit because it's only 5 minutes (I
can understand this coming up to an RC but otherwise low-hanging-fruits
are often low priority nits that could wait a little bit longer). In
Horizon the low-hanging-fruits definitely get snatched up quickly and I
try to keep a list of typos or other low impact, trivial bugs that
would make good first tasks for people reaching out via OpenHatch.

OpenHatch doesn't spam, you get one email a week if one or more people
indicated they want to help. The initial effort is not time-consuming,
following OpenHatch's advice [4] you can refine a nice initial
contact email that helps you get people started and understand what
they are interested in quickly. I don't find the time commitment to be
too much so far, and it's incredibly gratifying to see someone
submitting their first patch after you answered a couple of questions
or helped resolve a hairy git issue. I'm happy to chat about it more,
if you're curious or have any questions.

In any case if you'd like to attract more contributors to your project,
and/or help newcomers get started in open-source, consider adding your
project to OpenHatch too!

Cheers,

Julie

[1] http://opensource.com/business/14/2/analyzing-contributions-to-openstack
[2] http://openhatch.org/
[3] http://openhatch.org/+projects/OpenStack%20dashboard%20%28Horizon%29
[4] https://openhatch.org/wiki/Contacting_new_contributors

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-12 Thread Ed Leafe
On Feb 11, 2014, at 3:30 PM, Jesse Noller jesse.nol...@rackspace.com wrote:

 I did propose in the original thread that we join efforts: in fact, we 
 already have a fully functioning, unified SDK that *could* be checked in 
 today - but without discussing the APIs, design and other items with the 
 community at large, I don’t think that that would be successful.
 
 [1] 
 https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient

While there are some differences, this library shares the same 
client/manager/resource class design and flow as the work we've done. IOW, I 
think the approaches are more alike than different, which is encouraging. 


-- Ed Leafe





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] SRIOV: Recap of Feb 12th and agenda on Feb 13th

2014-02-12 Thread Robert Li (baoli)
Hi Folks,

I put the recap here (Feb. 12th, 2014 Recap):
https://wiki.openstack.org/wiki/Meetings/Passthrough#Feb._12th.2C_2014_Recap
Please take a look and see if everything is fine, and correct any
misunderstandings.

I also put together an agenda for tomorrow here (Agenda on Feb. 13th, 2014):
https://wiki.openstack.org/wiki/Meetings/Passthrough#Agenda_on_Feb._13th.2C_2014
Hopefully we can get the nova side of things cleared up.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-12 Thread Jesse Noller

On Feb 12, 2014, at 9:07 AM, Ed Leafe e...@openstack.org wrote:

 On Feb 11, 2014, at 3:30 PM, Jesse Noller jesse.nol...@rackspace.com wrote:
 
 I did propose in the original thread that we join efforts: in fact, we 
 already have a fully functioning, unified SDK that *could* be checked in 
 today - but without discussing the APIs, design and other items with the 
 community at large, I don’t think that that would be successful.
 
 [1] 
 https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient
 
 While there are some differences, this library shares the same 
 client/manager/resource class design and flow as the work we've done. IOW, I 
 think the approaches are more alike than different, which is encouraging. 
 
 
 -- Ed Leafe

The projects are not exclusive to one another - they are completely
complementary.

And as of yet, we have not solidified the client/manager/resource structure - 
we should talk about that at the meeting next week.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-12 Thread Carl Baldwin
Paul,

I'm interested in joining the discussion (UTC-7). Any word on when
this will take place?

Carl

On Mon, Feb 3, 2014 at 3:19 PM, Paul Michali p...@cisco.com wrote:
 I'd like to see if there is interest in discussing vendor plugins for L3
 services. The goal is to strive for consistency across vendor
 plugins/drivers and across service types (if possible/sensible). Some of
 this could/should apply to reference drivers as well. I'm thinking about
 these topics (based on questions I've had on VPNaaS - feel free to add to
 the list):

 How to handle vendor specific validation (e.g. say a vendor has restrictions
 or added capabilities compared to the reference drivers for attributes).
 Providing client feedback (e.g. should help and validation be extended to
 include vendor capabilities or should it be delegated to server reporting?)
 Handling and reporting of errors to the user (e.g. how to indicate to the
 user that a failure has occurred establishing a IPSec tunnel in device
 driver?)
 Persistence of vendor specific information (e.g. should new tables be used
 or should/can existing reference tables be extended?).
 Provider selection for resources (e.g. should we allow --provider attribute
 on VPN IPSec policies to have vendor specific policies or should we rely on
 checks at connection creation for policy compatibility?)
 Handling of multiple device drivers per vendor (e.g. have service driver
 determine which device driver to send RPC requests, or have agent determine
 what driver requests should go to - say based on the router type)

 If you have an interest, please reply to me and include some days/times that
 would be good for you, and I'll send out a notice on the ML of the time/date
 and we can discuss.

 Looking forward to hearing from you!

 PCM (Paul Michali)

 MAIL  p...@cisco.com
 IRCpcm_  (irc.freenode.net)
 TW@pmichali
 GPG key4525ECC253E31A83
 Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Jesse Noller

On Feb 12, 2014, at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:

 Hi folks,
 
 Stefano's post on how to make contributions to OpenStack easier [1]
 finally stirred me into writing about something that vkmc and myself
 have been doing on the side for a few months to help new contributors
 to get involved.
 
 Some of you may be aware of OpenHatch [2], a non-profit dedicated to
 helping newcomers get started in open-source. About 6 months ago we
 created a project page for Horizon [3], filled in a few high level
 details, set ourselves up as mentors. Since then people have been
 expressing interest in the project and a number of them got a patch
 submitted and approved, a couple are sticking around (often helping out
 with bug triaging, as confirming new bugs is one of the few tasks one
 can help out with when only having limited time).
 
 I can definitely sympathise with the comment in Stefano's article that
 there are not enough easy tasks / simple issues for newcomers. There's
 a lot to learn already when you're starting out (git, gerrit, python,
 devstack, ...) and simple bugs are so hard to find - something that
 will take a few minutes to an existing contributor will take much
 longer for someone who's still figuring out where to get the code
 from. Unfortunately it's not uncommon for existing contributors to take
 on tasks marked as low-hanging-fruit because it's only 5 minutes (I
 can understand this coming up to an RC but otherwise low-hanging-fruits
 are often low priority nits that could wait a little bit longer). In
 Horizon the low-hanging-fruits definitely get snatched up quickly and I
 try to keep a list of typos or other low impact, trivial bugs that
 would make good first tasks for people reaching out via OpenHatch.
 
 OpenHatch doesn't spam, you get one email a week if one or more people
 indicated they want to help. The initial effort is not time-consuming,
 following OpenHatch's advice [4] you can refine a nice initial
 contact email that helps you get people started and understand what
 they are interested in quickly. I don't find the time commitment to be
 too much so far, and it's incredibly gratifying to see someone
 submitting their first patch after you answered a couple of questions
 or helped resolve a hairy git issue. I'm happy to chat about it more,
 if you're curious or have any questions.
 
 In any case if you'd like to attract more contributors to your project,
 and/or help newcomers get started in open-source, consider adding your
 project to OpenHatch too!
 
 Cheers,
 
 Julie
 

+10

There's been quite a bit of talk about this - but not necessarily on the dev
list. I think OpenHatch is great - mentorship programs in general go a *long*
way toward bringing in and keeping new people. Core Python has had this issue for
a while, and many other large OSS projects continue to suffer from it ("barrier
to entry too high").

Some random thoughts:

I’d like to see something like Solum’s Contributing page:

https://wiki.openstack.org/wiki/Solum/Contributing

Expanded a little, it could potentially be the recommended "intro to contributing"
guide - https://wiki.openstack.org/wiki/How_To_Contribute is good, but a more
accessible version goes a long way. You want to show people how easy and fast it
is, not all of the options at once. I think this is something python-core
has gotten better at with:

http://docs.python.org/devguide/

It's not perfect, but it errs on the side of "time to submission/fix".

 [1] http://opensource.com/business/14/2/analyzing-contributions-to-openstack
 [2] http://openhatch.org/
 [3] http://openhatch.org/+projects/OpenStack%20dashboard%20%28Horizon%29
 [4] https://openhatch.org/wiki/Contacting_new_contributors
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] problems with non overlapping requirements changes

2014-02-12 Thread Doug Hellmann
On Tue, Feb 11, 2014 at 4:31 PM, Sean Dague s...@dague.net wrote:

 A few weeks ago we realized one of the wrecking balls in the gate were
 non overlapping requirements changes, like this -
 https://review.openstack.org/#/c/72475/

 Regular jobs in the gate have to use the OpenStack mirror. Requirements
 repo doesn't, because it needs to be able to test things not in the mirror.

 So when a requirements job goes into the gate, everything behind it will
 be using the new requirements. But the mirror isn't updated until the
 requirements change merges.

 So if you make a non overlapping change like that, for 1hr (or more)
 everything in the wake of the requirements job gets blown up in global
 requirements because it can't install that from the mirror.

 This issue is partially synthetic, however it does raise a good issue
 for continuous deployed environments, because assuming atomic upgrade of
 2 code bases isn't a good assumption.

 Anyway, the point of this email is we really shouldn't be approving
 requirements changes that are disjoint upgrades like that, because they
 basically mean they'll trigger 10 - 20 -2s of other people's patches in
 the gate.


Good point, Sean. I added this to the requirements project review
checklist (https://wiki.openstack.org/wiki/Requirements#Review_Criteria).
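
As a rough illustration of what a reviewer could check for (just a sketch --
the candidate version list below is made up, and this is not part of the gate):

    # Approximate overlap check between an old and a new requirement
    # specifier, by testing the versions we know about.
    from packaging.specifiers import SpecifierSet

    def specifiers_overlap(old_spec, new_spec, known_versions):
        old, new = SpecifierSet(old_spec), SpecifierSet(new_spec)
        return any(v in old and v in new for v in known_versions)

    # A disjoint bump such as '>=1.0,<2.0' -> '>=2.0' has no overlap, so
    # jobs behind it in the gate can't satisfy both the old and new mirrors.
    print(specifiers_overlap(">=1.0,<2.0", ">=2.0",
                             ["1.0", "1.5", "1.9.1", "2.0", "2.1"]))  # False
    print(specifiers_overlap(">=1.0,<2.0", ">=1.5",
                             ["1.0", "1.5", "1.9.1", "2.0", "2.1"]))  # True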

Doug




 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-12 Thread Christopher Yeoh
Hi Kenichi,

Ah yes, it was decided at the mid-cycle meetup to delay the nova-network
changes
until Juno. Sorry, I should have told you sooner.

Regards,

Chris



On Wed, Feb 12, 2014 at 1:35 AM, Kenichi Oomichi
oomi...@mxs.nes.nec.co.jpwrote:


 Hi Chris,

 Is it OK to postpone nova-network v3 APIs until Juno release?
 I guess that because some nova-network v3 API patches are abandoned today.
 I'd just like to make it clear.


 Thanks
 Ken'ichi Ohmichi

 ---

  -Original Message-
  From: Christopher Yeoh [mailto:cbky...@gmail.com]
  Sent: Tuesday, February 04, 2014 8:37 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Putting nova-network support into
 the V3 API
 
  On Tue, 04 Feb 2014 11:37:29 +0100
  Thierry Carrez thie...@openstack.org wrote:
 
   Christopher Yeoh wrote:
On Tue, Feb 4, 2014 at 12:03 PM, Joe Gordon joe.gord...@gmail.com
mailto:joe.gord...@gmail.com wrote:
   
John and I discussed a third possibility:
   
nova-network v3 should be an extension, so the idea was to: Make
nova-network API a subset of neturon (instead of them adopting our
API we adopt theirs). And we could release v3 without nova network
in Icehouse and add the nova-network extension in Juno.
   
This would actually be my preferred approach if we can get consensus
around this. It takes a lot of pressure off this late in the cycle
and there's less risk around having to live with a nova-network API
in V3 that still has some rough edges around it. I imagine it will
be quite a while before we can deprecate the V2 API so IMO going
one cycle without nova-network support is not a big thing.
  
   So user story would be, in icehouse release (nothing deprecated yet):
   v2 + nova-net: supported
   v2 + neutron: supported
   v3 + nova-net: n/a
   v3 + neutron: supported
  
   And for juno:
   v2 + nova-net: works, v2 could be deprecated
   v2 + neutron: works, v2 could be deprecated
   v3 + nova-net: works through extension, nova-net could be deprecated
 
  So to be clear the idea I think is that nova-net of v3 + nova-net
  would look like the neutron api. Eg nova-net API from v2 would look
  quite different to 'nova-net' API from v3. To minimise the transition
  pain for users on V3 moving to a neutron based cloud. Though those
  moving from v2 + nova-net to v3 + nova-net would have to cope with more
 changes.
 
   v3 + neutron: supported (encouraged future-proof combo)
  
   That doesn't sound too bad to me. Lets us finalize v3 core in icehouse
   and keeps a lot of simplification / deprecation options open for Juno,
   depending on how the nova-net vs. neutron story pans out then.
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-12 Thread Eugene Nikanorov
I'd be interested too.

Thanks,
Eugene.


On Wed, Feb 12, 2014 at 7:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Paul,

 I'm interesting in joining the discussion.  UTC-7.  Any word on when
 this will take place?

 Carl

 On Mon, Feb 3, 2014 at 3:19 PM, Paul Michali p...@cisco.com wrote:
  I'd like to see if there is interest in discussing vendor plugins for L3
  services. The goal is to strive for consistency across vendor
  plugins/drivers and across service types (if possible/sensible). Some of
  this could/should apply to reference drivers as well. I'm thinking about
  these topics (based on questions I've had on VPNaaS - feel free to add to
  the list):
 
  How to handle vendor specific validation (e.g. say a vendor has
 restrictions
  or added capabilities compared to the reference drivers for attributes).
  Providing client feedback (e.g. should help and validation be extended
 to
  include vendor capabilities or should it be delegated to server
 reporting?)
  Handling and reporting of errors to the user (e.g. how to indicate to the
  user that a failure has occurred establishing a IPSec tunnel in device
  driver?)
  Persistence of vendor specific information (e.g. should new tables be
 used
  or should/can existing reference tables be extended?).
  Provider selection for resources (e.g. should we allow --provider
 attribute
  on VPN IPSec policies to have vendor specific policies or should we rely
 on
  checks at connection creation for policy compatibility?)
  Handling of multiple device drivers per vendor (e.g. have service driver
  determine which device driver to send RPC requests, or have agent
 determine
  what driver requests should go to - say based on the router type)
 
  If you have an interest, please reply to me and include some days/times
 that
  would be good for you, and I'll send out a notice on the ML of the
 time/date
  and we can discuss.
 
  Looking to hearing form you!
 
  PCM (Paul Michali)
 
  MAIL  p...@cisco.com
  IRCpcm_  (irc.freenode.net)
  TW@pmichali
  GPG key4525ECC253E31A83
  Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Sanchez, Cristian A
Hi,
We've modified the openstack-commons.conf file for climate to remove the rpc and
notifier modules. But when update.py is executed, the notifier and rpc
modules are still copied. Do you know what could be wrong?
Here you can see a log showing this situation:
http://paste.openstack.org/show/64674/

Thanks

Cristian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Neutron network in Nova V3 API

2014-02-12 Thread Alex Xu

On 2014年02月12日 17:15, Alex Xu wrote:

Hi, guys,

I'm working neutron network stuff in nova V3 API. We will only pass 
port ids when create server, and
Nova won't proxy any neutron call in the future. I plan to add new v3 
network extension, that only
accept port ids as parameters. And it will pass those port ids in the 
old request_networks parameters.
(line 50 in 
https://review.openstack.org/#/c/36615/13/nova/api/openstack/compute/plugins/v3/networks.py)
Then other code path is same with before. In next release or when we 
remove nova-network, we
can add neutron specific code path. But v2 and v3 api's behavior is 
different. I need change something
before adding new network api. I want to hear you guys' suggestion 
first, ensure I'm working on the right

way.


1. Disable allocate ports automatically when create server without any 
port in v3 api.
When user create server without any port, the new server shouldn't be 
created with any ports in V3.
But in v2, nova-compute will allocate port from existed networks. I 
plan to pass parameter down to
nova-compute, that told nova-compute don't allocate ports for new 
server. And also keep old behavior

for v2 api.


reference to https://review.openstack.org/#/c/73000/



2. Disable delete ports from neutron when remove server in v3 api.
In v2 api, after remove server, the port that attached to that server 
is removed by nova-compute.
But in v3 api, we shoudn't proxy any neutron call. Because there are 
some periodic tasks will delete
servers, just pass a parameter down to nova-compute from api isn't 
enough. So I plan to add a parameter in instance's
metadata when create server. When remove server, it will check the 
metadata first. If the server is marked as

created by v3 api, nova-compute won't remove attached neutron ports.


reference to https://review.openstack.org/#/c/73001/



3. Enable pass port ids when multiple servers creation.
Currently multiple_create didn't support pass port ids. And we won't 
allocate ports automatically in v3 api.

So my plan as below:

When request with max_count=2 and ports=[{'id': 'port_id1'}, {'id': 
'port_id2'}, {'id': 'port_id3'}, {'id': 'port_id4}]
The first server create with ports 'port_id1' and 'port_id2', the 
second server create with ports 'port_id3' and 'port_id4'


When request with max_count=2 and ports = [{'id': 'port_id1'}]
The request return fault.
The request must be len(ports) % max_count == 0



reference to https://review.openstack.org/#/c/73002/

V3 API layer works reference to:
https://review.openstack.org/#/c/36615/
https://review.openstack.org/#/c/42315/
https://review.openstack.org/#/c/42315/



Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-12 Thread Matthew Farina
I would like to point out that the unified SDK has a different target
audience from the existing clients. Application developers who want to work
with OpenStack APIs are far different from those who are trying to build
OpenStack.

For example, application developers don't know what keystone, nova, or
swift are and these names will confuse them creating a barrier for them to
build great things against OpenStack. The language usage matters in a good
SDK.

Given the different target audiences, there's room for overlap when trying
to create great experiences for the two segments.



On Wed, Feb 12, 2014 at 10:32 AM, Jesse Noller
jesse.nol...@rackspace.comwrote:


 On Feb 12, 2014, at 9:07 AM, Ed Leafe e...@openstack.org wrote:

  On Feb 11, 2014, at 3:30 PM, Jesse Noller jesse.nol...@rackspace.com
 wrote:
 
  I did propose in the original thread that we join efforts: in fact, we
 already have a fully functioning, unified SDK that *could* be checked in
 today - but without discussing the APIs, design and other items with the
 community at large, I don't think that that would be successful.
 
  [1]
 https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient
 
  While there are some differences, this library shares the same
 client/manager/resource class design and flow as the work we've done. IOW,
 I think the approaches are more alike than different, which is encouraging.
 
 
  -- Ed Leafe

 The projects are not exclusive to one another - they are completely
 complimentary.

 And as of yet, we have not solidified the client/manager/resource
 structure - we should talk about that at the meeting next week.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Group Policy questions

2014-02-12 Thread Carlos Gonçalves
Hi,

I’ve a couple of questions regarding the ongoing work on Neutron Group Policy 
proposed in [1].

1. One of the described actions is redirection to a service chain. How do you 
see BPs [2] and [3] addressing service chaining? Will this BP implement its own 
service chaining mechanism enforcing traffic steering or will it make use of, 
and thus depending on, those BPs?

2. In the second use case presented in the BP document, "Tiered application with
service insertion/chaining", do you consider that the two firewall entities
can represent the same firewall instance or two running and independent
instances? In case it’s a shared instance, how would it support multiple 
chains? This is, HTTP(s) traffic from Inet group would be redirected to the 
firewall and then passes through the ADC; traffic from App group with 
destination DB group would also be redirected to the very same firewall 
instance, although to a different destination group as the chain differs.

Thanks.

Cheers,
Carlos Gonçalves

[1] 
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[2] 
https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
[3] 
https://blueprints.launchpad.net/neutron/+spec/nfv-and-network-service-chain-implementation

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] in-instance update hooks

2014-02-12 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-02-12 05:18:31 -0800:
 On Tue, Feb 11, 2014 at 12:22 AM, Clint Byrum cl...@fewbar.com wrote:
  Hi, so in the previous thread about rolling updates it became clear that
  having in-instance control over updates is a more fundamental idea than
  I had previously believed. During an update, Heat does things to servers
  that may interrupt the server's purpose, and that may cause it to fail
  subsequent things in the graph.
 
  Specifically, in TripleO we have compute nodes that we are managing.
  Before rebooting a machine, we want to have a chance to live-migrate
  workloads if possible, or evacuate in the simpler case, before the node
  is rebooted. Also in the case of a Galera DB where we may even be running
  degraded, we want to ensure that we have quorum before proceeding.
 
  I've filed a blueprint for this functionality:
 
  https://blueprints.launchpad.net/heat/+spec/update-hooks
 
  I've cobbled together a spec here, and I would very much welcome
  edits/comments/etc:
 
  https://etherpad.openstack.org/p/heat-update-hooks
 
 I like this approach.
 
 Could this work for the non-reboot required incremental update (via
 rsync or whatever) idea that's been discussed as well? I think it'd be
 nice if we had a model that worked for both the rebuild case and
 incremental case.
 
 What if there was an additional type for actions under action_hooks
 called update or incremental (I'm not sure if there is a term for this
 in Heat today) in addition to the rebuild and delete action choices
 that are already there.
 
 When the instance sees that this action type is pending, it can
 perform the update, and then use the wait condition handle to indicate
 to Heat that the update is complete vs. using the wait condition
 handle to indicate to proceed with the rebuild.
 
 I suppose the instance might need some additional data (such as the
 new Image Id) in order to perform the incremental update. Could this
 be made available somehow in the metadata structure?
 

Thanks for reading it James.

We've talked about having an image ID in the Metadata that may or may not
be the same image ID that is in the server properties to signify that we
want the machine to update itself from said image. That would already be
covered by the general "I'm done configuring myself" wait condition. In
this case, there is no pending Heat action to prevent when we're just
updating metadata, so I think we already handle this situation.
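
To make that concrete, a hypothetical sketch of what such server metadata
could look like follows; the key names (action_hooks, pending_action,
target_image_id) are illustrative only and not an agreed format:

    # Hypothetical illustration only -- key names are not from the spec.
    example_metadata = {
        # Actions the instance has registered an interest in gating.
        'action_hooks': ['rebuild', 'delete'],
        # Present only while Heat is waiting for permission to act.
        'pending_action': {
            'action': 'rebuild',
            'wait_condition_handle': 'http://heat.example.com/signal/abc123',
        },
        # A new image id in the metadata (possibly different from the server
        # property) can signal "update yourself in place" with no pending
        # Heat action to block.
        'target_image_id': 'a1b2c3d4-0000-1111-2222-333344445555',
    }

The in-instance tooling would notice pending_action, live-migrate or drain as
needed, and then signal the wait condition handle so Heat can proceed.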

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gamification and on-boarding ...

2014-02-12 Thread Sandy Walsh
At the Nova mid-cycle meetup we've been talking about the problem of helping 
new contributors. It got into a discussion of karma, code reviews, bug fixes 
and establishing a name for yourself before screaming in a chat room "can 
someone look at my branch". We want this experience to be positive, but not 
everyone has time to hand-hold new people in the dance.

The informal OpenStack motto is "automate everything", so perhaps we should 
consider some form of gamification [1] to help us? Can we offer badges, quests 
and challenges to new users to lead them on the way to being strong 
contributors?

Fixed your first bug badge
Updated the docs badge
Got your blueprint approved badge
Triaged a bug badge
Reviewed a branch badge
Contributed to 3 OpenStack projects badge
Fixed a Cells bug badge
Constructive in IRC badge
Freed the gate badge
Reverted branch from a core badge
etc. 

These can be strung together as Quests to lead people along the path. It's more 
than karma and less sterile than stackalytics. The Foundation could even 
promote the rising stars and highlight the leader board. 

There are gamification-as-a-service offerings out there [2] as well as Fedora 
Badges [3] (python and open source) that we may want to consider. 

Thoughts?
-Sandy

[1] http://en.wikipedia.org/wiki/Gamification
[2] http://gamify.com/ (and many others)
[3] https://badges.fedoraproject.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-12 Thread Joe Gordon
On Wed, Feb 12, 2014 at 10:36 AM, Matthew Farina m...@mattfarina.com wrote:
 I would like to point out that the unified SDK has a different target
 audience from the existing clients. Application developers who want to work
 with OpenStack APIs are far different from those who are trying to build
 OpenStack.

The existing clients (python-novaclient, python-glanceclient,
python-cinderclient etc.) are not supposed to be targeted at people
trying to 'build' (does 'build' mean develop or deploy?) OpenStack;
they are targeted at people trying to write applications for
OpenStack, although perhaps we are not doing a great job of that.


 For example, application developers don't know what keystone, nova, or swift
 are, and these names will confuse them, creating a barrier for them to build
 great things against OpenStack. The language usage matters in a good SDK.

 Given the different target audiences there's room for overlap when trying to
 create great experiences for two segments.

I don't think there are two different target audiences, I think they
are the same. As someone 'building' OpenStack I want to consume it the
same way an application developer would.




 On Wed, Feb 12, 2014 at 10:32 AM, Jesse Noller jesse.nol...@rackspace.com
 wrote:


 On Feb 12, 2014, at 9:07 AM, Ed Leafe e...@openstack.org wrote:

  On Feb 11, 2014, at 3:30 PM, Jesse Noller jesse.nol...@rackspace.com
  wrote:
 
  I did propose in the original thread that we join efforts: in fact, we
  already have a fully functioning, unified SDK that *could* be checked in
  today - but without discussing the APIs, design and other items with the
  community at large, I don't think that that would be successful.
 
  [1]
  https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient
 
  While there are some differences, this library shares the same
  client/manager/resource class design and flow as the work we've done. IOW, 
  I
  think the approaches are more alike than different, which is encouraging.
 
 
  -- Ed Leafe

 The projects are not exclusive to one another - they are completely
  complementary.

 And as of yet, we have not solidified the client/manager/resource
 structure - we should talk about that at the meeting next week.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-12 Thread Sanchez, Cristian A
I¹m kind of new in Openstack.

+1 to this 

On 12/02/14 15:00, Sandy Walsh sandy.wa...@rackspace.com wrote:

At the Nova mid-cycle meetup we've been talking about the problem of
helping new contributors. It got into a discussion of karma, code
reviews, bug fixes and establishing a name for yourself before screaming
in a chat room can someone look at my branch. We want this experience
to be positive, but not everyone has time to hand-hold new people in the
dance.

The informal OpenStack motto is automate everything, so perhaps we
should consider some form of gamification [1] to help us? Can we offer
badges, quests and challenges to new users to lead them on the way to
being strong contributors?

Fixed your first bug badge
Updated the docs badge
Got your blueprint approved badge
Triaged a bug badge
Reviewed a branch badge
Contributed to 3 OpenStack projects badge
Fixed a Cells bug badge
Constructive in IRC badge
Freed the gate badge
Reverted branch from a core badge
etc. 

These can be strung together as Quests to lead people along the path.
It's more than karma and less sterile than stackalytics. The Foundation
could even promote the rising stars and highlight the leader board.

There are gamification-as-a-service offerings out there [2] as well as
Fedora Badges [3] (python and open source) that we may want to consider.

Thoughts?
-Sandy

[1] http://en.wikipedia.org/wiki/Gamification
[2] http://gamify.com/ (and many others)
[3] https://badges.fedoraproject.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Ben Nemec

On 2014-02-12 09:51, Jesse Noller wrote:

On Feb 12, 2014, at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:


Hi folks,

Stefano's post on how to make contributions to OpenStack easier [1]
finally stirred me into writing about something that vkmc and myself
have been doing on the side for a few months to help new contributors
to get involved.

Some of you may be aware of OpenHatch [2], a non-profit dedicated to
helping newcomers get started in open-source. About 6 months ago we
created a project page for Horizon [3], filled in a few high level
details, set ourselves up as mentors. Since then people have been
expressing interest in the project and a number of them got a patch
submitted and approved, a couple are sticking around (often helping out
with bug triaging, as confirming new bugs is one of the few tasks one
can help out with when only having limited time).

I can definitely sympathise with the comment in Stefano's article that
there are not enough easy tasks / simple issues for newcomers. There's
a lot to learn already when you're starting out (git, gerrit, python,
devstack, ...) and simple bugs are so hard to find - something that
will take a few minutes to an existing contributor will take much
longer for someone who's still figuring out where to get the code
from. Unfortunately it's not uncommon for existing contributors to take
on tasks marked as low-hanging-fruit because it's only 5 minutes (I
can understand this coming up to an RC but otherwise low-hanging-fruits
are often low priority nits that could wait a little bit longer). In
Horizon the low-hanging-fruits definitely get snatched up quickly and I
try to keep a list of typos or other low impact, trivial bugs that
would make good first tasks for people reaching out via OpenHatch.

OpenHatch doesn't spam, you get one email a week if one or more people
indicated they want to help. The initial effort is not time-consuming,
following OpenHatch's advice [4] you can refine a nice initial
contact email that helps you get people started and understand what
they are interested in quickly. I don't find the time commitment to be
too much so far, and it's incredibly gratifying to see someone
submitting their first patch after you answered a couple of questions
or helped resolve a hairy git issue. I'm happy to chat about it more,
if you're curious or have any questions.

In any case if you'd like to attract more contributors to your project,
and/or help newcomers get started in open-source, consider adding your
project to OpenHatch too!

Cheers,

Julie



+10

There’s been quite a bit of talk about this - but not necessarily on
the dev list. I think openhatch is great - mentorship programs in
general go a *long* way to help raise up and gain new people. Core
Python has had this issue for awhile, and many other large OSS
projects continue to suffer from it (“barrier to entry too high”).

Some random thoughts:

I’d like to see something like Solum’s Contributing page:

https://wiki.openstack.org/wiki/Solum/Contributing

Expanded a little, it could potentially be the recommended “intro to
contribution” guide -
https://wiki.openstack.org/wiki/How_To_Contribute is good, but a more
accessible version goes a long way. You want to show them how easy /
fast it is, not all of the options at once.


So, glancing over the Solum page, I don't see anything specific to Solum 
in there besides a few paths in examples.  It's basically a condensed 
version of https://wiki.openstack.org/wiki/GerritWorkflow sans a lot of 
the detail.  This might be a good thing to add as a QuickStart section 
on that wiki page (which is linked from the how to contribute page, 
although maybe not as prominently as it should be).  But, a lot of that 
detail is needed before a change is going to be accepted anyway.  I'm 
not sure giving a new contributor just the bare minimum is actually 
doing them any favors.  Without letting them know things like how to 
format a commit message and configure their ssh keys on Gerrit, they 
aren't going to be able to get a change accepted anyway and IMHO they're 
likely to just give up anyway (and possibly waste some reviewer time in 
the process).


That said, the GerritWorkflow page definitely needs some updates and 
maybe a little condensing of its own.  For example, do we really need 
distro-specific instructions for installing git-review?  How about just 
"Install pip through your distro's package management tool of choice and 
then run pip install git-review"?  I would hope anyone planning to 
contribute to OpenStack is capable of following that.


I guess my main point is that our contributing docs could use work, but 
there's also that "as simple as possible, but no simpler" thing to 
consider.  And I certainly don't like that we seem to be duplicating 
effort on this front.


/2 cents

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Neutron] Group Policy questions

2014-02-12 Thread Stephen Wong
Hi Carlos,


On Wed, Feb 12, 2014 at 9:37 AM, Carlos Gonçalves m...@cgoncalves.ptwrote:

 Hi,

 I've a couple of questions regarding the ongoing work on Neutron Group
 Policy proposed in [1].

 1. One of the described actions is redirection to a service chain. How do
 you see BPs [2] and [3] addressing service chaining? Will this BP implement
 its own service chaining mechanism enforcing traffic steering or will it
 make use of, and thus depending on, those BPs?


We plan to support both specifying a Neutron native service chain
(reference [2] from your email below) as the object to 'redirect' traffic
to, and setting an ordered chain of services specified
directly via the 'redirect' list. In the latter case we would need the
plugins to perform traffic steering across these services.
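
To illustrate the two forms, a hedged sketch of what the 'redirect' action
could look like is below; the field names are made up for clarity and are
not the proposed API:

    # Hypothetical illustration of the two 'redirect' forms discussed above.
    # Form 1: redirect to an existing Neutron service chain object.
    redirect_to_chain = {
        'type': 'redirect',
        'service_chain_id': 'c0ffee00-1234-5678-9abc-def012345678',
    }

    # Form 2: redirect through an ordered list of service instances; here the
    # plugin must steer traffic across the services in this order.
    redirect_through_services = {
        'type': 'redirect',
        'services': ['firewall-uuid', 'adc-uuid'],
    }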


2. In the second use case presented in the BP document, Tiered application
 with service insertion/chaining, do you consider that the two firewall
 entities can represent the same firewall instance or two running and
 independent instances? In case it's a shared instance, how would it support
 multiple chains? This is, HTTP(s) traffic from Inet group would be
 redirected to the firewall and then passes through the ADC; traffic from
 App group with destination DB group would also be redirected to the very
 same firewall instance, although to a different destination group as the
 chain differs.


We certainly do not restrict users from setting the same firewall
instance on two different 'redirect' lists - but at this point, since the
group-policy project has no plan to perform actual configuration of the
services, it is the users' responsibility to set the rules
correctly on the firewall instance such that the correct firewall rules
will be applied for traffic from group A to B as well as from group C to D.

- Stephen



 Thanks.

 Cheers,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
 [2]
 https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
 [3]
 https://blueprints.launchpad.net/neutron/+spec/nfv-and-network-service-chain-implementation


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-12 Thread Mandeep Dhami
I would be interested as well (UTC-8).

Regards,
Mandeep



On Wed, Feb 12, 2014 at 8:18 AM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 I'd be interested too.

 Thanks,
 Eugene.


 On Wed, Feb 12, 2014 at 7:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Paul,

 I'm interesting in joining the discussion.  UTC-7.  Any word on when
 this will take place?

 Carl

 On Mon, Feb 3, 2014 at 3:19 PM, Paul Michali p...@cisco.com wrote:
   I'd like to see if there is interest in discussing vendor plugins for L3
   services. The goal is to strive for consistency across vendor
   plugins/drivers and across service types (if possible/sensible). Some of
   this could/should apply to reference drivers as well. I'm thinking about
   these topics (based on questions I've had on VPNaaS - feel free to add to
   the list):
  
   - How to handle vendor specific validation (e.g. say a vendor has
     restrictions or added capabilities compared to the reference drivers
     for attributes).
   - Providing client feedback (e.g. should help and validation be extended
     to include vendor capabilities or should it be delegated to server
     reporting?)
   - Handling and reporting of errors to the user (e.g. how to indicate to
     the user that a failure has occurred establishing an IPSec tunnel in a
     device driver?)
   - Persistence of vendor specific information (e.g. should new tables be
     used or should/can existing reference tables be extended?).
   - Provider selection for resources (e.g. should we allow a --provider
     attribute on VPN IPSec policies to have vendor specific policies or
     should we rely on checks at connection creation for policy
     compatibility?)
   - Handling of multiple device drivers per vendor (e.g. have the service
     driver determine which device driver to send RPC requests to, or have
     the agent determine what driver requests should go to - say based on
     the router type)
  
   If you have an interest, please reply to me and include some days/times
   that would be good for you, and I'll send out a notice on the ML of the
   time/date and we can discuss.
 
   Looking forward to hearing from you!
 
  PCM (Paul Michali)
 
  MAIL  p...@cisco.com
  IRCpcm_  (irc.freenode.net)
  TW@pmichali
  GPG key4525ECC253E31A83
  Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-12 Thread Ken'ichi Ohmichi
Hi Chris,

Thanks for your info, I got it.
That is a quick response and enough for me:-)


Thanks
Ken'ichi Ohmichi

---
2014-02-13 Christopher Yeoh cbky...@gmail.com:
 Hi Kenichi,

 Ah yes, it was decided at the mid cycle meetup to delay the nova network
 changes
 until Juno. Sorry I should have told you sooner.

 Regards,

 Chris



 On Wed, Feb 12, 2014 at 1:35 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
 wrote:


 Hi Chris,

 Is it OK to postpone nova-network v3 APIs until Juno release?
 I guess that is because some nova-network v3 API patches were abandoned today.
 I'd just like to make it clear.


 Thanks
 Ken'ichi Ohmichi

 ---

  -Original Message-
  From: Christopher Yeoh [mailto:cbky...@gmail.com]
  Sent: Tuesday, February 04, 2014 8:37 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Putting nova-network support into
  the V3 API
 
  On Tue, 04 Feb 2014 11:37:29 +0100
  Thierry Carrez thie...@openstack.org wrote:
 
   Christopher Yeoh wrote:
On Tue, Feb 4, 2014 at 12:03 PM, Joe Gordon joe.gord...@gmail.com
mailto:joe.gord...@gmail.com wrote:
   
John and I discussed a third possibility:
   
nova-network v3 should be an extension, so the idea was to: Make
    nova-network API a subset of neutron (instead of them adopting our
API we adopt theirs). And we could release v3 without nova network
in Icehouse and add the nova-network extension in Juno.
   
This would actually be my preferred approach if we can get consensus
around this. It takes a lot of pressure off this late in the cycle
and there's less risk around having to live with a nova-network API
in V3 that still has some rough edges around it. I imagine it will
be quite a while before we can deprecate the V2 API so IMO going
one cycle without nova-network support is not a big thing.
  
   So user story would be, in icehouse release (nothing deprecated yet):
   v2 + nova-net: supported
   v2 + neutron: supported
   v3 + nova-net: n/a
   v3 + neutron: supported
  
   And for juno:
   v2 + nova-net: works, v2 could be deprecated
   v2 + neutron: works, v2 could be deprecated
   v3 + nova-net: works through extension, nova-net could be deprecated
 
  So to be clear the idea I think is that nova-net of v3 + nova-net
  would look like the neutron api. Eg nova-net API from v2 would look
  quite different to 'nova-net' API from v3. To minimise the transition
  pain for users on V3 moving to a neutron based cloud. Though those
  moving from v2 + nova-net to v3 + nova-net would have to cope with more
  changes.
 
   v3 + neutron: supported (encouraged future-proof combo)
  
   That doesn't sound too bad to me. Lets us finalize v3 core in icehouse
   and keeps a lot of simplification / deprecation options open for Juno,
   depending on how the nova-net vs. neutron story pans out then.
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] modify_image_attribute() in ec2_api is broken in Nova

2014-02-12 Thread Vishvananda Ishaya
This looks like a bug to me. It would be great if you could report it on 
launchpad.

Vish
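
As a side note for whoever picks this up: the mismatch in the quoted report
below is between the positional 'attribute'/'operation_type' arguments and
the nested AWS-style parameters that actually arrive. Purely as an
illustration (this is not the actual Nova code, and only launch_permission
is handled), one possible shape would be:

    # Illustrative sketch only -- not the actual Nova EC2 API handler.
    def modify_image_attribute(self, context, image_id, **kwargs):
        # kwargs arrives shaped like the args the reporter printed, e.g.
        #   {'launch_permission': {'add': {'1': {'group': 'all'}}}}
        launch_permission = kwargs.get('launch_permission')
        if launch_permission is None:
            raise NotImplementedError('only launch_permission is sketched here')
        to_add = launch_permission.get('add', {})
        to_remove = launch_permission.get('remove', {})
        # ... translate to_add / to_remove into image visibility updates ...
        return {'return': 'true'}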

On Feb 11, 2014, at 7:49 PM, wu jiang win...@gmail.com wrote:

 Hi all,
 
 I met some problems when testing the ec2_api 'modify_image_attribute()' in 
 Nova.
 I found the params sent to Nova do not match what the AWS API expects.
 I logged it in launchpad: https://bugs.launchpad.net/nova/+bug/1272844
 
 -
 
 1. Here is the definition part of modify_image_attribute(): 
 
 def modify_image_attribute(
 self, context, image_id, attribute, operation_type, **kwargs)
 
 2. And here is the example of it in AWS api:
 
 https://ec2.amazonaws.com/?Action=ModifyImageAttributeImageId=ami-61a54008LaunchPermission.Remove.1.UserId=
 
 -
 
 3. You can see the values do not match the definition in the Nova code.
 Therefore, Nova will raise the exception like this:
 
 TypeError: 'modify_image_attribute() takes exactly 5 non-keyword arguments 
 (3 given)'
 
 4. I printed out the params sent to Nova via eucaTools.
 The results also validate the conclusions above:
 
  args={'launch_permission': {'add': {'1': {'group': u'all'}}}, 'image_id': 
  u'ami-0004'} 
 
 --
 
 So, is this API correct? Should we modify it to match the format of the 
 AWS API?
 
 
 Best Wishes,
 wingwj
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-12 Thread Matt Riedemann



On 1/17/2014 8:34 AM, Matthew Treinish wrote:

On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:

On 01/16/2014 10:56 PM, Matthew Treinish wrote:

Hi everyone,

With some recent changes made to Tempest compatibility with nosetests is going
away. We've started using newer features that nose just doesn't support. One
example of this is that we've started using testscenarios and we're planning to
do this in more places moving forward.

So at Icehouse-3 I'm planning to push the patch out to remove nosetests from the
requirements list and all the workarounds and references to nose will be pulled
out of the tree. Tempest will also start raising an unsupported exception when
you try to run it with nose so that there isn't any confusion on this moving
forward. We talked about doing this at summit briefly and I've brought it up a
couple of times before, but I believe it is time to do this now. I feel for
tempest to move forward we need to do this now so that there isn't any ambiguity
as we add even more features and new types of testing.

I'm with you up to here.


Now, this will have implications for people running tempest with python 2.6
since up until now we've set nosetests. There is a workaround for getting
tempest to run with python 2.6 and testr see:

https://review.openstack.org/#/c/59007/1/README.rst

but essentially this means that when nose is marked as unsupported on tempest
python 2.6 will also be unsupported by Tempest. (which honestly it basically has
been for while now just we've gone without making it official)

The way we handle different runners/os can be categorized as tested
in gate, unsupported (should work, possibly some hacks needed),
and hostile. At present, both nose and py2.6 I would say are in
the unsupported category. The title of this message and the content
up to here says we are moving nose to the hostile category. With
only 2 months to feature freeze I see no justification in moving
py2.6 to the hostile category. I don't see what new testing features
scheduled for the next two months will be enabled by saying that
tempest cannot and will not run on 2.6. It has been agreed I think
by all projects that py2.6 will be dropped in J. It is OK that py2.6
will require some hacks to work and if in the next few months it
needs a few more then that is ok. If I am missing another connection
between the py2.6 and nose issues, please explain.



So honestly we're already at this point in tempest. Nose really just doesn't
work with tempest, and we're adding more features to tempest, your negative test
generator being one of them, that interfere further with nose. I've seen several


I disagree here, my team is running Tempest API, CLI and scenario tests 
every day with nose on RHEL 6 with minimal issues.  I had to work around 
the negative test discovery by simply sed'ing that out of the tests 
before running it, but that's acceptable to me until we can start 
testing on RHEL 7.  Otherwise I'm completely OK with saying py26 isn't 
really supported and isn't used in the gate, and it's a buyer beware 
situation to make it work, which includes pushing up trivial patches to 
make it work (which I did a few of last week, and they were small syntax 
changes or usages of testtools).


I don't understand how the core projects can be running unit tests in 
the gate on py26 but our functional integration project is going to 
actively go out and make it harder to run Tempest with py26 - that sucks.


If we really want to move the test project away from py26, let's make 
the concerted effort to get the core projects to move with it.


And FWIW, I tried the discover.py patch with unittest2 and testscenarios 
last week and either I botched it, it's not documented properly on how 
to apply it, or I screwed something up, but it didn't work for me, so 
I'm not convinced that's the workaround.


What's the other option for running Tempest on py26 (keeping RHEL 6 in 
mind)?  Using tox with testr and pip?  I'm doing this all single-node.



patches this cycle that attempted to introduce incorrect behavior while trying
to fix compatibility with nose. That's why I think we need a clear message on
this sooner than later. Which is why I'm proposing actively raising an error
when things are run with nose upfront so there isn't any illusion that things
are expected to work.

This doesn't necessarily mean we're moving python 2.6 to the hostile category.
Nose support is independent of python 2.6 support. Py26 I would still consider
to be unsupported, the issue is that the hack to make py26 work is outside of
tempest. This is why we've recommended that people using python 2.6 run with
nose, which really is no longer an option. Attila's abandoned patch that I
linked above points to this bug with a patch to discover which is
needed to get python 2.6 working with tempest and testr:

https://code.google.com/p/unittest-ext/issues/detail?id=79


-Matt Treinish

___

Re: [openstack-dev] Gamification and on-boarding ...

2014-02-12 Thread Ben Nemec

On 2014-02-12 12:00, Sandy Walsh wrote:

At the Nova mid-cycle meetup we've been talking about the problem of
helping new contributors. It got into a discussion of karma, code
reviews, bug fixes and establishing a name for yourself before
screaming in a chat room can someone look at my branch. We want this
experience to be positive, but not everyone has time to hand-hold new
people in the dance.

The informal OpenStack motto is automate everything, so perhaps we
should consider some form of gamification [1] to help us? Can we offer
badges, quests and challenges to new users to lead them on the way to
being strong contributors?

Fixed your first bug badge
Updated the docs badge
Got your blueprint approved badge
Triaged a bug badge
Reviewed a branch badge
Contributed to 3 OpenStack projects badge
Fixed a Cells bug badge
Constructive in IRC badge
Freed the gate badge
Reverted branch from a core badge
etc.

These can be strung together as Quests to lead people along the path.
It's more than karma and less sterile than stackalytics. The
Foundation could even promote the rising stars and highlight the
leader board.

There are gamification-as-a-service offerings out there [2] as well as
Fedora Badges [3] (python and open source) that we may want to
consider.

Thoughts?
-Sandy

[1] http://en.wikipedia.org/wiki/Gamification
[2] http://gamify.com/ (and many others)
[3] https://badges.fedoraproject.org/


+1 from me, if this can be done without a huge amount of ongoing 
maintenance for someone.  I will admit that climbing the reviewstats 
leaderboard is good motivation for those days when I just don't feel 
like reviewing.  Ditto for Launchpad karma. :-)


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 7 support

2014-02-12 Thread Stephen Balukoff
Howdy, Sam!

Thanks also for your speedy response.  Comments / additional questions are
in-line below:


On Wed, Feb 12, 2014 at 2:51 AM, Samuel Bercovici samu...@radware.comwrote:

  Sam We have reviewed this based on capabilities that we believe could
 be supported by HAProxy and all commercial vendors.

 Sam What is missing?

Nothing major that I could see--  I was mostly curious where the discussion
took place and whether it was documented anywhere. (Again, I'm still in the
process of catching up with the goings on of this project, and understand
LBaaS features of Neutron have been under development for a couple years
now. Doing forensics to find out this sort of thing is often tedious and
fruitless-- it's usually quicker just to ask.)



- Since the L7Rule object contains a position indicator, I assume,
therefore, that a given L7Rule object cannot exist in more than one
L7Policy, correct?  Also, I assume that the first L7Rule added to an
L7Policy becomes the rule at position 0 and that subsequent rules are added
with incrementing positions. This is unless the position is specified, in
which case, the rule is inserted into the policy list at the position
specified, and all subsequent rule position indicators get incremented.
Correct?

 Sam Correct.

- Shouldn't the L7Rule object also have a l7PolicyID attribute?

 Sam It does.

Excellent! I've updated the wiki page to reflect this. :)  I shall also be
endeavoring to produce an updated diagram, since I think a picture can save
a lot of words here, eh. :)



- It is unclear from the proposal whether a given VIP can have
multiple L7VipPolicyAssociation objects associated with it. If not,
then we've not really solved the problem of multiple back-end pools per
VIP. If so, then the L7VipPolicyAssociation object is missing its own
'position' attribute (because order matters here, too!).

  Sam Correct, the L7VipPolicyAssociation should have a “position”
  attribute. The way to implement it is under consideration.

Cool. Are the solutions being considered:

1. Make it an additional integer attribute of
the L7VIPPolicyAssociation object. (The side effect of this is that any
given L7VIPPolicyAssociation object can only be associated with one VIP.)

2. Create an additional association object to associate the
L7VIPPolicyAssociation with a VIP (ie. a join table of some kind), which
carries this position attribute and would allow a given
L7VIPPolicyAssociation to be associated with multiple VIPs.

FWIW, I think of the above, only number 1 really makes sense. The point of
the L7VIPPolicyAssociation is to associate a VIP and Pool, using the rules
in a L7Policy. All that L7VIPPolicyAssociation is missing is a position
attribute.
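
For clarity, here is a rough sketch of the objects as I currently understand
them, with that extra position attribute added to the association. This is
only an illustration of the model under discussion, not proposed code, and
the attribute names are approximate:

    # Rough illustration of the model under discussion -- not proposed code.
    class L7Rule(object):
        def __init__(self, rule_id, l7_policy_id, position, rule_type, value):
            self.id = rule_id
            self.l7_policy_id = l7_policy_id   # a rule belongs to one policy
            self.position = position           # evaluation order in the policy
            self.type = rule_type              # e.g. a header or path match
            self.value = value

    class L7Policy(object):
        def __init__(self, policy_id, rules=None):
            self.id = policy_id
            self.rules = rules or []           # ordered by L7Rule.position

    class L7VipPolicyAssociation(object):
        """Ties a VIP to a pool via a policy; its own position attribute lets
        multiple associations on one VIP be evaluated in a defined order."""
        def __init__(self, vip_id, pool_id, l7_policy_id, position, action):
            self.vip_id = vip_id
            self.pool_id = pool_id
            self.l7_policy_id = l7_policy_id
            self.position = position
            self.action = action               # e.g. 'forward' (illustrative)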

As an aside, when you say "under consideration", exactly how / where is it
being considered?  (Again, sorry for my newbishness-- just trying to figure
out how this body makes these kinds of decisions.)


- I assume any given pool can have multiple L7VipPolicyAssociations.
If this is not the case, then a given single pool can only be associated
with one VIP.

 Sam Nope, this is any-to-any. Pools can be associated with multiple VIPs.

Thanks for the clarification!


- There is currently no way to set a 'default' back-end pool in this
proposal. Perhaps this could be solved by:


 - Make 'DEFAULT' one of the actions possible for a
   L7VipPolicyAssociation
   - Any L7VipPolicyAssociation with an action of 'DEFAULT' would have
   a null position and null L7PolicyId.
   - We would need to enforce having only one L7VipPolicyAssociation
   object with a 'DEFAULT' action per VIP.

  Sam the “default” behavior is governed by the current VIP -> Pool
 relationship. This is the canonical approach that could also be addressed
 by LBaaS drivers that do not support L7 content switching.

 Sam We will fix the VIP <-> Pool limitation (for Juno) by removing the
 Pool -> VIP reference and only leaving the VIP -> Pool reference, thus
 allowing the Pool to be used by multiple VIPs. This was originally planned
 for Icehouse but will be handled for Juno.




Aah--  Ok, I thought that based on recent discussions in the IRC meetings
that the L7 features of LBaaS were very unlikely to make it into Icehouse,
and therefore any discussion of them was essentially a discussion about
what's going to go into Juno.  If this isn't the case, could we get a
little more clarification of exactly what features are still being
considered for Icehouse?

In any case, you're right that relying on the legacy Pool -> VIP association
to define the 'Default backend' configuration means that a given pool can
be the default backend for only one VIP. I therefore think it makes a whole
lot of sense to scrap that entirely when the L7 features are introduced.

The model here:
https://wiki.openstack.org/w/images/e/e1/LBaaS_Core_Resource_Model_Proposal.png

...doesn't seem to reflect a PoolID attribute, and therefore it is 

[openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-12 Thread Maru Newby
At the last 2 summits, I've suggested that API tests could be maintained in the 
Neutron tree and reused by Tempest.  I've finally submitted some patches that 
demonstrate this concept:

https://review.openstack.org/#/c/72585/  (implements a unit test for the 
lifecycle of the network resource)
https://review.openstack.org/#/c/72588/  (runs the test with tempest rest 
clients)

My hope is to make API test maintenance a responsibility of the Neutron team.  
The API compatibility of each Neutron plugin has to be validated by Neutron 
tests anyway, and if the tests are structured as I am proposing, Tempest can 
reuse those efforts rather than duplicating them.
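
For anyone who has not opened the reviews yet, the tests in question are
roughly of the following shape. This is a simplified, hypothetical sketch of
a network lifecycle check, not the code under review; the point is that
self.client can be either the in-tree plugin API wrapper or a Tempest REST
client, which is exactly the reuse being proposed:

    # Simplified, hypothetical sketch -- the real patches differ.
    import unittest

    class NetworkLifecycleTest(unittest.TestCase):

        client = None  # injected by the harness (plugin API or Tempest REST)

        def test_network_lifecycle(self):
            net = self.client.create_network(name='lifecycle-test')['network']

            shown = self.client.show_network(net['id'])['network']
            self.assertEqual('lifecycle-test', shown['name'])

            self.client.update_network(net['id'], name='renamed')
            shown = self.client.show_network(net['id'])['network']
            self.assertEqual('renamed', shown['name'])

            self.client.delete_network(net['id'])
            remaining = self.client.list_networks()['networks']
            self.assertNotIn(net['id'], [n['id'] for n in remaining])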

I've added this topic to this week's agenda, and I would really appreciate it if 
interested parties would take a look at the patches in question to prepare 
themselves to participate in the discussion.


m.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Chris Buccella

On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

Hi,
We’ve modified the openstack-common.conf file for climate to remove the rpc and 
notifier modules. But when update.py is executed, the notifier and rpc 
modules are still copied. Do you know what could be wrong?
Here you can see a log showing this situation: 
http://paste.openstack.org/show/64674/


I think it's because you're pulling in middleware. The audit module 
imports notifier, so notifier gets dragged along as a dependency.



-Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Dina Belova
May we use something like:

module=notifier.api

to copy only the notifier code needed by the middleware?


On Wed, Feb 12, 2014 at 10:55 PM, Chris Buccella 
bucce...@linux.vnet.ibm.com wrote:

 On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

 Hi,
 We've modified the openstack-commons.conf file for climate remove rpc and
 notifier modules. But when the update.py  is executed, the notifier and rpc
 modules are still copied. Do you know what could be wrong?
 Here you can see a log showing this situation:
 http://paste.openstack.org/show/64674/


 I think it's because you're pulling in middleware. The audit module
 imports notifier, so notifier gets dragged along as a dependency.


 -Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Dina Belova
Because it seems that such a thing is not working:
http://paste.openstack.org/show/64740/
It still downloads all directories for notifier and rpc, although only
module=notifier.api is in the conf file.

Thanks
Dina


On Wed, Feb 12, 2014 at 11:13 PM, Dina Belova dbel...@mirantis.com wrote:

 May we use smth like:

 module=notifier.api

 only to copy needed for middleware notifier?


 On Wed, Feb 12, 2014 at 10:55 PM, Chris Buccella 
 bucce...@linux.vnet.ibm.com wrote:

 On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

 Hi,
 We've modified the openstack-commons.conf file for climate remove rpc
 and notifier modules. But when the update.py  is executed, the notifier and
 rpc modules are still copied. Do you know what could be wrong?
 Here you can see a log showing this situation:
 http://paste.openstack.org/show/64674/


 I think it's because you're pulling in middleware. The audit module
 imports notifier, so notifier gets dragged along as a dependency.


 -Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Doug Hellmann
The update script tries to be smart about the dependencies between modules,
based on cross-module imports.

It looks like the notifier middleware will need to move out of the
incubator, or be updated to use oslo.messaging or the rpc code from the
incubator.

Doug
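
For anyone wondering why the extra directories keep coming along, the
behaviour described above boils down to something like the sketch below.
This is a simplified illustration of import scanning, not the actual
update.py code:

    # Simplified sketch of dependency expansion via import scanning --
    # not the real oslo-incubator update.py.
    import ast
    import os

    def common_imports(path):
        """Return openstack.common.<module> names imported by one file."""
        with open(path) as f:
            tree = ast.parse(f.read())
        found = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                if node.module.startswith('openstack.common.'):
                    found.add(node.module.split('.')[2])
        return found

    def expand(requested, incubator_root):
        """Start with the modules listed in the conf file and keep adding any
        openstack.common module they import until nothing new appears."""
        needed = set(requested)
        while True:
            before = len(needed)
            for module in list(needed):
                module_dir = os.path.join(incubator_root,
                                          'openstack/common', module)
                for dirpath, _dirs, files in os.walk(module_dir):
                    for name in files:
                        if name.endswith('.py'):
                            needed |= common_imports(
                                os.path.join(dirpath, name))
            if len(needed) == before:
                return needed

So listing middleware drags in notifier because the audit module imports it,
and notifier presumably drags in rpc the same way.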


On Wed, Feb 12, 2014 at 2:23 PM, Dina Belova dbel...@mirantis.com wrote:

 Cause it seems that such thing is not working:
 http://paste.openstack.org/show/64740/
 It still downloads all directories for notifier and roc, although is only
 module=notifier.api in conf file.

 Thanks
 Dina


 On Wed, Feb 12, 2014 at 11:13 PM, Dina Belova dbel...@mirantis.comwrote:

 May we use smth like:

 module=notifier.api

 only to copy needed for middleware notifier?


 On Wed, Feb 12, 2014 at 10:55 PM, Chris Buccella 
 bucce...@linux.vnet.ibm.com wrote:

 On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

 Hi,
 We've modified the openstack-commons.conf file for climate remove rpc
 and notifier modules. But when the update.py  is executed, the notifier and
 rpc modules are still copied. Do you know what could be wrong?
 Here you can see a log showing this situation:
 http://paste.openstack.org/show/64674/


 I think it's because you're pulling in middleware. The audit module
 imports notifier, so notifier gets dragged along as a dependency.


 -Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [TripleO] Goal setting // progress towards integration

2014-02-12 Thread Devananda van der Veen
Hello ironic developers, tripleo folks, and interested third-parties!

The Icehouse graduation deadline is fast approaching and I think we all
need to take a good look at where Ironic is, compare that to the
requirements that have been laid out by the TC [1] for all Integrated
projects, and prioritize our work aggressively.


We have four areas inside of Ironic that need significant movement in order
to graduate this cycle:

* Critical  High priority bugs
   There are enough of these that I'm not comfortable releasing with the
current state of things. I think we can address all of these bugs in time
if we focus on getting to a minimum-viable-product, rather than on
engineering a perfect solution. (Yes, I'm talking about myself too...) We
have patches in flight for many of them, but the average turn-around time
for our review queue is, in my opinion, higher than it should be [3] given
the size of our team. If we don't collectively speed up the review queue,
it's going to be very difficult to land everything we need to.

* Missing functionality
  We need to match the feature set of Nova baremetal. We're pretty good on
that front, but I'm calling out two blueprints and I'll explain why.

  https://blueprints.launchpad.net/ironic/+spec/serial-console-access
  I believe some folks are using this, even though the TripleO team is not,
and I personally never used the feature in Nova baremetal.
  IBM contributed code to implement this in Ironic, but the patch #64100
has been abandoned for some time. We need to revive this and continue the
work.
  https://blueprints.launchpad.net/ironic/+spec/preserve-ephemeral
  The TripleO team added this feature to Nova baremetal during the current
cycle;  even though it wasn't on our plans at the start of Icehouse, we
need to do it to keep feature parity.

* Testing  QA
  Ironic has API tests in tempest, and these run in Ironic's gate. However,
there are no functional tests today (iow, nothing testing that a PXE deploy
actually works). These are a graduation requirement. Aleksandr has been
working on this, but it's a long way from done. Anyone want to jump in?

* Documentation
  We have CLI and API docs, but we have no installation or deployer docs.
These are also a graduation requirement. No one has stepped up to own this
effort yet.


Additionally, the TC has laid out some (draft) graduation requirements [2]
for projects that duplicate functionality in a pre-existing project -- that
means us, since we're supplanting the old Nova baremetal driver. For
reference, here's the pertinent snippet from the draft:

[T]he new project must reach a level of functionality and maturity such
 that we are ready to deprecate the old code ... including details for how
 users will be able to migrate from the old to the new.



We have two blueprints up that pertain specifically to this requirement:

https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver

The nova.virt.ironic driver is especially important as it represents Nova's
ability to perform the same functionality that is available today with the
baremetal driver. The Project meeting yesterday [4] made it clear that
getting the nova.virt.ironic driver landed is a pre-condition of Ironic's
graduation. This driver is well underway, but still has much work to be
done. We've started to see a few reviews from the Nova team, and have begun
splitting up the patch into smaller, more reviewable chunks.

The blueprint is set Low priority. I know the Nova core team is swamped
with review work, but we'll need to start getting regular feedback on this.
Russell suggested that this is suitable for an FFE so we could continue
working on it after I3, which is great - we'll need the extra time. Even
so, getting a little early feedback from nova-core would be very helpful in
case there is major work that they think we need to do.

https://blueprints.launchpad.net/ironic/+spec/migration-from-nova

We will need to provide a data migration tool for existing Nova baremetal
deployments, along with usage documentation. Roman is working on this, but
there still isn't any code up, and I'm getting a bit nervous.


That's it for the graduation-critical tasks, but that's not all the work
currently in our review queue or targeted to I3 ...

We also have third-party drivers coming in. Both SeaMicro and HP have
blueprints up for a vendor driver with power and deploy interfaces.
SeaMicro has code already up; HP has promised code very soon. There's also
a blueprint to enable PXE booting of windows images. None of these are
required for graduation, and even though I want to encourage vendors -- and
I think the functionality these blueprints describe is very valuable to the
project -- I question whether we'll have the time and bandwidth to review
them and ensure they're documented. I am not going to set a code proposal
deadline as some other projects have done, in part because I expect a lot
of development to happen at the TripleO sprint (March 3 - 7) and 

Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Dina Belova
I think the variant using oslo.messaging is nice here - because right now it
looks strange when openstack.common.middleware asks to use
openstack.common.notifier.api, which leads to two effectively unused dirs
being downloaded by the update.py script.

Anyway, we should go further and begin using oslo.messaging where it's
possible now.

Dina


On Wed, Feb 12, 2014 at 11:34 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

 The update script tries to be smart about the dependencies between
 modules, based on cross-module imports.

 It looks like the notifier middleware will need to move out of the
 incubator, or be updated to use oslo.messaging or the rpc code from the
 incubator.

 Doug


 On Wed, Feb 12, 2014 at 2:23 PM, Dina Belova dbel...@mirantis.com wrote:

 Cause it seems that such thing is not working:
 http://paste.openstack.org/show/64740/
 It still downloads all directories for notifier and roc, although is only
 module=notifier.api in conf file.

 Thanks
 Dina


 On Wed, Feb 12, 2014 at 11:13 PM, Dina Belova dbel...@mirantis.comwrote:

 May we use smth like:

 module=notifier.api

 only to copy needed for middleware notifier?


 On Wed, Feb 12, 2014 at 10:55 PM, Chris Buccella 
 bucce...@linux.vnet.ibm.com wrote:

 On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

 Hi,
 We've modified the openstack-commons.conf file for climate remove rpc
 and notifier modules. But when the update.py  is executed, the notifier 
 and
 rpc modules are still copied. Do you know what could be wrong?
 Here you can see a log showing this situation:
 http://paste.openstack.org/show/64674/


 I think it's because you're pulling in middleware. The audit module
 imports notifier, so notifier gets dragged along as a dependency.


 -Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] update.py copies more modules than expected

2014-02-12 Thread Sylvain Bauza
I would vote for option #2: move to oslo.messaging, that could be a quick
win.


2014-02-12 20:34 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:

 The update script tries to be smart about the dependencies between
 modules, based on cross-module imports.

 It looks like the notifier middleware will need to move out of the
 incubator, or be updated to use oslo.messaging or the rpc code from the
 incubator.

 Doug


 On Wed, Feb 12, 2014 at 2:23 PM, Dina Belova dbel...@mirantis.com wrote:

 Cause it seems that such thing is not working:
 http://paste.openstack.org/show/64740/
 It still downloads all directories for notifier and roc, although is only
 module=notifier.api in conf file.

 Thanks
 Dina


 On Wed, Feb 12, 2014 at 11:13 PM, Dina Belova dbel...@mirantis.comwrote:

 May we use smth like:

 module=notifier.api

 only to copy needed for middleware notifier?


 On Wed, Feb 12, 2014 at 10:55 PM, Chris Buccella 
 bucce...@linux.vnet.ibm.com wrote:

 On 02/12/2014 11:50 AM, Sanchez, Cristian A wrote:

 Hi,
 We've modified the openstack-commons.conf file for climate remove rpc
 and notifier modules. But when the update.py  is executed, the notifier 
 and
 rpc modules are still copied. Do you know what could be wrong?
 Here you can see a log showing this situation:
 http://paste.openstack.org/show/64674/


 I think it's because you're pulling in middleware. The audit module
 imports notifier, so notifier gets dragged along as a dependency.


 -Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-12 Thread Matthew Treinish
On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:
 
 
 On 1/17/2014 8:34 AM, Matthew Treinish wrote:
 On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
 On 01/16/2014 10:56 PM, Matthew Treinish wrote:
 Hi everyone,
 
 With some recent changes made to Tempest compatibility with nosetests is 
 going
 away. We've started using newer features that nose just doesn't support. 
 One
 example of this is that we've started using testscenarios and we're 
 planning to
 do this in more places moving forward.
 
 So at Icehouse-3 I'm planning to push the patch out to remove nosetests 
 from the
 requirements list and all the workarounds and references to nose will be 
 pulled
 out of the tree. Tempest will also start raising an unsupported exception 
 when
 you try to run it with nose so that there isn't any confusion on this 
 moving
 forward. We talked about doing this at summit briefly and I've brought it 
 up a
 couple of times before, but I believe it is time to do this now. I feel for
 tempest to move forward we need to do this now so that there isn't any 
 ambiguity
 as we add even more features and new types of testing.
 I'm with you up to here.
 
 Now, this will have implications for people running tempest with python 2.6
 since up until now we've set nosetests. There is a workaround for getting
 tempest to run with python 2.6 and testr see:
 
 https://review.openstack.org/#/c/59007/1/README.rst
 
 but essentially this means that when nose is marked as unsupported on 
 tempest
 python 2.6 will also be unsupported by Tempest. (which honestly it 
 basically has
 been for while now just we've gone without making it official)
 The way we handle different runners/os can be categorized as tested
 in gate, unsupported (should work, possibly some hacks needed),
 and hostile. At present, both nose and py2.6 I would say are in
 the unsupported category. The title of this message and the content
 up to here says we are moving nose to the hostile category. With
 only 2 months to feature freeze I see no justification in moving
 py2.6 to the hostile category. I don't see what new testing features
 scheduled for the next two months will be enabled by saying that
 tempest cannot and will not run on 2.6. It has been agreed I think
 by all projects that py2.6 will be dropped in J. It is OK that py2.6
 will require some hacks to work and if in the next few months it
 needs a few more then that is ok. If I am missing another connection
 between the py2.6 and nose issues, please explain.
 
 
 So honestly we're already at this point in tempest. Nose really just doesn't
 work with tempest, and we're adding more features to tempest, your negative 
 test
 generator being one of them, that interfere further with nose. I've seen 
 several
 
 I disagree here, my team is running Tempest API, CLI and scenario
 tests every day with nose on RHEL 6 with minimal issues.  I had to
 workaround the negative test discovery by simply sed'ing that out of
 the tests before running it, but that's acceptable to me until we
 can start testing on RHEL 7.  Otherwise I'm completely OK with
 saying py26 isn't really supported and isn't used in the gate, and
 it's a buyer beware situation to make it work, which includes
 pushing up trivial patches to make it work (which I did a few of
 last week, and they were small syntax changes or usages of
 testtools).
 
 I don't understand how the core projects can be running unit tests
 in the gate on py26 but our functional integration project is going
 to actively go out and make it harder to run Tempest with py26, that
 sucks.
 
 If we really want to move the test project away from py26, let's
 make the concerted effort to get the core projects to move with it.

So as I said before the python 2.6 story for tempest remains the same after this
change. The only thing that we'll be doing is actively preventing nose from
working with tempest.

 
 And FWIW, I tried the discover.py patch with unittest2 and
 testscenarios last week and either I botched it, it's not documented
 properly on how to apply it, or I screwed something up, but it
 didn't work for me, so I'm not convinced that's the workaround.
 
 What's the other option for running Tempest on py26 (keeping RHEL 6
 in mind)?  Using tox with testr and pip?  I'm doing this all
 single-node.

Yes, that is what the discover patch is used to enable. By disabling nose the
only path to run tempest with py2.6 is to use testr. (which is what it always
should have been)

Attila confirmed it was working here:
http://fpaste.org/76651/32143139/
in that example he applies 2 patches; the second one is currently in the gate for
tempest. (https://review.openstack.org/#/c/72388/ ) So all that needs to be done
is to apply that discover patch:

https://code.google.com/p/unittest-ext/issues/detail?id=79

(which I linked to before)

Then tempest should run more or less the same between 2.7 and 2.6. (The only
difference I've seen is in how skips are handled)
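
As a rough sketch of the sequence (the paths and the patch file name below are
only illustrative, adjust for your environment):

# apply the discover patch from the issue above to the installed
# discover module used by unittest2 on python 2.6
patch /usr/lib/python2.6/site-packages/discover.py < discover-issue79.patch

# then drive the suite through testr instead of nose
cd /opt/stack/tempest    # wherever your tempest checkout lives
testr init               # only needed once per checkout
testr run --parallel tempest.api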

 
 

[openstack-dev] help the oslo team help you

2014-02-12 Thread Doug Hellmann
If you have a change in your project that is blocked waiting for a patch to
land in oslo (in the incubator, or any of the libraries we manage) *please*
either open a blueprint or mark the associated bug as also affecting the
relevant oslo project, then let me know about it so I can put it on our
review priority list. We have a lot going on in oslo right now, but will do
our best to prioritize reviews that affect features landing in other
projects -- if you let us know about them.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] Goal setting // progress towards integration

2014-02-12 Thread Michael Still
On Wed, Feb 12, 2014 at 12:40 PM, Devananda van der Veen
devananda@gmail.com wrote:

[snip]

 Additionally, the TC has laid out some (draft) graduation requirements [2]
 for projects that duplicate functionality in a pre-existing project -- that
 means us, since we're supplanting the old Nova baremetal driver. For
 reference, here's the pertinent snippet from the draft:

 [T]he new project must reach a level of functionality and maturity such
 that we are ready to deprecate the old code ... including details for how
 users will be able to migrate from the old to the new.

 We have two blueprints up that pertain specifically to this requirement:

 https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver

 The nova.virt.ironic driver is especially important as it represents Nova's
 ability to perform the same functionality that is available today with the
 baremetal driver. The Project meeting yesterday [4] made it clear that
 getting the nova.virt.ironic driver landed is a pre-condition of Ironic's
 graduation. This driver is well underway, but still has much work to be
 done. We've started to see a few reviews from the Nova team, and have begun
 splitting up the patch into smaller, more reviewable chunks.

 The blueprint is set Low priority. I know the Nova core team is swamped
 with review work, but we'll need to start getting regular feedback on this.
 Russell suggested that this is suitable for an FFE so we could continue
 working on it after I3, which is great - we'll need the extra time. Even so,
 getting a little early feedback from nova-core would be very helpful in case
 there is major work that they think we need to do.

I would also like to see CI (either third party or in the gate) for
the nova driver before merging it. There's a chicken and egg problem
here if it's in the gate, but I'd like to see it at least proposed as a
review.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] Goal setting // progress towards integration

2014-02-12 Thread Matt Wagner
heavy snipping ensues
On Wed Feb 12 14:40:27 2014, Devananda van der Veen wrote:

   https://blueprints.launchpad.net/ironic/+spec/serial-console-access
   I believe some folks are using this, even though the TripleO team is
 not, and I personally never used the feature in Nova baremetal.
   IBM contributed code to implement this in Ironic, but the patch
 #64100 has been abandoned for some time. We need to revive this and
 continue the work.

I have started to look at this. However, I am pretty new to the project
and this looks fairly involved. I'm more than willing to jump in and
give it my all, but I don't want to formally claim this whole task
without being confident that I can complete it in time (which I am not).

If someone more experienced picks this up, I will switch to helping
them; until then I will carry on with trying to test + fix up the
existing patch.


  But I would like the expectation to be clearly set -- the core team
 has a lot on its plate, and vendor features aren't part of our
 critical path right now, so don't expect them to get much attention
 from us unless we finish everything else.

The core team is pretty small. Do you think growing the core team to
include some of the other regular contributors would help to ease the
burden?


 There are also several reviews up just to do code cleanup and
 refactoring of some base classes. I think we should land these
 immediately and deal with any nits in subsequent patches to reduce the
 queue backlog and speed up development.

Big +1 from me on this. (I'd love to see this as more common practice in
the community overall, IMHO. Don't merge anything broken, but address
the "it would be neat if..." tasks in new patches instead of repeatedly
-1'ing good patches.)

Kind regards,

-- 
Matt Wagner
Software Engineer, Red Hat



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-12 Thread Sean Dague
On 02/12/2014 01:48 PM, Maru Newby wrote:
 At the last 2 summits, I've suggested that API tests could be maintained in 
 the Neutron tree and reused by Tempest.  I've finally submitted some patches 
 that demonstrate this concept:
 
 https://review.openstack.org/#/c/72585/  (implements a unit test for the 
 lifecycle of the network resource)
 https://review.openstack.org/#/c/72588/  (runs the test with tempest rest 
 clients)
 
 My hope is to make API test maintenance a responsibility of the Neutron team. 
  The API compatibility of each Neutron plugin has to be validated by Neutron 
 tests anyway, and if the tests are structured as I am proposing, Tempest can 
 reuse those efforts rather than duplicating them.
 
 I've added this topic to this week's agenda, and I would really appreciate it 
if interested parties would take a look at the patches in question to prepare 
 themselves to participate in the discussion.
 
 
 m.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Realistically, having API tests duplicated in the Tempest tree is a
feature, not a bug.

tempest/api is there for double-entry book-keeping, and it has been
really effective at preventing accidental breakage of our APIs (which
used to happen all the time), so I don't think putting API testing in
neutron obviates that.

Today most projects (excepting swift... which I'll get to in a second)
think about testing in 2 ways. Unit tests driven by tox in a venv, and
tempest tests in a devstack environment.

Because of this dualism, people have put increasingly more awkward live
environments in the tox unit test jobs. For instance, IIRC, neutron
actually starts a full wsgi stack to test every single in-tree plugin,
instead of just testing the driver call down path.

Swift did something a little different. They have 3 classes of things.
Unit tests, Tempest Tests, and Swift Functional tests. The Swift
functional tests run in a devstack, but not with Tempest. Instead they
run their own suite.

This test suite only runs on the swift project, not on other projects,
and it's something they can use to test functional scenarios that
wouldn't fit in tempest, and extend to their heart's content, not having
to worry about the interaction with other components, because it's only
pushing on swift.
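
In rough terms (the tox environment names here are made up, every project
picks its own), that split looks something like:

# unit tests: run in a venv, no real services needed
tox -e py27
# project-local functional tests: run against a live devstack
tox -e functional
# the integrated tempest api/scenario jobs then cover the cross-project cases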

Going down this third path with neutron testing is I think very
valuable. Honestly, I'd like to encourage more projects to do this as
well. It would give them a greater component assuredness before entering
the integrated gate.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 7 support

2014-02-12 Thread Samuel Bercovici
Hi Stephen,

See embedded response bellow.

Regards,
-Sam.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Wednesday, February 12, 2014 8:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Layer 
7 support

Howdy, Sam!

Thanks also for your speedy response.  Comments / additional questions are 
in-line below:

On Wed, Feb 12, 2014 at 2:51 AM, Samuel Bercovici 
samu...@radware.com wrote:
Sam We have reviewed this based on capabilities that we believe could be 
supported by HA proxy and all commercial vendors.
Sam What is missing?
Nothing major that I could see--  I was mostly curious where the discussion 
took place and whether it was documented anywhere. (Again, I'm still in the 
process of catching up with the goings on of this project, and understand LBaaS 
features of Neutron have been under development for a couple years now. Doing 
forensics to find out this sort of thing is often tedious and fruitless-- it's 
usually quicker just to ask.)


  *   Since the L7Rule object contains a position indicator, I assume, 
therefore, that a given L7Rule object cannot exist in more than one L7Policy, 
correct?  Also, I assume that the first L7Rule added to an L7Policy becomes the 
rule at position 0 and that subsequent rules are added with incrementing 
positions. This is unless the position is specified, in which case, the rule is 
inserted into the policy list at the position specified, and all subsequent 
rule position indicators get incremented. Correct?
Sam Correct.

  *   Shouldn't the L7Rule object also have a l7PolicyID attribute?
Sam It does.
Excellent! I've updated the wiki page to reflect this. :)  I shall also be 
endeavoring to produce an updated diagram, since I think a picture can save a 
lot of words here, eh. :)


  *   It is unclear from the proposal whether a given VIP can have multiple 
L7VipPolicyAssociation objects associated with it. If not, then we've not 
really solved the problem of multiple back-end pools per VIP. If so, then the 
L7VipPolicyAssociation object is missing its own 'position' attribute (because 
order matters here, too!).
Sam Correct, the L7VipPolicyAssociation should have a “position” attribute. 
The way to implement is under consideration.
Cool. Are the solutions being considered:

1. Make it at additional integer attribute of the L7VIPPolicyAssociation 
object. (The side effect of this is that any given L7VIPPolicyAssociation 
object can only be associated with one VIP.)
Sam We are looking to use an integer attribute so that the combination of 
vip-id + ordinal number governs the order per vip. Alternatively we might just 
add a float column so that you can control the ordering by specifying some 
arbitrary floating number and the sort order will be vip + ordinal column.

2. Create an additional association object to associate the 
L7VIPPolicyAssociation with a VIP (ie. a join table of some kind), which 
carries this position attribute and would allow a given L7VIPPolicyAssociation 
to be associated with multiple VIPs.

FWIW, I think that of the above, only number 1 really makes sense. The point of the 
L7VIPPolicyAssociation is to associate a VIP and Pool, using the rules in a 
L7Policy. All that L7VIPPolicyAssociation is missing is a position attribute.

As an aside, when you say "under consideration", exactly how / where is it 
being considered?  (Again, sorry for my newbishness-- just trying to figure out 
how this body makes these kinds of decisions.)
Sam see response above.


  *   I assume any given pool can have multiple L7VipPolicyAssociations. If 
this is not the case, then a given single pool can only be associated with one 
VIP.
Sam Nope, this is any-to-any. Pools can be associated with multiple VIPs.
Thanks for the clarification!

  *   There is currently no way to set a 'default' back-end pool in this 
proposal. Perhaps this could be solved by:

 *   Make 'DEFAULT' one of the actions possible for a L7VipPolicyAssociation
 *   Any L7VipPolicyAssociation with an action of 'DEFAULT' would have a 
null position and null L7PolicyId.
 *   We would need to enforce having only one L7VipPolicyAssociation object 
with a 'DEFAULT' action per VIP.
Sam the “default” behavior is governed by the current VIP -- Pool 
relationship. This is the canonical approach that could also be addressed by 
LBaaS drivers that do not support L7 content switching.
Sam We will fix the VIP-Pool limitation (for Juno) by removing the 
Pool-to-VIP reference and only leaving the VIP-to-Pool reference, thus allowing 
the Pool to be used by multiple VIPs. This was originally planned for icehouse 
but will be handled in Juno.


Aah--  Ok, I thought that based on recent discussions in the IRC meetings that 
the L7 features of LBaaS were very unlikely to make it into Icehouse, and 
therefore any discussion of them was essentially a discussion about 

Re: [openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Dolph Mathews
On Wed, Feb 12, 2014 at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:


 I can definitely sympathise with the comment in Stefano's article that
 there are not enough easy tasks / simple issues for newcomers. There's
 a lot to learn already when you're starting out (git, gerrit, python,
 devstack, ...) and simple bugs are so hard to find - something that
 will take a few minutes to an existing contributor will take much
 longer for someone who's still figuring out where to get the code
 from.


My counterargument to this is to jump straight into
http://review.openstack.org/ (which happens to be publicly available to
newcomers).

Easy tasks / simple issues (i.e. nits!) are *frequently* cited in code
review, and although our community tends to get hung up on seeing them
fixed prior merging the patchset in question (sometimes with good reason,
sometimes due to arbitrary opinion), that doesn't always happen (for
example, it's not worth delaying approval of an important patch to see a
typo fixed in an inline comment) and isn't always appropriate (such as,
this other thing over here should be refactored).

There's a lot of such scenarios where new contributors can quickly find
things to contribute, or at least provide incredibly valuable feedback to
the project in the form of reviews! As a bonus, new contributors jumping
straight into reviews tend to get up to speed on the code base *much* more
quickly than they otherwise would (IMO), as they become directly involved
in design discussions, etc.



 [1]
 http://opensource.com/business/14/2/analyzing-contributions-to-openstack
 [2] http://openhatch.org/
 [3] http://openhatch.org/+projects/OpenStack%20dashboard%20%28Horizon%29
 [4] https://openhatch.org/wiki/Contacting_new_contributors

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Ben Nemec

On 2014-02-12 13:48, Adrian Otto wrote:

On Feb 12, 2014, at 10:13 AM, Ben Nemec openst...@nemebean.com
 wrote:


On 2014-02-12 09:51, Jesse Noller wrote:

On Feb 12, 2014, at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:

Hi folks,
Stefano's post on how to make contributions to OpenStack easier [1]
finally stirred me into writing about something that vkmc and myself
have been doing on the side for a few months to help new 
contributors

to get involved.
Some of you may be aware of OpenHatch [2], a non-profit dedicated to
helping newcomers get started in open-source. About 6 months ago we
created a project page for Horizon [3], filled in a few high level
details, set ourselves up as mentors. Since then people have been
expressing interest in the project and a number of them got a patch
submitted and approved, a couple are sticking around (often helping 
out
with bug triaging, as confirming new bugs is one of the few tasks 
one

can help out with when only having limited time).
I can definitely sympathise with the comment in Stefano's article 
that
there are not enough easy tasks / simple issues for newcomers. 
There's
a lot to learn already when you're starting out (git, gerrit, 
python,

devstack, ...) and simple bugs are so hard to find - something that
will take a few minutes to an existing contributor will take much
longer for someone who's still figuring out where to get the code
from. Unfortunately it's not uncommon for existing contributors to 
take
on tasks marked as low-hanging-fruit because it's only 5 minutes 
(I
can understand this coming up to an RC but otherwise 
low-hanging-fruits

are often low priority nits that could wait a little bit longer). In
Horizon the low-hanging-fruits definitely get snatched up quickly 
and I

try to keep a list of typos or other low impact, trivial bugs that
would make good first tasks for people reaching out via OpenHatch.
OpenHatch doesn't spam, you get one email a week if one or more 
people
indicated they want to help. The initial effort is not 
time-consuming,

following OpenHatch's advice [4] you can refine a nice initial
contact email that helps you get people started and understand what
they are interested in quickly. I don't find the time commitment to 
be

too much so far, and it's incredibly gratifying to see someone
submitting their first patch after you answered a couple of 
questions
or helped resolve a hairy git issue. I'm happy to chat about it 
more,

if you're curious or have any questions.
In any case if you'd like to attract more contributors to your 
project,
and/or help newcomers get started in open-source, consider adding 
your

project to OpenHatch too!
Cheers,
Julie

+10
There’s been quite a bit of talk about this - but not necessarily on
the dev list. I think openhatch is great - mentorship programs in
general go a *long* way to help raise up and gain new people. Core
Python has had this issue for awhile, and many other large OSS
projects continue to suffer from it (“barrier to entry too high”).
Some random thoughts:
I’d like to see something like Solum’s Contributing page:
https://wiki.openstack.org/wiki/Solum/Contributing
Expanded a little and potentially be the recommended “intro to
contribution” guide -
https://wiki.openstack.org/wiki/How_To_Contribute is good, but a more
accessible version goes a long way. You want to show them how easy /
fast it is, not all of the options at once.


So, glancing over the Solum page, I don't see anything specific to 
Solum in there besides a few paths in examples.  It's basically a 
condensed version of https://wiki.openstack.org/wiki/GerritWorkflow 
sans a lot of the detail.  This might be a good thing to add as a 
QuickStart section on that wiki page (which is linked from the how to 
contribute page, although maybe not as prominently as it should be).  
But, a lot of that detail is needed before a change is going to be 
accepted anyway.  I'm not sure giving a new contributor just the bare 
minimum is actually doing them any favors.  Without letting them know 
things like how to format a commit message and configure their ssh 
keys on Gerrit, they aren't going to be able to get a change accepted 
anyway and IMHO they're likely to just give up anyway (and possibly 
waste some reviewer time in the process).


The key point I'd like to emphasize is that we should not optimize for
the ease and comfort of the incumbent OpenStack developers and
reviewers. Instead, we should focus effort on welcoming new
contribution. I like to think about this through a long term lens. I
believe that long lived organizations thrive based size and diversity
of their membership. I'm not saying we disregard quality and
efficiency of our conduct, but we should place a higher value on
making OpenStack a community that people are delighted to join.


I'm not disagreeing, but there has to be a balance.  tldr: there are a 
lot of pitfalls in the submission process, and if they aren't 
well-documented they're going to 

[openstack-dev] [nova][network] Gateway in network_info seems not to be correct

2014-02-12 Thread Daniel Kuffner
Hi All,

I'm currently trying out the docker driver with FlatDHCPManager in multi_host mode.
I figured out that my instances are not pingable. The reason for that
is that they get the wrong gateway assigned.

Currently we use the gateway field provided by the network_info
object. That works great in a single node setup. But in a multi_host
setup it still delivers (in my case) 10.0.0.1. Which is wrong since it
must be dnsmasq IP address of the compute node.

Should a driver use the DNS ip (dnsmasq) address in the multi_host
mode as the gateway? Why is the gateway delivered by nova not correct
(bug)? Any insights?

Corresponding bug:
please see also: https://bugs.launchpad.net/nova/+bug/1279509

thank you,
Daniel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday February 13th at 17:00UTC

2014-02-12 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 13th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Ben Nemec

Hi all,

This is an issue that has come up recently in tripleo as we try to 
support more varied configurations.  Currently qpid-python is not listed 
in requirements.txt for many of the OpenStack projects, even though they 
support using Qpid as a messaging broker.  This means that when we 
install from source in tripleo we have to dynamically add a line to 
requirements.txt if we want to use Qpid (we pip install -r to handle 
deps).  There seems to be disagreement over the correct way to handle 
this, so Joe requested on my proposed Nova change that I raise the issue 
here.
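
 For concreteness, the workaround amounts to roughly the following before we 
 install (the commands are illustrative only):

 # qpid-python is the optional dependency we currently have to add by hand
 echo 'qpid-python' >> requirements.txt
 pip install -r requirements.txt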


There's already some discussion on the bug here: 
https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate 
Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232


 If there's a better alternative to "require all the things" I'm 
certainly interested to hear it.  I expect we're going to hit this more 
in the future as we add support for other optional backends for services 
and such.


Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] review days

2014-02-12 Thread Devananda van der Veen
Hi again!

I promise this will be a much shorter email than my last one ... :)

I'd like to propose that we find regular day/time to have a recurring code
jam. Here's what it looks like in my head:
- we get at least three core reviewers together
- as many non-core folks as show up are welcome, too
- we tune out all distractions for a few hours
- we pick a patch and all review it

If the author is present, we iterate with the author, and review each
revision they submit while we're all together. If the author is not present
and there are only minor issues, we fix them up in a follow-on patch and
land both at once. If neither of those are possible, we -1 it and move on.

I think we could make very quick progress in our review queue this way. In
particular, I want us to plow through the bug fixes that have been in Fix
Proposed status for a while ...

What do y'all think of this idea? Useful or doomed to fail?

What time would work for you? How about Thursdays at 8am PST?


Cheers,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Roman Prykhodchenko
I think that's a nice idea.

During the last two days, using this approach we managed to find several 
problems in an important patch which is now waiting to be merged.
I bet it would have taken much longer if we hadn't done that.

How about picking a particular time every day for performing this kind of 
collaboration?


- Roman

On Feb 12, 2014, at 23:14 , Devananda van der Veen devananda@gmail.com 
wrote:

 Hi again!
 
 I promise this will be a much shorter email than my last one ... :)
 
 I'd like to propose that we find regular day/time to have a recurring code 
 jam. Here's what it looks like in my head:
 - we get at least three core reviewers together
 - as many non-core folks as show up are welcome, too
 - we tune out all distractions for a few hours
 - we pick a patch and all review it
 
 If the author is present, we iterate with the author, and review each 
 revision they submit while we're all together. If the author is not present 
 and there are only minor issues, we fix them up in a follow-on patch and land 
 both at once. If neither of those are possible, we -1 it and move on.
 
 I think we could make very quick progress in our review queue this way. In 
 particular, I want us to plow through the bug fixes that have been in Fix 
 Proposed status for a while ...
 
 What do y'all think of this idea? Useful or doomed to fail?
 
 What time would work for you? How about Thursdays at 8am PST?
 
 
 Cheers,
 Devananda
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] instructions for creating a new oslo library

2014-02-12 Thread Ben Nemec
 

On 2014-02-12 13:28, Doug Hellmann wrote: 

 I have been working on instructions for creating new Oslo libraries, either 
 from scratch or by graduating code from the incubator. I would appreciate any 
 feedback about whether I have all of the necessary details included in 
 https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary [1] 
 
 Thanks, 
 Doug

First off: \o/

This should really help cut down on the amount of code we need to sync
from incubator. 

Given all the fun we've had with oslo.sphinx, maybe we should add a note
that only runtime deps should use the oslo. namespace? Other than that,
I think I'd have to run through the process to have further comments. 

-Ben 
 

Links:
--
[1] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-12 Thread Maru Newby

On Feb 12, 2014, at 12:36 PM, Sean Dague s...@dague.net wrote:

 On 02/12/2014 01:48 PM, Maru Newby wrote:
 At the last 2 summits, I've suggested that API tests could be maintained in 
 the Neutron tree and reused by Tempest.  I've finally submitted some patches 
 that demonstrate this concept:
 
 https://review.openstack.org/#/c/72585/  (implements a unit test for the 
 lifecycle of the network resource)
 https://review.openstack.org/#/c/72588/  (runs the test with tempest rest 
 clients)
 
 My hope is to make API test maintenance a responsibility of the Neutron 
 team.  The API compatibility of each Neutron plugin has to be validated by 
 Neutron tests anyway, and if the tests are structured as I am proposing, 
 Tempest can reuse those efforts rather than duplicating them.
 
 I've added this topic to this week's agenda, and I would really appreciate 
 it if interested parties would take a look at the patches in question to 
 prepare themselves to participate in the discussion.
 
 
 m.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Realistically, having API tests duplicated in the Tempest tree is a
 feature, not a bug.
 
 tempest/api is there for double-entry book-keeping, and it has been
 really effective at preventing accidental breakage of our APIs (which
 used to happen all the time), so I don't think putting API testing in
 neutron obviates that.

Given how limited our testing resources are, might it be worth considering 
whether 'double-entry accounting' is actually the best way to prevent 
accidental breakage going forward?  Might reasonable alternatives exist, such 
as clearly separating api tests from other tests in the neutron tree and giving 
review oversight only to qualified individuals?


 
 Today most projects (excepting swift... which I'll get to in a second)
 think about testing in 2 ways. Unit tests driven by tox in a venv, and
 tempest tests in a devstack environment.
 
 Because of this dualism, people have put increasingly more awkward live
 environments in the tox unit test jobs. For instance, IIRC, neutron
 actually starts a full wsgi stack to test every single in-tree plugin,
 instead of just testing the driver call down path.

Yeah, and this full-stack approach means that neutron tests are incredibly hard 
to maintain and debug, to say nothing of the time penalty (approaching the 
duration of a tempest smoke run).  Part of the benefit of the proposal in 
question is to allow targeting the plugins directly with the same tests that 
will run against the full stack.


m.

 
 Swift did something a little different. They have 3 classes of things.
 Unit tests, Tempest Tests, and Swift Functional tests. The Swift
 functional tests run in a devstack, but not with Tempest. Instead they
 run their own suite.
 
 This test suite only runs on the swift project, not on other projects,
 and it's something they can use to test functional scenarios that
 wouldn't fit in tempest, and extend to their heart's content, not having
 to worry about the interaction with other components, because it's only
 pushing on swift.
 
 Going down this third path with neutron testing is I think very
 valuable. Honestly, I'd like to encourage more projects to do this as
 well. It would give them a greater component assuredness before entering
 the integrated gate.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread devdatta kulkarni
Hi,

I have been looking at Zuul for the last few days and had a question
about its intended role within Solum.

From what I understand, Zuul is a code gating system.

I have been wondering if code gating is something we are considering as a
feature to be provided in Solum. If yes, then Zuul is a perfect fit.
But if not, then we should discuss what benefits we gain by using Zuul
as an integral part of Solum.

It feels to me that right now we are treating Zuul as a conduit for triggering 
job(s)
that would do the following:
- clone/download source
- run tests
- create a deployment unit (DU) if tests pass
- upload DU to glance
- trigger the DU deployment workflow
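
Concretely, the body of such a job would look roughly like the following
(the repo variable, script and image names below are invented, just to make
the steps concrete):

git clone "$APP_SOURCE_REPO" app && cd app
tox -e py27                      # run the app's test suite
./build_du.sh app.du             # build the deployment unit if tests passed
glance image-create --name my-app-du \
    --disk-format raw --container-format bare --file app.du
# finally, kick off the DU deployment workflow (e.g. via the Solum API)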

In the language-pack working group we have talked about being able to do
CI on the submitted code and building the DUs only after tests pass. 
Now, there is a distinction between doing CI on merged code vs.
doing it before code is permanently merged to master/stable branches.
The latter is what a 'code gating' system does, and Zuul is a perfect fit for 
this.
For the former though, using a code gating system is not needed.
We can achieve the former with an API endpoint, a queue,
and a mechanism to trigger job(s) that perform the above-mentioned steps.

I guess it comes down to Solum's vision. If the vision includes supporting, 
among other things, code gating
to ensure that Solum-managed code is never broken, then Zuul is a perfect fit.
Of course, in that situation we would want to ensure that the gating 
functionality is pluggable
so that operators can have a choice of whether to use Zuul or something else.
But if the vision is to be part of an overall application lifecycle 
management flow which deals with
creation and scaling of DUs/plans/assemblies but not necessarily a code 
gate, then we should re-evaluate Zuul's role
as an integral part of Solum.

Thoughts?

Thanks,
Devdatta



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] instructions for creating a new oslo library

2014-02-12 Thread Doug Hellmann
On Wed, Feb 12, 2014 at 4:12 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-12 13:28, Doug Hellmann wrote:

  I have been working on instructions for creating new Oslo libraries,
 either from scratch or by graduating code from the incubator. I would
 appreciate any feedback about whether I have all of the necessary details
 included in https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Thanks,
 Doug

 First off: \o/

 This should really help cut down on the amount of code we need to sync
 from incubator.

 Given all the fun we've had with oslo.sphinx, maybe we should add a note
 that only runtime deps should use the oslo. namespace?


Yes, definitely, I added that. Thanks!



 Other than that, I think I'd have to run through the process to have
 further comments.


You'll have your chance to do that soon, I hope! :-)

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Maksym Lobur
Hi All,

I like the idea! I'm comfortable reserving about 3 hours for this;
Thursdays 8am (4pm GMT +0) sounds good. I assume we'll pick and merge as
many patches as possible during that time, and if the last one doesn't fit
into the time - we keep increasing the time until the patch is merged (within
reasonable limits). Makes sense?

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Wed, Feb 12, 2014 at 11:14 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi again!

 I promise this will be a much shorter email than my last one ... :)

 I'd like to propose that we find regular day/time to have a recurring code
 jam. Here's what it looks like in my head:
 - we get at least three core reviewers together
 - as many non-core folks as show up are welcome, too
 - we tune out all distractions for a few hours
 - we pick a patch and all review it

 If the author is present, we iterate with the author, and review each
 revision they submit while we're all together. If the author is not present
 and there are only minor issues, we fix them up in a follow-on patch and
 land both at once. If neither of those are possible, we -1 it and move on.

 I think we could make very quick progress in our review queue this way. In
 particular, I want us to plow through the bug fixes that have been in Fix
 Proposed status for a while ...

 What do y'all think of this idea? Useful or doomed to fail?

 What time would work for you? How about Thursdays at 8am PST?


 Cheers,
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-12 Thread Adrian Otto

On Feb 12, 2014, at 12:41 PM, Ben Nemec openst...@nemebean.com
 wrote:

 On 2014-02-12 13:48, Adrian Otto wrote:
 On Feb 12, 2014, at 10:13 AM, Ben Nemec openst...@nemebean.com
 wrote:
 On 2014-02-12 09:51, Jesse Noller wrote:
 On Feb 12, 2014, at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:
 Hi folks,
 Stefano's post on how to make contributions to OpenStack easier [1]
 finally stirred me into writing about something that vkmc and myself
 have been doing on the side for a few months to help new contributors
 to get involved.
 Some of you may be aware of OpenHatch [2], a non-profit dedicated to
 helping newcomers get started in open-source. About 6 months ago we
 created a project page for Horizon [3], filled in a few high level
 details, set ourselves up as mentors. Since then people have been
 expressing interest in the project and a number of them got a patch
 submitted and approved, a couple are sticking around (often helping out
 with bug triaging, as confirming new bugs is one of the few tasks one
 can help out with when only having limited time).
 I can definitely sympathise with the comment in Stefano's article that
 there are not enough easy tasks / simple issues for newcomers. There's
 a lot to learn already when you're starting out (git, gerrit, python,
 devstack, ...) and simple bugs are so hard to find - something that
 will take a few minutes to an existing contributor will take much
 longer for someone who's still figuring out where to get the code
 from. Unfortunately it's not uncommon for existing contributors to take
 on tasks marked as low-hanging-fruit because it's only 5 minutes (I
 can understand this coming up to an RC but otherwise low-hanging-fruits
 are often low priority nits that could wait a little bit longer). In
 Horizon the low-hanging-fruits definitely get snatched up quickly and I
 try to keep a list of typos or other low impact, trivial bugs that
 would make good first tasks for people reaching out via OpenHatch.
 OpenHatch doesn't spam, you get one email a week if one or more people
 indicated they want to help. The initial effort is not time-consuming,
 following OpenHatch's advice [4] you can refine a nice initial
 contact email that helps you get people started and understand what
 they are interested in quickly. I don't find the time commitment to be
 too much so far, and it's incredibly gratifying to see someone
 submitting their first patch after you answered a couple of questions
 or helped resolve a hairy git issue. I'm happy to chat about it more,
 if you're curious or have any questions.
 In any case if you'd like to attract more contributors to your project,
 and/or help newcomers get started in open-source, consider adding your
 project to OpenHatch too!
 Cheers,
 Julie
 +10
 There’s been quite a bit of talk about this - but not necessarily on
 the dev list. I think openhatch is great - mentorship programs in
 general go a *long* way to help raise up and gain new people. Core
 Python has had this issue for awhile, and many other large OSS
 projects continue to suffer from it (“barrier to entry too high”).
 Some random thoughts:
 I’d like to see something like Solum’s Contributing page:
 https://wiki.openstack.org/wiki/Solum/Contributing
 Expanded a little and potentially be the recommended “intro to
 contribution” guide -
 https://wiki.openstack.org/wiki/How_To_Contribute is good, but a more
 accessible version goes a long way. You want to show them how easy /
 fast it is, not all of the options at once.
 So, glancing over the Solum page, I don't see anything specific to Solum in 
 there besides a few paths in examples.  It's basically a condensed version 
 of https://wiki.openstack.org/wiki/GerritWorkflow sans a lot of the detail. 
  This might be a good thing to add as a QuickStart section on that wiki 
 page (which is linked from the how to contribute page, although maybe not 
 as prominently as it should be).  But, a lot of that detail is needed 
 before a change is going to be accepted anyway.  I'm not sure giving a new 
 contributor just the bare minimum is actually doing them any favors.  
 Without letting them know things like how to format a commit message and 
 configure their ssh keys on Gerrit, they aren't going to be able to get a 
 change accepted anyway and IMHO they're likely to just give up anyway (and 
 possibly waste some reviewer time in the process).
 The key point I'd like to emphasize is that we should not optimize for
 the ease and comfort of the incumbent OpenStack developers and
 reviewers. Instead, we should focus effort on welcoming new
 contribution. I like to think about this through a long term lens. I
 believe that long lived organizations thrive based size and diversity
 of their membership. I'm not saying we disregard quality and
 efficiency of our conduct, but we should place a higher value on
 making OpenStack a community that people are delighted to join.
 
 I'm not disagreeing, but there has to be a balance.  tldr: 

Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Doug Hellmann
On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 This is an issue that has come up recently in tripleo as we try to support
 more varied configurations.  Currently qpid-python is not listed in
 requirements.txt for many of the OpenStack projects, even though they
 support using Qpid as a messaging broker.  This means that when we install
 from source in tripleo we have to dynamically add a line to
 requirements.txt if we want to use Qpid (we pip install -r to handle deps).
  There seems to be disagreement over the correct way to handle this, so Joe
 requested on my proposed Nova change that I raise the issue here.

 There's already some discussion on the bug here:
 https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
 Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232

 If there's a better alternative to require all the things I'm certainly
 interested to hear it.  I expect we're going to hit this more in the future
 as we add support for other optional backends for services and such.


We could use a separate requirements file for each driver, following a
naming convention to let installation tools pick up the right file. For
example, oslo.messaging might include amqp-requirements.txt,
qpid-requirements.txt, zmq-requirements.txt, etc.

That would complicate the global requirements sync script, but not terribly
so.
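
An installer that wants a particular backend would then just chain the files,
e.g. (using the hypothetical file names above):

pip install -r requirements.txt -r qpid-requirements.txt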

Thoughts?

Doug




 Thanks.

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Support for multiple provider networks with same VLAN segmentation id

2014-02-12 Thread Aaron Rosen
Hi Vinay,


On Tue, Feb 11, 2014 at 4:20 PM, Vinay Bannai vban...@gmail.com wrote:

 One way to look at the VLANs is that they identify a security zone that
 meets some regulatory compliance standards. The VLANs on each rack are
 separated from other VLANs by using VRF. Tenants in our case map to
 applications. So each application is served by a VIP with a pool of VMs. So
 all applications needing regulatory compliance would map to this VLAN. Make
 sense?


Yes, I understand your desire to do this though I think that networks
should identify a security zone and not a vlan.


 As far as your question about keeping mac_addresses unique, isn't that
 achieved by the fact that each VM will have a unique mac since it is the
 same control plane allocating the mac address? Or did I miss something??


If I understand correctly what you want to do is to be able to create two
networks sharing the same vlan (i.e.):

neutron net-create net1 --provider:network_type=vlan
--provider:segmentation_id=1
neutron net-create net2 --provider:network_type=vlan
--provider:segmentation_id=1

in this case the mac_address uniqueness is not preserved as neutron
enforces it per network and doesn't look at the segmentation_id part at
all.

It seems to me that you want something that allows you to share a network
between specific tenants only? I understand that sharing the same vlan
between tenants does accomplish this but it would be better if we were able
to do this type of thing regardless of the network_type.


 Vinay


 On Tue, Feb 11, 2014 at 12:32 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 I believe it would need to be like:

  network_vlan_ranges = physnet1:100:300, phynet2:100:300, phynet3:100:300


 Additional comments inline:

 On Mon, Feb 10, 2014 at 8:49 PM, Vinay Bannai vban...@gmail.com wrote:

 Bob and Kyle,

 Thanks for your review.
 We looked at this option and it seems it might meet our needs. Here is
 what we intend to do:

 Let's say we have three racks (each rack supports three VLANs - 100, 200
 and 300).
 We create the following config file for the neutron server




 tenant_network_type = vlan
  network_vlan_ranges = physnet1:100:300
  network_vlan_ranges = phynet2:100:300
  network_vlan_ranges = phynet3:100:300
  integration_bridge = br-int
  bridge_mappings = physnet1:br-eth1, physnet2:br-eth1, physnet3:br-eth1
 Is this what you meant?

 Vinay


 On Sun, Feb 9, 2014 at 6:03 PM, Robert Kukura rkuk...@redhat.com wrote:

 On 02/09/2014 12:56 PM, Kyle Mestery wrote:
  On Feb 6, 2014, at 5:24 PM, Vinay Bannai vban...@gmail.com wrote:
 
  Hello Folks,
 
  We are running into a situation where we are not able to create
 multiple provider networks with the same VLAN id. We would like to propose
 a solution to remove this restriction through a configuration option. This
 approach would not conflict with the present behavior where it is not
 possible to create multiple provider networks with the same VLAN id.
 
  The changes should be minimal and we would like to propose it for the
 next summit. The use case for this need is documented in the blueprint
 specification.
  Any feedback or comments are welcome.
 
 
 https://blueprints.launchpad.net/neutron/+spec/duplicate-providernet-vlans
 
  Hi Vinay:
 
  This problem seems straightforward enough, though currently you are
 right
  in that we don't allow multiple Neutron networks to have the same
 segmentation
  ID. I've added myself as approver for this BP and look forward to
 further
  discussions of this before and during the upcoming Summit!


 I kind of feel like allowing a vlan to span multiple networks is kind of
 wonky. I feel like a better abstraction would be if we had better access
 control over shared networks between tenants. This way we could explicitly
 allow two tenants to share a network. Is this the problem you are trying to
 solve through doing it with the same vlan?  How do you plan on enforcing
 that mac_addresses are unique on the same physical network?

  Multiple networks with network_type of 'vlan' are already allowed to
 have the same segmentation ID with the ml2, openvswitch, or linuxbridge
 plugins - the networks just need to have different physical_network
 names.


 This is the same for the NSX plugin as well.


  If they have the same network_type, physical_network, and
 segmentation_id, they are the same network. What else would distinguish
 them from each other?

 Could your use case be addressed by simply using different
 physical_network names for each rack? This would provide independent
 spaces of segmentation_ids for each.

 -Bob

 
  Thanks!
  Kyle
 
  Thanks
  --
  Vinay Bannai
  Email: vban...@gmail.com
  Google Voice: 415 938 7576
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-12 Thread Sean Dague
On 02/12/2014 04:25 PM, Maru Newby wrote:
 
 On Feb 12, 2014, at 12:36 PM, Sean Dague s...@dague.net wrote:
 
 On 02/12/2014 01:48 PM, Maru Newby wrote:
 At the last 2 summits, I've suggested that API tests could be maintained in 
 the Neutron tree and reused by Tempest.  I've finally submitted some 
 patches that demonstrate this concept:

 https://review.openstack.org/#/c/72585/  (implements a unit test for the 
 lifecycle of the network resource)
 https://review.openstack.org/#/c/72588/  (runs the test with tempest rest 
 clients)

 My hope is to make API test maintenance a responsibility of the Neutron 
 team.  The API compatibility of each Neutron plugin has to be validated by 
 Neutron tests anyway, and if the tests are structured as I am proposing, 
 Tempest can reuse those efforts rather than duplicating them.

 I've added this topic to this week's agenda, and I would really appreciate 
 it if interested parties would take a look at the patches in question to 
 prepare themselves to participate in the discussion.


 m.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Realistically, having API tests duplicated in the Tempest tree is a
 feature, not a bug.

 tempest/api is there for double-entry book-keeping, and it has been
 really effective at preventing accidental breakage of our APIs (which
 used to happen all the time), so I don't think putting API testing in
 neutron obviates that.
 
 Given how limited our testing resources are, might it be worth considering 
 whether 'double-entry accounting' is actually the best way to prevent 
 accidental breakage going forward?  Might reasonable alternatives exist, such 
 as clearly separating api tests from other tests in the neutron tree and 
 giving review oversight only to qualified individuals?

Our direct experience is that if we don't do this, within 2 weeks some
project will have landed API breaking changes. This approach actually
takes a lot of review load off the core reviewers, so reverting to a
model which puts more work back on the review team (given the current
review load), isn't something I think we want.

I get that there is a cost with this. But there is a cost of all of
this. And because API tests should be write once for the API (and
specifically not evolving), the upfront cost vs. the continually paid
cost of review time in tree has been the better trade off.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][tempest][rally] Rally Tempest integration: tempest.conf autogeneration

2014-02-12 Thread Boris Pavlovic
Hi all,

It goes without saying that tempest[1] is actually the only publicly
known tool that can fully verify deployments and ensure that they work.

In Rally[2] we would like to use it as a cloud verifier (before
benchmarking it is actually very useful ;) to ensure that the cloud works). We
are going to build on top of tempest a pretty interface, aliases and support
for working with different clouds. E.g.:

rally use deployment uuid # use some deployment that is registered in
Rally
rally verify nova # Run only nova tests against `in-use` deployment
rally verify small/big/full # set of tests
rally verify list # List all verification results for this deployment
rally verify show id # Show detailed information
# Okay, we found that something failed, fixed it in the cloud, restarted the
# service, and now we would like to re-run only the failed tests
rally verify latest_failed # do it in one simple command

These commands should be very smart: generate a proper tempest.conf for the
specific cloud, prepare the cloud for tempest testing, store results somewhere,
and so on. So in the end we will have a very simple way to work
with tempest.

We added first patch that adds base functionality to Rally [3]:
https://review.openstack.org/#/c/70131/

At the QA meeting I discussed it with David Kranz, and as a result we agreed that
part of this functionality (the tempest.conf generator and cloud preparation) should
be implemented inside tempest.

The current situation is not super cool because there are at least 4 projects
where we are generating tempest.conf in different ways:
1) DevStack
2) Fuel CI
3) Rally
4) Tempest (currently broken)


To put it in a nutshell, it's clear that we should make only 1 tempest.conf
generator [4] that will cover all cases and will be simple enough to be
used in all other projects.


Is anybody from tempest (or community) interested in helping with this?



[1] https://wiki.openstack.org/wiki/QA#Tempest
[2] https://wiki.openstack.org/wiki/Rally
[3] https://blueprints.launchpad.net/rally/+spec/tempest-verification
[4] https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Subteam meeting 13.02.2014 at 14-00UTC

2014-02-12 Thread Eugene Nikanorov
Hi folks,

Let's gather as usual on #openstack-meeting on Thursday, 13 at 14-00 UTC
The updated agenda for our regular meeting is here:
https://wiki.openstack.org/wiki/Network/LBaaS

In addition to that I'd like to discuss some ideas that Stephen Balukoff
has proposed.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Lucas Alvares Gomes
+1


 What time would work for you? How about Thursdays at 8am PST?


Works for me
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-12 Thread Maru Newby

On Feb 12, 2014, at 1:59 PM, Sean Dague s...@dague.net wrote:

 On 02/12/2014 04:25 PM, Maru Newby wrote:
 
 On Feb 12, 2014, at 12:36 PM, Sean Dague s...@dague.net wrote:
 
 On 02/12/2014 01:48 PM, Maru Newby wrote:
 At the last 2 summits, I've suggested that API tests could be maintained 
 in the Neutron tree and reused by Tempest.  I've finally submitted some 
 patches that demonstrate this concept:
 
 https://review.openstack.org/#/c/72585/  (implements a unit test for the 
 lifecycle of the network resource)
 https://review.openstack.org/#/c/72588/  (runs the test with tempest rest 
 clients)
 
 My hope is to make API test maintenance a responsibility of the Neutron 
 team.  The API compatibility of each Neutron plugin has to be validated by 
 Neutron tests anyway, and if the tests are structured as I am proposing, 
 Tempest can reuse those efforts rather than duplicating them.
 
 I've added this topic to this week's agenda, and I would really appreciate 
 it if interested parties would take a look at the patches in question to 
 prepare themselves to participate in the discussion.
 
 
 m.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Realistically, having API tests duplicated in the Tempest tree is a
 feature, not a bug.
 
 tempest/api is there for double-entry book-keeping, and it has been
 really effective at preventing accidental breakage of our APIs (which
 used to happen all the time), so I don't think putting API testing in
 neutron obviates that.
 
 Given how limited our testing resources are, might it be worth considering 
 whether 'double-entry accounting' is actually the best way to prevent 
 accidental breakage going forward?  Might reasonable alternatives exist, 
 such as clearly separating api tests from other tests in the neutron tree 
 and giving review oversight only to qualified individuals?
 
 Our direct experience is that if we don't do this, within 2 weeks some
 project will have landed API breaking changes. This approach actually
 takes a lot of review load off the core reviewers, so reverting to a
 model which puts more work back on the review team (given the current
 review load), isn't something I think we want.

Just so I'm clear, is there anything I could say that would change your mind?


m.


 
 I get that there is a cost to this. But there is a cost to all of
 this. And because API tests should be write-once for the API (and
 specifically not evolving), the upfront cost vs. the continually paid
 cost of in-tree review time has been the better trade-off.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Maksym Lobur
Also, I think 3 hours might be too much; maybe have 2 sessions of 2
hours. It might be hard to concentrate on review for 3 hours in a row.

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Wed, Feb 12, 2014 at 11:32 PM, Maksym Lobur mlo...@mirantis.com wrote:

 Hi All,

 I like the idea! I'm comfortable reserving about 3 hours for this;
 Thursdays 8am (4pm GMT+0) sounds good. I assume we'll pick and merge as
 many patches as possible during that time, and if the last one doesn't fit
 in the time, we keep extending until that patch is merged (within a
 reasonable limit). Makes sense?

 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru


 On Wed, Feb 12, 2014 at 11:14 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 Hi again!

 I promise this will be a much shorter email than my last one ... :)

 I'd like to propose that we find regular day/time to have a recurring
 code jam. Here's what it looks like in my head:
 - we get at least three core reviewers together
 - as many non-core folks as show up are welcome, too
 - we tune out all distractions for a few hours
 - we pick a patch and all review it

 If the author is present, we iterate with the author, and review each
 revision they submit while we're all together. If the author is not present
 and there are only minor issues, we fix them up in a follow-on patch and
 land both at once. If neither of those are possible, we -1 it and move on.

 I think we could make very quick progress in our review queue this way.
 In particular, I want us to plow through the bug fixes that have been in
 Fix Proposed status for a while ...

 What do ya'll think of this idea? Useful or a doomed to fail?

 What time would work for you? How about Thursdays at 8am PST?


 Cheers,
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Roman Prykhodchenko
Since there are two core-subteams located in different time zones I propose 
making two sessions:
- One in the morning in Europe. Let's say at 10 GMT
- One in the morning in the US at a convenient time.

This approach might be useful for the submitters located in different time 
zones.


- Roman

On Feb 13, 2014, at 00:18 , Maksym Lobur mlo...@mirantis.com wrote:

 Also, I think 3 hours might be too much, maybe to have 2 sessions of 2 hours. 
 It might be hard to concentrate on review for 3 hours in a line..
 
 
 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.
 
 Mobile: +38 (093) 665 14 28
 Skype: max_lobur
 
 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
 
 
 On Wed, Feb 12, 2014 at 11:32 PM, Maksym Lobur mlo...@mirantis.com wrote:
 Hi All,
 
 I like the idea! I'm comfortable to reserve about 3 hours for this, Thursdays 
 8am (4pm GMT +0) sounds good. I assume we'll pick and merge as much patches 
 as possible during that time, and if the last one doesn't fit to the time - 
 we're increasing the time until the patch merged (in reasonable limit). Makes 
 sense?
 
 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.
 
 Mobile: +38 (093) 665 14 28
 Skype: max_lobur
 
 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
 
 
 On Wed, Feb 12, 2014 at 11:14 PM, Devananda van der Veen 
 devananda@gmail.com wrote:
 Hi again!
 
 I promise this will be a much shorter email than my last one ... :)
 
 I'd like to propose that we find regular day/time to have a recurring code 
 jam. Here's what it looks like in my head:
 - we get at least three core reviewers together
 - as many non-core folks as show up are welcome, too
 - we tune out all distractions for a few hours
 - we pick a patch and all review it
 
 If the author is present, we iterate with the author, and review each 
 revision they submit while we're all together. If the author is not present 
 and there are only minor issues, we fix them up in a follow-on patch and land 
 both at once. If neither of those are possible, we -1 it and move on.
 
 I think we could make very quick progress in our review queue this way. In 
 particular, I want us to plow through the bug fixes that have been in Fix 
 Proposed status for a while ...
 
 What do ya'll think of this idea? Useful or a doomed to fail?
 
 What time would work for you? How about Thursdays at 8am PST?
 
 
 Cheers,
 Devananda
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting 13.02.2014 at 14-00UTC

2014-02-12 Thread Stephen Balukoff
Sounds good! I'll be sure to be there. Also, I'm going to shift my focus
today to the analysis of the LoadBalancerInstance resource as I think my
proposals actually have more to do with how that's implemented than L7 or
SSL.

Thanks,
Stephen


On Wed, Feb 12, 2014 at 2:15 PM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 Hi folks,

 Lets gather as usual on #openstack-meeting on Thursday, 13 at 14-00 UTC
 The updated agenda for our regular meeting is here:
 https://wiki.openstack.org/wiki/Network/LBaaS

 In addition to that I'd like to discuss some ideas that Stephen Balukoff
 has proposed.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest][rally] Rally Tempest integration: tempest.conf autogeneration

2014-02-12 Thread Sean Dague
On 02/12/2014 05:08 PM, Boris Pavlovic wrote:
 Hi all, 
 
 It goes without saying that tempest[1] is actually the only publicly
 known tool that can fully verify deployments and ensure that they work.
 
 In Rally[2] we would like to use it as a cloud verifier (before
 benchmarking it is actually very useful;) to ensure that cloud work). We
 are going to build on top of tempest pretty interface and aliases 
 support of working with different clouds. E.g.: 
 
 rally use deployment uuid # use some deployment that is registered in
 Rally
 rally verify nova # Run only nova tests against `in-use` deployment 
 rally verify small/big/full # set of tests
 rally verify list # List all verification results for this deployment
 rally verify show id # Show detailed information
 # Okay we found that something failed, fixed it in cloud, restart
 service and we would like you to run only failed tests
 rally verify latest_failed # do it in one simple command
 
 These commands should be very smart, generate proper tempest.conf for
 specific cloud, prepare cloud for tempest testing, store somewhere
 results and so on and so on. So at the end we will have very simple way
 to work with tempest. 
 
 We added first patch that adds base functionality to Rally [3]: 
 https://review.openstack.org/#/c/70131/
 
 At QA meeting I discussed it with David Kranz, as a result we agree that 
 part of this functionality (tempest.conf generator  cloud prepare),
 should be implemented inside tempest.
 
 Current situation is not super cool because, there are at least 4
 projects where we are generating in different way tempest.conf: 
 1) DevStack
 2) Fuel CI
 3) Rally
 4) Tempest (currently broken)
 
 
 To put it in a nutshell, it's clear that we should make only 1
 tempest.conf generator [4], that will cover all cases, and will be
 enough simple to be used in all other projects. 

So in the past the issue was we could never get anyone to agree on one.
For instance, devstack makes some really fine grained decisions, and the
RDO team showed up with a very different answer file approach, which
wouldn't work for devstack (it had the wrong level of knob changing).

And if at the end of the day you end up building a tempest config generator
which takes 200 options, I'm not entirely sure how that is better than
just setting those 200 options directly.

So while happy to consider options here, realize that there is a reason
this has been punted before.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Chris K
Roman:
*Since there are two core-subteams located in different time zones I
propose making two sessions:*
*- One in the morning in Europe. Let's say at 10 GMT*
*- One in the morning in the US at a convenient time.*

I like this, but it might be confusing to the devs, at least initially. I
would vote for moving the initial meeting from 8 to, say, 8:30 on Thursdays.
I say let's talk about split review days in the next meeting?

Chris



On Wed, Feb 12, 2014 at 2:35 PM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 Since there are two core-subteams located in different time zones I
 propose making two sessions:
 - One in the morning in Europe. Let's say at 10 GMT
 - One in the morning in the US at a convenient time.

 This approach might be useful for the submitters located in different time
 zones.


 - Roman

 On Feb 13, 2014, at 00:18 , Maksym Lobur mlo...@mirantis.com wrote:

 Also, I think 3 hours might be too much, maybe to have 2 sessions of 2
 hours. It might be hard to concentrate on review for 3 hours in a line..

 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru


 On Wed, Feb 12, 2014 at 11:32 PM, Maksym Lobur mlo...@mirantis.comwrote:

 Hi All,

 I like the idea! I'm comfortable to reserve about 3 hours for this,
 Thursdays 8am (4pm GMT +0) sounds good. I assume we'll pick and merge as
 much patches as possible during that time, and if the last one doesn't fit
 to the time - we're increasing the time until the patch merged (in
 reasonable limit). Makes sense?

 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru


 On Wed, Feb 12, 2014 at 11:14 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 Hi again!

 I promise this will be a much shorter email than my last one ... :)

 I'd like to propose that we find regular day/time to have a recurring
 code jam. Here's what it looks like in my head:
 - we get at least three core reviewers together
 - as many non-core folks as show up are welcome, too
 - we tune out all distractions for a few hours
 - we pick a patch and all review it

 If the author is present, we iterate with the author, and review each
 revision they submit while we're all together. If the author is not present
 and there are only minor issues, we fix them up in a follow-on patch and
 land both at once. If neither of those are possible, we -1 it and move on.

 I think we could make very quick progress in our review queue this way.
 In particular, I want us to plow through the bug fixes that have been in
 Fix Proposed status for a while ...

 What do ya'll think of this idea? Useful or a doomed to fail?

 What time would work for you? How about Thursdays at 8am PST?


 Cheers,
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest][rally] Rally Tempest integration: tempest.conf autogeneration

2014-02-12 Thread David Kranz

On 02/12/2014 05:55 PM, Sean Dague wrote:

On 02/12/2014 05:08 PM, Boris Pavlovic wrote:

Hi all,

It goes without saying that tempest[1] is actually the only publicly
known tool that can fully verify deployments and ensure that they work.

In Rally[2] we would like to use it as a cloud verifier (before
benchmarking it is actually very useful;) to ensure that cloud work). We
are going to build on top of tempest pretty interface and aliases 
support of working with different clouds. E.g.:

rally use deployment uuid # use some deployment that is registered in
Rally
rally verify nova # Run only nova tests against `in-use` deployment
rally verify small/big/full # set of tests
rally verify list # List all verification results for this deployment
rally verify show id # Show detailed information
# Okay we found that something failed, fixed it in cloud, restart
service and we would like you to run only failed tests
rally verify latest_failed # do it in one simple command

These commands should be very smart, generate proper tempest.conf for
specific cloud, prepare cloud for tempest testing, store somewhere
results and so on and so on. So at the end we will have very simple way
to work with tempest.

We added first patch that adds base functionality to Rally [3]:
https://review.openstack.org/#/c/70131/

At QA meeting I discussed it with David Kranz, as a result we agree that
part of this functionality (tempest.conf generator  cloud prepare),
should be implemented inside tempest.

Current situation is not super cool because, there are at least 4
projects where we are generating in different way tempest.conf:
1) DevStack
2) Fuel CI
3) Rally
4) Tempest (currently broken)


To put it in a nutshell, it's clear that we should make only 1
tempest.conf generator [4], that will cover all cases, and will be
enough simple to be used in all other projects.

So in the past the issue was we could never get anyone to agree on one.
For instance, devstack makes some really fine grained decisions, and the
RDO team showed up with a very different answer file approach, which
wouldn't work for devstack (it had the wrong level of knob changing).

And if at the end of the day you end up building a tempest config generator
which takes 200 options, I'm not entirely sure how that is better than
just setting those 200 options directly.

So while happy to consider options here, realize that there is a reason
this has been punted before.

-Sean
I have thought about this and think we can do better. I will present a 
spec when I get a chance if no one else does. I would leave devstack out
of it, at least for now. In general it would be good to decouple tempest 
from devstack a little more, especially as it gains wider use in rally, 
refstack, etc. For example, there are default paths and stuff in 
tempest.conf.sample that refer to files that are only put there by devstack.


 -David




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-12 Thread John Griffith
Hey,

So we've talked about this a bit and had a number of ideas regarding
how to test and show compatibility for third-party drivers in Cinder.
This has been an eye opening experience (the number of folks that have
NEVER run tempest before, as well as the problems uncovered now that
they're trying it).

I'm even more convinced now that having vendors run these tests is a
good thing and should be required.  That being said, there's a ton of
push back against my proposal to require that results from a successful
run of the tempest tests accompany any new drivers submitted to
Cinder.  The consensus from the Cinder community for now is that we'll
log a bug for each driver after I3, stating that it hasn't passed
certification tests.  We'll then have a public record showing
drivers/vendors that haven't demonstrated functional compatibility,
and in order to close those bugs they'll be required to run the tests
and submit the results to the bug in Launchpad.

So, this seems to be the approach we're taking for Icehouse at least,
it's far from ideal IMO, however I think it's still progress and it's
definitely exposed some issues with how drivers are currently
submitted to Cinder so those are positive things that we can learn
from and improve upon in future releases.

To add some controversy and keep the original intent of having only
known tested and working drivers in the Cinder release, I am going to
propose that any driver that has not submitted successful functional
test results by RC1 be removed.  I'd at least like to see
driver maintainers try... if a driver fails a test or two, that's
something that can be discussed, but it seems that until now most
drivers just flat out are not even being tested.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread Paul Czarkowski


On 2/12/14 5:16 PM, Roshan Agrawal roshan.agra...@rackspace.com wrote:


 -Original Message-
 From: devdatta kulkarni [mailto:devdatta.kulka...@rackspace.com]
 Sent: Wednesday, February 12, 2014 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Question about Zuul's role in Solum
 
 Hi,
 
 I have been looking at Zuul for last few days and had a question about
its
 intended role within Solum.
 
 From what I understand, Zuul is a code gating system.
 
 I have been wondering if code gating is something we are considering as
a
 feature to be provided in Solum? If yes, then Zuul is a perfect fit.
 But if not, then we should discuss what benefits do we gain by using
Zuul as
 an integral part of Solum.
 
 It feels to me that right now we are treating Zuul as a conduit for
triggering
 job(s) that would do the following:
 - clone/download source
 - run tests
 - create a deployment unit (DU) if tests pass
 - upload DU to glance
 - trigger the DU deployment workflow
 
 In the language-pack working group we have talked about being able to
do CI
 on the submitted code and building the DUs only after tests pass.
 Now, there is a distinction between doing CI on merged code vs.
 doing it before code is permanently merged to master/stable branches.
 The latter is what a 'code gating' system does, and Zuul is a perfect
fit for this.
 For the former though, using a code gating system is not needed.
 We can achieve the former with an API endpoint, a queue, and a mechanism
 to trigger job(s) that perform above mentioned steps.
 
 I guess it comes down to Solum's vision. If the vision includes
supporting,
 among other things, code gating to ensure that Solum-managed code is
 never broken, then Zuul is a perfect fit.
 Of course, in that situation we would want to ensure that the gating
 functionality is pluggable so that operators can have a choice of
whether to
 use Zuul or something else.
 But if the vision is to be that part of an overall application lifecycle
 management flow which deals with creation and scaling of
 DUs/plans/assemblies but not necessarily be a code gate, then we should
re-
 evaluate Zuul's role as an integral part of Solum.

The question is: is Zuul the right tool for the code deployment workflow
you outlined above? The code deployment workflow is the higher order
need. 

The code gating functionality is also useful and potentially something we
would want to implement in Solum at some point, but the decision criteria
on what tool we use to implement the code deployment workflow depends on
how good Zuul is at helping us with the deployment workflow.

 Thoughts?
 
 Thanks,
 Devdatta
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The current proposed workflow for Solum (m1) is shorthanded to be something
like this

1. User writes code
2. User pushes to master branch in github
3. a github hook fires against the solum API.
4. Solum coordinates the testing/building and deployment of the app

None of this seems overly suitable for zuul to me ( not without a
significant
amount of work that will be needed to customize zuul to work for us ), and
can be easily ( for certain values of easy ) achieved with solum
forking a thread to do the build ( m1 implementation ? ) or solum
sending messages to a set of worker nodes watching a queue ( marconi? Post
m1, pluggable so operator could use their existing jenkins, etc ).

If an enterprise or provider wanted to implement code gating, it would slip
in before step 1, and it would be relatively simple for an operator to
plug in their existing code gating/review tooling (github
PRs, CodeCollaborator, Crucible+Bamboo) or set up a Gerrit/Zuul system:

1. User writes code
2. User runs `git review`
3. Gerrit calls zuul to run automated tests
4. Core reviewers +2 the code
5. Gerrit merges code to master branch in github
6. a github hook fires against the solum API
7. Solum coordinates the testing/building and deployment of the app


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread Adrian Otto
Dev,

Thanks for raising this discussion. This is an important decision point for us.

On Feb 12, 2014, at 1:25 PM, devdatta kulkarni devdatta.kulka...@rackspace.com
 wrote:

 Hi,
 
 I have been looking at Zuul for last few days and had a question
 about its intended role within Solum.
 
 From what I understand, Zuul is a code gating system.
 
 I have been wondering if code gating is something we are considering as a 
 feature
 to be provided in Solum? If yes, then Zuul is a perfect fit.

Although this feature is not currently scoped for our earliest milestones, it 
is part of our long term vision under the umbrella of CI and developer 
automation, and we should think about how to get there.

 But if not, then we should discuss what benefits do we gain by using Zuul
 as an integral part of Solum.
 
 It feels to me that right now we are treating Zuul as a conduit for 
 triggering job(s)
 that would do the following:
 - clone/download source
 - run tests
 - create a deployment unit (DU) if tests pass
 - upload DU to glance
 - trigger the DU deployment workflow
 
 In the language-pack working group we have talked about being able to do
 CI on the submitted code and building the DUs only after tests pass. 
 Now, there is a distinction between doing CI on merged code vs.
 doing it before code is permanently merged to master/stable branches.
 The latter is what a 'code gating' system does, and Zuul is a perfect fit for 
 this.
 For the former though, using a code gating system is not be needed.
 We can achieve the former with an API endpoint, a queue,
 and a mechanism to trigger job(s) that perform above mentioned steps.
 
 I guess it comes down to Solum's vision. If the vision includes supporting, 
 among other things, code gating
 to ensure that Solum-managed code is never broken, then Zuul is a perfect fit.
 Of course, in that situation we would want to ensure that the gating 
 functionality is pluggable
 so that operators can have a choice of whether to use Zuul or something else.
 But if the vision is to be that part of an overall application lifecycle 
 management flow which deals with
 creation and scaling of DUs/plans/assemblies but not necessarily be a code 
 gate, then we should re-evaluate Zuul's role
 as an integral part of Solum.

I see code gating as a best practice for CI. As we begin to offer a CI 
experience as a default for Solum, we will want to employ a set of best 
practices to help those who are just starting out, and do not yet have any 
automation of this sort. Gating is well understood by OpenStack developers, but 
it's not well understood by all developers. It's actually pretty tricky to set 
up a complete CI system that does enter+exit gating, integrates with a review 
system, and leverages a build job runner like Jenkins. Because Solum is 
targeting the persona of Application Developers we want an experience that 
will feel natural to them.

I do think there are areas where the current openstack-ci system may be 
streamlined for general purpose use. Gerrit has various user interface quirks, 
for example. We should be willing to contribute to the various projects to 
improve them so they are more pleasant to use.

I'm reluctant to endorse an approach that involves re-creating functionality 
that Zuul already offers. I'd rather just leverage it. If Zuul has more 
features and capabilities than we need, then we should explore the possibility 
of turning off those features for now, and activating them when we are ready 
for them. If there are any tools that are a *better* fit with our long term
vision compared to Zuul, then we should be willing to evaluate those.

I think that if people just want generic ALM, they can stand up Jenkins (or 
some equivalent), and have a set of build scripts that burp out a container 
image and a plan file at the end, and feed that into Solum. If they are Github 
lovers, they can configure the post commit webhook to trigger Solum, and have 
Solum pick up and build the container image, and send it through the CD 
process. Any variety of CI systems from a long list of awesome software vendors 
can fit here.

If they want something more comprehensive, including a full set of open source 
best practices by default, such as entrance and exit gating, hosted code review 
and collaboration, it would be really nice to have a full
Zuul+Nodepool+Jenkins+Gerrit setup with some integration points where they 
could potentially customize it.

 Thoughts?

In summary, I'd prefer that we consider using Zuul from the beginning, and turn 
off any parts we don't need for our early releases. I'm willing to consider
alternatives if they can be shown to be superior.

Thanks,

Adrian

 
 Thanks,
 Devdatta
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Multiple services per floating IP

2014-02-12 Thread Stephen Balukoff
Hello!

This e-mail is concerning the Multiple load balanced services per floating
IP feature:

Sam, I think you said:


On Tue, Feb 11, 2014 at 5:26 AM, Samuel Bercovici samu...@radware.comwrote:


 · Multiple load balanced services per floating IP - You can
 already multiple VIPs using the same IP address for different services.



And Eugene said:

 Multiple load balanced services per floating IP
That is what blueprint 'multiple vips per pool' is trying to address.


Is this blueprint not yet implemented?  When I attempt to create multiple
VIPs using the same IP in my test cluster, I get:

sbalukoff@testbox:~$ neutron lb-vip-create --name test-vip2 --protocol-port
8000 --protocol HTTP --subnet-id a4370142-dc49-4633-9679-ce5366c482f5
--address 10.48.7.7 test-lb2
Unable to complete operation for network
aa370a26-742d-4eb6-a6f3-a8c344c664de. The IP address 10.48.7.7 is in use.

From that, I gathered there was a uniqueness check on the IP address.

If this has been implemented previously, could y'all point me at a working
example showcasing the CLI or API commands used to create multiple services
per floating IP (or just point out to me what I'm doing wrong)?

Regardless of the above:  I think splitting the concept of a 'VIP' into
'instance' and 'listener' objects has a couple other benefits as well:


   - You can continue to do a simple uniqueness check on the IP address, as
   only one instance should have a given IP.

   - The 'instance' object can contain a 'tenant_id' attribute, which means
   that at the model level, we enforce the idea that a given floating IP can
   only be used by one tenant (which is good, from a security perspective).

   - This seems less ambiguous from a terminology perspective. The name
   'VIP' in other contexts means 'virtual IP address', which is the same thing
   as a floating IP, which in other contexts is usually considered to be
   unique to a subset of devices that share the IP (or pass it between them).
   It doesn't necessarily have anything to do with layers 4 and above in the
   OSI model. However, if in the context of Neutron LBaaS, VIP has a
   protocol-port attribute, this means it's no longer just a floating IP:
    It's a floating IP + TCP port (plus other attributes that make sense for a
    TCP service). This feels like Neutron LBaaS is trying to redefine what a
    virtual IP is, and is in any case going to be confusing for newcomers
   expecting it to be one thing when it's actually another.

 Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] healthnmon and devstack

2014-02-12 Thread Paulo Rômulo
Hello everyone,

My name is Paulo, I'm a software engineer from Brazil, just getting
started with OpenStack development using *devstack*. I'm working on a
project in which one of the goals is to monitor resource usage (at first
*cpu_util*), considering both VM instances and their hosts.

In order to get input data, we're already using *ceilometer*. However, we
also want to monitor host machines, not only VMs. I read about
*healthnmon* and it looks like it can provide the data we need. Right now
our problem is enabling *healthnmon* on a *devstack* environment.

I'm using Ubuntu server 12.04. I've tried to build *healthnmon* from git,
but the generated Debian package depends on python-novaclient and
python-glance, which *devstack* already installs. Dpkg doesn't detect the
*devstack* installation of nova and glance, so I'm not sure how I can
enable *healthnmon* services from *devstack* (maybe by editing *stack.sh* by
hand?).

Would any of you have a hint about how I can integrate *healthnmon* with
*devstack*? Any help would be much appreciated.

Thanks in advance and best regards,
--
*Paulo Rômulo Alves Barros*
*Federal University of Campina Grande*
*Campina Grande/Brazil*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread Adrian Otto
Paul,

On Feb 12, 2014, at 3:44 PM, Paul Czarkowski paul.czarkow...@rackspace.com
 wrote:

 
 
 On 2/12/14 5:16 PM, Roshan Agrawal roshan.agra...@rackspace.com wrote:
 
 
 -Original Message-
 From: devdatta kulkarni [mailto:devdatta.kulka...@rackspace.com]
 Sent: Wednesday, February 12, 2014 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Question about Zuul's role in Solum
 
 Hi,
 
 I have been looking at Zuul for last few days and had a question about
 its
 intended role within Solum.
 
 From what I understand, Zuul is a code gating system.
 
 I have been wondering if code gating is something we are considering as
 a
 feature to be provided in Solum? If yes, then Zuul is a perfect fit.
 But if not, then we should discuss what benefits do we gain by using
 Zuul as
 an integral part of Solum.
 
 It feels to me that right now we are treating Zuul as a conduit for
 triggering
 job(s) that would do the following:
 - clone/download source
 - run tests
 - create a deployment unit (DU) if tests pass
 - upload DU to glance
 - trigger the DU deployment workflow
 
 In the language-pack working group we have talked about being able to
 do CI
 on the submitted code and building the DUs only after tests pass.
 Now, there is a distinction between doing CI on merged code vs.
 doing it before code is permanently merged to master/stable branches.
 The latter is what a 'code gating' system does, and Zuul is a perfect
 fit for this.
 For the former though, using a code gating system is not be needed.
 We can achieve the former with an API endpoint, a queue, and a mechanism
 to trigger job(s) that perform above mentioned steps.
 
 I guess it comes down to Solum's vision. If the vision includes
 supporting,
 among other things, code gating to ensure that Solum-managed code is
 never broken, then Zuul is a perfect fit.
 Of course, in that situation we would want to ensure that the gating
 functionality is pluggable so that operators can have a choice of
 whether to
 use Zuul or something else.
 But if the vision is to be that part of an overall application lifecycle
 management flow which deals with creation and scaling of
 DUs/plans/assemblies but not necessarily be a code gate, then we should
 re-
 evaluate Zuul's role as an integral part of Solum.
 
 The question is: is Zuul the right tool for the code deployment workflow
 you outlined above? The code deployment workflow is the higher order
 need. 
 
 The code gating functionality is also useful and potentially something we
 would want to implement in Solum at some point, but the decision criteria
 on what tool we use to implement the code deployment workflow depends on
 how good is Zuul in helping us with the deployment workflow.
 
 Thoughts?
 
 Thanks,
 Devdatta
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 The current proposed workflow for Solum (m1) is shorthanded to be something
 like this
 
 1. User writes code
 2. User pushes to master branch in github
 3. a github hook fires against the solum API.
 4. Solum coordinates the testing/building and deployment of the app
 
 None of this seems overly suitable for zuul to me ( not without a
 significant
 amount of work that will be needed to customize zuul to work for us ), and
 can be easily ( for certain values of easy ) achieved with solum
 forking a thread to do the build ( m1 implementation ? ) or solum
 sending messages to a set of worker nodes watching a queue ( marconi? Post
 m1, pluggable so operator could use their existing jenkins, etc ).
 
 If an enterprise or provider wanted to implement code gating it would slip
 in before option 1,  and would be relatively simple for an operator to
 plug in their existing code gating/review tooling (github
 PRs,CodeCollaborator,Crucible+Bamboo) or set up Gerrit/Zuul system:
 
 1. User writes code
 2. User runs `git review`
 3. Gerrit calls zuul to run automated tests
 4. Core reviewers +2 the code
 5. Gerrit merges code to master branch in github
 6. a github hook fires against the solum API
 7. Solum coordinates the testing/building and deployment of the app

This point of view assumes they already have a CI solution. I speak with a lot 
of folks out there just looking at cloud for the first time, and if you ask 
them whether they are using CI, they look at you with a blank stare, and then ask
something like C-what?. We should be prepared to help Application Developers 
as an entire class, not only the ones that have adopted agile, devops, and know 
something about CI already. 

Cheers,

Adrian

 
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread David Koo

 We could use a separate requirements file for each driver, following a naming
 convention to let installation tools pick up the right file.  For example,
 oslo.messaging might include amqp-requirements.txt, qpid-requirements.txt,
 zmq-requirements.txt, etc.

If we're going to have more than one requirement file then may I propose
something like a requirements.d directory and putting the files in that
directory (and no need for a -requirements suffix)?

requirements.d/
global
amqp
qpid
zmq
...

Somehow seems cleaner.
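
For illustration, an installer could then compose exactly the set it needs by
passing several -r options to pip (the file names above are just examples):

    pip install -r requirements.d/global                          # base install
    pip install -r requirements.d/global -r requirements.d/qpid   # deployment that uses Qpid

pip already accepts multiple -r flags, so tools like the tripleo install
scripts could map enabled backends to the extra files instead of rewriting
requirements.txt on the fly.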

--
Koo

On Wed, 12 Feb 2014 16:42:17 -0500
Doug Hellmann doug.hellm...@dreamhost.com wrote:

 On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  Hi all,
 
  This is an issue that has come up recently in tripleo as we try to
  support more varied configurations.  Currently qpid-python is not
  listed in requirements.txt for many of the OpenStack projects, even
  though they support using Qpid as a messaging broker.  This means
  that when we install from source in tripleo we have to dynamically
  add a line to requirements.txt if we want to use Qpid (we pip
  install -r to handle deps). There seems to be disagreement over the
  correct way to handle this, so Joe requested on my proposed Nova
  change that I raise the issue here.
 
  There's already some discussion on the bug here:
  https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
  Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232
 
  If there's a better alternative to require all the things I'm
  certainly interested to hear it.  I expect we're going to hit this
  more in the future as we add support for other optional backends
  for services and such.
 
 
 We could use a separate requirements file for each driver, following a
 naming convention to let installation tools pick up the right file.
 For example, oslo.messaging might include amqp-requirements.txt,
 qpid-requirements.txt, zmq-requirements.txt, etc.
 
 That would complicate the global requirements sync script, but not
 terribly so.
 
 Thoughts?
 
 Doug
 
 
 
 
  Thanks.
 
  -Ben
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] healthnmon and devstack

2014-02-12 Thread Dean Troyer
On Wed, Feb 12, 2014 at 6:13 PM, Paulo Rômulo p.romu...@gmail.com wrote:

 My name is Paulo, I'm a software engineer from Brazil and just getting
 started with OpenStack development using *devstack*. I'm working on a
 project which one the goals is to monitor resource usage (at first
 *cpu_util*), considering both VM instances and their hosts.


Welcome!


 I'm using Ubuntu server 12.04. I've tried to build *healthnmon* from git,
 but the generated Debian package depends on python-novaclient and
 python-glance, whose *devstack* already installs. Dpkg doesn't detect the
 *devstack* installation of nova and glance, so I'm not sure of how can I
 enable *healthnmon* services from *devstack* (maybe editing *stack.sh* by
 hand?).


DevStack builds everything supplied by OpenStack and Stackforge projects
from source, so anything extra that you want to use with dependencies on
OpenStack projects will also have to be installed in a similar manner.

One approach would be to build a file in devstack/lib to do the install,
configure and startup steps for healthnmon.  You can use lib/template as a
starting point and refer to the other service files for reference.
lib/glance is one of the simpler ones.  You would also need a file in
devstack/extras.d to hook into stack.sh; again, use one of the existing
files as a starting point, they are all very similar.

You will need at a minimum to
- check out the source repo - install_XXX()
- do any configuration - configure_XXX()
- start the service - maybe init_XXX(), definitely start_XXX()
- stop the service - stop_XXX()

The dispatch file in extras.d should start with 80 so it runs last, that
way all of the dependencies that come from the OpenStack repos will be in
place already.

You should be able to just drop these files into an existing DevStack
checkout and go.
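
To make that concrete, here is a rough, untested sketch of what the two files
could look like.  The repo location, variable names and the actual healthnmon
start command below are placeholders/assumptions you would need to adjust; the
function and hook names just follow the existing lib/* and extras.d conventions.

lib/healthnmon (sketch):

    # Settings (repo URL/branch are assumptions, point them at the real source)
    HEALTHNMON_REPO=${HEALTHNMON_REPO:-https://github.com/stackforge/healthnmon.git}
    HEALTHNMON_BRANCH=${HEALTHNMON_BRANCH:-master}
    HEALTHNMON_DIR=$DEST/healthnmon

    function install_healthnmon {
        # check out the source repo and install it alongside DevStack's other python bits
        git_clone $HEALTHNMON_REPO $HEALTHNMON_DIR $HEALTHNMON_BRANCH
        setup_develop $HEALTHNMON_DIR
    }

    function configure_healthnmon {
        # write healthnmon's config here, pointing it at the nova/glance DevStack set up
        :
    }

    function start_healthnmon {
        # replace the command with whatever actually launches the healthnmon service
        screen_it healthnmon "cd $HEALTHNMON_DIR && healthnmon --config-file /etc/healthnmon/healthnmon.conf"
    }

    function stop_healthnmon {
        # kill the screen window started above
        screen -S $SCREEN_NAME -p healthnmon -X kill
    }

extras.d/80-healthnmon.sh (sketch):

    if is_service_enabled healthnmon; then
        if [[ "$1" == "source" ]]; then
            source $TOP_DIR/lib/healthnmon
        elif [[ "$1" == "stack" && "$2" == "install" ]]; then
            install_healthnmon
        elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
            configure_healthnmon
        elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
            start_healthnmon
        fi
        if [[ "$1" == "unstack" ]]; then
            stop_healthnmon
        fi
    fi

Then enable the service (e.g. enable_service healthnmon in localrc) so that
is_service_enabled picks it up.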

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Jesusaurus
On Wed, Feb 12, 2014 at 4:22 PM, David Koo kpublicm...@gmail.com wrote:


 If we're going to have more than one requirement file then may I propose
 something like a requirements.d directory and putting the files in that
 directory (and no need for a -requirements suffix)?

 requirements.d/
 global
 amqp
 qpid
 zmq
 ...

 Somehow seems cleaner.

 --
 Koo


+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Loadbalancer Instance feedback

2014-02-12 Thread Stephen Balukoff
Hi y'all!

I've been reading through the LoadBalancerInstance description as outlined
here and have some feedback:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance

First off, I agree that we need a container object and that the pool
shouldn't be the object root. This container object is going to have some
attributes associated with it which then will apply to all related objects
further down on the chain.  (I'm thinking, for example, that it may make
sense for the loadbalancer to have 'network_id' as an attribute, and the
associated VIPs, pools, etc. will inherit this from the container object.)

One thing that was not clear to me just yet:  Is the 'loadbalancer' object
meant, at least eventually, to be associated with an actual load balancer
device of some kind (be that the neutron node with haproxy, a vendor
appliance or a software appliance)?

If not, then I think we should use a name other than 'Loadbalancer' so we
don't confuse people. I realize I might just be harping on one of the two
truly difficult problems in software engineering (which are: Naming things,
cache invalidation, and off-by-one errors). But if a 'loadbalancer' object
isn't meant to actually be synonymous with a load balancer appliance of
some kind, the object needs a new name.

If the object and the device are meant to essentially be synonymous, then I
think we're starting off too simplistic here, and the model proposed is
going to need another significant revision when we add additional features
later on.  I suspect we'll be painting ourselves into a corner with the
LoadBalancerInstance as proposed. Specifically, I'm thinking about:


   - Operational concerns around the life cycle of a physical piece of
   infrastructure. If we're going to replace a physical load balancer, it
   often makes sense to have both the old and new load balancer defined in the
   system at the same time during the transition. If you then swap all the
   VIPs from the old to the new, suddenly all the child objects have their
   loadbalancer_id changed, which will often wreak havoc on client application
   code (who really shouldn't be hard-coding things like loadbalancer_id, but
   will do so anyway. :P ) Such transitions are much easier accomplished if
   both load balancers can exist within an overarching container object (ie.
   cluster in my proposal) which will never need to be swapped out.

   - Having tenants know about loadbalancer_id (if it corresponds with
   physical hardware) feels inherently un-cloud-like to me. Better that said
   tenants know about the container object (which doesn't actually correspond
   with any single physical piece of infrastructure) and not concern
   themselves with physical hardware.

   - In an active-standby or active-active HA load balancer topology (ie.
   anything other than 'single device' topology), multiple load balancers will
   carry the same configuration, as far as VIPs, Pools, Members, etc. are
   concerned. Therefore, it doesn't make sense for the 'container' object to
   be synonymous with a single device. It might be possible to hide this
   complexity from the model by having HA features exist/exposed only within
   the driver, but this seems like really backward thinking to me: Why
   shouldn't we allow API-based configuration of load balancer cluster
   topology within our model, or force clients to talk to a driver directly
   for these features?  (This is one of the hack-ish work-arounds I alluded to
   in my e-mail from Monday which is both annoying and corrected with a model
   which can accurately reflect the topology we're working with.)


   - When we add HA and auto-scaling, Heat is going to need to be able to
   manipulate these objects in ways that allow it to make decisions about what
   physical or virtual load balancers it is going to spin up / shut down /
   configure, etc. Again, hiding this complexity within the driver seems like
   the wrong way to go.

   - Side note: It's possible to still have drivers / load balancer
   appliances which do not support certain types of HA or auto-scaling
   topologies. In this case, it probably makes sense to add some kind of
   'capabilities' list that the driver publishes to the lbaas daemon when it's
   loaded.

So, I won't elaborate on the model I would propose we use instead of the
above, since I've already done so before. I'll just say that what we're
trying to solve with the LoadBalancerInstance resource in this proposal can
also be solved with the 'cluster' and 'loadbalancer' resources in the model
I've proposed, and my proposal is also capable of supporting HA and
auto-scaling topologies without further significant model changes.

Beyond this, other feedback I have:


   - I don't recommend having loadbalancer_id added to the pool, member,
   and healthmonitor objects. I see no reason a pool (and its child objects)
   needs to be restricted to a single load balancer (and it may be very
   advantageous in large clusters 

Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread Angus Salkeld
Excuse top post (web mail :()

Can't we change our proposed workflow to match what Gerrit/Zuul are
doing? So endorse the gating workflow as the official solum workflow.

Pros:
- we just use the current Gerrit/Zuul as-is.
- great gating system for little effort
- lots of people already helping maintain it.
- maybe infra can use solum at some point

Cons:
- slightly more complex workflow (we could have an option to
   autoapprove?)

I am not sure if the flexibility of the plan matches the reality of
gerrit/zuul.
1 just run an assembly
   (review and test are a noop, straight to merge/and run)
2 build an image and run an assembly
3 build an image, test then run an assembly
4 review, build an image, test then run an assembly

So can we short cut the workflow of our current Gerrit/Zuul to do
that?

-Angus


From: Paul Czarkowski [paul.czarkow...@rackspace.com]
Sent: Thursday, February 13, 2014 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

On 2/12/14 5:16 PM, Roshan Agrawal roshan.agra...@rackspace.com wrote:


 -Original Message-
 From: devdatta kulkarni [mailto:devdatta.kulka...@rackspace.com]
 Sent: Wednesday, February 12, 2014 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Question about Zuul's role in Solum

 Hi,

 I have been looking at Zuul for last few days and had a question about
its
 intended role within Solum.

 From what I understand, Zuul is a code gating system.

 I have been wondering if code gating is something we are considering as
a
 feature to be provided in Solum? If yes, then Zuul is a perfect fit.
 But if not, then we should discuss what benefits do we gain by using
Zuul as
 an integral part of Solum.

 It feels to me that right now we are treating Zuul as a conduit for
triggering
 job(s) that would do the following:
 - clone/download source
 - run tests
 - create a deployment unit (DU) if tests pass
 - upload DU to glance
 - trigger the DU deployment workflow

 In the language-pack working group we have talked about being able to
do CI
 on the submitted code and building the DUs only after tests pass.
 Now, there is a distinction between doing CI on merged code vs.
 doing it before code is permanently merged to master/stable branches.
 The latter is what a 'code gating' system does, and Zuul is a perfect
fit for this.
 For the former though, using a code gating system is not be needed.
 We can achieve the former with an API endpoint, a queue, and a mechanism
 to trigger job(s) that perform above mentioned steps.

 I guess it comes down to Solum's vision. If the vision includes
supporting,
 among other things, code gating to ensure that Solum-managed code is
 never broken, then Zuul is a perfect fit.
 Of course, in that situation we would want to ensure that the gating
 functionality is pluggable so that operators can have a choice of
whether to
 use Zuul or something else.
 But if the vision is to be that part of an overall application lifecycle
 management flow which deals with creation and scaling of
 DUs/plans/assemblies but not necessarily be a code gate, then we should
re-
 evaluate Zuul's role as an integral part of Solum.

The question is: is Zuul the right tool for the code deployment workflow
you outlined above? The code deployment workflow is the higher order
need.

The code gating functionality is also useful and potentially something we
would want to implement in Solum at some point, but the decision criteria
on what tool we use to implement the code deployment workflow depends on
how good is Zuul in helping us with the deployment workflow.

 Thoughts?

 Thanks,
 Devdatta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The current proposed workflow for Solum (m1) is shorthanded to be something
like this

1. User writes code
2. User pushes to master branch in github
3. a github hook fires against the solum API.
4. Solum coordinates the testing/building and deployment of the app

None of this seems overly suitable for zuul to me ( not without a
significant
amount of work that will be needed to customize zuul to work for us ), and
can be easily ( for certain values of easy ) achieved with solum
forking a thread to do the build ( m1 implementation ? ) or solum
sending messages to a set of worker nodes watching a queue ( marconi? Post
m1, pluggable so operator could use their existing jenkins, etc ).

If an enterprise or provider wanted to implement code gating it would slip
in before option 1,  and would be relatively simple for an operator to
plug in their existing 

[openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-12 Thread Jay Lau
Greetings,

I am now doing some integration work with the VMware VCDriver and have some
questions that came up during the integration.

1) At the Hong Kong Summit, it was mentioned that the ESXDriver will be dropped, so
do we have any plan for when to drop this driver?
2) There are many good features in VMware that are not supported by the VCDriver,
such as live migration, cold migration and resize within one vSphere
cluster; also, we cannot get individual ESX Server details via the VCDriver.

Do we have any plans to make those features work?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] review days

2014-02-12 Thread Devananda van der Veen
On Wed, Feb 12, 2014 at 2:18 PM, Maksym Lobur mlo...@mirantis.com wrote:

 Also, I think 3 hours might be too much

I'm happy to start with 2 hours and see how it goes.


On Wed, Feb 12, 2014 at 2:35 PM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 Since there are two core-subteams located in different time zones I
 propose making two sessions:
 - One in the morning in Europe. Let's say at 10 GMT
 - One in the morning in the US at a convenient time.


We don't exactly have two teams split by timezone... we have core members
in GMT -8 +0 +2 +12, +/- 1hr for DST

Regardless of that, I would rather not split this up. Without some overlap
between US and EU, it may be hard to land much during these code jams. If
only two cores are present, neither one of them can land a new patch
without +2'ing their own work, which we shouldn't do. The point of this is
to be able to rapidly iterate on fixing bugs, and a lot of the important
bug fixes have been proposed by core team members already, so we need
a minimum of 3 cores present.

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Unit Tests for Cinder and Swift

2014-02-12 Thread Masayuki Igawa
Hi Oscar,

I'm sorry for my delayed reply.
I think this is a development topic. I added openstack-dev ML to CC this
time.

Actually, I'm not familiar with Swift unit tests. So I just CCed and I'm
expecting answers from Swift people.

-- Masayuki Igawa
 Thank you Masayuki for everything. I would like to help the OpenStack QA
community by writing the unit tests for the Swift module (rings, object,
container, accounts and replication). What is the next step?



*Oscar Correia*
CT
www.lettersvitae.com
Skype: oscar.correia
Facebook: Letters Vitae
E-Mail: oscar_corre...@hotmail.com
oscarmo...@lettersvitae.com

Regis Regum servitium


 Date: Tue, 4 Feb 2014 09:28:08 +0900
 Subject: Re: [openstack-qa] Unit Tests for Cinder and Swift
 From: masayuki.ig...@gmail.com
 To: openst...@lists.openstack.org; oscar_corre...@hotmail.com
 CC: openstack...@lists.openstack.org

 Hi Oscar,

 As I already replied to you in another mail[1], unfortunately, the
 openstack...@lists.openstack.org has been deprecated.
 And for asking about usages, you send things to
 openst...@lists.openstack.org or access to https://ask.openstack.org/
 .

 [1]
http://lists.openstack.org/pipermail/openstack/2014-February/005070.html

 On Sun, Feb 2, 2014 at 11:45 PM, Oscar Jr oscar_corre...@hotmail.com
wrote:
  Hi Jenkins, where I can get full Unit tests for Cinder and Swift module?

 You can get them from their source code:
 http://git.openstack.org/cgit/openstack/cinder/tree/cinder/tests
 http://git.openstack.org/cgit/openstack/swift/tree/test/

 Thanks.


 
 
 
  Oscar Correia
  CT
  www.lettersvitae.com
  Skype: oscar.correia
  Facebook: Letters Vitae
  E-Mail: oscar_corre...@hotmail.com
  oscarmo...@lettersvitae.com
 
  Regis Regum servitium
 

 -- Masayuki Igawa
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Joshua Harlow
In taskflow we've done something like the following.

It's not perfect, but it is what currently works (willing to take suggestions on
something better).

We have three different requirements files.

1. Required to work in any manner @ 
https://github.com/openstack/taskflow/blob/master/requirements.txt
2. Optionally required to work (depending on features used) @ 
https://github.com/openstack/taskflow/blob/master/optional-requirements.txt
3. Test requirements (only for testing) @ 
https://github.com/openstack/taskflow/blob/master/test-requirements.txt

Most of the reason for #2 there is plugins that taskflow can use (but which are 
not required for all users). Ideally there would be a more dynamic way to list 
the requirements of libraries and projects, one that actually changes depending 
on the features used (or desired to be used). In a way, you could imagine a 
function taking in a list of desired features (qpid vs. rabbit could be one such 
feature) and returning the list of requirements needed to work with that feature 
set. Splitting it up into these separate files does something similar (except 
using static files instead of a function to determine this). I'm not sure if 
anything exists (or is planned) for making this situation better in Python (it 
would be nice if there was a way).
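
To make that idea concrete, here is a toy sketch of such a function; nothing in 
it is an existing taskflow or pbr API, and the package names/pins are only 
illustrative:

    # Toy sketch of "requirements as a function of desired features".
    BASE_REQUIREMENTS = ['six>=1.4.1']

    FEATURE_REQUIREMENTS = {
        'rabbit': ['kombu>=2.4.8'],
        'qpid': ['qpid-python'],
        'sqlalchemy-persistence': ['SQLAlchemy>=0.7.99,<=0.9.99',
                                   'alembic>=0.4.1'],
    }


    def requirements_for(features):
        """Return the requirement strings needed for the given feature set."""
        requirements = list(BASE_REQUIREMENTS)
        for feature in features:
            requirements.extend(FEATURE_REQUIREMENTS.get(feature, []))
        return requirements


    # e.g. requirements_for(['qpid']) -> ['six>=1.4.1', 'qpid-python']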

-Josh

From: Doug Hellmann doug.hellm...@dreamhost.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, February 12, 2014 at 1:42 PM
To: Ben Nemec openst...@nemebean.com, OpenStack Development Mailing List 
(not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [All]Optional dependencies and requirements.txt




On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com wrote:
Hi all,

This is an issue that has come up recently in tripleo as we try to support more 
varied configurations.  Currently qpid-python is not listed in requirements.txt 
for many of the OpenStack projects, even though they support using Qpid as a 
messaging broker.  This means that when we install from source in tripleo we 
have to dynamically add a line to requirements.txt if we want to use Qpid (we 
pip install -r to handle deps).  There seems to be disagreement over the 
correct way to handle this, so Joe requested on my proposed Nova change that I 
raise the issue here.

There's already some discussion on the bug here: 
https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate Neutron bug 
here: https://bugs.launchpad.net/neutron/+bug/1225232

If there's a better alternative to require all the things I'm certainly 
interested to hear it.  I expect we're going to hit this more in the future as 
we add support for other optional backends for services and such.

We could use a separate requirements file for each driver, following a naming 
convention to let installation tools pick up the right file. For example, 
oslo.messaging might include amqp-requirements.txt, qpid-requirements.txt, 
zmq-requirements.txt, etc.

That would complicate the global requirements sync script, but not terribly so.

Thoughts?

Doug



Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-12 Thread Noorul Islam K M
devdatta kulkarni devdatta.kulka...@rackspace.com writes:

 Hi,

 I have been looking at Zuul for the last few days and had a question
 about its intended role within Solum.

 From what I understand, Zuul is a code gating system.

 I have been wondering if code gating is something we are considering as a 
 feature
 to be provided in Solum? If yes, then Zuul is a perfect fit.
 But if not, then we should discuss what benefits do we gain by using Zuul
 as an integral part of Solum.

 It feels to me that right now we are treating Zuul as a conduit for 
 triggering job(s)
 that would do the following:
 - clone/download source
 - run tests
 - create a deployment unit (DU) if tests pass
 - upload DU to glance
 - trigger the DU deployment workflow

 In the language-pack working group we have talked about being able to do
 CI on the submitted code and building the DUs only after tests pass. 
 Now, there is a distinction between doing CI on merged code vs.
 doing it before code is permanently merged to master/stable branches.
 The latter is what a 'code gating' system does, and Zuul is a perfect fit for 
 this.
 For the former though, using a code gating system is not needed.
 We can achieve the former with an API endpoint, a queue,
 and a mechanism to trigger job(s) that perform above mentioned steps.

 I guess it comes down to Solum's vision. If the vision includes supporting, 
 among other things, code gating
 to ensure that Solum-managed code is never broken, then Zuul is a perfect fit.
 Of course, in that situation we would want to ensure that the gating 
 functionality is pluggable
 so that operators can have a choice of whether to use Zuul or something else.
 But if the vision is to be part of an overall application lifecycle 
 management flow which deals with creation and scaling of 
 DUs/plans/assemblies, but not necessarily to be a code gate, then we should 
 re-evaluate Zuul's role as an integral part of Solum.

 Thoughts?


Is Zuul tightly coupled with Launchpad? I see that most of the
information that it displays comes from Launchpad.

If it is, is it a good idea to force Launchpad on users?

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] modify_image_attribute() in ec2_api is broken in Nova

2014-02-12 Thread wu jiang
Hi Vish, thanks for your reply.

I've already registered it on launchpad:
https://bugs.launchpad.net/nova/+bug/1272844


Thanks~


On Thu, Feb 13, 2014 at 2:18 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 This looks like a bug to me. It would be great if you could report it on
 launchpad.

 Vish

 On Feb 11, 2014, at 7:49 PM, wu jiang win...@gmail.com wrote:

 Hi all,

  I met some problems when testing an EC2 API, 'modify_image_attribute()', in
  Nova.
  I found that the params sent to Nova, which follow the AWS API format, do not
  match the function signature in Nova.
 I logged it in launchpad: https://bugs.launchpad.net/nova/+bug/1272844

 -

 1. Here is the definition part of modify_image_attribute():

 def modify_image_attribute(
 self, context, image_id, attribute, operation_type, **kwargs)

 2. And here is the example of it in AWS api:


 https://ec2.amazonaws.com/?Action=ModifyImageAttributeImageId=ami-61a54008LaunchPermission.Remove.1.UserId=

 -

  3. You can see the values don't match the definition in the Nova
  code.
 Therefore, Nova will raise the exception like this:

 TypeError: 'modify_image_attribute() takes exactly 5 non-keyword
 arguments (3 given)'

  4. I printed out the params sent to Nova via eucaTools.
 The results also validate the conclusions above:

  args={'launch_permission': {'add': {'1': {'group': u'all'}}},
 'image_id': u'ami-0004'}

 --

  So, is this API correct? Should we modify it to match the
  format of the AWS API?
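
  For illustration only, here is a hedged sketch (not the actual Nova fix) of a
  signature that would at least accept the dict-shaped parameters the EC2 query
  parser produces, as dumped in point 4 above:

    # Sketch only: accept the nested dict instead of failing on the missing
    # positional 'attribute'/'operation_type' arguments (method sketch, hence self).
    def modify_image_attribute(self, context, image_id, attribute=None,
                               operation_type=None, **kwargs):
        launch_permission = kwargs.get('launch_permission', {})
        make_public = any(entry.get('group') == 'all'
                          for entry in launch_permission.get('add', {}).values())
        make_private = any(entry.get('group') == 'all'
                           for entry in launch_permission.get('remove', {}).values())
        # ... the rest would map make_public/make_private onto the same image
        # service update that the existing attribute/operation_type path uses ...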


 Best Wishes,
 wingwj
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread David Koo

 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would be
 more dynamic way to list requirements of libraries and projects, one
 that actually changes depends on the features used/desired to be used.

Disclaimer: Not a pip/setuptools developer - only a user.

From the pip and setuptools docs, the extras feature[1] of these tools
seems to do what you want:

SQLAlchemy>=0.7.99,<=0.9.99 [SQLAlchemyPersistence]
alembic>=0.4.1 [SQLAlchemyPersistence]
...

[1] http://www.pip-installer.org/en/1.1/requirements.html
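
For completeness, here is a small setup.py sketch of how such optional feature
sets can be declared as setuptools extras (the package name, versions and extra
names below are illustrative, not taskflow's actual metadata):

    # setup.py sketch (illustrative only): optional features as extras, so a
    # user can opt in per feature, e.g.
    #   pip install taskflow[SQLAlchemyPersistence]
    import setuptools

    setuptools.setup(
        name='taskflow',
        version='0.0.0',  # placeholder
        packages=setuptools.find_packages(),
        install_requires=[
            'six>=1.4.1',  # always required (illustrative pin)
        ],
        extras_require={
            'SQLAlchemyPersistence': [
                'SQLAlchemy>=0.7.99,<=0.9.99',
                'alembic>=0.4.1',
            ],
            'ZooKeeper': [
                'kazoo>=1.3.1',
            ],
        },
    )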

--
Koo

On Thu, 13 Feb 2014 02:37:44 +
Joshua Harlow harlo...@yahoo-inc.com wrote:

 In taskflow we've done something like the following.
 
 Its not perfect, but it is what currently works (willing to take
 suggestions on better).
 
 We have three different requirements files.
 
 1. Required to work in any manner @
 https://github.com/openstack/taskflow/blob/master/requirements.txt 2.
 Optionally required to work (depending on features used) @
 https://github.com/openstack/taskflow/blob/master/optional-requirements.txt
 3. Test requirements (only for testing) @
 https://github.com/openstack/taskflow/blob/master/test-requirements.txt
 
 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would
 be more dynamic way to list requirements of libraries and projects,
 one that actually changes depends on the features used/desired to be
 used. In a way u could imagine a function taking in a list of desired
 features (qpid vs rabbit could be one such feature) and that function
 would return a list of requirements to work with this feature set.
 Splitting it up into these separate files is doing something similar
 (except using static files instead of just a function to determine
 this). I'm not sure if anything existing (or is planned) for making
 this situation better in python (it would be nice if there was a way).
 
 -Josh
 
 From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, February 12, 2014 at 1:42 PM To: Ben Nemec
 openst...@nemebean.com, OpenStack Development Mailing List (not for
 usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [All]Optional dependencies and
 requirements.txt
 
 
 
 
 On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com wrote: Hi all,
 
 This is an issue that has come up recently in tripleo as we try to
 support more varied configurations.  Currently qpid-python is not
 listed in requirements.txt for many of the OpenStack projects, even
 though they support using Qpid as a messaging broker.  This means
 that when we install from source in tripleo we have to dynamically
 add a line to requirements.txt if we want to use Qpid (we pip install
 -r to handle deps).  There seems to be disagreement over the correct
 way to handle this, so Joe requested on my proposed Nova change that
 I raise the issue here.
 
 There's already some discussion on the bug here:
 https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
 Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232
 
 If there's a better alternative to require all the things I'm
 certainly interested to hear it.  I expect we're going to hit this
 more in the future as we add support for other optional backends for
 services and such.
 
 We could use a separate requirements file for each driver, following
 a naming convention to let installation tools pick up the right file.
 For example, oslo.messaging might include amqp-requirements.txt,
 qpid-requirements.txt, zmq-requirements.txt, etc.
 
 That would complicate the global requirements sync script, but not
 terribly so.
 
 Thoughts?
 
 Doug
 
 
 
 Thanks.
 
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] ARP Proxy in l2-population Mechanism Driver for OVS

2014-02-12 Thread Nick Ma
Hi all,

I'm running an OpenStack Havana cloud in a pre-production stage using
Neutron ML2 VXLAN. I'd like to incorporate l2-population to get rid of
tunnel broadcasts.

However, it seems that the ARP proxy has NOT been implemented yet for Open
vSwitch in Havana or in the latest master branch.

I found that the ebtables arpreply target can do it, combined with some
corresponding flow rules in OVS.

Could anyone provide more hints on how to implement it in l2-pop?
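
For what it's worth, here is a hedged sketch of the kind of OpenFlow rule such
an ARP responder could install on the tunnel bridge, per (VLAN, IP, MAC) entry
learned from l2-pop. The table number, helper name and exact action string
below are assumptions for illustration, not a merged implementation; 'br' is
assumed to expose add_flow() the way neutron's OVSBridge wrapper does:

    import netaddr

    ARP_RESPONDER_TABLE = 21  # assumed table id

    ARP_REPLY_ACTIONS = (
        'move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'  # reply goes back to the asker
        'mod_dl_src:%(mac)s,'                       # source MAC = the answered MAC
        'load:0x2->NXM_OF_ARP_OP[],'                # ARP opcode 2 = reply
        'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
        'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
        'load:%(mac_hex)#x->NXM_NX_ARP_SHA[],'
        'load:%(ip_hex)#x->NXM_OF_ARP_SPA[],'
        'in_port'                                   # out the port it came in on
    )


    def install_arp_responder(br, local_vlan, ip, mac):
        # br: bridge wrapper exposing add_flow(), e.g. neutron's OVSBridge
        br.add_flow(table=ARP_RESPONDER_TABLE,
                    priority=1,
                    proto='arp',
                    dl_vlan=local_vlan,
                    nw_dst='%s' % ip,  # for ARP, nw_dst matches the target IP
                    actions=ARP_REPLY_ACTIONS % {
                        'mac': netaddr.EUI(mac, dialect=netaddr.mac_unix),
                        'mac_hex': int(netaddr.EUI(mac)),
                        'ip_hex': int(netaddr.IPAddress(ip)),
                    })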

thanks,

-- 

Nick Ma
skywalker.n...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-02-12 Thread Joshua Harlow
Interesting, is that supported in pbr (since that is what's being used in
this situation, at least for requirements)?

-Original Message-
From: David Koo kpublicm...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, February 12, 2014 at 7:35 PM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [All]Optional dependencies and
requirements.txt


 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would be
 more dynamic way to list requirements of libraries and projects, one
 that actually changes depends on the features used/desired to be used.

Disclaimer: Not a pip/setuptools developer - only a user.

From the pip and setuptools docs, the extras feature[1] of these tools
seems to do what you want:

SQLAlchemy>=0.7.99,<=0.9.99 [SQLAlchemyPersistence]
alembic>=0.4.1 [SQLAlchemyPersistence]
...

[1] http://www.pip-installer.org/en/1.1/requirements.html

--
Koo

On Thu, 13 Feb 2014 02:37:44 +
Joshua Harlow harlo...@yahoo-inc.com wrote:

 In taskflow we've done something like the following.
 
 Its not perfect, but it is what currently works (willing to take
 suggestions on better).
 
 We have three different requirements files.
 
 1. Required to work in any manner @
 https://github.com/openstack/taskflow/blob/master/requirements.txt 2.
 Optionally required to work (depending on features used) @
 https://github.com/openstack/taskflow/blob/master/optional-requirements.txt
 3. Test requirements (only for testing) @
 https://github.com/openstack/taskflow/blob/master/test-requirements.txt
 
 Most of the reason for the #2 there is for plugins that taskflow can
 use (but which are not required for all users). Ideally there would
 be more dynamic way to list requirements of libraries and projects,
 one that actually changes depends on the features used/desired to be
 used. In a way u could imagine a function taking in a list of desired
 features (qpid vs rabbit could be one such feature) and that function
 would return a list of requirements to work with this feature set.
 Splitting it up into these separate files is doing something similar
 (except using static files instead of just a function to determine
 this). I'm not sure if anything existing (or is planned) for making
 this situation better in python (it would be nice if there was a way).
 
 -Josh
 
 From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, February 12, 2014 at 1:42 PM To: Ben Nemec
 openst...@nemebean.com, OpenStack Development Mailing List (not for
 usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [All]Optional dependencies and
 requirements.txt
 
 
 
 
 On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com wrote: Hi all,
 
 This is an issue that has come up recently in tripleo as we try to
 support more varied configurations.  Currently qpid-python is not
 listed in requirements.txt for many of the OpenStack projects, even
 though they support using Qpid as a messaging broker.  This means
 that when we install from source in tripleo we have to dynamically
 add a line to requirements.txt if we want to use Qpid (we pip install
 -r to handle deps).  There seems to be disagreement over the correct
 way to handle this, so Joe requested on my proposed Nova change that
 I raise the issue here.
 
 There's already some discussion on the bug here:
 https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
 Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232
 
 If there's a better alternative to require all the things I'm
 certainly interested to hear it.  I expect we're going to hit this
 more in the future as we add support for other optional backends for
 services and such.
 
 We could use a separate requirements file for each driver, following
 a naming convention to let installation tools pick up the right file.
 For example, oslo.messaging might include amqp-requirements.txt,
 qpid-requirements.txt, zmq-requirements.txt, etc.
 
 That would complicate the global requirements sync script, but not
 terribly so.
 
 Thoughts?
 
 Doug
 
 
 
 Thanks.
 
 -Ben
 
 ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


