Re: [openstack-dev] [Heat] Live update of openstack

2014-08-20 Thread Angus Salkeld
On Tue, Aug 19, 2014 at 11:12 PM, Sergey Kraynev skray...@mirantis.com
wrote:

 Added tag Heat

 Regards,
 Sergey.


 On 1 August 2014 09:52, Manickam, Kanagaraj kanagaraj.manic...@hp.com
 wrote:

  Hi,



 This mail is generic to all OpenStack services; the problem is explained here
 with respect to Heat.



 I have come across a situation where updates are involved in the Heat
 db model, with corresponding code changes in the heat-engine component. Now
 assume that in a given deployment there are two heat-engines and the customer
 wants to update as follows:

 1.   Run db sync and update the first heat-engine and restart the
 heat-engine

 2.   Don’t touch the second heat-engine.



 In this scenario, the first heat-engine will work properly as it is updated
 with the new db change done by the db sync. But the second heat-engine is not
 updated and it will fail. To address this problem, I would like to know whether
 any other service has come across the same scenario and a solution is already
 provided? Thanks.


Nova has a conductor that is the only thing talking to the actual DB
(afaik), so the other daemons can use versioned RPC to talk to the conductor.
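
For illustration, a minimal sketch of what a version-capped RPC client could
look like (class, method and topic names here are hypothetical, loosely
modelled on the conductor pattern rather than Nova's actual code), so that old
and new engines can coexist during a rolling upgrade:

    # Hypothetical sketch of version-capped RPC in the conductor style;
    # names and signatures are illustrative only, not Nova's or Heat's.
    from oslo import messaging

    class ConductorAPI(object):
        """Client side: engines call the conductor instead of the DB."""

        def __init__(self, transport, version_cap='1.0'):
            target = messaging.Target(topic='conductor', version='1.0')
            self.client = messaging.RPCClient(transport, target,
                                              version_cap=version_cap)

        def stack_update(self, ctxt, stack_id, new_field=None):
            # Only use the newer message format if the deployment-wide cap
            # says every peer understands it.
            if self.client.can_send_version('1.1'):
                cctxt = self.client.prepare(version='1.1')
                return cctxt.call(ctxt, 'stack_update',
                                  stack_id=stack_id, new_field=new_field)
            cctxt = self.client.prepare(version='1.0')
            return cctxt.call(ctxt, 'stack_update', stack_id=stack_id)

The idea is that the version cap is only raised once every node has been
upgraded, so a half-upgraded deployment keeps working.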

-Angus





 Regards

 Kanagaraj M



Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-20 Thread Kevin Benton
Vendors can even implement their own private extension without any change
to ML2 by defining their customized vif-detail fields.

I'm not sure this is a good thing. What happens when 3 different vendors
all implement the same attribute with the same name with different
behavior? Since the API is no longer standard even with the reference core
plugin, it fragments the clients. Each vendor will need to write its own
neutron client changes, GUIs, etc.

If the ML2 vif-details structure is going to become a dumping ground for
anything, why even store things there in the first place? Since everything
will require custom clients, the port ID can just be used as a foreign key
to another API instead and the ML2 objects don't need to change at all.
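
To make the concern concrete, a tiny hypothetical illustration of the two
options being contrasted (key names and values are invented for the example):

    # Option A: vendor attributes mixed into ML2's vif-details. Two vendors
    # can ship the same key with different semantics, so clients must know
    # which backend they are talking to.
    vif_details_vendor_a = {'qos_rate': 100}      # vendor A: Mbit/s
    vif_details_vendor_b = {'qos_rate': 100000}   # vendor B: kbit/s

    # Option B: leave ML2 untouched and let a separate service API reference
    # the port by ID, so each service owns its own resources and schema.
    qos_binding = {'port_id': 'a1b2c3d4-port-uuid',
                   'policy_id': 'e5f6a7b8-policy-uuid'}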


On Tue, Aug 19, 2014 at 6:11 PM, Wuhongning wuhongn...@huawei.com wrote:

  +1 to service plugin

  It's better to strip service-related extensions from the ML2 core plugin as
 much as we can, and put them in separate service plugins. Not only QoS,
 but also SG or possibly other extensions. For the binding issue, the
 vif-detail dict might be used for foreign key association.

  Whenever a service is added, a new key type could be defined in the vif-detail
 dict to associate with the service object uuid from this new plugin. Vendors
 can even implement their own private extensions without any change to ML2 by
 defining their customized vif-detail fields. Not only port, but
 network/subnet should also add such meta dict fields in their attributes;
 flexible foreign key support has been absent for a long time on these
 ML2 core resources.

  In the previous GBP discussion, I've also suggested a similar idea. If we
 have a clean boundary between the ML2 core plugin and service plugins, the
 argumentative EP/EPG or renamed PT/PG resource objects could be eliminated
 even if GBP is in Neutron, because we can apply service contract group objects
 directly onto existing port/network/subnet resources by foreign key
 association binding, without reinventing these overlapping concepts.

  --
 *From:* Salvatore Orlando [sorla...@nicira.com]
 *Sent:* Wednesday, August 20, 2014 6:12 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][QoS] Request to be considered
 for neutron-incubator

   In the current approach QoS support is being hardwired into ML2.

   Maybe this is not the best way of doing it, as perhaps it will end up
  requiring that every mech driver which enforces VIF configuration support it.
  I see two routes. One is a mechanism driver similar to l2-pop, and then
  you might have a look at the proposed extension framework (and participate
  in the discussion).
  The other is doing a service plugin. Still, we'll have to solve how to
  implement the binding between a port/network and the QoS entity. If we go
  for the approach we've chosen so far (the resource extension model) you still
  have to deal with ML2 extensions. But I like orthogonality in services, and
  QoS is a service to me.
 Another arguable point is that we might want to reconsider our
 abuse^H^H^H^H^H use of resource attribute extension, but this is a story
 for a different thread.

   Regarding the incubator request, I think we need to wait for the process
  to be blessed. But you have my support and I would be happy to assist
  with this work item through its process towards graduation.

  This obviously provided the QoS team wants me to do that!

  Salvatore


 On 19 August 2014 23:15, Alan Kavanagh alan.kavan...@ericsson.com wrote:

  +1, I am hoping this is just a short-term holding point and this will
  eventually be merged into the main branch, as this is a feature a lot of
  companies, us included, would definitely benefit from having supported. Many
  thanks to Sean for sticking with this and continuing to push it.
 /Alan

 -Original Message-
 From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
 Sent: August-19-14 8:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][QoS] Request to be considered for
 neutron-incubator

 Hi,

 The QoS API extension has lived in Gerrit/been in review for about a
 year. It's gone through revisions, summit design sessions, and for a little
 while, a subteam.

 I would like to request incubation in the upcoming incubator, so that the
 code will have a more permanent home where we can collaborate and improve.
 --
 Sean M. Collins

Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Joe Gordon
On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com wrote:

  -Original Message-
  From: Nikola Đipanov [mailto:ndipa...@redhat.com]
  Sent: 19 August 2014 17:50
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible
Resource
  Tracking
 
  On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
   On the other hand, ERT discussion is decoupled from the scheduler
   split discussion and will be delayed until Extensible Resource Tracker
   owner (Paul Murray) is back from vacation.
   In the mean time, we're considering new patches using ERT as
   non-acceptable, at least until a decision is made about ERT.
  
 
  Even though this was not officially agreed I think this is the least we
can do
  under the circumstances.
 
  A reminder that a revert proposal is up for review still, and I
consider it fair
  game to approve, although it would be great if we could hear from Paul
first:
 
https://review.openstack.org/115218

 Given the general consensus seemed to be to wait a while before deciding
what to do here, isn't putting the revert patch up for approval a tad
premature?

 The RT may not be able to cope with all of the new and more complex
resource types we're now trying to schedule, and so it's not surprising
that the ERT can't fix that.  It does however address some specific use
cases that the current RT can't cope with, the spec had a pretty thorough
review under the new process, and it was discussed during the last 2 design
summits.   It worries me that we're continually failing to make even small
and useful progress in this area.

 Sylvain's approach of leaving the ERT in place so it can be used for the
use cases it was designed for, while holding back on doing some of the more
complex things that might need either further work in the ERT or some more
fundamental work in the RT (which feels like L or M timescales based on
current progress), seemed pretty pragmatic to me.

++, I really don't like the idea of rushing the revert of a feature that
went through significant design discussion especially when the author is
away and cannot defend it.


 Phil



Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Angus Lees
On Mon, 18 Aug 2014 10:05:28 PM Pádraig Brady wrote:
 On 08/18/2014 03:38 PM, Julien Danjou wrote:
  On Thu, Aug 14 2014, Yuriy Taraday wrote:
  
  Hi Yuriy,
  
  […]
  
  Looking forward to your opinions.
  
  This looks like a good summary of the situation.
  
  I've added a solution E based on pthread, but didn't get very far about
  it for now.
 
 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.
 
 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.
 
 Pádraig.

I added a C2 (fcntl locks on byte ranges) option to the etherpad that I
believe addresses all the issues raised. I must be missing something
because it seems too obvious to have not been considered before :/
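
For reference, a minimal sketch of the C2 idea (not the oslo implementation;
the lock file path and hashing scheme are assumptions): many named locks
multiplexed onto byte ranges of a single file, using fcntl record locks that
the kernel releases automatically if the holder dies.

    # Sketch only: named locks mapped onto byte ranges of one lock file.
    import fcntl
    import hashlib
    import os
    from contextlib import contextmanager

    LOCK_FILE = "/var/lock/myservice.locks"  # hypothetical path

    def _offset_for(name, slots=2 ** 20):
        # Map the lock name onto one byte of the file; a collision just means
        # two names share a lock, which is safe if overly conservative.
        return int(hashlib.sha256(name.encode()).hexdigest(), 16) % slots

    @contextmanager
    def named_lock(name):
        fd = os.open(LOCK_FILE, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            # Exclusively lock a single byte at the name-derived offset.
            fcntl.lockf(fd, fcntl.LOCK_EX, 1, _offset_for(name), os.SEEK_SET)
            yield
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, 1, _offset_for(name), os.SEEK_SET)
            os.close(fd)

    # Usage: with named_lock("image-cache"): ...critical section...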

-- 
 - Gus



Re: [openstack-dev] [all][oslo] official recommendations to handle oslo-incubator sync requests

2014-08-20 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 [...]
 I hope I've described the existing oral tradition correctly. Please
 comment on that, and if we're ok with the way it's written above, I'd
 like to update our wiki pages ([1] and [2]) with that.

That matches my version of the oral tradition.

One point to note is that we should refrain from doing gratuitous
oslo-incubator syncs, since the process is so painful :) Once oslo
libraries graduate, they are handled through usual dependency version
updates, which are not really encouraged in stable either. So this
process should IMHO really be limited to critical bugs and security issues.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [oslo][db] Nominating Mike Bayer for the oslo.db core reviewers team

2014-08-20 Thread Victor Sergeyev
Sure, here is my +1

Folks, it looks like there are no objections to this proposal.

Welcome to the team, Mike!

On Tue, Aug 19, 2014 at 12:02 PM, Flavio Percoco fla...@redhat.com wrote:

 On 08/15/2014 10:26 PM, Doug Hellmann wrote:
 
  On Aug 15, 2014, at 10:00 AM, Ben Nemec openst...@nemebean.com wrote:
 
  On 08/15/2014 08:20 AM, Russell Bryant wrote:
  On 08/15/2014 09:13 AM, Jay Pipes wrote:
  On 08/15/2014 04:21 AM, Roman Podoliaka wrote:
  Hi Oslo team,
 
  I propose that we add Mike Bayer (zzzeek) to the oslo.db core
  reviewers team.
 
  Mike is an author of SQLAlchemy, Alembic, Mako Templates and some
  other stuff we use in OpenStack. Mike has been working on OpenStack
  for a few months contributing a lot of good patches and code reviews
  to oslo.db [1]. He has also been revising the db patterns in our
  projects and prepared a plan how to solve some of the problems we
 have
  [2].
 
  I think, Mike would be a good addition to the team.
 
  Uhm, yeah... +10 :)
 
  ^2 :-)
 
 
  What took us so long to do this? :-)
 
  +1 obviously.
 
  I did think it would be a good idea to wait a *little* while and make
 sure we weren’t going to scare him off. ;-)
 
  Seriously, Mike has been doing a great job collaborating with the
 existing team and helping us make oslo.db sane.
 
  +1


 Big +1


 --
 @flaper87
 Flavio Percoco



[openstack-dev] [MagnetoDB] OpenStack versioning adoption

2014-08-20 Thread Ilya Sviridov
Hello contributors,

As was already discussed in the #magnetodb IRC channel, we are taking one
more step toward community processes and adopting the OpenStack versioning
approach.

The development branch 2.0.x [1] has been stopped and is not going to be
supported. The last released version is 2.0.5 [2].

The current scope will land in master as juno-3 [3].

[1] https://launchpad.net/magnetodb/2.0
[2] https://launchpad.net/magnetodb/2.0/2.0.5
[3] https://launchpad.net/magnetodb/+milestone/juno-3

--
Ilya Sviridov
isviridov @ FreeNode


Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Nikola Đipanov
On 08/20/2014 08:27 AM, Joe Gordon wrote:
 
 On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com wrote:

  -Original Message-
  From: Nikola Đipanov [mailto:ndipa...@redhat.com]
  Sent: 19 August 2014 17:50
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible
 Resource
  Tracking
 
  On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
   On the other hand, ERT discussion is decoupled from the scheduler
   split discussion and will be delayed until Extensible Resource Tracker
   owner (Paul Murray) is back from vacation.
   In the mean time, we're considering new patches using ERT as
   non-acceptable, at least until a decision is made about ERT.
  
 
  Even though this was not officially agreed I think this is the least
 we can do
  under the circumstances.
 
  A reminder that a revert proposal is up for review still, and I
 consider it fair
  game to approve, although it would be great if we could hear from
 Paul first:
 
https://review.openstack.org/115218

 Given the general consensus seemed to be to wait a while before deciding
 what to do here, isn't putting the revert patch up for approval a tad
 premature?

There was a recent discussion about reverting patches, and from that
(but not only) my understanding is that we should revert whenever in doubt.

Putting the patch back in is easy, and if proven wrong I'd be the first
to +2 it. As scary as they sound - I don't think reverts are a big deal.


 The RT may not be able to cope with all of the new and more complex
 resource types we're now trying to schedule, and so it's not surprising
 that the ERT can't fix that.  It does however address some specific use
 cases that the current RT can't cope with, the spec had a pretty
 thorough review under the new process, and was discussed during the last
 2 design summits.   It worries me that we're continually failing to make
 even small and useful progress in this area.

 Sylvain's approach of leaving the ERT in place so it can be used for
 the use cases it was designed for, while holding back on doing some of
 the more complex things that might need either further work in the ERT,
 or some more fundamental work in the RT (which feels like L or M
 timescales based on current progress), seemed pretty pragmatic to me.
 
 ++, I really don't like the idea of rushing the revert of a feature that
 went through significant design discussion especially when the author is
 away and cannot defend it.
 

Fair enough - I will WIP the revert until Phil is back. It's the right
thing to do seeing that he is away.

However - I don't agree with using the length of discussion around the
feature as a valid argument against reverting.

I've supplied several technical arguments on the original thread to why
I think we should revert it, and would expect a discussion that either
refutes them, or provides alternative ways forward.

Saying 'but we talked about it at length' is the ultimate appeal to
imaginary authority and frankly not helping at all.

N.



 Phil



Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-20 Thread Wuhongning
Absolutely, it's not a good idea to encourage vendors implementing their own
attributes in vif-detail; I just mean it is *possible* to do this, which
makes sense when some feature is wanted temporarily before it is approved by
the community. As for the conflicts, a restriction can be made that a certain
prefix must be attached: _private_vendorA_xxx.
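
A tiny illustration of that prefix convention (key names here are invented):

    # Vendor-private keys carry a mandatory prefix so they cannot collide
    # with standard attributes or with another vendor's keys.
    vif_details = {
        'port_filter': True,                       # standard ML2 detail
        '_private_vendorA_burst_kbps': 2048,       # vendor A private field
        '_private_vendorB_queue_profile': 'gold',  # vendor B private field
    }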

For the vif-detail dumping ground, I'm not sure whether API compatibility needs
it. For example, now the security group uuid is applied on the PORT object, and
this API call is sent to the ML2 plugin for association. If we change the API
style and let the PORT uuid be applied on the SG object, then indeed no foreign
key is needed.


From: Kevin Benton [blak...@gmail.com]
Sent: Wednesday, August 20, 2014 2:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][QoS] Request to be considered for 
neutron-incubator

Vendors can even implement their own private extension without any change to 
ML2 by defining their customized vif-detail fields.

I'm not sure this is a good thing. What happens when 3 different vendors all 
implement the same attribute with the same name with different behavior? Since 
the API is no longer standard even with the reference core plugin, it fragments 
the clients. Each vendor will need to write its own neutron client changes,
GUIs, etc.

If the ML2 vif-details structure is going to become a dumping ground for 
anything, why even store things there in the first place? Since everything will 
require custom clients, the port ID can just be used as a foreign key to 
another API instead and the ML2 objects don't need to change at all.


On Tue, Aug 19, 2014 at 6:11 PM, Wuhongning wuhongn...@huawei.com wrote:
+1 to service plugin

It's better to strip service-related extensions from the ML2 core plugin as
much as we can, and put them in separate service plugins. Not only QoS, but
also SG or possibly other extensions. For the binding issue, the vif-detail dict
might be used for foreign key association.

Whenever a service is added, a new key type could be defined in the vif-detail
dict to associate with the service object uuid from this new plugin. Vendors can
even implement their own private extensions without any change to ML2 by defining
their customized vif-detail fields. Not only port, but network/subnet should
also add such meta dict fields in their attributes; flexible foreign key support
has been absent for a long time on these ML2 core resources.

In the previous GBP discussion, I've also suggested a similar idea. If we have a
clean boundary between the ML2 core plugin and service plugins, the argumentative
EP/EPG or renamed PT/PG resource objects could be eliminated even if GBP is in
Neutron, because we can apply service contract group objects directly onto
existing port/network/subnet resources by foreign key association binding,
without reinventing these overlapping concepts.


From: Salvatore Orlando [sorla...@nicira.com]
Sent: Wednesday, August 20, 2014 6:12 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][QoS] Request to be considered for 
neutron-incubator

In the current approach QoS support is being hardwired into ML2.

Maybe this is not the best way of doing it, as perhaps it will end up
requiring that every mech driver which enforces VIF configuration support it.
I see two routes. One is a mechanism driver similar to l2-pop, and then you
might have a look at the proposed extension framework (and participate in the
discussion).
The other is doing a service plugin. Still, we'll have to solve how to
implement the binding between a port/network and the QoS entity. If we go for
the approach we've chosen so far (the resource extension model) you still have to
deal with ML2 extensions. But I like orthogonality in services, and QoS is a
service to me.
Another arguable point is that we might want to reconsider our abuse^H^H^H^H^H 
use of resource attribute extension, but this is a story for a different thread.

Regarding the incubator request, I think we need to wait for the process to be
blessed. But you have my support and I would be happy to assist with
this work item through its process towards graduation.

This obviously provided the QoS team wants me to do that!

Salvatore


On 19 August 2014 23:15, Alan Kavanagh alan.kavan...@ericsson.com wrote:
+1, I am hoping this is just a short-term holding point and this will
eventually be merged into the main branch, as this is a feature a lot of
companies, us included, would definitely benefit from having supported. Many
thanks to Sean for sticking with this and continuing to push it.
/Alan

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
Sent: August-19-14 8:33 

Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 20:55, Gregory Haynes wrote:
 Excerpts from Derek Higgins's message of 2014-08-19 10:41:11 +:
 Hi All,

   I'd like to firm up our plans around the CI jobs we discussed at the
  TripleO sprint. At the time we jotted down the various jobs on an
  etherpad; to better visualize the matrix of coverage I've put it into a
  spreadsheet [1]. Before we go about making these changes I'd like to go
  through a few questions to firm things up:

 1. Did we miss any jobs that we should have included?
gfidente mentioned on IRC about adding blockstoragescale and
  swiftstoragescale jobs into the mix; should we add these to the matrix so
  that each is tested in at least one of the existing jobs?

 2. Which jobs should run where? i.e. we should probably only aim to run
 a subset of these jobs (possibly 1 fedora and 1 ubuntu?) on non tripleo
 projects.

 3. Are there any jobs here we should remove?

 4. Is there anything we should add to the test matrix?
    Here I'm thinking we should consider dependent libraries, i.e. have at
  least one job that uses the git version of dependent libraries rather
  than the released libraries.

  5. On SELinux we had said that we would set it to enforcing on Fedora
  jobs; once it's ready we can flick the switch. This may cause us
  breakages as projects evolve, but we can revisit if they are too frequent.

  Once anybody with an opinion has had a chance to look over the
  spreadsheet, I'll start to make changes to our existing jobs so that
  they match the jobs on the spreadsheet and then add the new jobs (one at a time).

 Feel free to add comments to the spreadsheet or reply here.

 thanks,
 Derek

 [1]
 https://docs.google.com/spreadsheets/d/1LuK4FaG4TJFRwho7bcq6CcgY_7oaGnF-0E6kcK4QoQc/edit?usp=sharing

 
 Looks Great! One suggestion is that due to capacity issues we had a
 prioritization of these jobs and were going to walk down the list to add
 new jobs as capacity became available. It might be a good idea to add a
 column for this?

I made an attempt at sorting these into an order of priority; the top 4
jobs I would see as all required, and we add the rest in order as
resources allow.

With the 4 tests on top in place we have coverage for HA, non-HA,
updates and reboots on both Ubuntu and Fedora.

 
 -Greg
 


Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 13:07, Giulio Fidente wrote:
 On 08/19/2014 12:41 PM, Derek Higgins wrote:
 Hi All,

  I'd like to firm up our plans around the CI jobs we discussed at the
  TripleO sprint. At the time we jotted down the various jobs on an
  etherpad; to better visualize the matrix of coverage I've put it into a
  spreadsheet [1]. Before we go about making these changes I'd like to go
  through a few questions to firm things up:
 
 hi Derek!
 
 1. Did we miss any jobs that we should have included?
 gfidente mentioned on IRC about adding blockstoragescale and
  swiftstoragescale jobs into the mix; should we add these to the matrix so
  that each is tested in at least one of the existing jobs?
 
 thanks for bringing this up indeed
 
  my idea is the following: given we have support for blockstorage node
  scaling in devtest now and will (hopefully soon) have the option to
  scale swift nodes too, it'd be nice to test an OC where we have volumes
  and objects stored on those separate nodes
 
 this would test our ability to deploy such a configuration and we have
 tests for this set in place already as our user image is now booting
 from volume and glance is backed by swift
 
  so maybe a non-HA job with 1 external blockstorage and 2 external swift
  nodes would be nice to have?

I've added block scaling and swift scaling to the matrix and have
included each in one of the tests; this should give us coverage on both,
so I think we can do this without adding a new job.

 
 3. Are there any jobs here we should remove?
 
 I was suspicious about the -juno and -icehouse jobs.
 
  Are these jobs supposed to test the latest 'stable' (juno) and 'stable -1'
  (icehouse) releases, with all other jobs deploying from 'K' trunk?

I'm having difficulty recalling what we decided at the sprint, but long
term latest stable sounds like a must; does anybody know where the notes are
on this?

 
  Once anybody with an opinion has had a chance to look over the
  spreadsheet, I'll start to make changes to our existing jobs so that
  they match the jobs on the spreadsheet and then add the new jobs (one at a
  time).

 Feel free to add comments to the spreadsheet or reply here.
 
  One last comment, maybe a bit OT but I'm raising it here to see what
  other people's opinion is: how about we modify the -ha job so that at
  some point we actually kill one of the controllers and spawn a second
  user image?




Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 20:58, Gregory Haynes wrote:
 Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:
  One last comment, maybe a bit OT but I'm raising it here to see what
  other people's opinion is: how about we modify the -ha job so that at
  some point we actually kill one of the controllers and spawn a second
  user image?
 
  I think this is a great long-term goal, but IMO performing an update
  isn't really the type of verification we want for this kind of test. We
 really should have some minimal tempest testing in place first so we can
 verify that when these types of failures occur our cloud remains in a
 functioning state.

Greg, you said 'performing an update'; did you mean killing a controller
node?

If so I agree: verifying our cloud is still in working order with
tempest would get us more coverage than spawning a node. So once we have
tempest in place we can add a test to kill a controller node.


 
 - Greg
 


Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-20 Thread Derek Higgins
On 24/05/14 01:21, James Polley wrote:
 Following a lengthy discussion under the subject Alternating meeting
 tmie for more TZ friendliness, the TripleO meeting now alternates
 between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
 better coverage across Australia, India, China, Japan, and the other
 parts of the world that found it impossible to get to our previous
 meeting time.

Raising a point that came up on this morning's IRC meeting:

A lot (most?) of the people at this morning's meeting were based in
western Europe, getting up earlier than usual for the meeting (me
included). When daylight saving kicks in it might push them past the
threshold. Would an hour later (0800 UTC) work better for people, or is
the current time what fits best?

I'll try to make the meeting regardless of whether it's moved or not, but an
hour later would certainly make it a little more palatable.

 
 https://wiki.openstack.org/wiki/Meetings/TripleO#Weekly_TripleO_team_meeting
 has been updated with a link to the iCal feed so you can figure out
 which time we're using each week.
 
 The coming meeting will be our first Wednesday 0700UTC meeting. We look
 forward to seeing some fresh faces (well, fresh nicks at least)!
 
 


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Eoghan Glynn


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
  
  Sure additional cross-project resources can and need to be ponied up, but I
  am doubtful that will be enough.
 
 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?
 
 
 I am not sure what would be enough to get OpenStack back in a position where
 more developers/users are happier with the current state of affairs. Which
 is why I think we may want to try several things.
 
 
 
  Is it the likely number of such new resources, or the level of domain-
  expertise that they can realistically be expected to bring to the
  table, or the period of time to on-board them, or something else?
 
 
 Yes, all of the above.

Hi Joe,

In coming to that conclusion, have you thought about and explicitly
rejected all of the approaches that have been mooted to mitigate
those concerns?

Is there a strong reason why the following non-exhaustive list
would all be doomed to failure:

 * encouraging projects to follow the successful Sahara model,
   where one core contributor also made a large contribution to
   a cross-project effort (in this case infra, but could be QA
   or docs or release management or stable-maint ... etc)

   [this could be seen as essentially offsetting the cost of
that additional project drawing from the cross-project well]

 * assigning liaisons from each project to *each* of the cross-
   project efforts

   [this could be augmented/accelerated with one of the standard
on-boarding approaches, such as a designated mentor for the
liaison or even an immersive period of secondment]

 * applying back-pressure via the board representation to make
   it more likely that the appropriate number of net-new
   cross-project resources are forthcoming

    [c.f. Stef's "we're not amateurs or volunteers" mail earlier
 on this thread]

I really think we need to do better than dismissing out-of-hand
the idea of beefing up the cross-project efforts. If it won't
work for specific reasons, let's get those reasons out onto
the table and make a data-driven decision on this.

 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:
 
 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint
 
 or something else?
 
 
 Good question.
 
 IMHO QA, Infra and release management are probably the most strained.

OK, well let's brain-storm on how some of those efforts could
potentially be made more scalable.

Should we for example start to look at release management as a
program unto itself, with a PTL *and* a group of cores to divide
and conquer the load?

(the hands-on rel mgmt for the juno-2 milestone, for example, was
 delegated - is there a good reason why such delegation wouldn't
 work as a matter of course?)

Should QA programs such as grenade be actively seeking new cores to
spread the workload?

(until recently, this had the effective minimum of 2 cores, despite
 now being a requirement for integrated projects)

Could the infra group potentially delegate some of the workload onto
the distro folks?

(given that it's strongly in their interest to have their distro
 represented in the CI gate.)

None of the above ideas may make sense, but it doesn't feel like
every avenue has been explored here. I for one don't feel entirely
satisfied that every potential solution to cross-project strain was
fully thought-out in advance of the de-integration being presented
as the solution.

Just my $0.02 ...

Cheers,
Eoghan

[on vacation with limited connectivity]

 But I also think there is something missing from this list. Many of the 
 projects
 are hitting similar issues and end up solving them in different ways, which
 just leads to more confusion for the end user. Today we have a decent model
 for rolling out cross-project libraries (Oslo) but we don't have a good way
 of having broader cross project discussions such as: API standards (such as
 discoverability of features), logging standards, aligning on concepts
 (different projects have different terms and concepts for scaling and
 isolating failure domains), and an overall better user experience. So I
 think we have a whole class of cross project issues that we have not even
 begun addressing.
 
 
 
 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.
 
 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?
 
 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 

Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-20 Thread Dougal Matthews
- Original Message -
 From: Derek Higgins der...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, 20 August, 2014 10:15:51 AM
 Subject: Re: [openstack-dev] [TripleO] Change of meeting time
 
 On 24/05/14 01:21, James Polley wrote:
  Following a lengthy discussion under the subject Alternating meeting
  tmie for more TZ friendliness, the TripleO meeting now alternates
  between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
  better coverage across Australia, India, China, Japan, and the other
  parts of the world that found it impossible to get to our previous
  meeting time.
 
  Raising a point that came up on this morning's IRC meeting:
  
  A lot (most?) of the people at this morning's meeting were based in
  western Europe, getting up earlier than usual for the meeting (me
  included). When daylight saving kicks in it might push them past the
  threshold. Would an hour later (0800 UTC) work better for people, or is
  the current time what fits best?
  
  I'll try to make the meeting regardless of whether it's moved or not, but an
  hour later would certainly make it a little more palatable.

+1, I don't have a strong preference, but an hour later would make it a
bit easier, particularly when DST kicks in.

Dougal



Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-20 Thread Mathieu Rohon
Hi

On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando sorla...@nicira.com wrote:
 In the current approach QoS support is being hardwired into ML2.

 Maybe this is not the best way of doing it, as perhaps it will end up
 requiring that every mech driver which enforces VIF configuration support it.
 I see two routes. One is a mechanism driver similar to l2-pop, and then you
 might have a look at the proposed extension framework (and participate in
 the discussion).
 The other is doing a service plugin. Still, we'll have to solve how to
 implement the binding between a port/network and the QoS entity.

We have exactly the same issue while implementing the BGPVPN service plugin [1].
As with the QoS extension, the BGPVPN extension can extend a network by
adding route target info.
The BGPVPN data model has a foreign key to the extended network.

If QoS is implemented as a service plugin, I assume that the
architecture would be similar, with the QoS data model
having foreign keys to ports and/or networks.
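
For illustration, a minimal sketch of what such a binding model could look
like (table and column names are hypothetical, not an agreed schema; 'ports'
stands for Neutron's existing core table):

    # Hypothetical QoS data model sketch with a foreign key binding to ports.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class QosPolicy(Base):
        __tablename__ = 'qos_policies'
        id = sa.Column(sa.String(36), primary_key=True)
        name = sa.Column(sa.String(255))
        max_kbps = sa.Column(sa.Integer)

    class QosPortBinding(Base):
        __tablename__ = 'qos_port_bindings'
        # One row per bound port; deleting the port removes the binding.
        port_id = sa.Column(sa.String(36),
                            sa.ForeignKey('ports.id', ondelete='CASCADE'),
                            primary_key=True)
        policy_id = sa.Column(sa.String(36),
                              sa.ForeignKey('qos_policies.id'),
                              nullable=False)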

When a port is created and it has QoS enforcement thanks to the service plugin,
let's assume that an ML2 QoS mech driver can fetch the QoS info and send
it back to the L2 agent.
We would probably need a QoS agent which communicates with the plugin
through a dedicated topic.

But when QoS info is updated through the QoS extension, backed by
the service plugin,
the driver that implements the QoS plugin should send the new QoS
enforcement to the QoS agent through the QoS topic.

So I feel like implementing a core resource extension with a service
plugin needs :
1 : an MD to interact with the service plugin
2 : an agent and a mixin used by the L2 agent.
3 : a dedicated topic used by the MD and the driver of the service
plugin to communicate with the new agent

Am I wrong?


[1]https://review.openstack.org/#/c/93329/
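
To illustrate points 2 and 3 above, a rough sketch of an agent-side endpoint
listening on a dedicated QoS topic (class, method and topic names are all
hypothetical, not an existing Neutron interface):

    # Sketch only: an RPC endpoint the service plugin / MD would call.
    from oslo import messaging

    QOS_TOPIC = 'q-qos'  # assumed dedicated topic

    class QosAgentEndpoint(object):
        target = messaging.Target(version='1.0')

        def __init__(self, driver):
            self.driver = driver  # backend (e.g. OVS/tc) that programs ports

        def qos_policy_updated(self, context, port_id, policy):
            # Invoked over RPC when the policy bound to a port changes.
            self.driver.apply(port_id, policy)

    def start_qos_listener(transport, driver, host):
        target = messaging.Target(topic=QOS_TOPIC, server=host)
        server = messaging.get_rpc_server(transport, target,
                                          [QosAgentEndpoint(driver)])
        server.start()
        return server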


 If we go
 for the approach we've chosen so far the resource extension model you still
 have to deal with ML2 extensions. But I like orthogonality in services, and
 QoS is a service to me.
 Another arguable point is that we might want to reconsider our
 abuse^H^H^H^H^H use of resource attribute extension, but this is a story for
 a different thread.

 Regarding the incubator request, I think we need to wait for the process to
 be blessed. But you have my support and I would happy to help to assist
 with this work item through its process towards graduation.

 This obviously provided the QoS team wants me to do that!

 Salvatore


 On 19 August 2014 23:15, Alan Kavanagh alan.kavan...@ericsson.com wrote:

 +1, I am hoping this is just a short term holding point and this will
 eventually be merged into main branch as this is a feature a lot of
 companies, us included would definitely benefit from having supported and
 many thanks to Sean for sticking with this and continue to push this.
 /Alan

 -Original Message-
 From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
 Sent: August-19-14 8:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][QoS] Request to be considered for
 neutron-incubator

 Hi,

 The QoS API extension has lived in Gerrit/been in review for about a year.
 It's gone through revisions, summit design sessions, and for a little while,
 a subteam.

 I would like to request incubation in the upcoming incubator, so that the
 code will have a more permanent home where we can collaborate and improve.
 --
 Sean M. Collins


Re: [openstack-dev] Using gerrymander, a client API and command line tool for gerrit

2014-08-20 Thread Daniel P. Berrange
On Thu, Aug 14, 2014 at 05:54:27PM +0100, Daniel P. Berrange wrote:
 
 In terms of how I personally use it (assuming the above config file)...
 

Vish pointed out that I made a bunch of stupid typos in my examples
here. Next time I'll actually double-check what I write :-)

 At least once a day I look at all open changes in Nova which touch any
 libvirt source files
 
   $ gerrymander -g nova --branch master libvirt

Should have been:

  $ gerrymander changes -g nova --branch master libvirt

 
 and will go through and review anything that I've not seen before.
 
 In addition I'll process any open changes where I have commented on a
 previous version of a patch, but not commented in the current version
 which is a report generated by a special command:
 
   $ gerrymander -g nova todo-mine

Args reversed

  $ gerrymander todo-mine -g nova 

 
 If I have more time I'll look at any patches which have been submitted
 where *no  one* has done any review, touching anything in nova source
 tree
 
   $ gerrymander -g nova todo-noones

And again

  $ gerrymander todo-noones -g nova

 
 or any patches where I've not reviewed the current version
 
   $ gerrymander -g nova todo-others

And again

  $ gerrymander todo-others -g nova

 
 That's pretty much all I need on a day-to-day basis to identify stuff
 needing my review attention. For actual review I'll use the gerrit
 web UI, or git fetch the patch locally depending on complexity of the
 change in question. Also if I want to see comments fully expanded,
 without bots, for a specific change number 104264 I would do
 
   $ gerrymander comments 104264
 
 One day I'd love to be able to write some reports which pull priority
 data on blueprints from nova-specs or launchpad, and correlate with 
 reviews so important changes needing attention can be highlighted...

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Thierry Carrez
Eoghan Glynn wrote:
 [...] 
 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint

 or something else?


 Good question.

 IMHO QA, Infra and release management are probably the most strained.
 
 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.
 
 Should we for example start to look at release management as a
 program onto itself, with a PTL *and* a group of cores to divide
 and conquer the load?
 
 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)

For the record, I wouldn't say release management (as a role) is
strained. I'm strained, but that's because I do more than just release
management. We are taking steps to grow the team (both at release
management program level and at foundation development coordination
levels) that should help in that area. Oslo has some growth issues but I
think they are under control. Stable maint (which belongs to the release
management program, btw) needs a restructuring more than a resource
injection.

I think the most strained function is keeping on top of test failures
(which in most cases is just about investigating, reproducing and fixing
rare bugs). It's a complex task, it falls somewhere between QA
and Infra right now, and the very few resources that have the unique
combination of knowledge and will/time to spend on those are quickly
dying of burnout.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Thierry Carrez
Jay Pipes wrote:
 [...]
 If either of the above answers is NO, then I believe the Technical
 Committee should recommend that the integrated project be removed from
 the integrated release.
 
 HOWEVER, I *also* believe that the previously-integrated project should
 not just be cast away back to Stackforge. I think the project should
 remain in its designated Program and should remain in the openstack/
 code namespace. Furthermore, active, competing visions and
 implementations of projects that address the Thing the
 previously-integrated project addressed should be able to apply to join
 the same Program, and *also* live in the openstack/ namespace.
 
 All of these projects should be able to live in the Program, in the
 openstack/ code namespace, for as long as the project is actively
 developed, and let the contributor communities in these competing
 projects *naturally* work to do any of the following:
 
  * Pick a best-of-breed implementation from the projects that address
 the same Thing
  * Combine code and efforts to merge the good bits of multiple projects
 into one
  * Let multiple valid choices of implementation live in the same Program
 with none of them being blessed by the TC to be part of the integrated
 release

That would work if an OpenStack Program was just like a category under
which you can file projects. However, OpenStack programs are not a
competition category where we could let multiple competing
implementations fight it out for becoming the solution; they are
essentially just a team of people working toward a common goal, having
meetings and sharing/electing the same technical lead.

I'm not convinced you would set competing solutions for a fair
competition by growing them inside the same team (and under the same
PTL!) as the current mainstream/blessed option. How likely is the
Orchestration PTL to make the decision to drop Heat in favor of a new
contender?

I'm also concerned with making a program a collection of competing
teams, rather than a single team sharing the same meetings and electing
the same leadership, working all together. I don't want the teams
competing to get a number of contributors that would let them game the
elections and take over the program leadership. I think such a setup
would just increase the political tension inside programs, and we have
enough of it already.

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution). That would leave the horizontal
programs like Docs, QA or Infra, where the team and the category are the
same thing, as outliers again (like they were before we did programs).

Finally, I'm slightly concerned with the brand aspect -- letting *any*
project call themselves OpenStack something (which is what living
under the openstack/* namespace gives you) just because they happen to
compete with an existing openstack project sounds like a recipe for
making sure openstack doesn't mean anything upstream anymore.

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [TripleO] 'qemu-img: error while reading sector xxxxxx: Input/Output error

2014-08-20 Thread Peeyush Gupta
Hi all,

I have done a TripleO setup using RDO instack. Now
I am trying to deploy an overcloud on physical servers.
I entered all the required parameters in deploy-baremetal-overcloudrc,
and when I ran instack-deploy-overcloud I got the following
error in the nova-compute logs:

2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6]     return processutils.execute(*cmd, 
**kwargs)
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py, line 
193, in execute
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6]     cmd=' '.join(cmd))
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6] ProcessExecutionError: Unexpected error 
while running command.
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6] Command: qemu-img convert -O raw 
/mnt/state/var/lib/nova/instances/instance-0022/disk.part 
/mnt/state/var/lib/nova/instances/instance-0022/disk.converted
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6] Exit code: 1
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6] Stdout: ''
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6] Stderr: 'qemu-img: error while reading 
sector 962560: Input/output error\n'
2014-08-20 06:35:43.483 16428 TRACE nova.compute.manager [instance: 
774b5107-5504-4e99-bc77-af5abd8367d6]

Can anyone please help me understand why I am getting
this error?

Regards 
~Peeyush Gupta


[openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-20 Thread Salvatore Orlando
As the original thread had a completely different subject, I'm starting a
new one here.

More specifically the aim of this thread is about:
1) Define when a service is best implemented with a service plugin or with
an ML2 driver
2) Discuss how bindings between a core resource and the one provided by
the service plugin should be exposed at the management plane, implemented
at the control plane, and if necessary also at the data plane.

Some more comments inline.

Salvatore

On 20 August 2014 11:31, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi

 On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  In the current approach QoS support is being hardwired into ML2.
 
   Maybe this is not the best way of doing it, as perhaps it will end up
   requiring that every mech driver which enforces VIF configuration
   support it.
  I see two routes. One is a mechanism driver similar to l2-pop, and then
 you
  might have a look at the proposed extension framework (and participate
 into
  the discussion).
  The other is doing a service plugin. Still, we'll have to solve how to
  implement the binding between a port/network and the QoS entity.

 We have exactly the same issue while implementing the BGPVPN service
 plugin [1].
 As for the Qos extension, the BGPVPN extension can extend network by
 adding route target infos.
 the BGPVPN data model has a foreign key to the extended network.

 If Qos is implemented as a service plugin, I assume that the
 architecture would be similar, with Qos datamodel
 having  foreign keys to ports and/or Networks.


From a data model perspective, I believe so if we follow the pattern we've
followed so far. However, I think this would be correct also if QoS is not
implemented as a service plugin!


 When a port is created, and it has Qos enforcement thanks to the service
 plugin,
 let's assume that a ML2 Qos Mech Driver can fetch Qos info and send
 them back to the L2 agent.
 We would probably need a Qos Agent which communicates with the plugin
 through a dedicated topic.


A distinct agent has pros and cons. I think however that we should try to
limit the number of agents on the hosts to a minimum. And this minimum in
my opinion should be 1! There is already a proposal around a modular agent
which should be able to load modules for handling distinct services. I
think that's the best way forward.
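
As a rough sketch of that direction (the entry-point namespace and method
names below are assumptions, not the actual proposal):

    # Hypothetical sketch: one agent process loading per-service extensions
    # (QoS, security groups, ...) via stevedore entry points.
    from stevedore import extension

    def load_agent_extensions(agent_api):
        mgr = extension.ExtensionManager(
            namespace='neutron.agent.l2.extensions',  # assumed namespace
            invoke_on_load=True)
        for ext in mgr:
            # Give each extension a handle to the agent so it can program
            # the data plane.
            ext.obj.initialize(agent_api)
        return mgr

    def handle_port_update(mgr, context, port):
        # Fan the event out to every loaded extension.
        mgr.map(lambda ext: ext.obj.handle_port(context, port))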



 But when a Qos info is updated through the Qos extension, backed with
 the service plugin,
 the driver that implements the Qos plugin should send the new Qos
 enforcment to the Qos agent through the Qos topic.


I reckon that is pretty much correct. At the end of the day, the agent
which enforces QoS at the data plane just needs to ensure the appropriate
configuration is in place on all ports. Whether this information is coming
from a driver or a service plugin, it does not matter a lot (as long as
it's not coming from an untrusted source, obviously). If you look at the sec
group agent module, the concept is pretty much the same.


 So I feel like implementing a core resource extension with a service
 plugin needs :
 1 : a MD to interact with the service plugin
 2 : an agent and a mixin used by the the L2 agent.
 3 : a dedicated topic used by the MD and the driver of the service
 plugin to communicate with the new agent

 Am I wrong?


There is nothing wrong with that. Nevertheless, the fact that we need a
mech driver _and_ a service plugin probably also implies that the service
plugin at the end of the day has not succeeded in its goal of being
orthogonal.
I think it's worth trying to explore solutions which will allow us to
completely decouple the service plugin from the core functionality, and
therefore completely contain QoS management within its service plugin. If
you too think this is not risible, I can perhaps put together something to
validate this idea.




 [1]https://review.openstack.org/#/c/93329/


  If we go
  for the approach we've chosen so far the resource extension model you
 still
  have to deal with ML2 extensions. But I like orthogonality in services,
 and
  QoS is a service to me.
  Another arguable point is that we might want to reconsider our
  abuse^H^H^H^H^H use of resource attribute extension, but this is a story
 for
  a different thread.
 
  Regarding the incubator request, I think we need to wait for the process
 to
  be blessed. But you have my support and I would happy to help to assist
  with this work item through its process towards graduation.
 
  This obviously provided the QoS team wants me to do that!
 
  Salvatore
 
 
  On 19 August 2014 23:15, Alan Kavanagh alan.kavan...@ericsson.com
 wrote:
 
  +1, I am hoping this is just a short term holding point and this will
  eventually be merged into main branch as this is a feature a lot of
  companies, us included would definitely benefit from having supported
 and
  many thanks to Sean for sticking with this and continue to push this.
  /Alan
 
  -Original Message-
  

Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-20 Thread Kevin Benton
From a data model perspective, I believe so if we follow the pattern we've
followed so far.

How will database setup work in this case? IIRC, the auto-generation of
schema was just disabled in a recent merge. Will we have a big pile of
various migration scripts that users will need to pick from, depending on
which services they want to use from the various Neutron incubated
projects?
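
For concreteness, the kind of alembic migration each incubated service would
presumably have to ship on its own (revision IDs and table names are invented
for illustration; 'ports' stands for the existing core table):

    # Hypothetical alembic migration adding a QoS binding table.
    from alembic import op
    import sqlalchemy as sa

    revision = 'abc123qosbind'
    down_revision = 'def456base'

    def upgrade():
        op.create_table(
            'qos_port_bindings',
            sa.Column('port_id', sa.String(36),
                      sa.ForeignKey('ports.id', ondelete='CASCADE'),
                      primary_key=True),
            sa.Column('policy_id', sa.String(36), nullable=False))

    def downgrade():
        op.drop_table('qos_port_bindings')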


On Wed, Aug 20, 2014 at 4:03 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 As the original thread had a completely different subject, I'm starting a
 new one here.

 More specifically the aim of this thread is about:
 1) Define when a service is best implemented with a service plugin or with
 a ML2 driver
 2) Discuss how bindings between a core resource and the one provided by
 the service plugin should be exposed at the management plane, implemented
 at the control plane, and if necessary also at the data plane.

 Some more comments inline.

 Salvatore

 On 20 August 2014 11:31, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi

 On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  In the current approach QoS support is being hardwired into ML2.
 
 Maybe this is not the best way of doing that, as perhaps it will end up
 requiring that every mech driver which enforces VIF configuration support
 it.
  I see two routes. One is a mechanism driver similar to l2-pop, and then
 you
 might have a look at the proposed extension framework (and participate
 into
  the discussion).
  The other is doing a service plugin. Still, we'll have to solve how to
  implement the binding between a port/network and the QoS entity.

 We have exactly the same issue while implementing the BGPVPN service
 plugin [1].
 As for the Qos extension, the BGPVPN extension can extend network by
 adding route target infos.
 the BGPVPN data model has a foreign key to the extended network.

 If Qos is implemented as a service plugin, I assume that the
 architecture would be similar, with Qos datamodel
 having  foreign keys to ports and/or Networks.


 From a data model perspective, I believe so if we follow the pattern we've
 followed so far. However, I think this would be correct also if QoS is not
 implemented as a service plugin!


 When a port is created, and it has Qos enforcement thanks to the service
 plugin,
 let's assume that a ML2 Qos Mech Driver can fetch Qos info and send
 them back to the L2 agent.
 We would probably need a Qos Agent which communicates with the plugin
 through a dedicated topic.


 A distinct agent has pros and cons. I think however that we should try and
 limit the number of agents on the hosts to a minimum. And this minimum in
 my opinion should be 1! There is already a proposal around a modular agent
 which should be capable of loading modules for handling distinct services. I
 think that's the best way forward.



 But when a Qos info is updated through the Qos extension, backed with
 the service plugin,
 the driver that implements the Qos plugin should send the new Qos
 enforcement to the Qos agent through the Qos topic.


 I reckon that is pretty much correct. At the end of the day, the agent
 which enforces QoS at the data plane just needs to ensure the appropriate
 configuration is in place on all ports. Whether this information is coming
 from a driver or a service plugin, it does not matter a lot (as long as
 it's not coming from an untrusted source, obviously). If you look at sec
 group agent module, the concept is pretty much the same.


 So I feel like implementing a core resource extension with a service
 plugin needs :
 1 : a MD to interact with the service plugin
 2 : an agent and a mixin used by the L2 agent.
 3 : a dedicated topic used by the MD and the driver of the service
 plugin to communicate with the new agent

 Am I wrong?


 There is nothing wrong with that. Nevertheless, the fact that we need a
 Mech driver _and_ a service plugin probably also implies that the service
 plugin at the end of the day has not succeeded in its goal of being
 orthogonal.
 I think it's worth trying to explore solutions which will allow us to
 completely decouple the service plugin from the core functionality, and
 therefore completely contain QoS management within its service plugin. If
 you too think this is not risible, I can perhaps put together something to
 validate this idea.




 [1]https://review.openstack.org/#/c/93329/


  If we go
  for the approach we've chosen so far the resource extension model you
 still
  have to deal with ML2 extensions. But I like orthogonality in services,
 and
  QoS is a service to me.
  Another arguable point is that we might want to reconsider our
  abuse^H^H^H^H^H use of resource attribute extension, but this is a
 story for
  a different thread.
 
  Regarding the incubator request, I think we need to wait for the
 process to
  be blessed. But you have my support and I would happy to help to
 assist
  with this work item through its process towards 

Re: [openstack-dev] [pbr] help needed fixing bad version= lines in setup.cfg in many projects

2014-08-20 Thread Ruslan Kamaldinov
On Wed, Aug 20, 2014 at 5:12 AM, Robert Collins
robe...@robertcollins.net wrote:
 Hi, in working on pbr I have run into some bad data in our
 setup.cfg's. Details are here:
 https://etherpad.openstack.org/p/bad-setup-cfg-versions

 Short version: we need to do a missed step in the release project for
 a bunch of stable branches, and in some cases master, for a bunch of
 projects.

 I'm going to script up an automated push up of reviews for all of
 this, so I'm looking for core and stable branch maintainers to help
 get the reviews through quickly - as this is a blocker on pbr.

 We don't need to fix branches that won't be sdist'd anymore - so if
 its just there for tracking, thats fine. Everything else we should
 fix.

Robert,
Thanks a lot for taking care of stackforge projects too! We (Murano)
will apply your changes to our active branches.

--
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread Miguel Angel Ajo Pelayo
I couldn't resist running a quick benchmark of the new RPC implementation
shihanzhang wrote:

http://www.ajo.es/post/95269040924/neutron-security-group-rules-for-devices-rpc-rewrite

The results are awesome :-)

We still need to polish the tests a bit, and then it's ready.
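
As an aside for anyone new to the thread, here is a toy illustration - not
Neutron code, and the chain/set names are invented - of why the ipset half of
this work cuts the per-port iptables rule count so much:

# Toy illustration: with ~500 VMs in a remote security group, the old
# approach emits one iptables rule per member IP per port, while the ipset
# approach emits a single set-match rule per port and only updates the
# (cheap) set membership as members come and go.
member_ips = ['10.0.%d.%d' % (i // 250, i % 250 + 1) for i in range(500)]

per_ip_rules = ['-A port-chain -s %s/32 -p tcp -j RETURN' % ip
                for ip in member_ips]

ipset_rules = ['-A port-chain -m set --match-set sg-members src -j RETURN']

print('%d rules without ipset' % len(per_ip_rules))
print('%d rule with ipset (+%d set members)'
      % (len(ipset_rules), len(member_ips)))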

Best regards,
Miguel Ángel.

- Original Message -
 On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang ayshihanzh...@126.com wrote:
 
  With the deployment 'nova + neutron + openvswitch', when we bulk create
  about 500 VM with a default security group, the CPU usage of neutron-server
  and openvswitch agent is very high, especially the CPU usage of openvswitch
  agent will be 100%, this will cause creating VMs failed.
 
  With the method discussed in mailist:
 
  1) ipset optimization   (https://review.openstack.org/#/c/100761/)
 
  3) sg rpc optimization (with fanout)
  (https://review.openstack.org/#/c/104522/)
 
  I have implement  these two scheme in my deployment,  when we again bulk
  create about 500 VM with a default security group, the CPU usage of
  openvswitch agent will reduce to 10%, even lower than 10%, so I think the
  iprovement of these two options are very efficient.
 
  Who can help us to review our spec?
 
 This is great work! These are on my list of things to review in detail
 soon, but given the Neutron sprint this week, I haven't had time yet.
 I'll try to remedy that by the weekend.
 
 Thanks!
 Kyle
 
 Best regards,
  shihanzhang
 
 
 
 
 
  At 2014-07-03 10:08:21, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Oh, so you have the enhancement implemented? Great! Any numbers that
 shows how much we gain from that?
 
 /Ihar
 
 On 03/07/14 02:49, shihanzhang wrote:
  Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
  I will modify my spec, when the spec is approved, I will commit the
  codes as soon as possilbe!
 
 
 
 
 
  At 2014-07-02 10:12:34, Miguel Angel Ajo majop...@redhat.com
  wrote:
 
  Nice Shihanzhang,
 
  Do you mean the ipset implementation is ready, or just the
  spec?.
 
 
  For the SG group refactor, I don't worry about who does it, or
  who takes the credit, but I believe it's important we address
  this bottleneck during Juno trying to match nova's scalability.
 
  Best regards, Miguel Ángel.
 
 
  On 07/02/2014 02:50 PM, shihanzhang wrote:
  hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
  split  the work in several specs, I have finished the work (
  ipset optimization), you can do 'sg rpc optimization (without
  fanout)'. as the third part(sg rpc optimization (with fanout)),
  I think we need talk about it, because just using ipset to
  optimize security group agent codes does not bring the best
  results!
 
  Best regards, shihanzhang.
 
 
 
 
 
 
 
 
  At 2014-07-02 04:43:24, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
  On 02/07/14 10:12, Miguel Angel Ajo wrote:
 
  Shihazhang,
 
  I really believe we need the RPC refactor done for this cycle,
  and given the close deadlines we have (July 10 for spec
  submission and July 20 for spec approval).
 
  Don't you think it's going to be better to split the work in
  several specs?
 
  1) ipset optimization   (you) 2) sg rpc optimization (without
  fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
  , me)
 
 
  This way we increase the chances of having part of this for the
  Juno cycle. If we go for something too complicated is going to
  take more time for approval.
 
 
  I agree. And it not only increases chances to get at least some of
  those highly demanded performance enhancements to get into Juno,
  it's also the right thing to do (c). It's counterproductive to
  put multiple vaguely related enhancements in single spec. This
  would dim review focus and put us into position of getting
  'all-or-nothing'. We can't afford that.
 
  Let's leave one spec per enhancement. @Shihazhang, what do you
  think?
 
 
  Also, I proposed the details of 2, trying to bring awareness
  on the topic, as I have been working with the scale lab in Red
  Hat to find and understand those issues, I have a very good
  knowledge of the problem and I believe I could make a very fast
  advance on the issue at the RPC level.
 
  Given that, I'd like to work on this specific part, whether or
  not we split the specs, as it's something we believe critical
  for neutron scalability and thus, *nova parity*.
 
  I will start a separate spec for 2, later on, if you find it
  ok, we keep them as separate ones, if you believe having just 1
  spec (for 1  2) is going be safer for juno-* approval, then we
  can incorporate my spec in yours, but then
  add-ipset-to-security is not a good spec title to put all this
  together.
 
 
  Best regards, Miguel Ángel.
 
 
  On 07/02/2014 03:37 AM, shihanzhang wrote:
 
  hi Miguel Angel Ajo Pelayo! I agree with you and modify my
  spes, but I will also optimization the RPC from security group
  agent to neutron server. Now the modle is
  

[openstack-dev] generate Windows exe

2014-08-20 Thread Szépe Viktor



Right now bin\swift can only be started as python Scripts\swift,
because Windows does not support extensionless scripts.
Maybe I did it wrong; this is my first encounter with distutils.

Please consider modifying setup.py as follows:

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True,
    entry_points={
        'console_scripts': [
            'swift = swiftclient.shell:main',
        ],
    })

Thank you!



Szépe Viktor
--
+36-20-4242498  s...@szepe.net  skype: szepe.viktor
Budapest, XX. kerület





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread Jay Pipes

On 08/20/2014 07:34 AM, Miguel Angel Ajo Pelayo wrote:

I couldn't resist running a quick benchmark of the new RPC implementation
shihanzhang wrote:

http://www.ajo.es/post/95269040924/neutron-security-group-rules-for-devices-rpc-rewrite

The results are awesome :-)


Indeed, fantastic news. ++

-jay


We still need to polish the tests a bit, and then it's ready.

Best regards,
Miguel Ángel.

- Original Message -

On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang ayshihanzh...@126.com wrote:


With the deployment 'nova + neutron + openvswitch', when we bulk create
about 500 VM with a default security group, the CPU usage of neutron-server
and openvswitch agent is very high, especially the CPU usage of openvswitch
agent will be 100%, this will cause creating VMs failed.

With the method discussed in mailist:

1) ipset optimization   (https://review.openstack.org/#/c/100761/)

3) sg rpc optimization (with fanout)
(https://review.openstack.org/#/c/104522/)

I have implement  these two scheme in my deployment,  when we again bulk
create about 500 VM with a default security group, the CPU usage of
openvswitch agent will reduce to 10%, even lower than 10%, so I think the
iprovement of these two options are very efficient.

Who can help us to review our spec?


This is great work! These are on my list of things to review in detail
soon, but given the Neutron sprint this week, I haven't had time yet.
I'll try to remedy that by the weekend.

Thanks!
Kyle


Best regards,
shihanzhang





At 2014-07-03 10:08:21, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Oh, so you have the enhancement implemented? Great! Any numbers that
shows how much we gain from that?

/Ihar

On 03/07/14 02:49, shihanzhang wrote:

Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
I will modify my spec, when the spec is approved, I will commit the
codes as soon as possilbe!





At 2014-07-02 10:12:34, Miguel Angel Ajo majop...@redhat.com
wrote:


Nice Shihanzhang,

Do you mean the ipset implementation is ready, or just the
spec?.


For the SG group refactor, I don't worry about who does it, or
who takes the credit, but I believe it's important we address
this bottleneck during Juno trying to match nova's scalability.

Best regards, Miguel Ángel.


On 07/02/2014 02:50 PM, shihanzhang wrote:

hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
split  the work in several specs, I have finished the work (
ipset optimization), you can do 'sg rpc optimization (without
fanout)'. as the third part(sg rpc optimization (with fanout)),
I think we need talk about it, because just using ipset to
optimize security group agent codes does not bring the best
results!

Best regards, shihanzhang.








At 2014-07-02 04:43:24, Ihar Hrachyshka ihrac...@redhat.com
wrote:

On 02/07/14 10:12, Miguel Angel Ajo wrote:


Shihazhang,



I really believe we need the RPC refactor done for this cycle,
and given the close deadlines we have (July 10 for spec
submission and July 20 for spec approval).



Don't you think it's going to be better to split the work in
several specs?



1) ipset optimization   (you) 2) sg rpc optimization (without
fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
, me)




This way we increase the chances of having part of this for the
Juno cycle. If we go for something too complicated is going to
take more time for approval.



I agree. And it not only increases chances to get at least some of
those highly demanded performance enhancements to get into Juno,
it's also the right thing to do (c). It's counterproductive to
put multiple vaguely related enhancements in single spec. This
would dim review focus and put us into position of getting
'all-or-nothing'. We can't afford that.

Let's leave one spec per enhancement. @Shihazhang, what do you
think?



Also, I proposed the details of 2, trying to bring awareness
on the topic, as I have been working with the scale lab in Red
Hat to find and understand those issues, I have a very good
knowledge of the problem and I believe I could make a very fast
advance on the issue at the RPC level.



Given that, I'd like to work on this specific part, whether or
not we split the specs, as it's something we believe critical
for neutron scalability and thus, *nova parity*.



I will start a separate spec for 2, later on, if you find it
ok, we keep them as separate ones, if you believe having just 1
spec (for 1  2) is going be safer for juno-* approval, then we
can incorporate my spec in yours, but then
add-ipset-to-security is not a good spec title to put all this
together.




Best regards, Miguel Ángel.




On 07/02/2014 03:37 AM, shihanzhang wrote:


hi Miguel Angel Ajo Pelayo! I agree with you and modify my
spes, but I will also optimization the RPC from security group
agent to neutron server. Now the modle is
'port[rule1,rule2...], port...', I will change it to 'port[sg1,
sg2..]', this can reduce the size of RPC 

[openstack-dev] [Neutron] Mellanox plugin deprecation

2014-08-20 Thread Irena Berezovsky
Hi,

As announced in the last neutron meeting [1], the Mellanox plugin is being
deprecated. Juno is the last release to support the Mellanox plugin.

The Mellanox ML2 Mechanism Driver replaces the plugin and has been available
since the Icehouse release.

[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-18-21.02.log.html

BR,
Irena

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-20 Thread Tomas Sedovic
On 20/08/14 05:15, Derek Higgins wrote:
 On 24/05/14 01:21, James Polley wrote:
 Following a lengthy discussion under the subject Alternating meeting
 tmie for more TZ friendliness, the TripleO meeting now alternates
 between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
 better coverage across Australia, India, China, Japan, and the other
 parts of the world that found it impossible to get to our previous
 meeting time.
 
 Raising a point that came up on this morning's irc meeting
 
 A lot (most?) of the people at this morning's meeting were based in
 western Europe, getting up earlier than usual for the meeting (me
 included). When daylight saving kicks in it might push them past the
 threshold, would an hour later (0800 UTC) work better for people or is
 the current time what fits best?
 
 I'll try to make the meeting regardless of whether it's moved or not, but an hour
 later would certainly make it a little more palatable.

Same here

 

 https://wiki.openstack.org/wiki/Meetings/TripleO#Weekly_TripleO_team_meeting
 has been updated with a link to the iCal feed so you can figure out
 which time we're using each week.

 The coming meeting will be our first Wednesday 0700UTC meeting. We look
 forward to seeing some fresh faces (well, fresh nicks at least)!


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Jay Pipes

On 08/20/2014 04:48 AM, Nikola Đipanov wrote:

On 08/20/2014 08:27 AM, Joe Gordon wrote:

On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com
mailto:philip@hp.com wrote:



-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com

mailto:ndipa...@redhat.com]

Sent: 19 August 2014 17:50
To: openstack-dev@lists.openstack.org

mailto:openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible

Resource

Tracking

On 08/19/2014 06:39 PM, Sylvain Bauza wrote:

On the other hand, ERT discussion is decoupled from the scheduler
split discussion and will be delayed until Extensible Resource Tracker
owner (Paul Murray) is back from vacation.
In the mean time, we're considering new patches using ERT as
non-acceptable, at least until a decision is made about ERT.



Even though this was not officially agreed I think this is the least

we can do

under the circumstances.

A reminder that a revert proposal is up for review still, and I

consider it fair

game to approve, although it would be great if we could hear from

Paul first:


   https://review.openstack.org/115218


Given the general consensus seemed to be to wait some before deciding

what to do here, isn't putting the revert patch up for approval a tad
premature ?


There was a recent discussion about reverting patches, and from that
(but not only) my understanding is that we should revert whenever in doubt.


Right.

http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html


Putting the patch back in is easy, and if proven wrong I'd be the first
to +2 it. As scary as they sound - I don't think reverts are a big deal.


Neither do I. I think it's more appropriate to revert quickly and then 
add it back after any discussions, per the above revert policy.




The RT may be not able to cope with all of the new and more complex

resource types we're now trying to schedule, and so it's not surprising
that the ERT can't fix that.  It does however address some specific use
cases that the current RT can't cope with,  the spec had a pretty
thorough review under the new process, and was discussed during the last
2 design summits.   It worries me that we're continually failing to make
even small and useful progress in this area.


Sylvain's approach of leaving the ERT in place so it can be used for

the use cases it was designed for while holding back on doing some of
the more complex things than might need either further work in the ERT,
or some more fundamental work in the RT (which feels like as L or M
timescales based on current progress) seemed pretty pragmatic to me.

++, I really don't like the idea of rushing the revert of a feature that
went through significant design discussion especially when the author is
away and cannot defend it.


Fair enough - I will WIP the revert until Phil is back. It's the right
thing to do seeing that he is away.


Well, it's as much (or more?) Paul Murray and Andrea Rosa :)


However - I don't agree with using the length of discussion around the
feature as a valid argument against reverting.


Neither do I.


I've supplied several technical arguments on the original thread to why
I think we should revert it, and would expect a discussion that either
refutes them, or provides alternative ways forward.

Saying 'but we talked about it at length' is the ultimate appeal to
imaginary authority and frankly not helping at all.


Agreed. Perhaps it's just my provocative nature, but I hear a lot of 
we've already decided/discussed this talk especially around the 
scheduler and RT stuff, and I don't think the argument holds much water. 
We should all be willing to reconsider design decisions and discussions 
when appropriate, and in the case of the RT, this discussion is timely 
and appropriate due to the push to split the scheduler out of Nova 
(prematurely IMO).


Best,
-jay


N.




Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.db 0.4.0 released

2014-08-20 Thread Victor Sergeyev
Hello Folks!

The Oslo team is pleased to announce a new release of the Oslo database
handling library - oslo.db 0.4.0.
Thanks to everyone who contributed to this release.

Feel free to report issues using the launchpad tracker:
https://bugs.launchpad.net/oslo and mark them with ``db`` tag.

See the full list of changes:

$ git log --oneline --no-merges 0.3.0..0.4.0
ee176a8 Implement a dialect-level function dispatch system
6065b21 Move to oslo.utils
deeda38 Restore correct source file encodings
4dde38b Handle DB2 SmallInteger type for
change_deleted_column_type_to_boolean
4c18fca Imported Translations from Transifex
69f16bf Fixes comments to pass E265 check.
e1dbd31 Fixes indentations to pass E128 check.
423c17e Uses keyword params for i18n string to pass H703
3cb5927 Adds empty line to multilines docs to pass H405
0996c5d Updates one line docstring with dot to pass H402
a3ca010 Changes import orders to pass H305 check
584a883 Fixed DeprecationWarning in exc_filters
fc2fc90 Imported Translations from Transifex
3b17365 oslo.db.exceptions module documentation
c919585 Updated from global requirements
4685631 Extension of DBDuplicateEntry exception
7cb512c oslo.db.options module documentation
c0d9f36 oslo.db.api module documentation
93d95d4 Imported Translations from Transifex
e83e4ca Use SQLAlchemy cursor execute events for tracing
d845a16 Remove sqla_07 from tox.ini
9722ab6 Updated from global requirements
3bf8941 Specify raise_on_warnings=False for mysqlconnector
1814bf8 Make MySQL regexes generic across MySQL drivers
62729fb Allow tox tests with complex OS_TEST_DBAPI_CONNECTION URLs
a9e3af2 Raise DBReferenceError on foreign key violation
b69899e Add host argument to get_connect_string()
9a6aa50 Imported Translations from Transifex
f817555 Don't drop pre-existing database before tests
4499da7 Port _is_db_connection_error check to exception filters
9d5ab2a Integrate the ping listener into the filter system.
cbae81e Add disconnect modification support to exception handling
0a6c8a8 Implement new exception interception and filtering layer
69a4a03 Implement the SQLAlchemy ``handle_error()`` event.
f96deb8 Remove moxstubout.py from oslo.db
7d78e3e Added check for DB2 deadlock error
2df7e88 Bump hacking to version 0.9.2
c34c32e Opportunistic migration tests
108e2bd Move all db exception to exception.py
35afdf1 Enable skipped tests from test_models.py
e68a53b Use explicit loops instead of list comprehensions
44e96a8 Imported Translations from Transifex
817fd44 Allow usage of several iterators on ModelBase
baf30bf Add DBDuplicateEntry detection for mysqlconnector driver
4796d06 Check for mysql_sql_mode is not None in create_engine()
01b916c remove definitions of Python Source Code Encoding
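
As a quick usage sketch (not taken from the documentation), code consuming the
new exception filtering layer listed above can now catch oslo.db exception
types instead of driver-specific errors:

# Minimal sketch assuming a SQLAlchemy session obtained via oslo.db;
# only the exception classes named in the changelog above are used.
from oslo.db import exception as db_exc


def create_record(session, model_cls, values):
    try:
        with session.begin(subtransactions=True):
            session.add(model_cls(**values))
    except db_exc.DBDuplicateEntry:
        # unique constraint violation, normalized across drivers
        raise ValueError('record already exists')
    except db_exc.DBReferenceError:
        # foreign key violation (new in 0.4.0)
        raise ValueError('referenced row does not exist')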

Thanks,
Victor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 0.4.0 released

2014-08-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Thanks!

And I'm immediately bumping the version [1] to utilize
raise_on_warnings=False for my mysql-connector effort. :)

[1]: https://review.openstack.org/115626

On 20/08/14 14:38, Victor Sergeyev wrote:
 Hello Folks!
 
 Oslo team is pleased to announce the new Oslo database handling
 library release - oslo.db 0.4.0 Thanks all for contributions to
 this release.
 
 Feel free to report issues using the launchpad tracker:
 https://bugs.launchpad.net/oslo and mark them with ``db`` tag.
 
 See the full list of changes:
 
 $ git log --oneline --no-merges 0.3.0..0.4.0 ee176a8 Implement a
 dialect-level function dispatch system 6065b21 Move to oslo.utils 
 deeda38 Restore correct source file encodings 4dde38b Handle DB2
 SmallInteger type for change_deleted_column_type_to_boolean 4c18fca
 Imported Translations from Transifex 69f16bf Fixes comments to pass
 E265 check. e1dbd31 Fixes indentations to pass E128 check. 423c17e
 Uses keyword params for i18n string to pass H703 3cb5927 Adds empty
 line to multilines docs to pass H405 0996c5d Updates one line
 docstring with dot to pass H402 a3ca010 Changes import orders to
 pass H305 check 584a883 Fixed DeprecationWarning in exc_filters 
 fc2fc90 Imported Translations from Transifex 3b17365
 oslo.db.exceptions module documentation c919585 Updated from global
 requirements 4685631 Extension of DBDuplicateEntry exception 
 7cb512c oslo.db.options module documentation c0d9f36 oslo.db.api
 module documentation 93d95d4 Imported Translations from Transifex 
 e83e4ca Use SQLAlchemy cursor execute events for tracing d845a16
 Remove sqla_07 from tox.ini 9722ab6 Updated from global
 requirements 3bf8941 Specify raise_on_warnings=False for
 mysqlconnector 1814bf8 Make MySQL regexes generic across MySQL
 drivers 62729fb Allow tox tests with complex
 OS_TEST_DBAPI_CONNECTION URLs a9e3af2 Raise DBReferenceError on
 foreign key violation b69899e Add host argument to
 get_connect_string() 9a6aa50 Imported Translations from Transifex 
 f817555 Don't drop pre-existing database before tests 4499da7 Port
 _is_db_connection_error check to exception filters 9d5ab2a
 Integrate the ping listener into the filter system. cbae81e Add
 disconnect modification support to exception handling 0a6c8a8
 Implement new exception interception and filtering layer 69a4a03
 Implement the SQLAlchemy ``handle_error()`` event. f96deb8 Remove
 moxstubout.py from oslo.db 7d78e3e Added check for DB2 deadlock
 error 2df7e88 Bump hacking to version 0.9.2 c34c32e Opportunistic
 migration tests 108e2bd Move all db exception to exception.py 
 35afdf1 Enable skipped tests from test_models.py e68a53b Use
 explicit loops instead of list comprehensions 44e96a8 Imported
 Translations from Transifex 817fd44 Allow usage of several
 iterators on ModelBase baf30bf Add DBDuplicateEntry detection for
 mysqlconnector driver 4796d06 Check for mysql_sql_mode is not None
 in create_engine() 01b916c remove definitions of Python Source Code
 Encoding
 
 Thanks, Victor
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJT9JnPAAoJEC5aWaUY1u57UWEIAMvWMn+hcRHypkRlETPaVa4w
k7ZGO77n1u/x9lMhVozGKVPBJL6OyYwD8ADk3SR8WDGKpwx1hS8njK+U0R1thucg
eFwlo0VCMWAvELLTAoWHn5Ppgc7tSx/0fJ2jddDw0yqAUqtO9i8fWjnbzpsy4vmK
RSJD4KF5Qr/9F/GqqhhpNHD39yw8dA6nSbBU+tW3eWk4o78NouOvqnqJBZ83H2x7
7ttBdFprOmG5kgeYB8he6SlVhQxypIk4kIS97ghSkrVLlzfOCekYhmX8J7Lz3XdT
Z6BJhcSQkhCb7ycb7kt1lom2SckLBUdSSnMofjWAhphxTIFfQ0nWY1THdsghdTI=
=X+ou
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Picking a Name for the Tempest Library

2014-08-20 Thread David Kranz

On 08/18/2014 04:57 PM, Matthew Treinish wrote:

On Sat, Aug 16, 2014 at 06:27:19PM +0200, Marc Koderer wrote:

Hi all,

Am 15.08.2014 um 23:31 schrieb Jay Pipes jaypi...@gmail.com:

I suggest that tempest should be the name of the import'able library, and that the integration 
tests themselves should be what is pulled out of the current Tempest repository, into their own repo called 
openstack-integration-tests or os-integration-tests.

why not keeping it simple:

tempest: importable test library
tempest-tests: all the test cases

Simple, obvious and clear ;)


While I agree that I like how this looks, and that it keeps things simple, I
don't think it's too feasible. The problem is the tempest namespace is already
kind of large and established. The libification effort, while reducing some of
that, doesn't eliminate it completely. So what this ends meaning is that we'll
have to do a rename for a large project in order to split certain functionality
out into a smaller library. Which really doesn't seem like the best way to do
it, because a rename is a considerable effort.

Another wrinkle to consider is that the tempest namespace on pypi is already in
use: https://pypi.python.org/pypi/Tempest so if we wanted to publish the library
as tempest we'd need to figure out what to do about that.

-Matt Treinish
Yes, I agree. Tempest is also used by Refstack, Rally, and I'm sure many 
other parts of our ecosystem. I would vote for tempest-lib as the 
library and keeping tempest to mean the same thing in the ecosystem as 
it does at present. I would also not be opposed to a different name than 
tempest-lib if it were related to its function.


 -David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Enable SSL between client and API exposed via public URL with HAProxy

2014-08-20 Thread Guillaume Thouvenin
Hi all,

 I wrote a design to enable SSL between external client and OpenStack
public endpoints that provide APIs on public network. This design is
available for reviewing here: https://review.openstack.org/#/c/102273/
Of course all comments are welcome :)

 I also started to work on puppet manifest [1] and [2] for the deployment.
I made the assumption that in the future version of Fuel (6.0 and above)
all deployments will be done in HA mode. That means that even if you have
only one controller, haproxy will be used. Can someone from fuel-core can
confirm this (or not)?

Best regards,
Guillaume

[1] https://review.openstack.org/102273
[2] https://review.openstack.org/114909
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

Jay Pipes wrote:

[...] If either of the above answers is NO, then I believe the
Technical Committee should recommend that the integrated project be
removed from the integrated release.

HOWEVER, I *also* believe that the previously-integrated project
should not just be cast away back to Stackforge. I think the
project should remain in its designated Program and should remain
in the openstack/ code namespace. Furthermore, active, competing
visions and implementations of projects that address the Thing the
previously-integrated project addressed should be able to apply to
join the same Program, and *also* live in the openstack/
namespace.

All of these projects should be able to live in the Program, in
the openstack/ code namespace, for as long as the project is
actively developed, and let the contributor communities in these
competing projects *naturally* work to do any of the following:

* Pick a best-of-breed implementation from the projects that
address the same Thing * Combine code and efforts to merge the good
bits of multiple projects into one * Let multiple valid choices of
implementation live in the same Program with none of them being
blessed by the TC to be part of the integrated release


That would work if an OpenStack Program was just like a category
under which you can file projects. However, OpenStack programs are
not a competition category where we could let multiple competing
implementations fight it out for becoming the solution; they are
essentially just a team of people working toward a common goal,
having meetings and sharing/electing the same technical lead.

I'm not convinced you would set competing solutions for a fair
competition by growing them inside the same team (and under the same
PTL!) as the current mainstream/blessed option. How likely is the
Orchestration PTL to make the decision to drop Heat in favor of a
new contender ?


I don't believe the Programs are needed, as they are currently
structured. I don't really believe they serve any good purposes, and
actually serve to solidify positions of power, slanted towards existing
power centers, which is antithetical to a meritocratic community.

Furthermore, the structures we've built into the OpenStack community
governance has resulted in perverse incentives. There is this constant
struggle to be legitimized by being included in a Program, incubated,
and then included in the integrated release. Projects, IMO, should be
free to innovate in *any* area of OpenStack, including areas with
existing integrated projects. We should be more open, not less.


I'm also concerned with making a program a collection of competing
teams, rather than a single team sharing the same meetings and
electing the same leadership, working all together. I don't want the
teams competing to get a number of contributors that would let them
game the elections and take over the program leadership. I think such
a setup would just increase the political tension inside programs,
and we have enough of it already.


By prohibiting competition within a Program, you don't magically get rid
of the competition, though. :) The competition will continue to exist,
and divisions will continue to be increased among the people working on
the same general area. You can't force people to get in-line with a
project whose vision or architectural design principles they don't share.


If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and 
change the role of the TC to instead play an advisory role to upcoming 
(and existing!) projects on the best ways to integrate with other 
OpenStack projects, if integration is something that is natural for the 
project to work towards.



That would leave the horizontal programs like Docs, QA or Infra,
where the team and the category are the same thing, as outliers again
(like they were before we did programs).


What is the purpose of having these programs, though? If it's just to 
have a PTL, then I think we need to reconsider the whole concept of 
Programs. We should not be putting in place structures that just serve 
to create centers of power. *Projects* will naturally find/elect/choose 
not to have one or more technical leads. Why should we limit entire 
categories of projects to having a single Lead person? What purpose does 
the role fill that could not be filled in a looser, more natural 
fashion? Since the TC is no longer composed of each integrated project 
PTL along with 

Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Daniel P. Berrange
On Wed, Aug 20, 2014 at 08:33:31AM -0400, Jay Pipes wrote:
 On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
 On 08/20/2014 08:27 AM, Joe Gordon wrote:
 On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com
 mailto:philip@hp.com wrote:
 
 -Original Message-
 From: Nikola Đipanov [mailto:ndipa...@redhat.com
 mailto:ndipa...@redhat.com]
 Sent: 19 August 2014 17:50
 To: openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible
 Resource
 Tracking
 
 On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
 On the other hand, ERT discussion is decoupled from the scheduler
 split discussion and will be delayed until Extensible Resource Tracker
 owner (Paul Murray) is back from vacation.
 In the mean time, we're considering new patches using ERT as
 non-acceptable, at least until a decision is made about ERT.
 
 
 Even though this was not officially agreed I think this is the least
 we can do
 under the circumstances.
 
 A reminder that a revert proposal is up for review still, and I
 consider it fair
 game to approve, although it would be great if we could hear from
 Paul first:
 
https://review.openstack.org/115218
 
 Given the general consensus seemed to be to wait some before deciding
 what to do here, isn't putting the revert patch up for approval a tad
 premature ?
 
 There was a recent discussion about reverting patches, and from that
 (but not only) my understanding is that we should revert whenever in doubt.
 
 Right.
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html
 
 Putting the patch back in is easy, and if proven wrong I'd be the first
 to +2 it. As scary as they sound - I don't think reverts are a big deal.
 
 Neither do I. I think it's more appropriate to revert quickly and then add
 it back after any discussions, per the above revert policy.
 
 
 The RT may be not able to cope with all of the new and more complex
 resource types we're now trying to schedule, and so it's not surprising
 that the ERT can't fix that.  It does however address some specific use
 cases that the current RT can't cope with,  the spec had a pretty
 thorough review under the new process, and was discussed during the last
 2 design summits.   It worries me that we're continually failing to make
 even small and useful progress in this area.
 
 Sylvain's approach of leaving the ERT in place so it can be used for
 the use cases it was designed for while holding back on doing some of
 the more complex things than might need either further work in the ERT,
 or some more fundamental work in the RT (which feels like as L or M
 timescales based on current progress) seemed pretty pragmatic to me.
 
 ++, I really don't like the idea of rushing the revert of a feature that
 went through significant design discussion especially when the author is
 away and cannot defend it.
 
 Fair enough - I will WIP the revert until Phil is back. It's the right
 thing to do seeing that he is away.
 
 Well, it's as much (or more?) Paul Murray and Andrea Rosa :)
 
 However - I don't agree with using the length of discussion around the
 feature as a valid argument against reverting.
 
 Neither do I.
 
 I've supplied several technical arguments on the original thread to why
 I think we should revert it, and would expect a discussion that either
 refutes them, or provides alternative ways forward.
 
 Saying 'but we talked about it at length' is the ultimate appeal to
 imaginary authority and frankly not helping at all.
 
 Agreed. Perhaps it's just my provocative nature, but I hear a lot of we've
 already decided/discussed this talk especially around the scheduler and RT
 stuff, and I don't think the argument holds much water. We should all be
 willing to reconsider design decisions and discussions when appropriate, and
 in the case of the RT, this discussion is timely and appropriate due to the
 push to split the scheduler out of Nova (prematurely IMO).

Yes, this is absolutely right. Even if we have approved a spec / blueprint
we *always* reserve the right to change our minds at a later date if new
information or points of view come to light. Hopefully this will be fairly
infrequent and we won't do it lightly, but it is a key thing we accept as
a possible outcome of the process we follow.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Enable SSL between client and API exposed via public URL with HAProxy

2014-08-20 Thread Vladimir Kuklin
Hi, Guillaume. Yes, we are looking forward to removing the simple non-HA mode.


On Wed, Aug 20, 2014 at 5:14 PM, Guillaume Thouvenin thouv...@gmail.com
wrote:

 Hi all,

  I wrote a design to enable SSL between external client and OpenStack
 public endpoints that provide APIs on public network. This design is
 available for reviewing here: https://review.openstack.org/#/c/102273/
 Of course all comments are welcome :)

  I also started to work on puppet manifest [1] and [2] for the deployment.
 I made the assumption that in the future version of Fuel (6.0 and above)
 all deployments will be done in HA mode. That means that even if you have
 only one controller, haproxy will be used. Can someone from fuel-core
 confirm this (or not)?

 Best regards,
 Guillaume

 [1] https://review.openstack.org/102273
 [2] https://review.openstack.org/114909

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in unit test case

2014-08-20 Thread Vijay Venkatachalam
Hi,

I am writing a unit test case with a subnet context; code here [1].
When the context exits, a delete of the subnet is attempted and
I am getting a MismatchError. Traceback posted here [2].

What could be going wrong here?

Testcase is written like the following
--
with self.subnet() as subnet:
blah1
blah2
blah3
--

I am getting a MismatchError: 409 != 204 error at blah3 when context exits.

[1] UnitTestCase Snippet - http://pastebin.com/rMtf2dQX
[2] Traceback  - http://pastebin.com/2sPcZ8Jk


Thanks,
Vijay V.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Nikola Đipanov
On 08/20/2014 02:33 PM, Jay Pipes wrote:
 On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
 Fair enough - I will WIP the revert until Phil is back. It's the right
 thing to do seeing that he is away.
 
 Well, it's as much (or more?) Paul Murray and Andrea Rosa :)
 

Yes sorry - meant to say Paul but was replying to Phil :)

So - until Paul Murray is back.

N.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in unit test case

2014-08-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 20/08/14 15:42, Vijay Venkatachalam wrote:
 Hi,
 
 I am writing a unit testcase with context as subnet, code here [1].
  When the context exits a delete of subnet is attempted and I am
 getting a MismatchError . Traceback posted here [2].
 
 What could be going wrong here?
 
 Testcase is written like the following -- with
 self.subnet() as subnet: blah1 blah2 blah3 --
 
 I am getting a MismatchError: 409 != 204 error at blah3 when
 context exits.
 
 [1] UnitTestCase Snippet - http://pastebin.com/rMtf2dQX [2]
 Traceback  - http://pastebin.com/2sPcZ8Jk
 
 

I guess that it fails to delete the subnet because the load balancer
is linked to it (you don't delete LB, see no_delete=True).

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJT9KrcAAoJEC5aWaUY1u57OCQH/3Z4o2tP7pkKzVGWCbP5PsyQ
ohKYAmRZ34nP7CrzHrN3fF9Q6oRKDAyyP1r2oklhV3iT3m/ZFbN1azZL4wB2WX1k
km+x7exQVckDDEwBUbP9x57zxIwheH5jAxgbRHasttDB3s06n/BOGOQ1h51+LHX6
IjODTiGI0r4jCygQFmcMKb85Nzc9CMJieh33RUDduFj3LkHwvx/mQWaZ0zl/O/NQ
huWjEd8ridD9lOgiUFRdBAn+3JRAEcdkIc8YLZcF40sqv1uJJlMtKtQEDjd9ve2c
jDpQSK263P7InxGo6c13O75mtA92imeKDmZUwbejOPWbtNWqdfyC70VB51xRIQI=
=1+pZ
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in unit test case

2014-08-20 Thread Vijay Venkatachalam
I observed the following text as well: One or more ports have an IP allocation
from this subnet.
It looks like the loadbalancer context exit at blah2 is not cleaning up the port
that was created.
This ultimately resulted in the subnet delete at blah3 failing.

--
with self.subnet() as subnet:
blah1
with self.loadbalancer() as lb:
blah2
blah3
--

-Original Message-
From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
Sent: 20 August 2014 19:12
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in 
unit test case

Hi,

I am writing a unit testcase with context as subnet, code here [1]. 
When the context exits a delete of subnet is attempted and I am getting a 
MismatchError . Traceback posted here [2]. 

What could be going wrong here?

Testcase is written like the following
--
with self.subnet() as subnet:
blah1
blah2
blah3
--

I am getting a MismatchError: 409 != 204 error at blah3 when context exits.

[1] UnitTestCase Snippet - http://pastebin.com/rMtf2dQX [2] Traceback  - 
http://pastebin.com/2sPcZ8Jk


Thanks,
Vijay V.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.utils 0.2.0 release

2014-08-20 Thread Doug Hellmann
The Oslo team is pleased to announce the release of version 0.2.0 of 
oslo.utils, the catch-all library containing small tool modules used throughout 
OpenStack.

This release adds the mask_password() function from the incubator to 
oslo.utils.strutils, along with some bug fixes and enhancements over the 
incubated version.
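
A minimal usage sketch (the exact masked output can vary slightly with the
input format):

# Sketch using the 0.2.0 'oslo.utils' namespace package.
from oslo.utils import strutils

logline = "req body: {'user': 'admin', 'password': 'SuperSecret'}"
print(strutils.mask_password(logline))
# expected to print something like:
# req body: {'user': 'admin', 'password': '***'}

# a custom replacement string can be passed as well
print(strutils.mask_password(logline, secret='XXX'))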

Please report issues using the Oslo bug tracker on launchpad: 
https://bugs.launchpad.net/oslo

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-20 Thread Luke Gorrie
On 19 August 2014 23:15, Alan Kavanagh alan.kavan...@ericsson.com wrote:

 +1, I am hoping this is just a short term holding point and this will
 eventually be merged into main branch as this is a feature a lot of
 companies, us included would definitely benefit from having supported and
 many thanks to Sean for sticking with this and continue to push this.


Agreed. Thank you Sean for the great work on QoS. We are looking forward to
seeing this merged to master one day and meanwhile maintaining it on our
NFV branch for our own use.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-20 Thread Tim Bell
Michael has been posting very informative blogs on the summary of the mid-cycle 
meetups for Nova. The one on the Nova Network to Neutron migration was of 
particular interest to me as it raises a number of potential impacts for the 
CERN production cloud. The blog itself is at 
http://www.stillhq.com/openstack/juno/14.html

I would welcome suggestions from the community on the approach to take and 
areas that the nova/neutron team could review to limit the impact on the cloud 
users.

For some background, CERN has been running nova-network in flat DHCP mode since 
our first Diablo deployment. We moved to production for our users in July last 
year and are currently supporting around 70,000 cores, 6 cells, 100s of 
projects and thousands of VMs. Upgrades generally involve disabling the API 
layer while allowing running VMs to carry on without disruption. Within the 
time scale of the migration to Neutron (M release at the latest), these numbers 
are expected to double.

For us, the concerns we have with the 'cold' approach would be on the user 
impact and operational risk of such a change. Specifically,


1.  A big bang approach of shutting down the cloud, upgrade and the 
resuming the cloud would cause significant user disruption

2.  The risks involved with a cloud of this size and the open source 
network drivers would be difficult to mitigate through testing and could lead 
to site wide downtime

3.  Rebooting VMs may be possible to schedule in batches but would need to 
be staggered to keep availability levels

Note, we are not looking to use Neutron features initially, just to find a 
functional equivalent of the flat DHCP network.

We would appreciate suggestions on how we could achieve a smooth migration for 
the simple flat DHCP models.

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday August 21st at 22:00 UTC

2014-08-20 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, August 21st at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


pgpizcC8saOds.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

I've read the proposal for incubator as described at [1], and I have
several comments/concerns/suggestions to this.

Overall, the idea of giving some space for experimentation that does
not alienate parts of community from Neutron is good. In that way, we
may relax review rules and quicken turnaround for preview features
without loosing control on those features too much.

Though the way it's to be implemented leaves several concerns, as follows:

1. From packaging perspective, having a separate repository and
tarballs seems not optimal. As a packager, I would better deal with a
single tarball instead of two. Meaning, it would be better to keep the
code in the same tree.

I know that we're afraid of shipping the code for which some users may
expect the usual level of support and stability and compatibility.
This can be solved by making it explicit that the incubated code is
unsupported and used at the user's own risk. 1) The experimental code
probably wouldn't be installed unless explicitly requested, and 2) it
would be put in a separate namespace (like 'preview', 'experimental',
or 'staging', as they call it in the Linux kernel world [2]).
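
For instance (layout and names below are purely illustrative), the staging
subtree could warn loudly the moment anything is imported from it:

# neutron/preview/__init__.py  -- illustrative only
import warnings

warnings.warn(
    "neutron.preview contains experimental, unsupported code; interfaces "
    "may change or disappear without notice",
    RuntimeWarning)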

This would facilitate keeping commit history instead of losing it
during graduation.

Yes, I know that people don't like to be called experimental or
preview or incubator... And maybe neutron-labs repo sounds more
appealing than an 'experimental' subtree in the core project. Well,
there are lots of EXPERIMENTAL features in Linux kernel that we
actively use (for example, btrfs is still considered experimental by
Linux kernel devs, while being exposed as a supported option to RHEL7
users), so I don't see how that naming concern is significant.

2. If those 'extras' are really moved into a separate repository and
tarballs, this will raise questions on whether packagers even want to
cope with it before graduation. When it comes to supporting another
build manifest for a piece of code of unknown quality, this is not the
same as just cutting part of the code into a separate
experimental/labs package. So unless I'm explicitly asked to package
the incubator, I wouldn't probably touch it myself. This is just too
much effort (btw the same applies to moving plugins out of the tree -
once it's done, distros will probably need to reconsider which plugins
they really want to package; at the moment, those plugins do not
require lots of time to ship them, but having ~20 separate build
manifests for each of them is just too hard to handle without clear
incentive).

3. The fact that neutron-incubator is not going to maintain any stable
branches for security fixes and major failures concerns me too. In
downstream, we don't generally ship the latest and greatest from PyPI.
Meaning, we'll need to maintain our own downstream stable branches for
major fixes. [BTW we already do that for python clients.]

4. Another unclear part of the proposal is that notion of keeping
Horizon and client changes required for incubator features in
neutron-incubator. AFAIK the repo will be governed by Neutron Core
team, and I doubt the team is ready to review Horizon changes (?). I
think I don't understand how we're going to handle that. Can we just
postpone Horizon work till graduation?

5. The wiki page says that graduation will require full test coverage.
Does it mean 100% coverage in 'coverage' report? I don't think our
existing code is even near that point, so maybe it's not fair to
require that from graduated code.

A separate tree would probably be reasonable if it would be governed
by a separate team. But as it looks now, it's still Neutron Cores who
will do the review heavy-lifting. So I wonder why not just applying
different review rules for patches for core and the staging subtree.

[1]: https://wiki.openstack.org/wiki/Network/Incubator
[2]:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/staging

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Zane Bitter

On 19/08/14 10:37, Jay Pipes wrote:


By graduating an incubated project into the integrated release, the
Technical Committee is blessing the project as the OpenStack way to do
some thing. If there are projects that are developed *in the OpenStack
ecosystem* that are actively being developed to serve the purpose that
an integrated project serves, then I think it is the responsibility of
the Technical Committee to take another look at the integrated project
and answer the following questions definitively:

  a) Is the Thing that the project addresses something that the
Technical Committee believes the OpenStack ecosystem benefits from by
the TC making a judgement on what is the OpenStack way of addressing
that Thing.

and IFF the decision of the TC on a) is YES, then:

  b) Is the Vision and Implementation of the currently integrated
project the one that the Technical Committee wishes to continue to
bless as the the OpenStack way of addressing the Thing the project
does.


I disagree with part (b); projects are not code - projects, like Soylent 
Green, are people. So it's not critical that the implementation is the 
one the TC wants to bless, what's critical is that the right people are 
involved to get to an implementation that the TC would be comfortable 
blessing over time. For example, everyone agrees that Ceilometer has 
room for improvement, but any implication that the Ceilometer is not 
interested in or driving towards those improvements (because of NIH or 
whatever) is, as has been pointed out, grossly unfair to the Ceilometer 
team.


I think the rest of your plan is a way of recognising this 
appropriately, that the current implementation is actually not the 
be-all and end-all of how the TC should view a project.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-20 Thread John Garbutt
On 18 August 2014 11:18, Thierry Carrez thie...@openstack.org wrote:
 Doug Hellmann wrote:
 On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
 Let me try to say it another way.  You seemed to say that it wasn't much
 to ask given the rate at which things happen in OpenStack.  I would
 argue that given the rate, we should not try to ask more of individuals
 (like this proposal) and risk burnout.  Instead, we should be doing our
 best to be more open an inclusive to give the project the best chance to
 grow, as that's the best way to get more done.

 I think an increased travel expectation is a raised bar that will hinder
 team growth, not help it.

 +1, well said.

 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)

Same here, sorry for posting so high up in the tread.

 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that
 requiring 4 trips per year for a core dev is unreasonable. Where is
 the limit ? Wouldn't we be more productive and aligned if we did one per
 month ? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory travel at 2 per year.

I think its important we try to get everyone together as often as
possible, it seems like 2 per year is the best compromise.

I prefer expected rather than mandatory, but that's a detail. Sometimes
there are more important family things, and that's totally fine.
But lack of budget seems like a fairly poor excuse for something that's
expected.

 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.


 Why are design summits less productive than mid-cycle meetups these days
 ? Is it because there are too many non-contributors in the design summit
 rooms ? Is it the 40-min format ? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)
 ? Is it that beginning of cycle is not the best moment ? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.

What I remember about why we needed the mid-cycles:

In Hong Kong:
* we were missing some key people
* so the midcycle in the US was very useful to catch up those people
* but it was great to meet some new contributors

In Atlanta, although we had fairly good core attendance:
* we did badly on tracking and following up on what was discussed
* we had quite a few information and mentoring sessions, that were not
a great use of group time
* had some big debates that needed more time, but we didn't have any slack
* quite burnt out towards the end (after two previous days of ops and
cross project sessions)

 My gut feeling is that having a restricted audience and a smaller group
 lets people get to the bottom of an issue and reach consensus. And that
 you need at least half a day or a full day of open discussion to reach
 such alignment. And that it's not particularly great to get such
 alignment in the middle of the cycle, getting it at the start is still
 the right way to align with the release cycle.

It feels a bit exclusive. I think saying we prefer ATLs attending seems OK.

Maybe for Paris we could try out some of these things:

1) Get rid of sessions that can be replaced by the spec process:
* this was a popular idea at the mid-cycle
* For feature sessions, we should try to write the spec first, to see if we
really need a session
* Also use the spec process to help mentor people
* maybe have more targeted face to face mentor meetings, where a
summit session is wasteful

2) Schedule an afternoon of freestyle big picture debates later in the week
* 'but we must fix X first' discussions come up quite often; we can
follow through on those here
* maybe after lunch on the last day?
* doesn't mean we don't have other big picture sessions explicitly scheduled

3) Schedule a slot to discuss, and propose some release priorities
* it would be good to generate a release priority statement in the
last session
* sum up the key big issues that have come up and need resolving, etc

 If we manage to have alignment at the design summit, then it doesn't
 spell the end of the mid-cycle things. But then, ideally the extra
 mid-cycle gatherings should be focused on getting specific stuff done,
 rather than general team alignment. Think workshop/hackathon rather than
 private gathering. The goal of the workshop would be published in
 advance, and people could opt to join that. It would be totally optional.

+1

Cheers,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in unit test case

2014-08-20 Thread Brandon Logan
Hey Vijay,
Figured out the issue you are having.  In that particular test you are
creating the same subnet twice.  The first time you create it is in the
contextlib.nested, the second time is the self.loadbalancer method that
will create a subnet if you do not pass it.  So you should pass
subnet=subnet in the self.loadbalancer method.

After that there are other minor issues in the driver, mainly accessing
the obj passed in as a dictionary and not an object (obj['id'] vs
obj.id).
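
For concreteness, a rough sketch of the fixed nesting, using the existing
test helpers (the body is just a placeholder):

    with self.subnet() as subnet:
        # pass the already-created subnet so self.loadbalancer() does not
        # try to create a second, conflicting one
        with self.loadbalancer(subnet=subnet) as lb:
            pass  # exercise the driver here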

Let me know if you need more information.

Thanks,
Brandon
On Wed, 2014-08-20 at 14:27 +, Vijay Venkatachalam wrote:
 I observed the following text as well One or more ports have an IP 
 allocation from this subnet.
 Looks like loadbalancer context exit at blah2 is not cleaning up the port 
 that was created. 
 This ultimately resulted in failure of delete subnet at blah3.
 
 --
 with self.subnet() as subnet:
 blah1
 with self.loadbalancer() as lb:
 blah2
 blah3
 --
 
 -Original Message-
 From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
 Sent: 20 August 2014 19:12
 To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
 Subject: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in 
 unit test case
 
 Hi,
 
 I am writing a unit testcase with context as subnet, code here [1]. 
 When the context exits a delete of subnet is attempted and I am getting a 
 MismatchError . Traceback posted here [2]. 
 
 What could be going wrong here?
 
 Testcase is written like the following
 --
 with self.subnet() as subnet:
 blah1
 blah2
 blah3
 --
 
 I am getting a MismatchError: 409 != 204 error at blah3 when context exits.
 
 [1] UnitTestCase Snippet - http://pastebin.com/rMtf2dQX [2] Traceback  - 
 http://pastebin.com/2sPcZ8Jk
 
 
 Thanks,
 Vijay V.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Sylvain Bauza


On 20/08/2014 15:13, Daniel P. Berrange wrote:

On Wed, Aug 20, 2014 at 08:33:31AM -0400, Jay Pipes wrote:

On 08/20/2014 04:48 AM, Nikola Đipanov wrote:

On 08/20/2014 08:27 AM, Joe Gordon wrote:

On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com
mailto:philip@hp.com wrote:

-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com

mailto:ndipa...@redhat.com]

Sent: 19 August 2014 17:50
To: openstack-dev@lists.openstack.org

mailto:openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible

Resource

Tracking

On 08/19/2014 06:39 PM, Sylvain Bauza wrote:

On the other hand, ERT discussion is decoupled from the scheduler
split discussion and will be delayed until Extensible Resource Tracker
owner (Paul Murray) is back from vacation.
In the mean time, we're considering new patches using ERT as
non-acceptable, at least until a decision is made about ERT.


Even though this was not officially agreed I think this is the least

we can do

under the circumstances.

A reminder that a revert proposal is up for review still, and I

consider it fair

game to approve, although it would be great if we could hear from

Paul first:

   https://review.openstack.org/115218

Given the general consensus seemed to be to wait some before deciding

what to do here, isn't putting the revert patch up for approval a tad
premature ?

There was a recent discussion about reverting patches, and from that
(but not only) my understanding is that we should revert whenever in doubt.

Right.

http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html


Putting the patch back in is easy, and if proven wrong I'd be the first
to +2 it. As scary as they sound - I don't think reverts are a big deal.

Neither do I. I think it's more appropriate to revert quickly and then add
it back after any discussions, per the above revert policy.


The RT may be not able to cope with all of the new and more complex

resource types we're now trying to schedule, and so it's not surprising
that the ERT can't fix that.  It does however address some specific use
cases that the current RT can't cope with,  the spec had a pretty
thorough review under the new process, and was discussed during the last
2 design summits.   It worries me that we're continually failing to make
even small and useful progress in this area.

Sylvain's approach of leaving the ERT in place so it can be used for

the use cases it was designed for while holding back on doing some of
the more complex things than might need either further work in the ERT,
or some more fundamental work in the RT (which feels like as L or M
timescales based on current progress) seemed pretty pragmatic to me.

++, I really don't like the idea of rushing the revert of a feature that
went through significant design discussion especially when the author is
away and cannot defend it.

Fair enough - I will WIP the revert until Phil is back. It's the right
thing to do seeing that he is away.

Well, it's as much (or more?) Paul Murray and Andrea Rosa :)


However - I don't agree with using the length of discussion around the
feature as a valid argument against reverting.

Neither do I.


I've supplied several technical arguments on the original thread to why
I think we should revert it, and would expect a discussion that either
refutes them, or provides alternative ways forward.

Saying 'but we talked about it at length' is the ultimate appeal to
imaginary authority and frankly not helping at all.

Agreed. Perhaps it's just my provocative nature, but I hear a lot of "we've
already decided/discussed this" talk especially around the scheduler and RT
stuff, and I don't think the argument holds much water. We should all be
willing to reconsider design decisions and discussions when appropriate, and
in the case of the RT, this discussion is timely and appropriate due to the
push to split the scheduler out of Nova (prematurely IMO).

Yes, this is absolutely right. Even if we have approved a spec / blueprint
we *always* reserve the right to change our minds at a later date if new
information or points of view come to light. Hopefully this will be fairly
infrequent and we won't do it lightly, but it is a key thing we accept as
a possible outcome of the process we follow.


While I totally understand the possibility to change our minds about an 
approved spec, I'm a bit worried about the timeline and how it can 
possibly impact deliveries.
In our case, the problem was raised between two and three months after the 
design session occurred, and more importantly, 1 week before Feature 
Proposal Freeze.
As a consequence, it freezes the specification and we lose the 
possibility of splitting the Scheduler by Juno.


I know there are discussions about how Nova should handle feature 
delivery, I just want to make sure that this corner case is part of the 
talks. IMHO, we need to make sure at least all cores validate a spec 
(either explicitly or 

Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-20 Thread Salvatore Orlando
Some comments inline.

Salvatore

On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:


 Hi all,

 I've read the proposal for incubator as described at [1], and I have
 several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation that does
 not alienate parts of community from Neutron is good. In that way, we
 may relax review rules and quicken turnaround for preview features
 without loosing control on those features too much.

 Though the way it's to be implemented leaves several concerns, as follows:

 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with a
 single tarball instead of two. Meaning, it would be better to keep the
 code in the same tree.

 I know that we're afraid of shipping the code for which some users may
 expect the usual level of support and stability and compatibility.
 This can be solved by making it explicit that the incubated code is
 unsupported and used on your user's risk. 1) The experimental code
 wouldn't probably be installed unless explicitly requested, and 2) it
 would be put in a separate namespace (like 'preview', 'experimental',
 or 'staging', as the call it in Linux kernel world [2]).

 This would facilitate keeping commit history instead of loosing it
 during graduation.

 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project. Well,
 there are lots of EXPERIMENTAL features in Linux kernel that we
 actively use (for example, btrfs is still considered experimental by
 Linux kernel devs, while being exposed as a supported option to RHEL7
 users), so I don't see how that naming concern is significant.


I think this is the whole point of the discussion around the incubator and
the reason for which, to the best of my knowledge, no proposal has been
accepted yet.


 2. If those 'extras' are really moved into a separate repository and
 tarballs, this will raise questions on whether packagers even want to
 cope with it before graduation. When it comes to supporting another
 build manifest for a piece of code of unknown quality, this is not the
 same as just cutting part of the code into a separate
 experimental/labs package. So unless I'm explicitly asked to package
 the incubator, I wouldn't probably touch it myself. This is just too
 much effort (btw the same applies to moving plugins out of the tree -
 once it's done, distros will probably need to reconsider which plugins
 they really want to package; at the moment, those plugins do not
 require lots of time to ship them, but having ~20 separate build
 manifests for each of them is just too hard to handle without clear
 incentive).


One reason instead for moving plugins out of the main tree is allowing
their maintainers to have full control over them.
If there were a way with gerrit or similar tools to give somebody rights to merge
code only on a subtree, I probably would not even consider the option of
moving plugins and drivers away. From my perspective it's not that I don't
want them in the main tree, it's that I don't think it's fair for core team
reviewers to take responsibility for approving code that they can't fully
test (3rd party CI helps, but is still far from having a decent level of
coverage).



 3. The fact that neutron-incubator is not going to maintain any stable
 branches for security fixes and major failures concerns me too. In
 downstream, we don't generally ship the latest and greatest from PyPI.
 Meaning, we'll need to maintain our own downstream stable branches for
 major fixes. [BTW we already do that for python clients.]


This is a valid point. We need to find an appropriate trade off. My
thinking was that incubated projects could be treated just like client
libraries from a branch perspective.



 4. Another unclear part of the proposal is that notion of keeping
 Horizon and client changes required for incubator features in
 neutron-incubator. AFAIK the repo will be governed by Neutron Core
 team, and I doubt the team is ready to review Horizon changes (?). I
 think I don't understand how we're going to handle that. Can we just
 postpone Horizon work till graduation?


I too do not think it's a great idea, mostly because there will be horizon
bits not shipped with horizon, and not verified by horizon core team.
I think it would be ok to have horizon support for neutron incubator. It
won't be the first time that support for experimental features is added in
horizon.


5. The wiki page says that graduation will require full test coverage.
 Does it mean 100% coverage in 'coverage' report? I don't think our
 existing code is even near that point, so maybe it's not fair to
 require that from graduated code.


I agree that by these standards we should take the whole neutron and return

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Zane Bitter

On 11/08/14 05:24, Thierry Carrez wrote:

So the idea that being (and remaining) in the integrated release should
also be judged on technical merit is a slightly different effort. It's
always been a factor in our choices, but like Devananda says, it's more
difficult than just checking a number of QA/integration checkboxes. In
some cases, blessing one project in a problem space stifles competition,
innovation and alternate approaches. In some other cases, we reinvent
domain-specific solutions rather than standing on the shoulders of
domain-specific giants in neighboring open source projects.


I totally agree that these are the things we need to be vigilant about.

Stifling competition is a big worry, but it appears to me that a lot of 
the stifling is happening even before incubation. Everyone's time is 
limited, so if you happen to notice a new project on the incubation 
trajectory doing things in what you think is the Wrong Way, you're most 
likely to either leave some drive-by feedback or to just ignore it and 
carry on with your life. What you're most likely *not* to do is to start 
a competing project to prove them wrong, or to jump in full time to the 
existing project and show them the light. It's really hard to argue 
against the domain experts too - when you're acutely aware of how 
shallow your knowledge is in a particular area it's very hard to know 
how hard to push. (Perhaps ironically, since becoming a PTL I feel I 
have to be much more cautious in what I say too, because people are 
inclined to read too much into my opinion - I wonder if TC members feel 
the same pressure.) I speak from first-hand instances of guilt here - 
for example, I gave some feedback to the Mistral folks just before the 
last design summit[1], but I haven't had time to follow it up at all. I 
wouldn't be a bit surprised if they showed up with an incubation 
request, a largely-unchanged user interface and an expectation that I 
would support it.


The result is that projects often don't hear the feedback they need 
until far too late - often when they get to the incubation review (maybe 
not even their first incubation review). In the particularly unfortunate 
case of Marconi, it wasn't until the graduation review. (More about that 
in a second.) My best advice to new projects here is that you must be 
like a ferret up the pant-leg of any negative feedback. Grab hold of any 
criticism and don't let go until you have either converted the person 
giving it into your biggest supporter, been converted by them, or 
provoked them to start a competing project. (Any of those is a win as 
far as the community is concerned.)


Perhaps we could consider a space like a separate mailing list 
(openstack-future?) reserved just for announcements of Related projects, 
their architectural principles, and discussions of the same?  They 
certainly tend to get drowned out amidst the noise of openstack-dev. 
(Project management, meeting announcements, and internal project 
discussion would all be out of scope for this list.)


As for reinventing domain-specific solutions, I'm not sure that happens 
as often as is being made out. IMO the defining feature of IaaS that 
makes the cloud the cloud is on-demand (i.e. real-time) self-service. 
Everything else more or less falls out of that requirement, but the very 
first thing to fall out is multi-tenancy and there just aren't that many 
multi-tenant services floating around out there. There are a couple of 
obvious strategies to deal with that: one is to run existing software 
within a tenant-local resource provisioned by OpenStack (Trove and 
Sahara are examples of this), and the other is to wrap a multi-tenancy 
framework around an existing piece of software (Nova and Cinder are 
examples of this). (BTW the former is usually inherently less 
satisfying, because it scales at a much coarser granularity.) The answer 
to a question of the form:


Why do we need OpenStack project $X, when open source project $Y 
already exists?


is almost always:

Because $Y is not multi-tenant aware; we need to wrap it with a 
multi-tenancy layer with OpenStack-native authentication, metering and 
quota management. That even allows us to set up an abstraction layer so 
that you can substitute $Z as the back end too.
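
(Purely as an illustration of that wrapping pattern - not anyone's actual
code - the shape is usually something like this: tenant scoping from the
request context plus a pluggable single-tenant backend.)

    class MultiTenantQueueAPI(object):
        """Hypothetical multi-tenancy layer over a single-tenant backend."""

        def __init__(self, backend):
            self.backend = backend  # e.g. a driver wrapping $Y or $Z

        def post_message(self, context, queue, body):
            # OpenStack-native auth: the keystone context carries the
            # project id; quota and metering checks would hook in here too
            scoped_queue = '%s/%s' % (context.project_id, queue)
            return self.backend.push(scoped_queue, body)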


This is completely uncontroversial when you substitute X, Y, Z = Nova, 
libvirt, Xen. However, when you instead substitute X, Y, Z = 
Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial. 
I'm all in favour of a healthy scepticism, but I think we've passed that 
point now. (How would *you* make an AMQP bus multi-tenant?)


To be clear, Marconi did make a mistake. The Marconi API presented 
semantics to the user that excluded many otherwise-obvious choices of 
back-end plugin (i.e. Qpid/RabbitMQ). It seems to be a common thing (see 
also: Mistral) to want to design for every feature an existing 
Enterprisey application might use, which IMHO kind of ignores the fact 
that 

Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Ben Nemec
On 08/20/2014 01:40 AM, Angus Lees wrote:
 On Mon, 18 Aug 2014 10:05:28 PM Pádraig Brady wrote:
 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:

 Hi Yuriy,

 […]

 Looking forward to your opinions.

 This looks like a good summary of the situation.

 I've added a solution E based on pthread, but didn't get very far about
 it for now.

 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.

 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.

 Pádraig.
 
 I added a C2 (fcntl locks on byte ranges) option to the etherpad that I 
  believe addresses all the issues raised. I must be missing something 
 because it seems too obvious to have not been considered before :/
 

I've added some additional comments to the etherpad, but the only
problem I'm aware of with using lock files was that it requires
lock_path to be set and have secure permissions.  Whether we lock entire
files or byte ranges, that is still true.  Also, we still need the
ability to use different lock files because some locks use a specific
file path that isn't in our configured lock_path, and this is necessary
behavior until we can replace it with tooz or something.

Also, I believe it has the same hash collision issue as sysv semaphores.
 While it's hugely unlikely that a hash would collide in the number of
bits we're dealing with, an unlucky collision would be a nightmare to
debug.  All things being equal I'd prefer to eliminate the chance entirely.
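
For reference, a minimal sketch of what the C2 idea looks like in practice
(the file path and range size are arbitrary choices for illustration, not a
proposal for oslo's actual API):

    import fcntl
    import hashlib
    import os

    LOCK_FILE = '/var/lock/oslo-locks'    # assumes a secured lock_path
    RANGE_BITS = 32                       # byte offsets 0 .. 2**32 - 1

    def _offset(name):
        digest = hashlib.sha256(name.encode('utf-8')).hexdigest()
        # hash collisions are hugely unlikely but, as noted above, not
        # impossible - two names could map to the same byte
        return int(digest, 16) % (2 ** RANGE_BITS)

    def acquire(name):
        fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR, 0o600)
        # lock a single byte at the hashed offset; auto-released if the
        # process dies, and it works across processes on the same host
        fcntl.lockf(fd, fcntl.LOCK_EX, 1, _offset(name), os.SEEK_SET)
        return fd

    def release(fd, name):
        fcntl.lockf(fd, fcntl.LOCK_UN, 1, _offset(name), os.SEEK_SET)
        os.close(fd)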

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] indicating sample provenance

2014-08-20 Thread Chris Dent


One of the outcomes from Juno will be horizontal scalability in the
central agent and alarm evaluator via partitioning[1]. The compute
agent will get the same capability if you choose to use it, but it
doesn't make quite as much sense.

I haven't investigated the alarm evaluator side closely yet, but one
concern I have with the central agent partitioning is that, as far
as I can tell, it will result in stored samples that give no
indication of which (of potentially very many) central-agent it came
from.

This strikes me as a debugging nightmare when something goes wrong
with the content of a sample that makes it all the way to storage.
We need some way, via the artifact itself, to narrow the scope of
our investigation.

a) Am I right that no indicator is there?

b) Assuming there should be one:

   * Where should it go? Presumably it needs to be an attribute of
 each sample because as agents leave and join the group, where
 samples are published from can change.

   * How should it be named? The never-ending problem.
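
As a straw man for the first point, something as simple as stamping each
sample's metadata at publish time would do (the field name is entirely
hypothetical, not an existing ceilometer attribute):

    import socket

    AGENT_ID = 'central-agent:%s' % socket.gethostname()

    def tag_provenance(sample_dict):
        # treat the sample as the dict it becomes on the wire / in storage
        meta = sample_dict.get('resource_metadata') or {}
        meta['published_by'] = AGENT_ID
        sample_dict['resource_metadata'] = meta
        return sample_dict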

Thoughts?

[1] https://review.openstack.org/#/c/113549/
[2] https://review.openstack.org/#/c/115237/


--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Gregory Haynes
Excerpts from Derek Higgins's message of 2014-08-20 09:06:48 +:
 On 19/08/14 20:58, Gregory Haynes wrote:
  Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:
  One last comment, maybe a bit OT but I'm raising it here to see what is 
  the other people opinion: how about we modify the -ha job so that at 
  some point we actually kill one of the controllers and spawn a second 
  user image?
  
  I think this is a great long term goal, but IMO performing an update
  isnt really the type of verification we want for this kind of test. We
  really should have some minimal tempest testing in place first so we can
  verify that when these types of failures occur our cloud remains in a
  functioning state.
 
 Greg, you said 'performing an update'; did you mean killing a controller
 node?
 
 if so I agree, verifying our cloud is still in working order with
 tempest would get us more coverage than spawning a node. So once we have
 tempest in place we can add a test to kill a controller node.
 

Ah, I misread the original message a bit, but it sounds like we're all on
the same page.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Vishvananda Ishaya
This may be slightly off-topic but it is worth mentioning that the use of 
threading.Lock[1]
which was included to make the locks thread safe seems to be leading to a 
deadlock in eventlet[2].
It seems like we have rewritten this too many times in order to fix minor pain 
points and are
adding risk to a very important component of the system.

[1] https://review.openstack.org/#/c/54581
[2] https://bugs.launchpad.net/nova/+bug/1349452

On Aug 18, 2014, at 2:05 PM, Pádraig Brady p...@draigbrady.com wrote:

 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:
 
 Hi Yuriy,
 
 […]
 
 Looking forward to your opinions.
 
 This looks like a good summary of the situation.
 
 I've added a solution E based on pthread, but didn't get very far about
 it for now.
 
 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.
 
 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.
 
 Pádraig.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource Tracking

2014-08-20 Thread Day, Phil


 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: 20 August 2014 14:13
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource
 Tracking
 
 On Wed, Aug 20, 2014 at 08:33:31AM -0400, Jay Pipes wrote:
  On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
  On 08/20/2014 08:27 AM, Joe Gordon wrote:
  On Aug 19, 2014 10:45 AM, Day, Phil philip@hp.com
  mailto:philip@hp.com wrote:
  
  -Original Message-
  From: Nikola Đipanov [mailto:ndipa...@redhat.com
  mailto:ndipa...@redhat.com]
  Sent: 19 August 2014 17:50
  To: openstack-dev@lists.openstack.org
  mailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible
  Resource
  Tracking
  
  On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
  On the other hand, ERT discussion is decoupled from the scheduler
  split discussion and will be delayed until Extensible Resource
  Tracker owner (Paul Murray) is back from vacation.
  In the mean time, we're considering new patches using ERT as
  non-acceptable, at least until a decision is made about ERT.
  
  
  Even though this was not officially agreed I think this is the
  least
  we can do
  under the circumstances.
  
  A reminder that a revert proposal is up for review still, and I
  consider it fair
  game to approve, although it would be great if we could hear from
  Paul first:
  
 https://review.openstack.org/115218
  
  Given the general consensus seemed to be to wait some before
  deciding
  what to do here, isn't putting the revert patch up for approval a
  tad premature ?
  
  There was a recent discussion about reverting patches, and from that
  (but not only) my understanding is that we should revert whenever in
 doubt.
 
  Right.
 
   http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html
 
  Putting the patch back in is easy, and if proven wrong I'd be the
  first to +2 it. As scary as they sound - I don't think reverts are a big 
  deal.
 
  Neither do I. I think it's more appropriate to revert quickly and then
  add it back after any discussions, per the above revert policy.
 
  
  The RT may be not able to cope with all of the new and more complex
  resource types we're now trying to schedule, and so it's not
  surprising that the ERT can't fix that.  It does however address
  some specific use cases that the current RT can't cope with,  the
   spec had a pretty thorough review under the new process, and was
 discussed during the last
  2 design summits.   It worries me that we're continually failing to make
  even small and useful progress in this area.
  
  Sylvain's approach of leaving the ERT in place so it can be used
  for
  the use cases it was designed for while holding back on doing some
  of the more complex things than might need either further work in
  the ERT, or some more fundamental work in the RT (which feels like
  as L or M timescales based on current progress) seemed pretty
 pragmatic to me.
  
   ++, I really don't like the idea of rushing the revert of a feature that
  went through significant design discussion especially when the
  author is away and cannot defend it.
  
  Fair enough - I will WIP the revert until Phil is back. It's the
  right thing to do seeing that he is away.
 
  Well, it's as much (or more?) Paul Murray and Andrea Rosa :)
 
  However - I don't agree with using the length of discussion around
  the feature as a valid argument against reverting.
 
  Neither do I.
 
  I've supplied several technical arguments on the original thread to
  why I think we should revert it, and would expect a discussion that
  either refutes them, or provides alternative ways forward.
  
  Saying 'but we talked about it at length' is the ultimate appeal to
  imaginary authority and frankly not helping at all.
 
  Agreed. Perhaps it's just my provocative nature, but I hear a lot of
  we've already decided/discussed this talk especially around the
  scheduler and RT stuff, and I don't think the argument holds much
  water. We should all be willing to reconsider design decisions and
  discussions when appropriate, and in the case of the RT, this
  discussion is timely and appropriate due to the push to split the scheduler
 out of Nova (prematurely IMO).
 
 Yes, this is absolutely right. Even if we have approved a spec / blueprint we
 *always* reserve the right to change our minds at a later date if new
 information or points of view come to light. Hopefully this will be fairly
 infrequent and we won't do it lightly, but it is a key thing we accept as a
 possible outcome of the process we follow.
 
My point was more that reverting a patch that does meet the use cases it was 
designed to cover, even if there is something more fundamental that needs to be 
looked at to cover some new use cases that weren't considered at the time, is 
the route to stagnation.   

It seems (unless 

Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Szépe Viktor

Is it the right list?

The PR was rejected: https://github.com/openstack/python-swiftclient/pull/14



Idézem/Quoting Szépe Viktor vik...@szepe.net:


Now bin\swift can only be started by python Scripts\swift
Windows does not support extensionless scripts.
Maybe I did it wrong, this is my first encounter with setuptools.

Please consider modifying setup.py

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True,
    entry_points={
        "console_scripts": [
            "swift=swiftclient.shell:main"
        ]
    })

Thank you!


Szépe Viktor
--
+36-20-4242498  s...@szepe.net  skype: szepe.viktor
Budapest, XX. kerület





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-20 Thread gordon chung
 a) Am I right that no indicator is there?
 
 b) Assuming there should be one:
 
 * Where should it go? Presumably it needs to be an attribute of
 each sample because as agents leave and join the group, where
 samples are published from can change.
 
 * How should it be named? The never-ending problem.
disclaimer: i'm just riffing and the following might be nonsense.
i don't think we have a formal indicator on where a sample came from. we do 
attach a message signature to each sample which verifies it hasn't been 
tampered with.[1] i could envision that being a way to trace a path (right now 
i'm not sure you're able to set a unique metering hash key per agent)
that said, i guess it's really dependent on how you plan on debugging? it might 
be easy to have some sort of logging to include the agents id and what sample 
it's publishing.
i guess also to extend your question about agents leaving/joining. i'd expect 
there is some volatility to the agents where an agent may or may not exist at 
the point of debugging... just curious what the benefit is of knowing who sent 
it if all the agents are just clones of each other.
[1] 
https://github.com/openstack/ceilometer/blob/a77dd2b5408eb120b7397a6f08bcbeef6e521443/ceilometer/publisher/rpc.py#L119-L124
cheers,
gord

  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Adam Lawson
Curious, did you follow the link and follow the Gerrit workflow? Seems your
rejection letter (unlike mine I received from my sweetheart in high school)
was due to process rather than merit. (
http://wiki.openstack.org/GerritWorkflow)


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



On Wed, Aug 20, 2014 at 11:15 AM, Szépe Viktor vik...@szepe.net wrote:

 Is it the right list?

 The PR was rejected: https://github.com/openstack/
 python-swiftclient/pull/14



 Idézem/Quoting Szépe Viktor vik...@szepe.net:

  Now bin\swift can only be started by python Scripts\swift
 Windows does not support extensionless scripts.
 Maybe I did it wrong, this is my first encounter with setuptools.


 Please consider modifying setup.py

 import setuptools

 setuptools.setup(
 setup_requires=['pbr'],
 pbr=True,
 entry_points={
 "console_scripts": [
 "swift=swiftclient.shell:main"
 ]
 })

 Thank you!


 Szépe Viktor
 --
 +36-20-4242498  s...@szepe.net  skype: szepe.viktor
 Budapest, XX. kerület





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Szépe Viktor

Thank you for your answer.
That workflow seems a huge job for me.

I leave this patch up to you.


Idézem/Quoting Adam Lawson alaw...@aqorn.com:


Curious, did you follow the link and follow the Gerrit workflow? Seems your
rejection letter (unlike mine I received from my sweetheart in high school)
was due to process rather than merit. (
http://wiki.openstack.org/GerritWorkflow)


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



On Wed, Aug 20, 2014 at 11:15 AM, Szépe Viktor vik...@szepe.net wrote:


Is it the right list?

The PR was rejected: https://github.com/openstack/
python-swiftclient/pull/14



Idézem/Quoting Szépe Viktor vik...@szepe.net:

 Now bin\swift can only be started by python Scripts\swift

Windows does not support extensionless scripts.
Maybe I did it wrong, this is my first encounter with setuptools.


Please consider modifying setup.py

import setuptools

setuptools.setup(
setup_requires=['pbr'],
pbr=True,
entry_points={
"console_scripts": [
"swift=swiftclient.shell:main"
]
})

Thank you!



Szépe Viktor
--
+36-20-4242498  s...@szepe.net  skype: szepe.viktor
Budapest, XX. kerület





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Szépe Viktor
--
+36-20-4242498  s...@szepe.net  skype: szepe.viktor
Budapest, XX. kerület





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet in unit test case

2014-08-20 Thread Vijay Venkatachalam
Hey Brandon,

Thanks for looking into it. Yes I figured this out too and passed the 
pre-created subnet to loadbalancer method.
Issue here is  slightly different, please have a look at the review 
https://review.openstack.org/#/c/114173/
In order to make it work, I have made network, subnet and loadbalancer with 
delete=False.

The original problem creating snippet code is 

with self.subnet() as subnet:
    with self.loadbalancer(subnet=subnet, no_delete=True,
                           provider=LBAAS_PROVIDER_NAME) as testlb:
        print error after this statement

The original problem creating full code is here http://pastebin.com/jWmrpiG1

Thanks,
Vijay V.

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: 20 August 2014 21:24
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Error at context exit for subnet 
in unit test case

Hey Vijay,
Figured out the issue you are having.  In that particular test you are creating 
the same subnet twice.  The first time you create it is in the 
contextlib.nested, the second time is the self.loadbalancer method that will 
create a subnet if you do not pass it.  So you should pass subnet=subnet in the 
self.loadbalancer method.

After that there are other minor issues in the driver, mainly accessing the obj 
passed in as a dictionary and not an object (obj[id] vs obj.id).

Let me know if you need more information.

Thanks,
Brandon
On Wed, 2014-08-20 at 14:27 +, Vijay Venkatachalam wrote:
 I observed the following text as well One or more ports have an IP 
 allocation from this subnet.
 Looks like loadbalancer context exit at blah2 is not cleaning up the port 
 that was created. 
 This ultimately resulted in failure of delete subnet at blah3.
 
 --
 with self.subnet() as subnet:
 blah1
 with self.loadbalancer() as lb:
 blah2
 blah3
 --
 
 -Original Message-
 From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
 Sent: 20 August 2014 19:12
 To: OpenStack Development Mailing List 
 (openstack-dev@lists.openstack.org)
 Subject: [openstack-dev] [Neutron][LBaaS] Error at context exit for 
 subnet in unit test case
 
 Hi,
 
 I am writing a unit testcase with context as subnet, code here [1]. 
 When the context exits a delete of subnet is attempted and I am getting a 
 MismatchError . Traceback posted here [2]. 
 
 What could be going wrong here?
 
 Testcase is written like the following
 --
 with self.subnet() as subnet:
 blah1
 blah2
 blah3
 --
 
 I am getting a MismatchError: 409 != 204 error at blah3 when context exits.
 
 [1] UnitTestCase Snippet - http://pastebin.com/rMtf2dQX [2] Traceback  
 - http://pastebin.com/2sPcZ8Jk
 
 
 Thanks,
 Vijay V.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-20 Thread Chris Dent

On Wed, 20 Aug 2014, gordon chung wrote:


disclaimer: i'm just riffing and the following might be nonsense.


/me is a huge fan of riffing


i guess also to extend your question about agents leaving/joining. i'd
expect there is some volatility to the agents where an agent may or
may not exist at the point of debugging... just curious what the
benefit is of knowing who sent it if all the agents are just clones of
each other.


What I'm thinking of is a situation where some chunk of samples is
arriving at the data store and is in some fashion outside the
expected norms when compared to others.

If, from looking at the samples, you can tell that they were all
published from the (used-to-be-)central-agent on host X then you can
go to host X and have a browse around there to see what might be up.

It's unlikely that the agent is going to be the cause of any
weirdness but if it _is_ then we'd like to be able to locate it. As
things currently stand there's no way, from the sample itself, to do
so.

Thus, the benefit of knowing who sent it is that though the agents
themselves are clones, they are in regions and on hosts that are
not.

Beyond all those potentially good reasons there's also just the
simple matter that it is good data hygiene to know where stuff came
from?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Stefano Maffulli
Hi Szepe,

On Wed 20 Aug 2014 11:33:47 AM PDT, Szépe Viktor wrote:
 Thank you for your answer.
 That workflow seems a huge job for me.

 I leave this patch up to you.

thanks for sending this fix. You've stumbled upon one of the known
issues of OpenStack's way to deal with small patches like yours. We have
a workflow optimized for the extremely large amount of patches we get
(around 60-70 *per hour*). Our workflow does not use github and its pull
requests.

Unfortunately this optimization for large scale contributions is leaving
occasional contributors like you outside.

I would suggest you file a bug on
https://bugs.launchpad.net/python-swiftclient instead of a pull request,
as we don't use github at all; we only drop code there as a 'mirror' and
nothing else.

I'll also suggest adding 'file a bug' to the message
sent upon automatically closing the PR.

/stef
-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-20 Thread Michael Johnson
I am proposing that Octavia should support deployment models that
enable multiple listeners to be configured inside the HAProxy
instance.

The model I am proposing is:

1. One or more VIP per Octavia VM (propose one VIP in 0.5 release)
2. One or more HAProxy instance per Octavia VM
3. One or more listeners on each HAProxy instance
4. Zero or more pools per listener (shared pools should be supported
as a configuration render optimization, but propose support post 0.5
release)
5. One or more members per pool
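
As a rough illustration of items 3 and 4 (several listeners rendered into a
single HAProxy configuration, with a shared pool rendered only once), here
is a small sketch - not Octavia code, just example output generation with
made-up names and addresses:

    def render_haproxy_cfg(vip, listeners):
        # listeners: [{'name': .., 'port': .., 'pool': .., 'members': [..]}]
        cfg = ['global\n    daemon',
               'defaults\n    mode tcp\n    timeout connect 5s\n'
               '    timeout client 30s\n    timeout server 30s']
        pools = {}
        for l in listeners:
            cfg.append('frontend %s\n    bind %s:%d\n    default_backend %s'
                       % (l['name'], vip, l['port'], l['pool']))
            # a pool shared by several listeners is rendered only once
            pools.setdefault(l['pool'], set()).update(l['members'])
        for name, members in sorted(pools.items()):
            cfg.append('backend %s\n' % name +
                       '\n'.join('    server srv%d %s check' % (i, m)
                                 for i, m in enumerate(sorted(members))))
        return '\n\n'.join(cfg)

    print(render_haproxy_cfg('203.0.113.10', [
        {'name': 'web-http', 'port': 80, 'pool': 'web-pool',
         'members': ['10.0.0.11:80', '10.0.0.12:80']},
        {'name': 'web-https', 'port': 443, 'pool': 'web-pool',
         'members': ['10.0.0.11:80', '10.0.0.12:80']},
    ]))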

This provides flexibility to the operator to support multiple
deployment models,  including active-active and hot standby Octavia
VMs.  Without the flexibility to have multiple listeners per HAProxy
instance, we are limiting the operator's deployment models.

I am advocating for multiple listeners per HAProxy instance because I
think it provides the following advantages:

1. It reduces the memory overhead caused by running multiple HAProxy
instances on one Octavia VM.  Since the Octavia constitution states
that Octavia is for large operators where this memory overhead could
have a financial impact we should allow alternate deployment options.
2. It reduces host CPU overhead due to reduced context switching that
would occur between HAProxy instances.  HAProxy is event driven and
will mostly be idle waiting for traffic, where multiple instances of
HAProxy will require context switching between the processes which
increases the VM’s CPU load.  Since the Octavia constitution states
that we are designing for large operators, anything we can do to
reduce the host CPU load reduces the operator’s costs.
3. Hosting multiple HAProxy instances on one Octavia VM will increase
the load balancer build time because multiple configuration files,
start/stop scripts, health monitors, and HAProxy Unix sockets will
have to be created.  This could significantly impact operator
topologies that use hot standby Octavia VMs for failover.
4. It reduces network traffic and health monitoring overhead because
only one HAProxy instance per Octavia VM will need to be monitored.
This, again, saves the operator money and increases the scalability for
large operators.
5. Multiple listeners per instance allows the sharing of backend pools
which reduces the amount of health monitoring traffic required to the
backend servers.  It also has the potential to share SSL certificates
and keys.
6. It allows customers to think of load balancers (floating IPs) as an
application service, sharing the fate of multiple listeners and
providing a consolidated log file.  This also provides a natural
grouping of services (around the floating IP) for a defined
performance floor.  With multiple instances per Octavia VM one
instance could negatively impact all of the other instances which may
or may not be related to the other floating IP(s).
7. Multiple listeners per instance reduces the number of TCP ports
used on the Octavia VM, increasing the per-VM scalability.


I don’t want us, by design, to limit the operator flexibility in
deployment an topologies, especially when it potentially impacts the
costs for large operators.  Having multiple listeners per HAProxy
instance is a very common topology in the HAProxy community and I
don’t think we should block that use case with Octavia deployments.

Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

On 08/20/2014 11:41 AM, Zane Bitter wrote:

On 19/08/14 10:37, Jay Pipes wrote:


By graduating an incubated project into the integrated release, the
Technical Committee is blessing the project as the OpenStack way to do
some thing. If there are projects that are developed *in the OpenStack
ecosystem* that are actively being developed to serve the purpose that
an integrated project serves, then I think it is the responsibility of
the Technical Committee to take another look at the integrated project
and answer the following questions definitively:

  a) Is the Thing that the project addresses something that the
Technical Committee believes the OpenStack ecosystem benefits from by
the TC making a judgement on what is the OpenStack way of addressing
that Thing.

and IFF the decision of the TC on a) is YES, then:

  b) Is the Vision and Implementation of the currently integrated
project the one that the Technical Committee wishes to continue to
bless as the the OpenStack way of addressing the Thing the project
does.


I disagree with part (b); projects are not code - projects, like Soylent
Green, are people.


Hey! Don't steal my slide content! :P

http://bit.ly/navigating-openstack-community (slide 3)

 So it's not critical that the implementation is the

one the TC wants to bless, what's critical is that the right people are
involved to get to an implementation that the TC would be comfortable
blessing over time. For example, everyone agrees that Ceilometer has
room for improvement, but any implication that the Ceilometer is not
interested in or driving towards those improvements (because of NIH or
whatever) is, as has been pointed out, grossly unfair to the Ceilometer
team.


I certainly have not made such an implication about Ceilometer. What I 
see in the Ceilometer space, though, is that there are clearly a number 
of *active* communities of OpenStack engineers developing code that 
crosses similar problem spaces. I think the TC blessing one of those 
communities before the market has had a chance to do a bit more 
natural filtering of quality is a barrier to innovation. I think having 
all of those separate teams able to contribute code to an openstack/ 
code namespace and naturally work to resolve differences and merge 
innovation is a better fit for a meritocracy.



I think the rest of your plan is a way of recognising this
appropriately, that the current implementation is actually not the
be-all and end-all of how the TC should view a project.


Yes, quite well said.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-20 Thread Pendergrass, Eric
Hi Ryan,

We tried globally applying the hook but could not get execution to enter the
hook class. 

Perhaps we made a mistake, but we concluded the Controller still had to
inherit from HookController using the project-wide method.  Otherwise we
would have been satisfied applying it project-wide.
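
For reference, applying a hook globally looks roughly like the sketch below
(not our actual code; the real check would be the check_permissions
classmethod quoted further down):

    from pecan import hooks

    class RBACHook(hooks.PecanHook):
        def before(self, state):
            # e.g. DELETE /v2/alarms -> a telemetry:delete_alarms-style action
            action = '%s %s' % (state.request.method, state.request.path)
            # hand 'action' to the policy enforcer here

    # registered application-wide in api/app.py next to ConfigHook, DBHook,
    # etc.:  pecan.make_app(root, hooks=[ConfigHook(), DBHook(), RBACHook()])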

Thanks,
Eric

 Eric,

 
 Doug's correct - this looks like a bug in pecan that occurs when you
subclass both rest.RestController and hooks.HookController.  I'm working on
a bug fix as we speak.  In the meantime, have you tried applying hooks at a
global application level?  This approach should still work.
 
 On 08/14/14 04:38 PM, Pendergrass, Eric wrote:
  Sure, Doug.  We want the ability to selectively apply policies to 
  certain Ceilometer API methods based on user/tenant roles.
  
  For example, we want to restrict the ability to execute Alarm deletes 
  to admins and user/tenants who have a special role, say domainadmin.
  
  The policy file might look like this:
   {
   "context_is_admin":  [["role:admin"]],
   "admin_and_matching_project_domain_id":  [["role:domainadmin"]],
   "admin_or_cloud_admin": [["rule:context_is_admin"],
   ["rule:admin_and_matching_project_domain_id"]],
   "telemetry:delete_alarms":  [["rule:admin_or_cloud_admin"]] }
  
  The current acl.py and _query_to_kwargs access control setup either 
  sets project_id scope to None (do everything) or to the project_id in 
  the request header 'X-Project-Id'.  This allows for admin or project 
  scope, but nothing in between.
  
  We tried hooks.  Unfortunately we can't seem to turn the API 
  controllers into HookControllers just by adding HookController to the 
  Controller class definition.  It causes infinite recursion on API 
  startup.  For example, this doesn't work because ceilometer-api will 
  not start with it:
  class MetersController(rest.RestController, HookController):
  
   If there were a way to use hooks with the v2 API controllers, that 
  might work really well.
  
  So we are left using the @secure decorator and deriving the method 
  name from the request environ PATH_INFO and REQUEST_METHOD values.  
  This is how we determine the wrapped method within the class 
  (REQUEST_METHOD + PATH_INFO = telemetry:delete_alarms with some 
   munging).  We need the method name in order to selectively apply access 
  control to certain methods.
  
  Deriving the method this way isn't ideal but it's the only thing we've 
  gotten working between hooks, @secure, and regular decorators.
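
  (As a rough illustration of that munging, something along these lines;
  the mapping table and helper name here are hypothetical, not the code
  in the WIP review below.)

      # Hypothetical sketch: map REQUEST_METHOD + PATH_INFO onto a policy
      # action name such as "telemetry:delete_alarms".
      ACTION_MAP = {
          ('DELETE', 'alarms'): 'telemetry:delete_alarms',
          ('GET', 'meters'): 'telemetry:get_meters',
      }

      def action_for_request(environ):
          method = environ['REQUEST_METHOD']
          # e.g. PATH_INFO == '/v2/alarms/<alarm-id>' -> 'alarms'
          parts = environ['PATH_INFO'].strip('/').split('/')
          resource = parts[1] if len(parts) > 1 else None
          return ACTION_MAP.get((method, resource))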
  
  I submitted a WIP BP here: https://review.openstack.org/#/c/112137/3.  
  It is slightly out of date but should give you a better idea of our
goals.
  
  Thanks
  
   Eric,
  
   If you can give us some more information about your end goal, 
   independent
  of the implementation, maybe we can propose an alternate technique to 
  achieve the same thing.
  
   Doug
  
   On Aug 12, 2014, at 6:21 PM, Ryan Petrello 
   ryan.petre...@dreamhost.com
  wrote:
  
Yep, you're right, this doesn't seem to work.  The issue is that 
security is enforced at routing time (while the controller is 
still actually being discovered).  In order to do this sort of 
thing with the `check_permissions`, we'd probably need to add a
feature to pecan.
   
On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
 Sure, here's the decorated method from v2.py:

class MetersController(rest.RestController):
    """Works on meters."""

    @pecan.expose()
    def _lookup(self, meter_name, *remainder):
        return MeterController(meter_name), remainder

    @wsme_pecan.wsexpose([Meter], [Query])
    @secure(RBACController.check_permissions)
    def get_all(self, q=None):

 and here's the decorator called by the secure tag:

class RBACController(object):
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    @classmethod
    def check_permissions(cls):
        # do some stuff

 In check_permissions I'd like to know the class and method with 
 the
  @secure tag that caused check_permissions to be invoked.  In this 
  case, that would be MetersController.get_all.

 Thanks


  Can you share some code?  What do you mean by "is there a way
  for the decorator code to know it was called by
  MetersController.get_all"?
 
  On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
   Thanks Ryan, but for some reason the controller attribute is
None:
  
   (Pdb) from pecan.core import state
   (Pdb) state.__dict__
   {'hooks': [ceilometer.api.hooks.ConfigHook object at 
   0x31894d0, ceilometer.api.hooks.DBHook object at 0x3189650,

   ceilometer.api.hooks.PipelineHook object at 0x39871d0, 
   ceilometer.api.hooks.TranslationHook object at 0x3aa5510],
'app':
   pecan.core.Pecan object at 0x2e76390, 'request': Request at
   0x3ed7390 GET 

Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Szépe Viktor

Thank you!

https://bugs.launchpad.net/python-swiftclient/+bug/1359360



Idézem/Quoting Stefano Maffulli stef...@openstack.org:


Hi Szepe,

On Wed 20 Aug 2014 11:33:47 AM PDT, Szépe Viktor wrote:

Thank you for your answer.
That workflow seems a huge job for me.

I leave this patch up to you.


thanks for sending this fix. You've stumbled upon one of the known
issues of OpenStack's way to deal with small patches like yours. We have
a workflow optimized for the extremely large amount of patches we get
(around 60-70 *per hour*). Our workflow does not use github and its pull
requests.

Unfortunately this optimization for large scale contributions is leaving
occasional contributors like you outside.

I would suggest you file a bug on
https://bugs.launchpad.net/python-swiftclient instead of a pull request,
as we don't use github at all; we only drop code there as a 'mirror' and
nothing else.

I'd also suggest adding 'file a bug' to the message
sent upon automatically closing the PR.

/stef
--
Ask and answer questions on https://ask.openstack.org



Szépe Viktor
--
+36-20-4242498  s...@szepe.net  skype: szepe.viktor
Budapest, XX. kerület





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-08-20 Thread Carl Baldwin
The Neutron L3 Subteam will meet tomorrow at the regular time in
#openstack-meeting-3.  The agenda [1] is posted, please update as
needed.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Ben Nemec
On 08/20/2014 01:03 PM, Vishvananda Ishaya wrote:
 This may be slightly off-topic but it is worth mentioning that the use of 
 threading.Lock[1]
 which was included to make the locks thread safe seems to be leading to a 
 deadlock in eventlet[2].
 It seems like we have rewritten this too many times in order to fix minor 
 pain points and are
 adding risk to a very important component of the system.
 
 [1] https://review.openstack.org/#/c/54581
 [2] https://bugs.launchpad.net/nova/+bug/1349452

This is pretty much why I'm pushing to just revert to the file locking
behavior we had up until a couple of months ago, rather than
implementing some new shiny lock thing that will probably cause more
subtle issues in the consuming projects.  It's become clear to me that
lockutils is too deeply embedded in the other projects, and there are
too many implementation details that they rely on, to make significant
changes to its default code path.
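
For anyone who hasn't followed the whole thread, the external file-lock
behavior being discussed boils down to roughly the sketch below. This is
a simplified illustration, not the actual lockutils code; it omits the
lock_path handling, error paths and the eventlet-related details.

    import fcntl
    import os

    class InterProcessLock(object):
        """Simplified sketch of an fcntl-based external lock."""

        def __init__(self, path):
            self.path = path
            self.fd = None

        def __enter__(self):
            self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
            # Blocks until no other process holds a lock on this file.
            fcntl.flock(self.fd, fcntl.LOCK_EX)
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            # flock locks are released automatically if the holder dies,
            # which is the "auto unlocked" property mentioned below.
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            os.close(self.fd)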

 
 On Aug 18, 2014, at 2:05 PM, Pádraig Brady p...@draigbrady.com wrote:
 
 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:

 Hi Yuriy,

 […]

 Looking forward to your opinions.

 This looks like a good summary of the situation.

 I've added a solution E based on pthread, but didn't get very far about
 it for now.

 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.

 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.

 Pádraig.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Heat] Murano split dsicussion

2014-08-20 Thread Georgy Okrokvertskhov
During the last Atlanta summit there were a couple of discussions about
Application Catalog and Application-space projects in OpenStack. These
cross-project discussions occurred as a result of the Murano incubation
request [1] during the Icehouse cycle.  At the TC meeting devoted to Murano
incubation there was an idea about splitting Murano into parts which might
belong to different programs [2].


Today, I would like to initiate a discussion about potentially splitting
Murano across two or three programs.


*App Catalog API to Catalog Program*

The Application Catalog part can belong to the Catalog program; the package
repository will move to the artifacts repository, where the Murano team
already participates. The API part of the App Catalog will add a thin layer
of API methods specific to Murano applications and can potentially be
implemented as a plugin to the artifacts repository. This API layer will also
expose third-party system APIs, such as the CloudFoundry ServiceBroker API,
which is used by the CloudFoundry marketplace feature to provide an
integration layer between OpenStack application packages and third-party
PaaS tools.



*Murano Engine to Orchestration Program*

The Murano engine orchestrates Heat template generation. Complementary to
Heat's declarative approach, the Murano engine uses an imperative approach,
so that it is possible to control the whole flow of template generation. The
engine uses Heat updates to modify Heat templates to reflect changes in
application layout. The Murano engine has a concept of actions: special flows
which can be called at any time after application deployment to change
application parameters or update stacks. The engine is complementary to the
Heat engine and adds the following:


   - Orchestrates multiple Heat stacks: DR deployments, HA setups, and
   multi-datacenter deployments
   - Initiates and controls stack updates on application-specific events
   - Error handling and self-healing: being imperative, Murano allows you
   to handle issues and implement additional logic around error handling and
   self-healing.



*Murano UI to Dashboard Program*

The Application Catalog requires a UI focused on user experience. Currently
there is a Horizon plugin for the Murano App Catalog which adds an Application
Catalog page to browse, search, and filter applications. It also adds dynamic
UI functionality to render Horizon forms without writing any actual code.




[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027736.html

[2]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-03-04-20.02.log.txt



--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Davanum Srinivas
Ben, +1 to the plan you outlined.

-- dims

On Wed, Aug 20, 2014 at 4:13 PM, Ben Nemec openst...@nemebean.com wrote:
 On 08/20/2014 01:03 PM, Vishvananda Ishaya wrote:
 This may be slightly off-topic but it is worth mentioning that the use of 
 threading.Lock[1]
 which was included to make the locks thread safe seems to be leading to a 
 deadlock in eventlet[2].
 It seems like we have rewritten this too many times in order to fix minor 
 pain points and are
 adding risk to a very important component of the system.

 [1] https://review.openstack.org/#/c/54581
 [2] https://bugs.launchpad.net/nova/+bug/1349452

 This is pretty much why I'm pushing to just revert to the file locking
 behavior we had up until a couple of months ago, rather than
 implementing some new shiny lock thing that will probably cause more
 subtle issues in the consuming projects.  It's become clear to me that
 lockutils is too deeply embedded in the other projects, and there are
 too many implementation details that they rely on, to make significant
 changes to its default code path.


 On Aug 18, 2014, at 2:05 PM, Pádraig Brady p...@draigbrady.com wrote:

 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:

 Hi Yuriy,

 […]

 Looking forward to your opinions.

 This looks like a good summary of the situation.

 I've added a solution E based on pthread, but didn't get very far about
 it for now.

 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.

 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.

 Pádraig.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Chris Friesen

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of 
doing things, otherwise it's going to be *really hard* for someone to 
bring up an OpenStack installation.


We already run into issues with something as basic as competing SQL 
databases.  If every component has several competing implementations and 
none of them are official how many more interaction issues are going 
to trip us up?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] generate Windows exe

2014-08-20 Thread Alessandro Pilotti
Hi Viktor,

I just submitted the patch, adding you as co-author with the email address 
that you used here. Please let me know if this is OK for you:

https://review.openstack.org/#/c/115782/

The patch is even simpler than what you sent, as all we need to do is add
the requested entry point declaratively in setup.cfg:

[entry_points]
console_scripts =
    swift = swiftclient.shell:main
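
For context: with that entry point in place, installing the package makes
setuptools/pip generate a small "swift" wrapper (a swift.exe stub on
Windows) roughly equivalent to the sketch below, so no hand-maintained
script is needed. The exact generated code varies between setuptools
versions; this is illustrative only.

    import sys

    from swiftclient.shell import main

    if __name__ == '__main__':
        sys.exit(main())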

The OpenStack patch workflow is not that complicated if you’re already familiar
with git, but it can be challenging at the beginning.

If you’d like to join our dev IRC channel (#openstack-dev, Freenode), I can
explain the basics to you in just a few minutes (nick: alexpilotti), in case
you’d like to contribute again in the future.

In case you’d like to take a look, here are the general guidelines:
https://wiki.openstack.org/wiki/How_To_Contribute

In particular, it'd be great if you could sign the contributor license
agreement online; it's just a matter of having a Launchpad account and a
couple of clicks.


Thanks!

Alessandro


On 20 Aug 2014, at 22:47, Szépe Viktor vik...@szepe.net wrote:

 Thank you!
 
 https://bugs.launchpad.net/python-swiftclient/+bug/1359360
 
 
 
 Idézem/Quoting Stefano Maffulli stef...@openstack.org:
 
 Hi Szepe,
 
 On Wed 20 Aug 2014 11:33:47 AM PDT, Szépe Viktor wrote:
 Thank you for your answer.
 That workflow seems a huge job for me.
 
 I leave this patch up to you.
 
 thanks for sending this fix. You've stumbled upon one of the known
 issues of OpenStack's way to deal with small patches like yours. We have
 a workflow optimized for the extremely large amount of patches we get
 (around 60-70 *per hour*). Our workflow does not use github and its pull
 requests.
 
 Unfortunately this optimization for large scale contributions is leaving
 occasional contributors like you outside.
 
 I would suggest you file a bug on
 https://bugs.launchpad.net/python-swiftclient instead of a pull request,
 as we don't use github at all; we only drop code there as a 'mirror' and
 nothing else.
 
 I'd also suggest adding 'file a bug' to the message
 sent upon automatically closing the PR.
 
 /stef
 --
 Ask and answer questions on https://ask.openstack.org
 
 
 Szépe Viktor
 -- 
 +36-20-4242498  s...@szepe.net  skype: szepe.viktor
 Budapest, XX. kerület
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of
doing things, otherwise it's going to be *really hard* for someone to
bring up an OpenStack installation.


Why can't there be multiple recommended ways of setting up an OpenStack 
installation? Matter of fact, in reality, there already are multiple 
recommended ways of setting up an OpenStack installation, aren't there?


There's multiple distributions of OpenStack, multiple ways of doing 
bare-metal deployment, multiple ways of deploying different message 
queues and DBs, multiple ways of establishing networking, multiple open 
and proprietary monitoring systems to choose from, etc. And I don't 
really see anything wrong with that.



We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said Only MySQL will be supported, that would not 
mean that the greater OpenStack community would be served better. It 
would just unnecessarily take options away from deployers.


 If every component has several competing implementations and

none of them are official how many more interaction issues are going
to trip us up?


IMO, OpenStack should be about choice. Choice of hypervisor, choice of 
DB and MQ infrastructure, choice of operating systems, choice of storage 
vendors, choice of networking vendors.


If there are multiple actively-developed projects that address the same 
problem space, I think it serves our OpenStack users best to let the 
projects work things out themselves and let the cream rise to the top. 
If the cream ends up being one of those projects, so be it. If the cream 
ends up being a mix of both projects, so be it. The production community 
will end up determining what that cream should be based on what it 
deploys into its clouds and what input it supplies to the teams working 
on competing implementations.


And who knows... what works or is recommended by one deployer may not be 
what is best for another type of deployer and I believe we (the 
TC/governance) do a disservice to our user community by picking a winner 
in a space too early (or continuing to pick a winner in a clearly 
unsettled space).


Just my thoughts on the topic, as they've evolved over the years from 
being a pure developer, to doing QA, then deploy/ops work, and back to 
doing development on OpenStack...


Best,
-jay






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Ceilometer API RBAC enhancement proposal (work in progress)

2014-08-20 Thread Pendergrass, Eric
To anyone who's interested, we're working on a proposal and implementation
of a fine-grained v3 compatible RBAC scheme for the API.

The WIP spec change is here:  
https://review.openstack.org/#/c/112137/

I realize this will need to move from Juno to Kilo because it won't make
Juno.

And the code which is mostly implemented is here:  
https://review.openstack.org/#/c/115717/

I'd be interested in any comments from the community on this.  Those
following the recent discussions on hooks and decorators will know why we
chose the @secure decorators instead of Hooks.

Please keep in mind this is not final code or blueprints, but an early work
in progress.  Still, any suggestions appreciated.

Thanks!
Eric


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How to handle blocking bugs/changes in Neutron 3rd party CI

2014-08-20 Thread Dane Leblanc (leblancd)
Preface: I posed this problem on the #openstack-infra IRC, and they couldn't 
offer an easy or obvious solution, and suggested that I get some consensus from 
the Neutron community as to how we want to handle this situation. So I'd like 
to bounce this around, get some ideas, and maybe bring this up in the 3rd party 
CI IRC.

The challenge is this: Occasionally, a blocking bug is introduced which causes 
our 3rd party CI tests to consistently fail on every change set that we're 
testing against. We can develop a fix for the problem, but until that fix gets 
merged upstream, tests against all other change sets are seen to fail.

(Note that we have a similar situation whenever we introduce a completely new 
plugin with its associated 3rd party CI... until the plugin code, or an 
enabling subset of that plugin code is merged upstream, then typically all 
other commits would fail on that CI setup.)

In the past, we've tried dynamically patching the fix(es) on top of the fetched 
code being reviewed, but this isn't always reliable due to merge conflicts, and 
we've had to monkey patch DevStack to apply the fixes after cloning Neutron but 
before installing Neutron.

So we'd prefer to enter a throttled or filtering CI mode when we hit this 
situation, where we're (temporarily) only testing against commits related to 
our plugin/driver which contain (or have a dependency on) the fix for the 
blocking bug until the fix is merged.

In an ideal world, for the sake of transparency, we would love to be able to 
have Jenkins/Zuul report back to Gerrit with a descriptive test result such as 
N/A, Not tested, or even Aborted for all other change sets, letting the 
committer know that, Yeah, we see your review, but we're unable to test it at 
the moment. Zuul does have the ability to report Aborted status to Gerrit, 
but this is sent e.g. when Zuul decides to abort change set 'N' for a review 
when change set 'N+1' has just been submitted, or when a Jenkins admin manually 
aborts a Jenkins job.  Unfortunately, this type of status is not available 
programmatically within a Jenkins job script; the only outcomes are pass (zero 
RC) or fail (non-zero RC). (Note that we can't directly filter at the Zuul 
level in our topology, since we have one Zuul server servicing multiple 3rd 
party CI setups.)

As a second option, we'd like to not run any tests for the other changes, and 
report NOTHING to Gerrit, while continuing to run against changes related to 
our plugin (as required for the plugin changes to be approved).  This was the 
favored approach discussed in the Neutron IRC on Monday. But herein lies the 
rub. By the time our Jenkins job script discovers that the change set that is 
being tested is not in a list of preferred/allowed change sets, the script has 
2 options: pass or fail. With the current Jenkins, there is no programmatic way 
for a Jenkins script to signal to Gearman/Zuul that the job should be aborted.

There was supposedly a bug filed with Jenkins to allow it to interpret 
different exit codes from job scripts as different result values, but this 
hasn't made any progress.

There may be something that can be changed in Zuul to allow it to interpret 
different result codes other than success/fail, or maybe to allow Zuul to do 
change ID filtering on a per Jenkins job basis, but this would require the 
infra team to make changes to Zuul.

The bottom line is that based on the current Zuul/Jenkins infrastructure, 
whenever our 3rd party CI is blocked by a bug, I'm struggling with the 
conflicting requirements:
* Continue testing against change sets for the blocking bug (or plugin related 
changes)
* Don't report anything to Gerrit for all other change sets, since these can't 
be meaningfully tested against the CI hardware

Let me know if I'm missing a solution to this. I appreciate any suggestions!

-Dane


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Improve /validate call

2014-08-20 Thread Anderson Mesquita
Hey folks,

I just submitted a new spec to change the behavior of our current /validate
call.

TL;DR: we should at least add a request param to allow the user to change
the format of the output they get back, or possibly refactor it into two
separate calls.

You can see the blueprint and spec here:
https://blueprints.launchpad.net/heat/+spec/improve-template-validate

What do you think?

Cheers,

~andersonvom
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help with sql upgrade and downgrade

2014-08-20 Thread Murali Balcha
Hi,
I am trying to add two new columns to the backups table in Cinder. I created the 
new version file as follows:


from sqlalchemy import Column, MetaData, String, Table, Boolean


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    backups = Table('backups', meta, autoload=True)

    snapshot = Column('snapshot', Boolean(create_constraint=False, name=None))
    parent_id = Column('parent_id', String(length=255))

    backups.create_column(snapshot)
    backups.create_column(parent_id)

    backups.update().values(snapshot=False).execute()
    backups.update().values(parent_id=None).execute()


def downgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    backups = Table('backups', meta, autoload=True)

    snapshot = backups.columns.snapshot
    parent_id = backups.columns.parent_id

    backups.drop_column(snapshot)
    backups.drop_column(parent_id)

I can successfully add the string column parent_id without any problem. However, 
adding a boolean column is vexing. Adding a boolean column adds a check 
constraint on the table, but when I remove the column in the downgrade, the 
check constraint for snapshot still remains on the table, resulting in the 
following exception. Has anyone run into this problem?


OperationalError: (OperationalError) no such column: snapshot u'\nCREATE TABLE 
backups (\n\tcreated_at DATETIME, \n\tupdated_at DATETIME, \n\tdeleted_at 
DATETIME, \n\tdeleted BOOLEAN, \n\tid VARCHAR(36) NOT NULL, \n\tvolume_id 
VARCHAR(36) NOT NULL, \n\tuser_id VARCHAR(255), \n\tproject_id VARCHAR(255), 
\n\thost VARCHAR(255), \n\tavailability_zone VARCHAR(255), \n\tdisplay_name 
VARCHAR(255), \n\tdisplay_description VARCHAR(255), \n\tcontainer VARCHAR(255), 
\n\tstatus VARCHAR(255), \n\tfail_reason VARCHAR(255), \n\tservice_metadata 
VARCHAR(255), \n\tservice VARCHAR(255), \n\tsize INTEGER, \n\tobject_count 
INTEGER, \n\tparent_id VARCHAR(255), \n\tPRIMARY KEY (id), \n\tCHECK (deleted 
IN (0, 1)), \n\tCHECK (snapshot IN (0, 1))\n)\n\n' ()

Thanks,
Murali Balcha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Improve /validate call

2014-08-20 Thread Steve Baker
On 21/08/14 10:27, Anderson Mesquita wrote:
 Hey folks,

 I just submitted a new spec to change the behavior of our current
 /validate call.

 TL;DR: we should at least add a request param to allow the user to
 change the format of the output they get back or, possibly refactor it
 into two separate calls.

 You can see the blueprint and spec here:
 https://blueprints.launchpad.net/heat/+spec/improve-template-validate

 What do you think?
Horizon relies on the current behaviour to build the UI form that
prompts for parameters, so we can't really change this at the REST API
or python lib level.

Something at the cli level would be good though. The default output
should be human friendly, and an extra cli option could be added to
return the full parsed parameters json.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help with sql upgrade and downgrade

2014-08-20 Thread Mathieu Gagné

On 2014-08-20 6:42 PM, Murali Balcha wrote:


I can successfully add string column parent_id without any problem.
However adding a boolean column is vexing. Adding a boolean column adds
a check constraint on the table but when I remove the column in the
downgrade, the check constraint for snapshot still remains on the table
which resulting in the following exception. Has anyone run into this
problem?

OperationalError: (OperationalError) no such column: snapshot u'\nCREATE
TABLE backups (\n\tcreated_at DATETIME, \n\tupdated_at DATETIME,
\n\tdeleted_at DATETIME, \n\tdeleted BOOLEAN, \n\tid VARCHAR(36) NOT
NULL, \n\tvolume_id VARCHAR(36) NOT NULL, \n\tuser_id VARCHAR(255),
\n\tproject_id VARCHAR(255), \n\thost VARCHAR(255),
\n\tavailability_zone VARCHAR(255), \n\tdisplay_name VARCHAR(255),
\n\tdisplay_description VARCHAR(255), \n\tcontainer VARCHAR(255),
\n\tstatus VARCHAR(255), \n\tfail_reason VARCHAR(255),
\n\tservice_metadata VARCHAR(255), \n\tservice VARCHAR(255), \n\tsize
INTEGER, \n\tobject_count INTEGER, \n\tparent_id VARCHAR(255),
\n\tPRIMARY KEY (id), \n\tCHECK (deleted IN (0, 1)), \n\tCHECK (snapshot
IN (0, 1))\n)\n\n' ()



I had a similar issue when trying to add a boolean column to 
volume_types in Cinder. It looks like SQLite does not support the DROP CHECK 
required by the downgrade process, resulting in the error you see.


You have to create a sql script specifically for sqlite.

See my change: https://review.openstack.org/#/c/114395/6

In cinder/db/sqlalchemy/migrate_repo/versions/024_sqlite_downgrade.sql

The hack consists of creating a new table without the column and 
copying over the data to it.
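
A minimal sketch of that table-recreate approach, written here in Python
for illustration; the actual fix in the review above is a raw SQL script
(024_sqlite_downgrade.sql) that sqlalchemy-migrate picks up for the sqlite
dialect, and the column list below is abbreviated.

    # SQLite cannot drop a CHECK constraint via ALTER TABLE, so the table
    # is rebuilt instead. A real script must recreate the full schema,
    # including the PRIMARY KEY and any remaining CHECK constraints.
    def downgrade(migrate_engine):
        if migrate_engine.name != 'sqlite':
            return  # other engines can drop the column directly

        migrate_engine.execute("""
            CREATE TABLE backups_new AS
                SELECT created_at, updated_at, deleted_at, deleted, id,
                       volume_id, user_id, project_id, status, size
                FROM backups
        """)
        migrate_engine.execute("DROP TABLE backups")
        migrate_engine.execute("ALTER TABLE backups_new RENAME TO backups")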


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Angus Salkeld
On Thu, Aug 21, 2014 at 2:37 AM, Zane Bitter zbit...@redhat.com wrote:

 On 11/08/14 05:24, Thierry Carrez wrote:

 So the idea that being (and remaining) in the integrated release should
 also be judged on technical merit is a slightly different effort. It's
 always been a factor in our choices, but like Devananda says, it's more
 difficult than just checking a number of QA/integration checkboxes. In
 some cases, blessing one project in a problem space stifles competition,
 innovation and alternate approaches. In some other cases, we reinvent
 domain-specific solutions rather than standing on the shoulders of
 domain-specific giants in neighboring open source projects.


 I totally agree that these are the things we need to be vigilant about.

 Stifling competition is a big worry, but it appears to me that a lot of
 the stifling is happening even before incubation. Everyone's time is
 limited, so if you happen to notice a new project on the incubation
 trajectory doing things in what you think is the Wrong Way, you're most
 likely to either leave some drive-by feedback or to just ignore it and
 carry on with your life. What you're most likely *not* to do is to start a
 competing project to prove them wrong, or to jump in full time to the
 existing project and show them the light. It's really hard to argue against
 the domain experts too - when you're acutely aware of how shallow your
 knowledge is in a particular area it's very hard to know how hard to push.
 (Perhaps ironically, since becoming a PTL I feel I have to be much more
 cautious in what I say too, because people are inclined to read too much
 into my opinion - I wonder if TC members feel the same pressure.) I speak
 from first-hand instances of guilt here - for example, I gave some feedback
 to the Mistral folks just before the last design summit[1], but I haven't
 had time to follow it up at all. I wouldn't be a bit surprised if they
 showed up with an incubation request, a largely-unchanged user interface
 and an expectation that I would support it.

 The result is that projects often don't hear the feedback they need until
 far too late - often when they get to the incubation review (maybe not even
 their first incubation review). In the particularly unfortunate case of
 Marconi, it wasn't until the graduation review. (More about that in a
 second.) My best advice to new projects here is that you must be like a
 ferret up the pant-leg of any negative feedback. Grab hold of any criticism
 and don't let go until you have either converted the person giving it into
 your biggest supporter, been converted by them, or provoked them to start a
 competing project. (Any of those is a win as far as the community is
 concerned.)

 Perhaps we could consider a space like a separate mailing list
 (openstack-future?) reserved just for announcements of Related projects,
 their architectural principles, and discussions of the same?  They
 certainly tend to get drowned out amidst the noise of openstack-dev.
 (Project management, meeting announcements, and internal project discussion
 would all be out of scope for this list.)

 As for reinventing domain-specific solutions, I'm not sure that happens as
 often as is being made out. IMO the defining feature of IaaS that makes the
 cloud the cloud is on-demand (i.e. real-time) self-service. Everything else
 more or less falls out of that requirement, but the very first thing to
 fall out is multi-tenancy and there just aren't that many multi-tenant
 services floating around out there. There are a couple of obvious
 strategies to deal with that: one is to run existing software within a
 tenant-local resource provisioned by OpenStack (Trove and Sahara are
 examples of this), and the other is to wrap a multi-tenancy framework
 around an existing piece of software (Nova and Cinder are examples of
 this). (BTW the former is usually inherently less satisfying, because it
 scales at a much coarser granularity.) The answer to a question of the form:

 Why do we need OpenStack project $X, when open source project $Y already
 exists?

 is almost always:

 Because $Y is not multi-tenant aware; we need to wrap it with a
 multi-tenancy layer with OpenStack-native authentication, metering and
 quota management. That even allows us to set up an abstraction layer so
 that you can substitute $Z as the back end too.

 This is completely uncontroversial when you substitute X, Y, Z = Nova,
 libvirt, Xen. However, when you instead substitute X, Y, Z = Zaqar/Marconi,
 Qpid, MongoDB it suddenly becomes *highly* controversial. I'm all in favour
 of a healthy scepticism, but I think we've passed that point now. (How
 would *you* make an AMQP bus multi-tenant?)

 To be clear, Marconi did made a mistake. The Marconi API presented
 semantics to the user that excluded many otherwise-obvious choices of
 back-end plugin (i.e. Qpid/RabbitMQ). It seems to be a common thing (see
 also: Mistral) to want to design for every feature an existing 

[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-08-20 Thread Jorge Miramontes
Hey LBaaS folks,

This is your friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. Please add them to the agenda wiki == 
https://wiki.openstack.org/wiki/Network/LBaaS#Agenda.

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-20 Thread Clark Boylan
On Mon, Aug 18, 2014, at 01:59 AM, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 On 17/08/14 02:09, Angus Lees wrote:
  
  On 16 Aug 2014 06:09, Doug Hellmann d...@doughellmann.com 
  mailto:d...@doughellmann.com wrote:
  
  
  On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka
  ihrac...@redhat.com
  mailto:ihrac...@redhat.com wrote:
  
  Signed PGP part Some updates on the matter:
  
  - oslo-spec was approved with narrowed scope which is now
  'enabled mysqlconnector as an alternative in gate' instead of
  'switch the default db driver to mysqlconnector'. We'll revisit
  the switch part the next cycle once we have the new driver
  running in gate and real benchmarking is heavy-lifted.
  
  - there are several patches that are needed to make devstack
  and tempest passing deployment and testing. Those are collected
  under the hood of: https://review.openstack.org/#/c/114207/ Not
  much of them.
  
  - we'll need a new oslo.db release to bump versions (this is
  needed to set raise_on_warnings=False for the new driver, which
  was incorrectly set to True in sqlalchemy till very recently).
  This is expected to be released this month (as per Roman
  Podoliaka).
  
  This release is currently blocked on landing some changes in
  projects
  using the library so they don?t break when the new version starts
  using different exception classes. We?re tracking that work in 
  https://etherpad.openstack.org/p/sqla_exceptions_caught
  
  It looks like we?re down to 2 patches, one for cinder
  (https://review.openstack.org/#/c/111760/) and one for glance 
  (https://review.openstack.org/#/c/109655). Roman, can you verify
  that those are the only two projects that need changes for the
  exception issue?
  
  
  - once the corresponding patch for sqlalchemy-migrate is
  merged, we'll also need a new version released for this.
  
  So we're going for a new version of sqlalchemy?  (We have a
  separate workaround for raise_on_warnings that doesn't require the
  new sqlalchemy release if this brings too many other issues)
 
 Wrong. We're going for a new version of *sqlalchemy-migrate*. Which is
 the code that we inherited from Mike and currently track in stackforge.
 
  
  - on PyPI side, no news for now. The last time I've heard from
  Geert (the maintainer of MySQL Connector for Python), he was
  working on this. I suspect there are some legal considerations
  running inside Oracle. I'll update once I know more about
  that.
  
  If we don?t have the new package on PyPI, how do we plan to
  include it
  in the gate? Are there options to allow an exception, or to make
  the mirroring software download it anyway?
  
  We can test via devstack without waiting for pypi, since devstack
  will install via rpms/debs.
 
 I expect that it will be settled. I have no indication that the issue
 is unsolvable, it will just take a bit more time than we're accustomed
 to. :)
 
 At the moment, we install MySQLdb from distro packages for devstack.
 Same applies to new driver. It will be still great to see the package
 published on PyPI so that we can track its version requirements
 instead of relying on distros to package it properly. But I don't see
 it as a blocker.
 
 Also, we will probably be able to run with other drivers supported by
 SQLAlchemy once all the work is done.
 
So I got bored last night and decided to take a stab at making PyMySQL
work since I was a proponent of it earlier. Thankfully it did just
mostly work like I thought it would.
https://review.openstack.org/#/c/115495/ is the WIP devstack change to
test this out.

Postgres tests fail because it was applying the pymysql driver to the
postgres connection string. We can clean this up later in devstack
and/or devstack-gate depending on how we need to apply this stuff.
Bashate failed because I had to monkeypatch in a fix for a ceilometer
issue loading sqlalchemy drivers. The tempest neutron full job fails on
one test occasionally. Not sure yet if that is normal neutron full
failure mode or if it's a new thing from PyMySQL. The regular tempest job
passes just fine.
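
For anyone trying this locally: the driver selection is just the SQLAlchemy
connection URL scheme, so switching to PyMySQL is only a connection-string
change. A minimal sketch, with placeholder credentials:

    from sqlalchemy import create_engine

    # Default C-based MySQLdb driver:
    engine = create_engine('mysql://user:secret@127.0.0.1/nova')

    # Pure-Python PyMySQL driver:
    engine = create_engine('mysql+pymysql://user:secret@127.0.0.1/nova')

    # Postgres keeps its own driver, which is why blindly rewriting every
    # connection string in devstack broke the postgres job:
    engine = create_engine('postgresql://user:secret@127.0.0.1/nova')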

There are also some DB related errors in the logs that will need to be
cleaned up but overall it just works. So I would like to repropose that
we stop focusing all of this effort on the hard thing and use the easy
thing if it meets our needs. We can continue to make alternatives work,
but that is a different problem that we can solve at a different pace. I
am not sure how to test the neutron thing that Gus was running into
though so we should also check that really quickly.

Also, the tests themselves don't seem to run any faster or slower than
when using the default mysql driver. Hard to complain about that :)

Clark
  
  - Gus
  
  
  
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 

Re: [openstack-dev] [All] Switching default test node to Trusty for all projects and upgrading tox to version 1.7.2

2014-08-20 Thread Clark Boylan
On Tue, Aug 5, 2014, at 05:04 PM, Clark Boylan wrote:
 Hello,
 
 On August 20th the Infra team will be switching the default test node
 from Ubuntu Precise to Ubuntu Trusty. This means that `tox -epy27`, `tox
 -ecover`, `tox -epep8`, `tox -evenv -- python setup.py build_sphinx`
 will run on Trusty build nodes instead of Precise nodes. This will not
 affect python 2.6 or 3.3 tests as they will remain on Centos6 and
 Precise as that is where those python versions are available.
 
 Many projects have made the switch to Trusty already and it has been
 painless for the most part. The only things we have run into so far are
 apache 2.2 vs apache 2.4 differences and Solum had rabbitmq problems in
 their functional test on Trusty (this test will not move to Trusty as
 part of this). If your project has not already made the transition to
 Trusty you can make sure you are prepared by running `tox -epy27`, `tox
 -epep8`, and `tox -evenv -- python setup.py build_sphinx`. I have been
 trying to do this for as many projects as I can but only have so much
 time :) your help is appreciated.
 
 We will also be upgrading tox on our test slaves to tox 1.7.2. We have
 pinned to version 1.6.1 for a long time because 1.7.0 broke
 compatibility with many of our tox.ini files. That problem has since
 been fixed but another minor issue has cropped up in 1.7.0. Details are
 at
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041283.html.
 TL;DR is if your tests cannot handle a random python hashseed value you
 will need to fix your tests or merge a change like
 https://review.openstack.org/#/c/109700/ to your project. Again, I have
 done my best to poke at this for as many projects as I can but there are
 so many of them. Your help is appreciated with this as well.
 
 Upgrading tox is important so that we can stop relying on a specific
 version of tox that developers need to know to install. It will also
 allow us to move our py33 testing to py34 on Trusty which gives python3
 testing a long term home.
 
 If you have any questions let me know. Thanks again,
 Clark
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

This work is mostly done. The switch to trusty by default is in for
everyone now. The tox upgrade is mostly complete but won't be done
everywhere until I have completed rebuilding all of our nodepool images
and then let nodes built on the old image cycle out. This should be done
shortly.

Please do let us know if you see oddness. Happy to help work through
anything that I missed when testing this stuff.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-20 Thread Matt Riedemann



On 8/11/2014 4:42 AM, Daniel P. Berrange wrote:

On Mon, Aug 04, 2014 at 06:46:13PM -0400, Solly Ross wrote:

Hi,
I was wondering if there was a way to get a non-readonly connection
to libvirt when running the unit tests
on the CI.  If I call `LibvirtDriver._connect(LibvirtDriver.uri())`,
it works fine locally, but the upstream
CI barfs with libvirtError: internal error Unable to locate libvirtd
daemon in /usr/sbin (to override, set $LIBVIRTD_PATH to the name of the
libvirtd binary).
If I try to connect by calling libvirt.open(None), it also barfs, saying
I don't have permission to connect.  I could just set it to always use
fakelibvirt,
but it would be nice to be able to run some of the tests against a real
target.  The tests in question are part of 
https://review.openstack.org/#/c/111459/,
and involve manipulating directory-based libvirt storage pools.


Nothing in the unit tests should rely on being able to connect to the
libvirt daemon, as the unit tests should still be able to pass when the
daemon is not running at all. We should be either using fakelibvirt or
mocking any libvirt APIs that need to be tested

Regards,
Daniel



So this is busted then, right? Because the new flags being used aren't 
defined in fakelibvirt:


https://github.com/openstack/nova/commit/26504d71ceaecf22f135d8321769db801290c405

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-20 Thread Matt Riedemann



On 8/11/2014 4:42 AM, Daniel P. Berrange wrote:

On Mon, Aug 04, 2014 at 06:46:13PM -0400, Solly Ross wrote:

Hi,
I was wondering if there was a way to get a non-readonly connection
to libvirt when running the unit tests
on the CI.  If I call `LibvirtDriver._connect(LibvirtDriver.uri())`,
it works fine locally, but the upstream
CI barfs with libvirtError: internal error Unable to locate libvirtd
daemon in /usr/sbin (to override, set $LIBVIRTD_PATH to the name of the
libvirtd binary).
If I try to connect by calling libvirt.open(None), it also barfs, saying
I don't have permission to connect.  I could just set it to always use
fakelibvirt,
but it would be nice to be able to run some of the tests against a real
target.  The tests in question are part of 
https://review.openstack.org/#/c/111459/,
and involve manipulating directory-based libvirt storage pools.


Nothing in the unit tests should rely on being able to connect to the
libvirt daemon, as the unit tests should still be able to pass when the
daemon is not running at all. We should be either using fakelibvirt or
mocking any libvirt APIs that need to be tested

Regards,
Daniel



Also, doesn't this kind of break with the test requirement on 
libvirt-python now?  Earlier, I was on Trusty trying to install that, and 
it was failing because I didn't have a new enough version of libvirt-bin 
installed.  So if we require libvirt-python for tests and that requires 
libvirt-bin, what's stopping us from just removing fakelibvirt, since 
it's kind of useless now anyway, right?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-20 Thread loy wolfe
On Wed, Aug 20, 2014 at 7:03 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 As the original thread had a completely different subject, I'm starting a
 new one here.

 More specifically the aim of this thread is about:
 1) Define when a service is best implemented with a service plugin or with
 a ML2 driver
 2) Discuss how bindings between a core resource and the one provided by
 the service plugin should be exposed at the management plane, implemented
 at the control plane, and if necessary also at the data plane.

 Some more comments inline.

 Salvatore


 When a port is created, and it has Qos enforcement thanks to the service
 plugin,
 let's assume that a ML2 Qos Mech Driver can fetch Qos info and send
 them back to the L2 agent.
 We would probably need a Qos Agent which communicates with the plugin
 through a dedicated topic.


 A distinct agent has pros and cons. I think however that we should try and
 limit the number of agents on the hosts to a minimum. And this minimum in
 my opinion should be 1! There is already a proposal around a modular agent
 which should be able to load modules for handling distinct services. I
 think that's the best way forward.



+1
A consolidated modular agent can greatly reduce RPC communication with the
plugin, as well as redundant code. If we can't merge to a single Neutron
agent now, we can at least merge into two agents: a modular L2 agent and a
modular L3+ agent.
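
As a conceptual sketch (not an existing Neutron interface; the namespace
and method names below are hypothetical), such a modular agent could load
per-service extensions via stevedore entry points:

    from stevedore import extension

    class ModularAgent(object):
        def __init__(self):
            self.extensions = extension.ExtensionManager(
                namespace='neutron.agent.extensions',  # hypothetical
                invoke_on_load=True)

        def handle_port(self, context, port):
            # Each loaded extension programs the data plane for its own
            # service (security groups, QoS, ...) for the same port event.
            for ext in self.extensions:
                ext.obj.handle_port(context, port)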



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread Martinx - ジェームズ
+1 NFTablesDriver!

Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example:
https://home.regit.org/2014/02/suricata-and-nftables/

Then, I'm wondering here... What benefits might come for OpenStack Nova /
Neutron, if it comes with a NFTables driver, instead of the current
IPTables?!

* Efficient Security Group design?
* Better FWaaS, maybe with NAT(44/66) support?
* Native support for IPv6, with the defamed NAT66 built-in, simpler
Floating IP implementation, for both v4 and v6 networks under a single
implementation (*I don't like NAT66, I prefer a `routed Floating IPv6`
version*)?
* Metadata over IPv6 still using NAT(66) (*I don't like NAT66*), single
implementation?
* Suricata-as-a-Service?!

It sounds pretty cool!   :-)


On 20 August 2014 23:16, Baohua Yang yangbao...@gmail.com wrote:

 Great!
 We met similar problems.
 The current mechanisms produce too many iptables rules, and it's hard to
 debug.
 Really look forward to seeing a more efficient security group design.


 On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery mest...@noironetworks.com
 wrote:

 On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang ayshihanzh...@126.com
 wrote:
 
  With the deployment 'nova + neutron + openvswitch', when we bulk create
  about 500 VM with a default security group, the CPU usage of
 neutron-server
  and openvswitch agent is very high, especially the CPU usage of
 openvswitch
  agent will be 100%, this will cause creating VMs failed.
 
  With the method discussed in mailist:
 
  1) ipset optimization   (https://review.openstack.org/#/c/100761/)
 
  3) sg rpc optimization (with fanout)
  (https://review.openstack.org/#/c/104522/)
 
  I have implement  these two scheme in my deployment,  when we again bulk
  create about 500 VM with a default security group, the CPU usage of
  openvswitch agent will reduce to 10%, even lower than 10%, so I think
 the
  improvement from these two options is very significant.
 
  Who can help us to review our spec?
 
 This is great work! These are on my list of things to review in detail
 soon, but given the Neutron sprint this week, I haven't had time yet.
 I'll try to remedy that by the weekend.

 Thanks!
 Kyle

 Best regards,
  shihanzhang
 
 
 
 
 
  At 2014-07-03 10:08:21, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Oh, so you have the enhancement implemented? Great! Any numbers that
 shows how much we gain from that?
 
 /Ihar
 
 On 03/07/14 02:49, shihanzhang wrote:
  Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
  I will modify my spec, when the spec is approved, I will commit the
  codes as soon as possilbe!
 
 
 
 
 
  At 2014-07-02 10:12:34, Miguel Angel Ajo majop...@redhat.com
  wrote:
 
  Nice Shihanzhang,
 
  Do you mean the ipset implementation is ready, or just the
  spec?.
 
 
  For the SG group refactor, I don't worry about who does it, or
  who takes the credit, but I believe it's important we address
  this bottleneck during Juno trying to match nova's scalability.
 
  Best regards, Miguel Ángel.
 
 
  On 07/02/2014 02:50 PM, shihanzhang wrote:
  hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
  split  the work in several specs, I have finished the work (
  ipset optimization), you can do 'sg rpc optimization (without
  fanout)'. as the third part(sg rpc optimization (with fanout)),
  I think we need talk about it, because just using ipset to
  optimize security group agent codes does not bring the best
  results!
 
  Best regards, shihanzhang.
 
 
 
 
 
 
 
 
  At 2014-07-02 04:43:24, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
  On 02/07/14 10:12, Miguel Angel Ajo wrote:
 
  Shihazhang,
 
  I really believe we need the RPC refactor done for this cycle,
  and given the close deadlines we have (July 10 for spec
  submission and July 20 for spec approval).
 
  Don't you think it's going to be better to split the work in
  several specs?
 
  1) ipset optimization   (you) 2) sg rpc optimization (without
  fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
  , me)
 
 
  This way we increase the chances of having part of this for the
  Juno cycle. If we go for something too complicated is going to
  take more time for approval.
 
 
  I agree. And it not only increases chances to get at least some of
  those highly demanded performance enhancements to get into Juno,
  it's also the right thing to do (c). It's counterproductive to
  put multiple vaguely related enhancements in single spec. This
  would dim review focus and put us into position of getting
  'all-or-nothing'. We can't afford that.
 
  Let's leave one spec per enhancement. @Shihazhang, what do you
  think?
 
 
  Also, I proposed the details of 2, trying to bring awareness
  on the topic, as I have been working with the scale lab in Red
  Hat to find and understand those issues, I have a very good
  knowledge of the problem and I believe I could make a very fast
  advance on the issue at the RPC level.
 

Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread shihanzhang
Hi Neutron folks!
My patch for the BP
https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
needs ipset installed in devstack. I have committed the patch
https://review.openstack.org/#/c/113453/; who can help me review it?
Thanks very much!
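
For reviewers unfamiliar with the approach, the win from ipset is that N
peer addresses no longer mean N iptables rules; the addresses live in a
kernel set and a single rule matches the whole set. A rough conceptual
sketch (not the agent code under review):

    import subprocess

    def allow_from_peers(chain, peer_ips):
        set_name = 'sg-peers'
        subprocess.check_call(
            ['ipset', 'create', set_name, 'hash:ip', '-exist'])
        for ip in peer_ips:
            subprocess.check_call(['ipset', 'add', set_name, ip, '-exist'])
        # One rule, regardless of how many members the set has.
        subprocess.check_call(
            ['iptables', '-A', chain,
             '-m', 'set', '--match-set', set_name, 'src', '-j', 'RETURN'])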


 Best regards,
shihanzhang





At 2014-08-21 10:47:59, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

+1 NFTablesDriver!


Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example: 
https://home.regit.org/2014/02/suricata-and-nftables/


Then, I'm wondering here... What benefits might come for OpenStack Nova / 
Neutron, if it comes with a NFTables driver, instead of the current IPTables?!


* Efficient Security Group design?
* Better FWaaS, maybe with NAT(44/66) support?
* Native support for IPv6, with the defamed NAT66 built-in, simpler Floating 
IP implementation, for both v4 and v6 networks under a single implementation 
(I don't like NAT66, I prefer a `routed Floating IPv6` version)?
* Metadata over IPv6 still using NAT(66) (I don't like NAT66), single 
implementation?
* Suricata-as-a-Service?!


It sounds pretty cool!   :-)



On 20 August 2014 23:16, Baohua Yang yangbao...@gmail.com wrote:

Great!
We met similar problems.
The current mechanisms produce too many iptables rules, and it's hard to debug.
Really look forward to seeing a more efficient security group design.



On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery mest...@noironetworks.com 
wrote:

On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang ayshihanzh...@126.com wrote:

 With the deployment 'nova + neutron + openvswitch', when we bulk create
 about 500 VM with a default security group, the CPU usage of neutron-server
 and openvswitch agent is very high, especially the CPU usage of openvswitch
 agent will be 100%, this will cause creating VMs failed.

 With the method discussed in mailist:

 1) ipset optimization   (https://review.openstack.org/#/c/100761/)

 3) sg rpc optimization (with fanout)
 (https://review.openstack.org/#/c/104522/)

 I have implement  these two scheme in my deployment,  when we again bulk
 create about 500 VM with a default security group, the CPU usage of
 openvswitch agent will reduce to 10%, even lower than 10%, so I think the
 improvement from these two options is very significant.

 Who can help us to review our spec?


This is great work! These are on my list of things to review in detail
soon, but given the Neutron sprint this week, I haven't had time yet.
I'll try to remedy that by the weekend.

Thanks!
Kyle


Best regards,
 shihanzhang





 At 2014-07-03 10:08:21, Ihar Hrachyshka ihrac...@redhat.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Oh, so you have the enhancement implemented? Great! Any numbers that
shows how much we gain from that?

/Ihar

On 03/07/14 02:49, shihanzhang wrote:
 Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready. Today
 I will modify my spec, and once the spec is approved I will commit the
 code as soon as possible!





 At 2014-07-02 10:12:34, Miguel Angel Ajo majop...@redhat.com
 wrote:

 Nice Shihanzhang,

 Do you mean the ipset implementation is ready, or just the
 spec?


 For the SG refactor, I don't worry about who does it or who takes
 the credit, but I believe it's important that we address this
 bottleneck during Juno, trying to match nova's scalability.

 Best regards, Miguel Ángel.


 On 07/02/2014 02:50 PM, shihanzhang wrote:
 hi Miguel Ángel and Ihar Hrachyshka, I agree with you that we should
 split the work into several specs. I have finished the work (ipset
 optimization); you can do 'sg rpc optimization (without fanout)'. As
 for the third part (sg rpc optimization (with fanout)), I think we
 need to talk about it, because just using ipset to optimize the
 security group agent code does not bring the best results!

 Best regards, shihanzhang.








 At 2014-07-02 04:43:24, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 On 02/07/14 10:12, Miguel Angel Ajo wrote:

 Shihazhang,

 I really believe we need the RPC refactor done this cycle, given the
 close deadlines we have (July 10 for spec submission and July 20 for
 spec approval).

 Don't you think it would be better to split the work into several
 specs?

 1) ipset optimization   (you) 2) sg rpc optimization (without
 fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
 , me)


 This way we increase the chances of having part of this land in the
 Juno cycle. If we go for something too complicated, it is going to
 take more time to get approved.


 I agree. And it not only increases the chances of getting at least some
 of those highly demanded performance enhancements into Juno, it's also
 the right thing to do (c). It's counterproductive to put multiple
 vaguely related enhancements in a single spec. That would dilute review
 focus and put us in the position of getting 'all-or-nothing'. We can't
 afford that.

 Let's leave one spec per enhancement. @Shihazhang, what do you
 think?


 Also, I proposed the details of 2, trying to 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-08-18 23:41:20 -0700:
 On 18 August 2014 09:32, Clint Byrum cl...@fewbar.com wrote:
 
 I can see your perspective but I don't think its internally consistent...
 
  Here's why folk are questioning Ceilometer:
 
  Nova is a set of tools to abstract virtualization implementations.
 
 With a big chunk of local things - local image storage (now in
 glance), scheduling, rebalancing, ACLs and quotas. Other
 implementations that abstract over VMs at various layers already
 existed when Nova started - some bad (some very bad!) and others
 actually quite ok.
 

The fact that we have local implementations of domain specific things is
irrelevant to the difference I'm trying to point out. Glance needs to
work with the same authentication semantics and share a common access
catalog to work well with Nova. It's unlikely there's a generic image
catalog that would ever fit this bill. In many ways glance is just an
abstraction of file storage backends and a database to track a certain
domain of files (images, and soon, templates and other such things).

The point of mentioning Nova is, we didn't write libvirt, or xen, we
wrote an abstraction so that users could consume them via a REST API
that shares these useful automated backends like glance.

  Neutron is a set of tools to abstract SDN/NFV implementations.
 
 And implements a DHCP service, DNS service, overlay networking: it's
 much more than an abstraction-over-other-implementations.
 

Native DHCP and overlay? Last I checked Neutron used dnsmasq and
openvswitch, but it has been a few months, and I know that is an eon in
OpenStack time.

  Cinder is a set of tools to abstract block-device implementations.
  Trove is a set of tools to simplify consumption of existing databases.
  Sahara is a set of tools to simplify Hadoop consumption.
  Swift is a feature-complete implementation of object storage, none of
  which existed when it was started.
 
 Swift was started in 2009; Eucalyptus goes back to 2007, with Walrus
 part of that - I haven't checked precise dates, but I'm pretty sure
 that it existed and was usable by the start of 2009. There may well be
 other object storage implementations too - I simply haven't checked.
 

Indeed, and MogileFS was sort of like Swift but not HTTP based. Perhaps
Walrus was evaluated and found inadequate for the CloudFiles product
requirements? I don't know. But there weren't de-facto object stores
at the time because object stores were just becoming popular.

  Keystone supports all of the above, unifying their auth.
 
 And implementing an IdP (which I know they want to stop doing ;)). And
 in fact lots of OpenStack projects, for various reasons, support *not*
 using Keystone (something that bugs me, but that's a different
 discussion).
 

My point was it is justified to have a whole implementation and not
just abstraction because it is meant to enable the ecosystem, not _be_
the ecosystem. I actually think Keystone is problematic too, and I often
wonder why we haven't just do OAuth, but I'm not trying to throw every
project under the bus. I'm trying to state that we accept Keystone because
it has grown organically to support the needs of all the other pieces.

  Horizon supports all of the above, unifying their GUI.
 
  Ceilometer is a complete implementation of data collection and alerting.
  There is no shortage of implementations that exist already.
 
  I'm also core on two projects that are getting some push back these
  days:
 
  Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as there are data
  collection and alerting systems.
 
  TripleO is an attempt to deploy OpenStack using tools that OpenStack
  provides. There are already quite a few other tools that _can_ deploy
  OpenStack, so it stands to reason that people will question why we
  don't just use those. It is my hope we'll push more into the unifying
  the implementations space and withdraw a bit from the implementing
  stuff space.
 
  So, you see, people are happy to unify around a single abstraction, but
  not so much around a brand new implementation of things that already
  exist.
 
 If the other examples we had were a lot purer, this explanation would
 make sense. I think there's more to it than that though :).
 

If purity is required to show a difference, then I don't think I know
how to demonstrate what I think is obvious to most of us: Ceilometer
is an end to end implementation of things that exist in many battle
tested implementations. I struggle to think of another component of
OpenStack that has this distinction.

 What exactly, I don't know, but it's just too easy an answer, and one
 that doesn't stand up to non-trivial examination :(.
 
 I'd like to see more unification of implementations in TripleO - but I
 still believe our basic principle of using OpenStack technologies that
 already exist in preference to third party ones is still 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:
 On 08/20/2014 05:06 PM, Chris Friesen wrote:
  On 08/20/2014 07:21 AM, Jay Pipes wrote:
  Hi Thierry, thanks for the reply. Comments inline. :)
 
  On 08/20/2014 06:32 AM, Thierry Carrez wrote:
  If we want to follow your model, we probably would have to dissolve
  programs as they stand right now, and have blessed categories on one
  side, and teams on the other (with projects from some teams being
  blessed as the current solution).
 
  Why do we have to have blessed categories at all? I'd like to think of
  a day when the TC isn't picking winners or losers at all. Level the
  playing field and let the quality of the projects themselves determine
  the winner in the space. Stop the incubation and graduation madness and
  change the role of the TC to instead play an advisory role to upcoming
  (and existing!) projects on the best ways to integrate with other
  OpenStack projects, if integration is something that is natural for the
  project to work towards.
 
  It seems to me that at some point you need to have a recommended way of
  doing things, otherwise it's going to be *really hard* for someone to
  bring up an OpenStack installation.
 
 Why can't there be multiple recommended ways of setting up an OpenStack 
 installation? Matter of fact, in reality, there already are multiple 
 recommended ways of setting up an OpenStack installation, aren't there?
 
 There's multiple distributions of OpenStack, multiple ways of doing 
 bare-metal deployment, multiple ways of deploying different message 
 queues and DBs, multiple ways of establishing networking, multiple open 
 and proprietary monitoring systems to choose from, etc. And I don't 
 really see anything wrong with that.
 

This is an argument for loosely coupling things, rather than tightly
integrating things. You will almost always win my vote with that sort of
movement, and you have here. +1.

  We already run into issues with something as basic as competing SQL
  databases.
 
 If the TC suddenly said "Only MySQL will be supported", that would not 
 mean that the greater OpenStack community would be served better. It 
 would just unnecessarily take options away from deployers.
 

This is really where "supported" becomes the mutex binding us all. The
more supported options, the larger the matrix, the more complex a
user's decision process becomes.

   If every component has several competing implementations and
  none of them are official, how many more interaction issues are going
  to trip us up?
 
 IMO, OpenStack should be about choice. Choice of hypervisor, choice of 
 DB and MQ infrastructure, choice of operating systems, choice of storage 
 vendors, choice of networking vendors.
 

Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole, not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.

What we shouldn't do is harm that 1%'s ability to be successful. We should
foster it and help it grow, but we don't just pull it into the program and
say "You're ALSO in OpenStack now!" and we also don't want to force those
users to make a hard choice because the better solution is not blessed.

 If there are multiple actively-developed projects that address the same 
 problem space, I think it serves our OpenStack users best to let the 
 projects work things out themselves and let the cream rise to the top. 
 If the cream ends up being one of those projects, so be it. If the cream 
 ends up being a mix of both projects, so be it. The production community 
 will end up determining what that cream should be based on what it 
 deploys into its clouds and what input it supplies to the teams working 
 on competing implementations.
 

I'm really not a fan of making it a competitive market. If a space has a
diverse set of problems, we can expect it will have a diverse set of
solutions that overlap. But that doesn't mean they both need to drive
toward making that overlap all-encompassing. Sometimes that happens and
it is good, and sometimes that happens and it causes horrible bloat.

 And who knows... what works or is recommended by one deployer may not be 
 what is best for another type of deployer and I believe we (the 
 TC/governance) do a disservice to our user community by picking a winner 
 in a space too early (or continuing to pick a winner in a clearly 
 unsettled space).
 

Right, I think our current situation crowds out diversity, when what we
want to do is enable it, without confusing the users.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-20 Thread Nejc Saje



On 08/20/2014 09:25 PM, Chris Dent wrote:

On Wed, 20 Aug 2014, gordon chung wrote:


disclaimer: i'm just riffing and the following might be nonsense.


/me is a huge fan of riffing


i guess this also extends your question about agents leaving/joining: i'd
expect there is some volatility to the agents, where an agent may or
may not still exist at the point of debugging... just curious what the
benefit is of knowing who sent it if all the agents are just clones of
each other.


What I'm thinking of is a situation where some chunk of samples is
arriving at the data store and is in some fashion outside the
expected norms when compared to others.

If, from looking at the samples, you can tell that they were all
published from the (used-to-be-)central-agent on host X then you can
go to host X and have a browse around there to see what might be up.

It's unlikely that the agent is going to be the cause of any
weirdness but if it _is_ then we'd like to be able to locate it. As
things currently stand there's no way, from the sample itself, to do
so.

Thus, the benefit of knowing who sent it is that though the agents
themselves are clones, they are in regions and on hosts that are
not.

Beyond all those potentially good reasons there's also just the
simple matter that it is good data hygiene to know where stuff came
from?
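
One purely illustrative way to do that would be to stamp each published sample
with a stable publisher identifier; the field and helper names below are
invented, and this is not ceilometer's actual Sample class or publisher code:

    # Sketch only: stamp each sample's metadata with a publisher id built
    # from the host and agent name, so the origin can be traced later.
    import socket


    def publisher_id(agent_name='central'):
        return '%s:%s' % (socket.gethostname(), agent_name)


    def stamp(sample_dict):
        # sample_dict stands in for whatever structure the publisher emits.
        metadata = sample_dict.setdefault('resource_metadata', {})
        metadata['published_by'] = publisher_id()
        return sample_dict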



More riffing: we are moving away from per-sample specific data with 
Gnocchi. I don't think we should store this per-sample, since the user 
doesn't actually care about which agent the sample came from. The user 
cares about which *resource* it came from.


I could see this going into an agent's log. On each polling cycle, we
could log which *resources* we are responsible for (not which samples).
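
A rough sketch of that log-based alternative (illustrative only; the function
name and log format are invented):

    # Sketch of the log-based alternative: once per polling cycle, record
    # which resources this agent is responsible for, so provenance questions
    # can be answered from the agent's log instead of from per-sample data.
    import logging

    LOG = logging.getLogger(__name__)


    def log_assignments(cycle, resource_ids):
        LOG.info('polling cycle %s: handling %d resources: %s',
                 cycle, len(resource_ids), ', '.join(sorted(resource_ids)))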


Cheers,
Nejc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-20 Thread Osanai, Hisashi

Folks,

I wrote the following BP regarding repackaging ceilometer and ceilometerclient.

https://blueprints.launchpad.net/ceilometer/+spec/repackaging-ceilometerclient

I need to install the ceilometer package whenever the swift_middleware
middleware is used.
And the ceilometer package has dependencies on the following:

- requirements.txt in the ceilometer package
...
python-ceilometerclient=1.0.6
python-glanceclient=0.13.1
python-keystoneclient=0.9.0
python-neutronclient=2.3.5,3
python-novaclient=2.17.0
python-swiftclient=2.0.2
...

From a maintenance point of view, these dependencies are undesirable. What do
you think?
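
For anyone who wants to see that footprint on their own node, a quick
illustrative way to list what the installed ceilometer distribution declares
as dependencies:

    # Illustrative helper: print the declared dependencies of the installed
    # ceilometer distribution, to make the maintenance cost visible.
    import pkg_resources

    for requirement in pkg_resources.get_distribution('ceilometer').requires():
        print(requirement)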

# To fix this we need to touch several repos, so I wrote the BP instead of a
bug report.

Best Regards,
Hisashi Osanai



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev