[openstack-dev] [nova] nova networking API and CLI are poorly documented and buggy

2014-06-17 Thread Mike Spreitzer
I am not even sure what the intent is, but some of the behavior looks like
it is clearly unintended and not useful (i.e., buggy).

IMHO, the API and CLI documentation should explain these calls/commands in
enough detail that the reader can tell the difference.  And the difference
should be useful in at least some networking configurations.  It seems to
me that in some configurations an administrative user may want THREE
varieties of the network listing call/command: one that shows networks
assigned to his tenant, one that also shows networks available to be
assigned, and one that shows all networks.  And in no configuration
should a non-administrative user be blind to all categories of networks.

In the API, there are calls on /v2/{tenant_id}/os-networks, documented at
http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html.
There are also calls on /v2/{tenant_id}/os-tenant-networks --- but I cannot
find documentation for them.
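
For reference, here is a minimal sketch of invoking both endpoints with the
requests library (the endpoint URL, tenant id and token are placeholders I
made up, not values from any real deployment):

    import requests

    # Placeholders: adjust to your deployment; the token comes from Keystone.
    BASE = 'http://controller:8774/v2/TENANT_ID'
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Accept': 'application/json'}

    for path in ('/os-networks', '/os-tenant-networks'):
        resp = requests.get(BASE + path, headers=HEADERS)
        print(path, resp.status_code, resp.json())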

http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html
does not describe the meaning of the calls in much detail.  For example,
about GET /v2/{tenant_id}/os-networks that doc says only "Lists networks
that are available to the tenant."  In some networking configurations,
there are two levels of availability: a network might be assigned to a
tenant, or a network might be available for assignment.  In other
networking configurations there are NOT two levels of availability.  For
example, in Flat DHCP nova networking (which is the default in DevStack),
a network CANNOT be assigned to a tenant.

You might think that the "to the tenant" qualification implies filtering
by the invoker's tenant.  But you would be wrong in the case of an
administrative user; see the model_query method in
nova/db/sqlalchemy/api.py.
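
For illustration, the relevant filtering behaves roughly like this (a
simplified, hypothetical sketch, not the actual nova code; get_session()
here stands in for nova's real session handling):

    def model_query(context, model, project_only=False):
        query = get_session().query(model)
        if project_only and not context.is_admin:
            # Non-admin callers only see rows belonging to their own tenant;
            # admin callers skip this filter and see every row.
            query = query.filter_by(project_id=context.project_id)
        return query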

In the CLI, we have two sets of similar-seeming commands.  For example,

$ nova help net-list
usage: nova net-list

List networks

$ nova help network-list
usage: nova network-list

Print a list of available networks.

Those remarks are even briefer than the one description in the API doc,
omitting the "to the tenant" qualification.

Experimentation shows that, in the case of flat DHCP nova networking, both
of those commands show zero networks to a non-administrative user (and
remember that networks cannot be assigned to tenants in that
configuration) and all the networks to an administrative user.  At the API
level, the GET calls behave the same way.  The fact that a non-administrative
user sees zero networks looks unintended and not useful.

See https://bugs.launchpad.net/openstack-manuals/+bug/1152862 and 
https://bugs.launchpad.net/nova/+bug/1327406

Can anyone tell me why there are both /os-networks and /os-tenant-networks
calls and what their intended semantics are?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-17 Thread Renat Akhmerov
Dmitri, this is a true statement. Mistral is fully abstracted from any
concrete transport.

Renat Akhmerov
@ Mirantis Inc.



On 17 Jun 2014, at 00:52, Dmitry Borodaenko dborodae...@mirantis.com wrote:

 Mistral doesn't have to be married to RabbitMQ, there's a ZeroMQ
 driver in oslo.messaging, so in theory Mistral should be able to make
 use of that.
 
 On Mon, Jun 16, 2014 at 1:42 AM, Vladimir Kozhukalov
 vkozhuka...@mirantis.com wrote:
 Guys,
 
 First of all, we need to agree about what orchestration is. In terms of Fuel,
 orchestration is task management (or scheduling) + task running. In other
 words, an orchestrator needs to be able to get data (yaml, json, etc.), decide
 what to do, when and where, and then do it. For task management we need the
 kind of logic that is provided by Mistral. For launching, it just needs the
 kind of transport that is available when we use mcollective or saltstack or
 ssh.
 
 As far as I know (I researched Saltstack a year ago), Saltstack does not
 have a mature task management mechanism. What it has is the so-called
 "overstate" mechanism, which allows one to write a script for managing tasks
 in multi-node environments, like "launch task-1 on node-1, then launch task-2
 on node-2, and then launch task-3 on node-1 again". It works, but it is
 semi-manual. It is exactly what we already have and call Astute.
 The only difference is that Astute is a wrapper around Mcollective.
 
 The only advantages I see in using Saltstack instead of Mcollective are that
 it is written in Python (Mcollective still does not have a python binding) and
 that it uses ZeroMQ. Maybe those advantages are not negligible, but let's
 take a careful look.
 
 For example, the fact that Saltstack is written in Python allows us to use
 Saltstack directly from Nailgun. But I am absolutely sure that everyone will
 agree that would be a serious architectural flaw. If you ask me, Nailgun has
 to use an external task management service with a well-defined API, such as
 Mistral. Mistral already has plenty of capabilities for that. Do we really
 need to implement all that stuff ourselves?
 
 ZeroMQ is a great advantage if you have thousands of nodes. It is highly
 scalable. It also allows one to avoid using an additional external
 service like Rabbit. Minus one point of failure, right? On the other hand,
 it brings us into the world of Saltstack with its own bugs, despite its
 maturity.
 
 Consequently, my personal preference is to concentrate on splitting puppet
 code into independent tasks and using Mistral for resolving task
 dependencies. As our transport layer we'll then be able to use whatever we
 want (Saltstack, Mcollective, bare ssh, any kind of custom implementation,
 etc.)
 
 
 
 
 Vladimir Kozhukalov
 
 
 On Fri, Jun 13, 2014 at 8:45 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:
 
 Dmitry,
 please read design doc attached to
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization.
 I think it can serve as a good source of requirements which we have, and
 then we can see what tool is the best.
 
 Regards,
 
 
 
 
 On Thu, Jun 12, 2014 at 12:28 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
 
 Guys, what we really need from an orchestration tool is the ability to
 orchestrate a large number of tasks across the nodes, with all the complicated
 dependencies, dynamic actions (e.g. what to do on failure and on success)
 and parallel execution, including tasks that can have no additional software
 installed somewhere deep in the user's infrastructure (e.g. we need to send
 a RESTful request to vCenter). And this is the use case of our pluggable
 architecture. I am wondering if saltstack can do this.
 
 
 On Wed, Jun 11, 2014 at 9:08 PM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
 
 Hi,
 
 It would be nice to compare Ansible and Salt. They are both Python
 based. Also, Ansible has a pull model as well. Personally, I am a big fan of
 Ansible because of its simplicity and speed of playbook development.
 
 ~Sergii
 
 
 On Wed, Jun 11, 2014 at 1:21 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:
 
 Well, I don't have a comparison chart; I can work on one based on the
 requirements I provided in the initial letter, but:
 * I like ansible, but it is agentless, and it won't fit well in our
 current model of communication between nailgun and the orchestrator
 * cloudify is a java based application; even if it is pluggable with other
 language bindings, we would benefit from an application in python
 * salt has been around for 3-4 years, and simply comparing github graphs, it
 is one of the most used and active projects in the python community
 
 https://github.com/stackforge/mistral/graphs/contributors
 https://github.com/saltstack/salt/graphs/contributors
 
 
 On Wed, Jun 11, 2014 at 1:04 PM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
 
 Hi,
 
 There are many mature orchestration applications (Salt, Ansible,
 Cloudify, Mistral). Is there any comparison chart? That would be nice to
 compare 

Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-17 Thread Armando M.
I don't think that a common area as proposed is a silver bullet for
solving packaging issues such as this one. Knowing that the right source
tree bits are dropped onto the file system is not enough to guarantee that
the end-to-end solution will work on a specific distro. Other issues may
arise after configuration and execution.

IMO, this is a bug in the packages spec, and should be taken care of during
the packaging implementation, testing and validation.

That said, I think the right approach is to provide a 'python-neutron'
package that installs the entire source tree; the specific plugin package
can then take care of the specifics, like config files.

Armando


On 17 June 2014 06:43, Shiv Haris sha...@brocade.com wrote:

 Right Armando.

 Brocade’s mech driver problem is due to NETCONF templates - would also
 prefer to see a common area for such templates – not just common code.

 Sort of like:

 common/brocade/templates
 common/bigswitch/*

 -Shiv
 From: Armando M. arma...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] - Location for common third-party
 libs?

 I believe Brocade's mech driver might have the same problem.

 That said, if the content of the rpm that installs the BigSwitch plugin is
 just the sub-tree for bigswitch (plus the config files, perhaps), you might
 get away with this issue by just installing the bigswitch-plugin package. I
 assume you tried that and it didn't work?

 I was unable to find the rpm specs for CentOS to confirm.

 A.


 On 17 June 2014 00:02, Kevin Benton blak...@gmail.com wrote:

 Hello,

 In the Big Switch ML2 driver, we rely on quite a bit of code from the Big
 Switch plugin. This works fine for distributions that include the entire
 neutron code base. However, some break apart the neutron code base into
 separate packages. For example, in CentOS I can't use the Big Switch ML2
 driver with just ML2 installed because the Big Switch plugin directory is
 gone.

 Is there somewhere where we can put common third party code that will be
 safe from removal during packaging?


 Thanks
 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-17 Thread Luke Gorrie
On 15 June 2014 02:45, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 I thought I'd send out this message in case other CI maintainers are
 investigating this issue.


I have a problem that appeared at the same time and may be related? testr
list-tests in the tempest directory is failing with an obscure error
message. Seems to be exactly the situation described here:
https://bugs.launchpad.net/subunit/+bug/1278539

Any tips?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread Sami J. Mäkinen

Hello all.

I am trying to sign the New Contributor Agreement to be able to
submit a new blueprint for review.

For several weeks now, I have always gotten the error message included below.
I even suspected browser compatibility problems, but trying Google
Chrome, Opera and Firefox did not help at all.

***

Code Review - Error
Server Error
Cannot store contact information
(button) Continue

***

Pls halp!

BR,

-sjm


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-17 Thread Michael Still
Hi. I'm going to let this sit for another 24 hours, and then we'll
declare it closed.

Cheers,
Michael

On Tue, Jun 17, 2014 at 6:16 AM, Mark McLoughlin mar...@redhat.com wrote:
 On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
 Greetings,

 I would like to nominate Ken'ichi Ohmichi for the nova-core team.

 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.

 +1, great to see Ken'ichi join the team

 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-17 Thread Matthew Booth
On 17/06/14 00:28, Joshua Harlow wrote:
 So this is a reader/write lock then?
 
 I have seen https://github.com/python-zk/kazoo/pull/141 come up in the
 kazoo (zookeeper python library) but there was a lack of a maintainer for
 that 'recipe', perhaps if we really find this needed we can help get that
 pull request 'sponsored' so that it can be used for this purpose?
 
 
 As far as resiliency, the thing I was thinking about was how correct do u
 want this lock to be?

 If u say go with memcached and a locking mechanism using it this will not
 be correct but it might work good enough under normal usage. So that's why
 I was wondering about what level of correctness you want and what you
 want to happen if a server that is maintaining the lock record dies.
 In memcached's case this will literally be 1 server, even if sharding is
 being used, since a key hashes to one server. So if that one server goes
 down (or a network split happens) then it is possible for two entities to
 believe they own the same lock (and if the network split recovers this
 gets even weirder); so that's what I was wondering about when mentioning
 resiliency and how much incorrectness you are willing to tolerate.

From my POV, the most important things are:

* 2 nodes must never believe they hold the same lock
* A node must eventually get the lock

I was expecting to implement locking on all three backends as long as
they support it. I haven't looked closely at memcached, but if it can
detect a split it should be able to have a fencing race with the
possible lock holder before continuing. This is obviously undesirable,
as you will probably be fencing an otherwise correctly functioning node,
but it will be correct.
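
For what it's worth, on the zookeeper backend a blocking lock could be a
thin wrapper around kazoo's lock recipe. A minimal sketch (the host, lock
path and identifier below are made up for illustration):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181')
    zk.start()

    # One lock per protected name, e.g. an image id in the side-loading case.
    lock = zk.Lock('/nova/locks/image-1234', identifier='compute-node-1')
    with lock:  # blocks until acquired; goes away if our session expires
        pass    # copy the image, etc.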

Matt

 
 -Original Message-
 From: Matthew Booth mbo...@redhat.com
 Organization: Red Hat
 Date: Friday, June 13, 2014 at 1:40 AM
 To: Joshua Harlow harlo...@yahoo-inc.com, OpenStack Development Mailing
 List (not for usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Distributed locking
 
 On 12/06/14 21:38, Joshua Harlow wrote:
 So just a few thoughts before going too far down this path,

 Can we make sure we really really understand the use-case where we think
 this is needed. I think it's fine that this use-case exists, but I just
 want to make it very clear to others why it's needed and why distributed
 locking is the only *correct* way.

 An example use of this would be side-loading an image from another
 node's image cache rather than fetching it from glance, which would have
 very significant performance benefits in the VMware driver, and possibly
 other places. The copier must take a read lock on the image to prevent
 the owner from ageing it during the copy. Holding a read lock would also
 assure the copier that the image it is copying is complete.

 This helps set a good precedent for others that may follow down this path:
 that they also clearly explain the situation, how distributed locking
 fixes it, and all the corner cases that now pop up with distributed
 locking.

 Some of the questions that I can think of at the current moment:

 * What happens when a node goes down that owns the lock, how does the
 software react to this?

 This can be well defined according to the behaviour of the backend. For
 example, it is well defined in zookeeper when a node's session expires.
 If the lock holder is no longer a valid node, it would be fenced before
 deleting its lock, allowing other nodes to continue.

 Without fencing it would not be possible to safely continue in this case.

 * What resources are being locked; what is the lock target, what is its
 lifetime?

 These are not questions for a locking implementation. A lock would be
 held on a name, and it would be up to the api user to ensure that the
 protected resource is only used while correctly locked, and that the
 lock is not held longer than necessary.

 * What resiliency do you want this lock to provide (this becomes a
 critical question when considering memcached, since memcached is not
 really the best choice for a resilient distributing locking backend)?

 What does resiliency mean in this context? We really just need the lock
 to be correct

 * What do entities that try to acquire a lock do when they can't acquire
 it?

 Typically block, but if a use case emerged for trylock() it would be
 simple to implement. For example, in the image side-loading case we may
 decide that if it isn't possible to immediately acquire the lock it
 isn't worth waiting, and we just fetch it from glance anyway.

 A useful thing I wrote up a while ago, might still be useful:

 https://wiki.openstack.org/wiki/StructuredWorkflowLocks

 Feel free to move that wiki if u find it useful (its sorta a high-level
 doc on the different strategies and such).

 Nice list of implementation pros/cons.

 Matt


 -Josh

 -Original Message-
 From: Matthew Booth mbo...@redhat.com
 Organization: Red Hat
 Reply-To: OpenStack Development 

Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-17 Thread Kevin Benton
Hi Ihar,

What is the reason to break neutron up into so many packages? A quick disk
usage stat shows the plugins directory is currently 3.4M.
Is that considered too much space for a package, or was it for
another reason?

Thanks,
Kevin Benton


On Mon, Jun 16, 2014 at 3:37 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 On 17/06/14 00:10, Anita Kuno wrote:
  On 06/16/2014 06:02 PM, Kevin Benton wrote:
  Hello,
 
  In the Big Switch ML2 driver, we rely on quite a bit of code from
  the Big Switch plugin. This works fine for distributions that
  include the entire neutron code base. However, some break apart
  the neutron code base into separate packages. For example, in
  CentOS I can't use the Big Switch ML2 driver with just ML2
  installed because the Big Switch plugin directory is gone.
 
  Is there somewhere where we can put common third party code that
  will be safe from removal during packaging?
 

 Hi,

 I'm a neutron packager for redhat based distros.

 AFAIK the main reason is to avoid installing lots of plugins to
 systems that are not going to use them. No one really spent too much
 time going file by file and determining internal interdependencies.

 In your case, I would move those Brocade-specific ML2 files to the Brocade
 plugin package. I would suggest reporting the bug in Red Hat bugzilla.
 I think this won't get the highest priority, but once packagers have
 spare cycles, this can be fixed.

 Cheers,
 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-17 Thread Jaromir Coufal

Hi all,

I would like to remind you to sign up for the mid-cycle meetup, which is 
happening July 21-25 in Raleigh:


* https://etherpad.openstack.org/p/juno-midcycle-meetup

We need the number of participants as soon as possible so that we can ask 
for a group discount at the hotel. Also, if we don't get rooms in one of 
these two hotels (see the etherpad), the accommodation experience will be 
considerably worse. Therefore I would like to ask everybody to confirm 
your attendance as soon as possible.


If you have any related questions, just let me know; I will be happy to help.

Looking forward to seeing you all in Raleigh
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-06-17 Thread Zang MingJie
On Thu, May 29, 2014 at 6:57 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Zang

 Since the SSL-VPN bp for Juno is approved in neutron-specs,
 I would like to restart this work.
 Could you share your code if it is possible?
 Also, let's discuss here how we can collaborate.

We are currently running the havana branch; I'm trying to rebase it to master.


 Best
 Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-17 Thread Nikola Đipanov
On 06/14/2014 12:40 AM, Michael Still wrote:
 Greetings,
 
 I would like to nominate Ken'ichi Ohmichi for the nova-core team.
 
 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.
 
 Please respond with +1s or any concerns.
 

+1 - welcome to the team Ken'ichi!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [taskflow] Recommendations for the granularity of tasks and their stickiness to workers

2014-06-17 Thread Eoghan Glynn

Folks,

A question for the taskflow ninjas.

Any thoughts on best practice WRT $subject?

Specifically I have in mind this ceilometer review[1] which adopts
the approach of using very fine-grained tasks (at the level of an
individual alarm evaluation) combined with short-term assignments
to individual workers.

But I'm also thinking of future potential usage of taskflow within
ceilometer, to support partitioning of work over a scaled-out array
of central agents.

Does taskflow also naturally support a model whereby more chunky
tasks (possibly including ongoing periodic work) are assigned to
workers in a stickier fashion, such that re-balancing of workload
can easily be triggered when a change is detected in the pool of
available workers?
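
For concreteness, a fine-grained task of the kind adopted in [1] might look
something like this minimal, hypothetical sketch (the names and the alarm id
are made up; the real code is in the review):

    from taskflow import engines, task
    from taskflow.patterns import linear_flow

    class EvaluateAlarm(task.Task):
        # Hypothetical fine-grained task: evaluate a single alarm.
        def execute(self, alarm_id):
            print("evaluating %s" % alarm_id)

    flow = linear_flow.Flow('evaluate-one-alarm')
    flow.add(EvaluateAlarm())
    engines.run(flow, store={'alarm_id': 'alarm-1234'})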

Cheers,
Eoghan

[1] https://review.openstack.org/91763

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Matthew Booth
We all know that review can be a bottleneck for Nova patches. Not only
that, but a patch lingering in review, no matter how trivial, will
eventually accrue rebases which sap gate resources, developer time, and
will to live.

It occurs to me that there is a significant class of patches which
simply don't require the attention of a core reviewer. Some examples:

* Indentation cleanup/comment fixes
* Simple code motion
* File permission changes
* Trivial fixes which are obviously correct

The advantage of a core reviewer is that they have experience of the
whole code base, and have proven their ability to make and judge core
changes. However, some fixes don't require this level of attention, as
they are self-contained and obvious to any reasonable programmer.

Without knowing anything of the architecture of gerrit, I propose
something along the lines of a '+1 (trivial)' review flag. If a review
gained some small number of these (I suggest 2 would be reasonable), it
would be equivalent to a +2 from a core reviewer. The ability to set
this flag would be a privilege. However, the bar to gaining this
privilege would be low, and preferably automatically set, e.g. 5
accepted patches. It would be removed for abuse.

Is this practical? Would it help?

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Duncan Thomas
A far more effective way to reduce the load of trivial review issues
on core reviewers is for non-core reviewers to get in there first,
spot the problems and add a -1 - the trivial issues are then hopefully
fixed up before a core reviewer even looks at the patch.

The fundamental problem with review is that there are more people
submitting than doing regular reviews. If you want the review queue to
shrink, do five reviews for every one you submit. A -1 from a
non-core (followed by a +1 when all the issues are fixed) is far,
far, far more useful in general than a +1 on a new patch.



On 17 June 2014 11:04, Matthew Booth mbo...@redhat.com wrote:
 We all know that review can be a bottleneck for Nova patches.Not only
 that, but a patch lingering in review, no matter how trivial, will
 eventually accrue rebases which sap gate resources, developer time, and
 will to live.

 It occurs to me that there are a significant class of patches which
 simply don't require the attention of a core reviewer. Some examples:

 * Indentation cleanup/comment fixes
 * Simple code motion
 * File permission changes
 * Trivial fixes which are obviously correct

 The advantage of a core reviewer is that they have experience of the
 whole code base, and have proven their ability to make and judge core
 changes. However, some fixes don't require this level of attention, as
 they are self-contained and obvious to any reasonable programmer.

 Without knowing anything of the architecture of gerrit, I propose
 something along the lines of a '+1 (trivial)' review flag. If a review
 gained some small number of these, I suggest 2 would be reasonable, it
 would be equivalent to a +2 from a core reviewer. The ability to set
 this flag would be a privilege. However, the bar to gaining this
 privilege would be low, and preferably automatically set, e.g. 5
 accepted patches. It would be removed for abuse.

 Is this practical? Would it help?

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Avishay Balderman
Hi
As the LBaaS mid-cycle sprint starts today, is there any way to track and 
understand the progress (without flying to Texas...)?

Thanks

Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Daniel P. Berrange
On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
 We all know that review can be a bottleneck for Nova patches.Not only
 that, but a patch lingering in review, no matter how trivial, will
 eventually accrue rebases which sap gate resources, developer time, and
 will to live.
 
 It occurs to me that there are a significant class of patches which
 simply don't require the attention of a core reviewer. Some examples:
 
 * Indentation cleanup/comment fixes
 * Simple code motion
 * File permission changes
 * Trivial fixes which are obviously correct
 
 The advantage of a core reviewer is that they have experience of the
 whole code base, and have proven their ability to make and judge core
 changes. However, some fixes don't require this level of attention, as
 they are self-contained and obvious to any reasonable programmer.
 
 Without knowing anything of the architecture of gerrit, I propose
 something along the lines of a '+1 (trivial)' review flag. If a review
 gained some small number of these, I suggest 2 would be reasonable, it
 would be equivalent to a +2 from a core reviewer. The ability to set
 this flag would be a privilege. However, the bar to gaining this
 privilege would be low, and preferably automatically set, e.g. 5
 accepted patches. It would be removed for abuse.
 
 Is this practical? Would it help?

You are right that some types of fix are so straightforward that
most reasonable programmers can validate them. At the same time
though, this means that they also don't really consume significant
review time from core reviewers.  So having non-cores approve
trivial fixes wouldn't really reduce the burden on core devs.

The main positive impact would probably be a faster turn around
time on getting the patches approved because it is easy for the
trivial fixes to drown in the noise.

IME any non-trivial change to gerrit is just not going to happen
in any reasonably useful timeframe though. Perhaps an alternative
strategy would be to focus on identifying which fixes are
trivial. If there were a good way to get a list of all pending
trivial fixes, then it would make it straightforward for cores
to jump in and approve those simple patches as a priority, to avoid
them languishing too long.

It would be nice if gerrit had simple keyword tagging so any reviewer
can tag an existing commit as trivial, but that doesn't seem to
exist as a concept yet.

So as an alternative, perhaps submit trivial stuff using a well-known
topic, e.g.

  # git review --topic trivial 

Then you can just query all changes in that topic to find easy stuff
to approve.
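
For example, something like
https://review.openstack.org/#/q/topic:trivial+status:open,n,z
ought to do it (assuming I have the query syntax right).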

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Zang MingJie
Hi:

Awesome! Currently we are suffering from lots of bugs in ovs-agent, and we
also intend to rebuild a more stable, flexible agent.

Based on the experience of the ovs-agent bugs, I think concurrency is also
a very important problem: the agent gets lots of events from different
greenlets - the rpc, the ovs monitor or the main loop.
I'd suggest serializing all events into a queue, then processing them in
a dedicated thread. The thread checks the events one by one, in order,
resolves what has changed, and then applies the corresponding
changes. If any error occurs in the thread, discard the event currently
being processed and do a "fresh start" event, which resets everything and
then applies the correct settings.

The threading model is so important, and may prevent tons of bugs in
future development, that we should describe it clearly in the
architecture.
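
To make that concrete, a minimal sketch of the single-consumer model (plain
stdlib and made-up names; the real agent would of course use its existing
eventlet machinery rather than a raw thread):

    try:
        import queue            # Python 3
    except ImportError:
        import Queue as queue   # Python 2

    import threading

    class EventLoop(object):
        def __init__(self):
            self._events = queue.Queue()
            self._worker = threading.Thread(target=self._run)
            self._worker.daemon = True

        def start(self):
            self._worker.start()

        def submit(self, event):
            # Called from any producer: rpc handlers, ovs monitor, main loop.
            self._events.put(event)

        def _run(self):
            while True:
                event = self._events.get()
                try:
                    self._apply(event)      # resolve what changed, apply it
                except Exception:
                    self._fresh_start()     # reset everything, reapply settings

        def _apply(self, event):
            pass    # apply the delta implied by this one event

        def _fresh_start(self):
            pass    # the "fresh start" described above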


On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com wrote:
 Following the discussions in the ML2 subgroup weekly meetings, I have added
 more information on the etherpad [1] describing the proposed architecture
 for modular L2 agents. I have also posted some code fragments at [2]
 sketching the implementation of the proposed architecture. Please have a
 look when you get a chance and let us know if you have any comments.

 [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
 [2] https://review.openstack.org/#/c/99187/


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Sean Dague
On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
 On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
 We all know that review can be a bottleneck for Nova patches.Not only
 that, but a patch lingering in review, no matter how trivial, will
 eventually accrue rebases which sap gate resources, developer time, and
 will to live.

 It occurs to me that there are a significant class of patches which
 simply don't require the attention of a core reviewer. Some examples:

 * Indentation cleanup/comment fixes
 * Simple code motion
 * File permission changes
 * Trivial fixes which are obviously correct

 The advantage of a core reviewer is that they have experience of the
 whole code base, and have proven their ability to make and judge core
 changes. However, some fixes don't require this level of attention, as
 they are self-contained and obvious to any reasonable programmer.

 Without knowing anything of the architecture of gerrit, I propose
 something along the lines of a '+1 (trivial)' review flag. If a review
 gained some small number of these, I suggest 2 would be reasonable, it
 would be equivalent to a +2 from a core reviewer. The ability to set
 this flag would be a privilege. However, the bar to gaining this
 privilege would be low, and preferably automatically set, e.g. 5
 accepted patches. It would be removed for abuse.

 Is this practical? Would it help?
 
 You are right that some types of fix are so straightforward that
 most reasonable programmers can validate them. At the same time
 though, this means that they also don't really consume significant
 review time from core reviewers.  So having non-cores' approve
 trivial fixes wouldn't really reduce the burden on core devs.
 
 The main positive impact would probably be a faster turn around
 time on getting the patches approved because it is easy for the
 trivial fixes to drown in the noise.
 
 IME any non-trivial change to gerrit is just not going to happen
 in any reasonably useful timeframe though. Perhaps an alternative
 strategy would be to focus on identifying which the trivial
 fixes are. If there was an good way to get a list of all pending
 trivial fixes, then it would make it straightforward for cores
 to jump in and approve those simple patches as a priority, to avoid
 them languishing too long.
 
 If would be nice if gerrit had simple keyword tagging so any reviewer
 can tag an existing commit as trivial, but that doesn't seem to
 exist as a concept yet.
 
 So an alternative perhaps submit trivial stuff using a well known
 topic eg
 
   # git review --topic trivial 
 
 Then you can just query all changes in that topic to find easy stuff
 to approve.

It could go in the commit message:

TrivialFix

Then it could be queried with:
https://review.openstack.org/#/q/message:TrivialFix,n,z

If a reviewer felt it wasn't a trivial fix, they could just edit the
commit message inline to drop it out.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Gary Kotton


On 6/17/14, 1:56 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

A far more effective way to reduce the load of trivial review issues
on core reviewers is for none-core reviewers to get in there first,
spot the problems and add a -1 - the trivial issues are then hopefully
fixed up before a core reviewer even looks at the patch.

The fundamental problem with review is that there are more people
submitting than doing regular reviews. If you want the review queue to
shrink, do five reviews for every one you submit.

+1

What you give is what you get!

 A -1 from a
none-core (followed by a +1 when all the issues are fixed) is far,
far, far more useful in general than a +1 on a new patch.



On 17 June 2014 11:04, Matthew Booth mbo...@redhat.com wrote:
 We all know that review can be a bottleneck for Nova patches.Not only
 that, but a patch lingering in review, no matter how trivial, will
 eventually accrue rebases which sap gate resources, developer time, and
 will to live.

 It occurs to me that there are a significant class of patches which
 simply don't require the attention of a core reviewer. Some examples:

 * Indentation cleanup/comment fixes
 * Simple code motion
 * File permission changes
 * Trivial fixes which are obviously correct

 The advantage of a core reviewer is that they have experience of the
 whole code base, and have proven their ability to make and judge core
 changes. However, some fixes don't require this level of attention, as
 they are self-contained and obvious to any reasonable programmer.

 Without knowing anything of the architecture of gerrit, I propose
 something along the lines of a '+1 (trivial)' review flag. If a review
 gained some small number of these, I suggest 2 would be reasonable, it
 would be equivalent to a +2 from a core reviewer. The ability to set
 this flag would be a privilege. However, the bar to gaining this
 privilege would be low, and preferably automatically set, e.g. 5
 accepted patches. It would be removed for abuse.

 Is this practical? Would it help?

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Recommendations for the granularity of tasks and their stickiness to workers

2014-06-17 Thread Julien Danjou
On Tue, Jun 17 2014, Eoghan Glynn wrote:

 Any thoughts on best practice WRT $subject?

The first thing on my mind is that having smaller tasks can allow a
better distribution of the workload. :)

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-17 Thread Tomas Sedovic
On 16/06/14 18:51, Clint Byrum wrote:
 Excerpts from Tomas Sedovic's message of 2014-06-16 09:19:40 -0700:
 All,

 After having proposed some changes[1][2] to tripleo-heat-templates[3],
 reviewers suggested adding a deprecation period for the merge.py script.

 While TripleO is an official OpenStack program, none of the projects
 under its umbrella (including tripleo-heat-templates) have gone through
 incubation and integration nor have they been shipped with Icehouse.

 So there is no implicit compatibility guarantee and I have not found
 anything about maintaining backwards compatibility neither on the
 TripleO wiki page[4], tripleo-heat-template's readme[5] or
 tripleo-incubator's readme[6].

 The Release Management wiki page[7] suggests that we follow Semantic
 Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
 According to that wiki, we are using a stronger guarantee where we do
 promise to bump the minor version on incompatible changes -- but this
 again suggests that we do not promise to maintain backwards
 compatibility -- just that we document whenever we break it.

 
 I think there are no guarantees, and no promises. I also think that we've
 kept tripleo_heat_merge pretty narrow in surface area since making it
 into a module, so I'm not concerned that it will be incredibly difficult
 to keep those features alive for a while.
 
 According to Robert, there are now downstreams that have shipped things
 (with the implication that they don't expect things to change without a
 deprecation period) so there's clearly a disconnect here.

 
 I think it is more of a "we will cause them extra work" thing. If we
 can make a best effort and deprecate for a few releases (as in, a few
 releases of t-h-t, not OpenStack), they'll likely appreciate that. If
 we can't do it without a lot of effort, we shouldn't bother.

Oh. I did assume we were talking about OpenStack releases, not t-h-t,
sorry. I have nothing against making a new tht release that deprecates
the features we're no longer using and dropping them for good in a later
release.

What do you suggest would be a reasonable waiting period? Say a month or
so? I think it would be good if we could remove all the deprecated stuff
before we start porting our templates to HOT.

 
 If we do promise backwards compatibility, we should document it
 somewhere and if we don't we should probably make that more visible,
 too, so people know what to expect.

 I prefer the latter, because it will make the merge.py cleanup easier
 and every published bit of information I could find suggests that's our
 current stance anyway.

 
 This is more about good will than promising. If it is easy enough to
 just keep the code around and have it complain to us if we accidentally
 resurrect a feature, that should be enough. We could even introduce a
 switch to the CLI like --strict that we can run in our gate and that
 won't allow us to keep using deprecated features.
 
 So I'd like to see us deprecate not because we have to, but because we
 can do it with only a small amount of effort.

Right, that's fair enough. I've thought about adding a strict switch,
too, but I'd like to start removing code from merge.py, not adding more :-).

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] What is NFV - Wiki page updated with proposal

2014-06-17 Thread Nicolas Barcet
Hello,

Just a quick note to mention that I just updated the subteam wiki page with
a proposed definition of what NFV means.  Comments and updates are of
course welcome.

  https://wiki.openstack.org/wiki/Teams/NFV#What_is_NFV.3F

Cheers,
-- 
Nicolas Barcet nico...@barcet.com
a.k.a. nijaba, nick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Matthew Booth

On 17/06/14 12:36, Sean Dague wrote:
 On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
 On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
 We all know that review can be a bottleneck for Nova
 patches.Not only that, but a patch lingering in review, no
 matter how trivial, will eventually accrue rebases which sap
 gate resources, developer time, and will to live.
 
 It occurs to me that there are a significant class of patches
 which simply don't require the attention of a core reviewer.
 Some examples:
 
 * Indentation cleanup/comment fixes * Simple code motion * File
 permission changes * Trivial fixes which are obviously correct
 
 The advantage of a core reviewer is that they have experience
 of the whole code base, and have proven their ability to make
 and judge core changes. However, some fixes don't require this
 level of attention, as they are self-contained and obvious to
 any reasonable programmer.
 
 Without knowing anything of the architecture of gerrit, I
 propose something along the lines of a '+1 (trivial)' review
 flag. If a review gained some small number of these, I suggest
 2 would be reasonable, it would be equivalent to a +2 from a
 core reviewer. The ability to set this flag would be a
 privilege. However, the bar to gaining this privilege would be
 low, and preferably automatically set, e.g. 5 accepted patches.
 It would be removed for abuse.
 
 Is this practical? Would it help?
 
 You are right that some types of fix are so straightforward that 
 most reasonable programmers can validate them. At the same time 
 though, this means that they also don't really consume
 significant review time from core reviewers.  So having
 non-cores' approve trivial fixes wouldn't really reduce the
 burden on core devs.
 
 The main positive impact would probably be a faster turn around 
 time on getting the patches approved because it is easy for the 
 trivial fixes to drown in the noise.
 
 IME any non-trivial change to gerrit is just not going to happen 
 in any reasonably useful timeframe though. Perhaps an
 alternative strategy would be to focus on identifying which the
 trivial fixes are. If there was an good way to get a list of all
 pending trivial fixes, then it would make it straightforward for
 cores to jump in and approve those simple patches as a priority,
 to avoid them languishing too long.
 
 If would be nice if gerrit had simple keyword tagging so any
 reviewer can tag an existing commit as trivial, but that
 doesn't seem to exist as a concept yet.
 
 So an alternative perhaps submit trivial stuff using a well
 known topic eg
 
 # git review --topic trivial
 
 Then you can just query all changes in that topic to find easy
 stuff to approve.
 
 It could go in the commit message:
 
 TrivialFix
 
 Then could be queried with - 
 https://review.openstack.org/#/q/message:TrivialFix,n,z
 
 If a reviewer felt it wasn't a trivial fix, they could just edit
 the commit message inline to drop it out.

+1. If possible I'd update the query to filter out anything with a -1.

Where do we document these things? I'd be happy to propose a docs update.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Daniel P. Berrange
On Tue, Jun 17, 2014 at 01:12:45PM +0100, Matthew Booth wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 17/06/14 12:36, Sean Dague wrote:
  On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
  If would be nice if gerrit had simple keyword tagging so any
  reviewer can tag an existing commit as trivial, but that
  doesn't seem to exist as a concept yet.
  
  So an alternative perhaps submit trivial stuff using a well
  known topic eg
  
  # git review --topic trivial
  
  Then you can just query all changes in that topic to find easy
  stuff to approve.
  
  It could go in the commit message:
  
  TrivialFix
  
  Then could be queried with - 
  https://review.openstack.org/#/q/message:TrivialFix,n,z
  
  If a reviewer felt it wasn't a trivial fix, they could just edit
  the commit message inline to drop it out.

Yes, that would be a workable idea.

 +1. If possible I'd update the query to filter out anything with a -1.
 
 Where do we document these things? I'd be happy to propose a docs update.

Let's see if any other nova cores dissent, but then we can add it to these 2
wiki pages:

  https://wiki.openstack.org/wiki/ReviewChecklist
  
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] What is NFV - Wiki page updated with proposal

2014-06-17 Thread Russell Bryant
On 06/17/2014 08:00 AM, Nicolas Barcet wrote:
 Hello,
 
 Just a quick note to mention that I just updated the subteam wiki page
 with a proposed definition of what NFV means.  Comments and updates are
 of course welcome.
 
   https://wiki.openstack.org/wiki/Teams/NFV#What_is_NFV.3F

Overview sounds good to me.  Thanks for writing it up!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Patches tagging with NFVImpact

2014-06-17 Thread Sylvain Bauza
Hi,

There is an action for creating a Gerrit dashboard to show all
NFV-related specs and patches.
As that requires a specific label for discriminating them, I'm
adding NFVImpact to all the patches proposed in [1].
The side effect is that it creates another patchset and so runs another
Jenkins check, so don't worry about it.

I'll send a team status by tomorrow, but meanwhile, please make sure that
if you upload a new patch to Gerrit, it is marked with NFVImpact in
the commit message.
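
For example, a query along the lines of
https://review.openstack.org/#/q/message:NFVImpact,n,z (the same message:
search mentioned elsewhere on this list) should then find them all.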

Many thanks,
-Sylvain


[1] https://wiki.openstack.org/wiki/Teams/NFV#Active_Blueprints

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Russell Bryant
On 06/17/2014 08:20 AM, Daniel P. Berrange wrote:
 On Tue, Jun 17, 2014 at 01:12:45PM +0100, Matthew Booth wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 17/06/14 12:36, Sean Dague wrote:
 On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
 If would be nice if gerrit had simple keyword tagging so any
 reviewer can tag an existing commit as trivial, but that
 doesn't seem to exist as a concept yet.

 So an alternative perhaps submit trivial stuff using a well
 known topic eg

 # git review --topic trivial

 Then you can just query all changes in that topic to find easy
 stuff to approve.

 It could go in the commit message:

 TrivialFix

 Then could be queried with - 
 https://review.openstack.org/#/q/message:TrivialFix,n,z

 If a reviewer felt it wasn't a trivial fix, they could just edit
 the commit message inline to drop it out.
 
 Yes, that would be a workable idea.
 
 +1. If possible I'd update the query to filter out anything with a -1.

 Where do we document these things? I'd be happy to propose a docs update.
 
 Lets see if any other nova cores dissent, but then can add it to these 2
 wiki pages
 
   https://wiki.openstack.org/wiki/ReviewChecklist
   
 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

Seems reasonable to me.

Of course, I just hope it doesn't put reviewers in a mode of only
looking for the trivial stuff and helping less with the big stuff.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Recommendations for the granularity of tasks and their stickiness to workers

2014-06-17 Thread Sandy Walsh
On 6/17/2014 7:04 AM, Eoghan Glynn wrote:
 Folks,

 A question for the taskflow ninjas.

 Any thoughts on best practice WRT $subject?

 Specifically I have in mind this ceilometer review[1] which adopts
 the approach of using very fine-grained tasks (at the level of an
 individual alarm evaluation) combined with short-term assignments
 to individual workers.

 But I'm also thinking of future potential usage of taskflow within
 ceilometer, to support partitioning of work over a scaled-out array
 of central agents.

 Does taskflow also naturally support a model whereby more chunky
 tasks (possibly including ongoing periodic work) are assigned to
 workers in a stickier fashion, such that re-balancing of workload
 can easily be triggered when a change is detected in the pool of
 available workers?

I don't think taskflow today is really focused on load balancing of
tasks. Something like gearman [1] might be better suited in the near term?

My understanding is that taskflow is really focused on in-process tasks
(with retry, restart, etc) and later will support distributed tasks. But
my data could be stale too. (jharlow?)

Even still, the decision of smaller tasks vs. chunky ones really comes
down to how much work you want to re-do if there is a failure. I've seen
some uses of taskflow where the breakdown of tasks seemed artificially
small. Meaning, the overhead of going back to the library on an
undo/rewind is greater than the undo itself.

-S

[1] http://gearman.org/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-17 Thread John Garbutt
On 12 June 2014 17:10, Sean Dague s...@dague.net wrote:
 On 06/12/2014 12:02 PM, Matt Riedemann wrote:


 On 6/12/2014 10:51 AM, Matthew Treinish wrote:
 On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:
 On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:

 I think it'd be OK to move them to the experimental queue and a periodic
 nightly job until the v2.1 stuff shakes out.  The v3 API is marked
 experimental right now so it seems fitting that it'd be running tests in
 the experimental queue until at least the spec is approved and
 microversioning starts happening in the code base.


 I think this is reasonable. Continuing to run the full set of tests on
 every patch for something we never expect to see the light of day (in its
 current form) seems wasteful to me. Plus, we're going to (presumably) be
 ramping up tests on v2.1, which means to me that we'll need to clear out
 some capacity to make room for that.


 That's true, though I was suggesting that as v2.1 microversions roll out we
 drop the tests out of v3 and move them to v2.1 microversions testing, so
 there's no change in capacity required.

 That's why I wasn't proposing that we rip the tests out of the tree. I'm
 just trying to weigh the benefit of leaving them enabled on every run
 against the increased load they cause in an arguably overworked gate.


 Matt - how much of the time overhead is scenario tests? That's something
 that would have a lot less impact if moved to an experimental queue.
 Although the v3 api as a whole won't be officially exposed, the api tests
 test specific features fairly independently, which are slated for
 v2.1 microversions on a case by case basis, and I don't want to see those
 regress. I guess my concern is how often the experimental queue results
 really get looked at and how hard/quick it is to revert when lots of stuff
 merges in a short period of time.

 The scenario tests tend to be the slower tests in tempest. I have to
 disagree that removing them would have lower impact. The scenario tests
 provide the best functional verification, which is part of the reason we
 always have failures in the gate on them. While it would make the gate
 faster, the decrease in what we're testing isn't worth it. Also, for
 reference, I pulled the test run times that were greater than 10sec out of
 a recent gate run:
 http://paste.openstack.org/show/83827/

 The experimental jobs aren't automatically run, they have to be manually
 triggered by leaving a 'check experimental' comment. So for changes
 that we want
 to test the v3 api on a comment would have to left. To prevent
 regression is why
 we'd also have the nightly job, which I think is a better compromise
 for the v3
 tests while we wait to migrate them to the v2.1 microversion tests.

 Another, option is that we make the v3 job run only on the check queue
 and not
 on the gate. But the benefits of that are slightly more limited,
 because we'd
 still be holding up the check queue.

 -Matt Treinish



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Yeah the scenario tests need to stay, that's how we've exposed the two
 big ssh bugs in the last couple of weeks which are obvious issues at scale.

 I still think experimental/periodic is the way to go, not a hybrid of
 check-on/gate-off.  If we want to explicitly test v3 API changes we can
 do that with 'recheck experimental'.  Granted someone has to remember to
 run those, much like checking/rechecking 3rd party CI results.

 One issue I've had with the nightly periodic job is finding out where
 the results are in an easy to consume format.  Is there something out
 there for that?  I'm thinking specifically of things we've turned off in
 the gate before like multi-backend volume tests and
 allow_tenant_isolation=False.

 It's getting emailed to the otherwise defunct openstack-qa list.
 Subscribe there for nightlies.

 Also agreed, the scenario tests find and prevent *tons* of real issues.
 Those have to stay. There is a reason we use them in the smoke runs for
 grenade, they are a very solid sniff test of real working.

 I also think by policy we should probably pull v3 out of the main job,
 as it's not a stable API. We've had issues in Tempest with people
 landing tests, then trying to go and change the API. The biggest issue
 in taking branchless tempest back to stable/havana was Nova v3 API, as
 it's actually quite different in havana than icehouse.

 We have a chicken / egg challenge in testing experimental APIs which
 will need to get resolved, but for now I think turning off v3 is the
 right approach.

+1

Seems like we should concentrate on v2 tests for now.

To stop v3 code from regressing, we should be merging and testing
v2.1, ASAP, using those v2 tests.

For the future micro versions, we still have those v3 test to be inspired by.

Chris, does that 

Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-17 Thread John Garbutt
 On 06/14/2014 12:40 AM, Michael Still wrote:
 Greetings,

 I would like to nominate Ken'ichi Ohmichi for the nova-core team.

 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.

 Please respond with +1s or any concerns.

+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Armando M.
I wonder what the turnaround on trivial patches actually is; I bet it's
very, very small, and as Daniel said, the human burden is rather minimal (I
would be more concerned about slowing them down in the gate, but I digress).

I think that introducing a two-tier level for patch approval can only
mitigate the problem, but I wonder if we'd need to go a lot further, and
rather figure out a way to borrow concepts from queueing theory so that
they can be applied in the context of Gerrit. For instance Little's law [1]
says:

The long-term average number of customers (in this context *reviews*) in a
stable system L is equal to the long-term average effective arrival rate,
λ, multiplied by the average time a customer spends in the system, W; or
expressed algebraically: L = λW.

L can be used to determine the number of core reviewers that a project will
need at any given time, in order to meet a certain arrival rate and average
time spent in the queue. If the number of core reviewers is a lot less than
L then that core team is understaffed and will need to increase.

If we figured out how to model and measure Gerrit as a queuing system, then
we could improve its performance a lot more effectively; for instance, this
idea of privileging trivial patches over longer patches has roots in a
popular scheduling policy [2] for M/G/1 queues [3], but that does not really
help aging of 'longer service time' patches and does not have a preemption
mechanism built-in to avoid starvation.

Just a crazy opinion...
Armando

[1] - http://en.wikipedia.org/wiki/Little's_law
[2] - http://en.wikipedia.org/wiki/Shortest_job_first
[3] - http://en.wikipedia.org/wiki/M/G/1_queue
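
As a purely illustrative back-of-the-envelope calculation (every number below is
made up, not a measurement of any real Gerrit queue), Little's law can be applied
like this:

# Hypothetical numbers only, to illustrate L = lambda * W.
arrival_rate = 40.0        # lambda: reviews arriving per day
avg_time_in_system = 5.0   # W: average days a review spends in the queue

in_flight = arrival_rate * avg_time_in_system
print("Average reviews in the system (L): %d" % in_flight)  # 200

# To keep W stable, the review (service) rate must at least match the
# arrival rate; with a hypothetical 8 thorough reviews per core per day:
reviews_per_core_per_day = 8.0
print("Cores needed to keep up: %d" % (arrival_rate / reviews_per_core_per_day))  # 5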


On 17 June 2014 14:12, Matthew Booth mbo...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 17/06/14 12:36, Sean Dague wrote:
  On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
  On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
  We all know that review can be a bottleneck for Nova
   patches. Not only that, but a patch lingering in review, no
  matter how trivial, will eventually accrue rebases which sap
  gate resources, developer time, and will to live.
 
  It occurs to me that there are a significant class of patches
  which simply don't require the attention of a core reviewer.
  Some examples:
 
  * Indentation cleanup/comment fixes * Simple code motion * File
  permission changes * Trivial fixes which are obviously correct
 
  The advantage of a core reviewer is that they have experience
  of the whole code base, and have proven their ability to make
  and judge core changes. However, some fixes don't require this
  level of attention, as they are self-contained and obvious to
  any reasonable programmer.
 
  Without knowing anything of the architecture of gerrit, I
  propose something along the lines of a '+1 (trivial)' review
  flag. If a review gained some small number of these, I suggest
  2 would be reasonable, it would be equivalent to a +2 from a
  core reviewer. The ability to set this flag would be a
  privilege. However, the bar to gaining this privilege would be
  low, and preferably automatically set, e.g. 5 accepted patches.
  It would be removed for abuse.
 
  Is this practical? Would it help?
 
  You are right that some types of fix are so straightforward that
  most reasonable programmers can validate them. At the same time
  though, this means that they also don't really consume
  significant review time from core reviewers.  So having
  non-cores' approve trivial fixes wouldn't really reduce the
  burden on core devs.
 
  The main positive impact would probably be a faster turn around
  time on getting the patches approved because it is easy for the
  trivial fixes to drown in the noise.
 
  IME any non-trivial change to gerrit is just not going to happen
  in any reasonably useful timeframe though. Perhaps an
  alternative strategy would be to focus on identifying which the
  trivial fixes are. If there was an good way to get a list of all
  pending trivial fixes, then it would make it straightforward for
  cores to jump in and approve those simple patches as a priority,
  to avoid them languishing too long.
 
  It would be nice if gerrit had simple keyword tagging so any
  reviewer can tag an existing commit as trivial, but that
  doesn't seem to exist as a concept yet.
 
  So an alternative perhaps submit trivial stuff using a well
  known topic eg
 
  # git review --topic trivial
 
  Then you can just query all changes in that topic to find easy
  stuff to approve.
 
  It could go in the commit message:
 
  TrivialFix
 
  Then could be queried with -
  https://review.openstack.org/#/q/message:TrivialFix,n,z
 
  If a reviewer felt it wasn't a trivial fix, they could just edit
  the commit message inline to drop it out.

 +1. If possible I'd update the query to filter out anything with a -1.

 Where do we document these things? I'd be happy to propose a docs update.

 Matt
 - --
 Matthew Booth
 Red 

Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-17 Thread John Garbutt
On 14 June 2014 00:48, Michael Still mi...@stillhq.com wrote:
 On Fri, Jun 13, 2014 at 1:00 PM, Christopher Yeoh cbky...@gmail.com wrote:
 On Fri, Jun 13, 2014 at 11:28 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 On 6/12/2014 5:58 PM, Christopher Yeoh wrote:
 On Fri, Jun 13, 2014 at 8:06 AM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:

 [Pretty heavy snipping to keep this reply short]

   - API changes should have an associated spec

 To compare, this [2] is an example of something that is updating an
 existing API but I don't think warrants a blueprint since I think it falls
 into the 'generally ok' section of the API change guidelines.

 So really I see this as a new feature, not a bug fix. Someone thought that
 detail was supported when writing the documentation but it never was. The
 documentation is NOT the canonical source for the behaviour of the API;
 currently the code should be seen as the reference. We've run into issues
 before where people have tried to align code to fit the documentation
 and made backwards incompatible changes (although this is not one).

 Perhaps we need a streamlined queue for very simple API changes, but I do
 think API changes should get more than the usual review because we have to
 live with them for so long (short of an emergency revert if we catch it in
 time).

 This is what I am getting at... It is sometimes hard to tell if an API
 change needs a spec or is a bug fix. The implications of making a
 mistake are large and painful. That's why I think we should push
 _every_ API change through the spec process. If it's deemed by the
 reviewers there to be a trivial change or bugfix, then the spec review
 should be super fast, right?

 I like removing the discretion here, because it will reduce our error rate.

It's super painful when we screw up API changes, particularly
for people deploying off trunk.

Are nova-specs' average turnaround times fast enough for this right
now? Probably not. But that's a separate issue we really need to fix.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-17 Thread John Garbutt
So I am +1 deprecating resize down, mostly for consistency reasons.

On 16 June 2014 10:34, Day, Phil philip@hp.com wrote:
 Beyond what is and isn’t technically possible at the file system level there
 is always the problem that the user may have more data than can fit into the
 reduced disk.

 I don’t want to take away useful functionality from folks if there are cases
 where it already works – mostly I just want to improve the user experience,
 and  to me the biggest problem here is the current failure mode where the
 user can’t tell if the request has been tried and failed, or just not
 happened at all for some other reason.

 What if we introduced a new state of “Resize_failed” from which the only
 allowed operations are “resize_revert” and delete – so the user can at least
 get some feedback on the cases that can’t be supported ?

In the XenAPI driver, the instance actions should report the error
that occurred and how to fix it (i.e. you have too much data in your
disk, delete some).

Longer term, the tasks API makes it much more obvious what happens
with async tasks that error, with a clean rollback.

Given we are leaning towards deprecation, I would rather we didn't add
an extra Resize_failed state.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about modifying instance attribute(such as cpu-QoS, disk-QoS ) without shutdown the instance

2014-06-17 Thread John Garbutt
I have seen some work and specs around this here:
https://blueprints.launchpad.net/nova/+spec/hot-resize

Hope that helps,
John

On 8 April 2014 17:45, Trump.Zhang zhangleiqi...@gmail.com wrote:
 Such as QoS attributes of vCPU, Memory and Disk, including IOPS limit,
 Bandwidth limit, etc.


 2014-04-08 23:04 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On Tue, 2014-04-08 at 08:30 +, Zhangleiqiang (Trump) wrote:
  Hi, Stackers,
 
For Amazon, after calling ModifyInstanceAttribute API , the
  instance must be stopped.
 
    In fact, the hypervisor can online-adjust these attributes. But
   amazon and openstack do not support it.
 
    So I want to know what your advice is about introducing the
   capability of online-adjusting these instance attributes?

 What kind of attributes?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread ZZelle
Hello,

I confirm ... one of my colleagues has the same trouble.


On Tue, Jun 17, 2014 at 9:57 AM, Sami J. Mäkinen sjm+osde...@hard.ware.fi
 wrote:


 Hello all.

 I am trying to sign the New Contributor Agreement to be able to
 submit a new blueprint for review.

 For several weeks now, I just always get an error message included below.
 I even suspected browser compatibility problems, but trying with Google
 Chrome, Opera or Firefox did not help at all.

 ***

 Code Review - Error
 Server Error
 Cannot store contact information
 (button) Continue

 ***

 Pls halp!

 BR,

 -sjm


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread Russell Bryant
On 06/17/2014 09:13 AM, ZZelle wrote:
 Hello,
 
 I confirm ... one of my college has the same trouble.

Make sure you're a member of the foundation first using the same email
address you're using in gerrit:

https://www.openstack.org/join/register/

For some more commentary, see comments on:

https://bugs.launchpad.net/openstack-org/+bug/1317957

 
 On Tue, Jun 17, 2014 at 9:57 AM, Sami J. Mäkinen
 sjm+osde...@hard.ware.fi wrote:
 
 
 Hello all.
 
 I am trying to sign the New Contributor Agreement to be able to
 submit a new blueprint for review.
 
 For several weeks now, I just always get an error message included
 below.
 I even suspected browser compatibility problems, but trying with Google
 Chrome, Opera or Firefox dit not help at all.
 
 ***
 
 Code Review - Error
 Server Error
 Cannot store contact information
 (button) Continue
 
 ***
 
 Pls halp!
 
 BR,
 
 -sjm
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] enabling support for execution of tempest tests for an incubated openstack project.

2014-06-17 Thread Malini Kamalambal
Hello,

We went through a similar exercise for Marconi.
At that point in time, there was no documentation on the series of steps 
to go through.
Below are some of the high level steps we went through.

  1.  Complete devstack integration. (Make sure you run devstack with the same 
local.conf as in the gate. We spent a painfully long time figuring out why the 
tests work in the local devstack instance but fail at the gate.)
  2.  Add experimental jobs to the Tempest and Barbican pipelines. These jobs won't be 
voting, and can be triggered with a 'check experimental' comment when you submit 
a new patch set. The point here is to prove that these jobs can eventually be 
made voting.
  3.  Once you feel comfortable about the experimental job in Step 2, make it 
voting in the Barbican pipeline.
  4.  The job becomes voting in Tempest once Barbican graduates.

Regards,
Malini

NOTE: The QA mailing list is deprecated. QA-related questions go to the 
openstack-dev mailing list with the [QA] prefix.

From: Belur, Meera meera.be...@hp.com
Reply-To: All Things QA. openstack...@lists.openstack.org
Date: Monday, June 16, 2014 3:47 PM
To: openstack-in...@lists.openstack.org, openstack...@lists.openstack.org
Subject: [openstack-qa] enabling support for execution of tempest tests for an 
incubated openstack project.

Hello All,
I have submitted a QA spec for adding tempest tests for the barbican project. 
Here's the link to gerrit review for the QA spec:
https://review.openstack.org/#/c/99978/
Currently Barbican is an incubated openstack project. In order to support 
enabling the execution of these gated tests, I understand there are changes to 
be implemented in several projects like the infra, devstack, devstack-gate.
I would like to find out whether the infra team would provide such support, or 
whether the QA individual working on these tests would have to make the necessary 
infrastructure changes themselves. Please help me figure out how to get started on 
these infrastructure changes. Essentially barbican will be disabled by default in 
tempest and will have a dedicated non-voting (for now) barbican job running 
these tests.
Regards,
Meera
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Julien Danjou
Hi guys,

So I've started to look at the configuration files used by Glance and I
want to switch to one configuration file only.
I stumbled upon this blueprint:

  https://blueprints.launchpad.net/glance/+spec/use-oslo-config

which fits.

Does not look like I can assign myself to it, but if someone can do so,
go ahead.

So I've started to work on that, and I got it working. My only problem
right now concerns the [paste_deploy] options provided by
Glance. I'd like to remove this section altogether, as it's not possible
to have it and still have the same configuration file read by both glance-api
and glance-registry.
My idea is also to unify glance-api-paste.ini and
glance-registry-paste.ini into glance-paste.ini and then have each
server read its default pipeline (pipeline:glance-api).
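
As a rough sketch of that idea (the glance-paste.ini path and pipeline names below
are hypothetical, only meant to illustrate the selection mechanism), each server
could load its own named pipeline from the one shared paste file:

# Hypothetical: both servers read the same paste file but ask for
# different named pipelines.
from paste.deploy import loadapp

api_app = loadapp("config:/etc/glance/glance-paste.ini",
                  name="glance-api")
registry_app = loadapp("config:/etc/glance/glance-paste.ini",
                       name="glance-registry")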

Does that sound reasonable to everyone?

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-17 Thread Doug Hellmann
On Tue, Jun 17, 2014 at 4:36 AM, Matthew Booth mbo...@redhat.com wrote:
 On 17/06/14 00:28, Joshua Harlow wrote:
 So this is a reader/write lock then?

 I have seen https://github.com/python-zk/kazoo/pull/141 come up in the
 kazoo (zookeeper python library) but there was a lack of a maintainer for
 that 'recipe', perhaps if we really find this needed we can help get that
 pull request 'sponsored' so that it can be used for this purpose?


 As far as resiliency, the thing I was thinking about was how correct you
 want this lock to be.

 If you go with memcached and a locking mechanism using it, this will not
 be correct, but it might work well enough under normal usage. So that's why
 I was wondering about what level of correctness you want and what you
 want to happen if a server that is maintaining the lock record dies.
 In memcached's case this will literally be 1 server, even if sharding is
 being used, since a key hashes to one server. So if that one server goes
 down (or a network split happens) then it is possible for two entities to
 believe they own the same lock (and if the network split recovers this
 gets even weirder); so that's what I was wondering about when mentioning
 resiliency and how much incorrectness you are willing to tolerate.

 From my POV, the most important things are:

 * 2 nodes must never believe they hold the same lock
 * A node must eventually get the lock

 I was expecting to implement locking on all three backends as long as
 they support it. I haven't looked closely at memcached, but if it can
 detect a split it should be able to have a fencing race with the
 possible lock holder before continuing. This is obviously undesirable,
 as you will probably be fencing an otherwise correctly functioning node,
 but it will be correct.

There's a team working on a pluggable library for distributed
coordination: http://git.openstack.org/cgit/stackforge/tooz
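
For what it's worth, a minimal sketch of taking a distributed lock through tooz
(the backend URL, member id and lock name below are assumptions for illustration,
not part of any agreed design):

# Hypothetical example of a distributed lock via tooz; assumes a ZooKeeper
# server is reachable at the given address.
from tooz import coordination

coordinator = coordination.get_coordinator("zookeeper://127.0.0.1:2181",
                                            b"compute-node-1")
coordinator.start()

lock = coordinator.get_lock(b"image-cache-0123abcd")
if lock.acquire(blocking=True):
    try:
        # ... copy the cached image while the lock is held ...
        pass
    finally:
        lock.release()

coordinator.stop()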

Doug


 Matt


 -Original Message-
 From: Matthew Booth mbo...@redhat.com
 Organization: Red Hat
 Date: Friday, June 13, 2014 at 1:40 AM
 To: Joshua Harlow harlo...@yahoo-inc.com, OpenStack Development Mailing
 List (not for usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Distributed locking

 On 12/06/14 21:38, Joshua Harlow wrote:
 So just a few thoughts before going to far down this path,

 Can we make sure we really really understand the use-case where we think
 this is needed. I think it's fine that this use-case exists, but I just
 want to make it very clear to others why its needed and why distributing
 locking is the only *correct* way.

 An example use of this would be side-loading an image from another
 node's image cache rather than fetching it from glance, which would have
 very significant performance benefits in the VMware driver, and possibly
 other places. The copier must take a read lock on the image to prevent
 the owner from ageing it during the copy. Holding a read lock would also
 assure the copier that the image it is copying is complete.

 This helps set a good precedent for others that may follow down this
 path
 that they also clearly explain the situation, how distributed locking
 fixes it and all the corner cases that now pop-up with distributed
 locking.

 Some of the questions that I can think of at the current moment:

 * What happens when a node goes down that owns the lock, how does the
 software react to this?

 This can be well defined according to the behaviour of the backend. For
 example, it is well defined in zookeeper when a node's session expires.
 If the lock holder is no longer a valid node, it would be fenced before
 deleting its lock, allowing other nodes to continue.

 Without fencing it would not be possible to safely continue in this case.

 * What resources are being locked; what is the lock target, what is its
 lifetime?

 These are not questions for a locking implementation. A lock would be
 held on a name, and it would be up to the api user to ensure that the
 protected resource is only used while correctly locked, and that the
 lock is not held longer than necessary.

 * What resiliency do you want this lock to provide (this becomes a
 critical question when considering memcached, since memcached is not
 really the best choice for a resilient distributing locking backend)?

 What does resiliency mean in this context? We really just need the lock
 to be correct

 * What do entities that try to acquire a lock do when they can't acquire
 it?

 Typically block, but if a use case emerged for trylock() it would be
 simple to implement. For example, in the image side-loading case we may
 decide that if it isn't possible to immediately acquire the lock it
 isn't worth waiting, and we just fetch it from glance anyway.

 A useful thing I wrote up a while ago, might still be useful:

 https://wiki.openstack.org/wiki/StructuredWorkflowLocks

 Feel free to move that wiki if u find it useful (its sorta a high-level
 doc on the different 

Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Dustin Lundquist
We have an Etherpad going here:
https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon


Dustin


On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman avish...@radware.com
wrote:

 Hi
 As the LBaaS mid-cycle sprint starts today, is there any way to track and
 understand the progress (without flying to Texas... )

 Thanks

 Avishay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-17 Thread Ken'ichi Ohmichi
2014-06-17 21:39 GMT+09:00 John Garbutt j...@johngarbutt.com:
 On 12 June 2014 17:10, Sean Dague s...@dague.net wrote:
 On 06/12/2014 12:02 PM, Matt Riedemann wrote:


 On 6/12/2014 10:51 AM, Matthew Treinish wrote:
 On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:
 On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:

 I think it'd be OK to move them to the experimental queue and a
 periodic
 nightly job until the v2.1 stuff shakes out.  The v3 API is marked
 experimental right now so it seems fitting that it'd be running
 tests in
 the experimental queue until at least the spec is approved and
 microversioning starts happening in the code base.


 I think this is reasonable. Continuing to run the full set of tests on
 every patch for something we never expect to see the light of day
 (in its
 current form) seems wasteful to me. Plus, we're going to
 (presumably) be
 ramping up tests on v2.1, which means to me that we'll need to clear
 out
 some capacity to make room for that.


 Thats true, though I was suggesting as v2.1microversions rolls out
 we drop
 the test out of v3 and move it to v2.1microversions testing, so
 there's no
 change in capacity required.

 That's why I wasn't proposing that we rip the tests out of the tree.
 I'm just
 trying to weigh the benefit of leaving them enabled on every run against
 the increased load they cause in an arguably overworked gate.


 Matt - how much of the time overhead is scenario tests? That's something
 that would have a lot less impact if moved to and experimental queue.
 Although the v3 api as a whole won't be officially exposed, the api
 tests
 test specific features fairly indepdently which are slated for
 v2.1microversions on a case by case basis and I don't want to see those
 regress. I guess my concern is how often the experimental queue
 results get
 really looked at and how hard/quick it is to revert when lots of stuff
 merges in a short period of time)

 The scenario tests tend to be the slower tests in tempest. I have to
 disagree
 that removing them would have lower impact. The scenario tests provide
 the best
 functional verification, which is part of the reason we always have
 failures in
 the gate on them. While it would make the gate faster the decrease in
 what were
 testing isn't worth it. Also, for reference I pulled the test run
 times that
 were greater than 10sec out of a recent gate run:
 http://paste.openstack.org/show/83827/

 The experimental jobs aren't automatically run, they have to be manually
 triggered by leaving a 'check experimental' comment. So for changes
 that we want
 to test the v3 api on a comment would have to left. To prevent
 regression is why
 we'd also have the nightly job, which I think is a better compromise
 for the v3
 tests while we wait to migrate them to the v2.1 microversion tests.

 Another, option is that we make the v3 job run only on the check queue
 and not
 on the gate. But the benefits of that are slightly more limited,
 because we'd
 still be holding up the check queue.

 -Matt Treinish



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Yeah the scenario tests need to stay, that's how we've exposed the two
 big ssh bugs in the last couple of weeks which are obvious issues at scale.

 I still think experimental/periodic is the way to go, not a hybrid of
 check-on/gate-off.  If we want to explicitly test v3 API changes we can
 do that with 'recheck experimental'.  Granted someone has to remember to
 run those, much like checking/rechecking 3rd party CI results.

 One issue I've had with the nightly periodic job is finding out where
 the results are in an easy to consume format.  Is there something out
 there for that?  I'm thinking specifically of things we've turned off in
 the gate before like multi-backend volume tests and
 allow_tenant_isolation=False.

 It's getting emailed to the otherwise defunct openstack-qa list.
 Subscribe there for nightlies.

 Also agreed, the scenario tests find and prevent *tons* of real issues.
 Those have to stay. There is a reason we use them in the smoke runs for
 grenade, they are a very solid sniff test of real working.

 I also think by policy we should probably pull v3 out of the main job,
 as it's not a stable API. We've had issues in Tempest with people
 landing tests, then trying to go and change the API. The biggest issue
 in taking branchless tempest back to stable/havana was Nova v3 API, as
 it's actually quite different in havana than icehouse.

 We have a chicken / egg challenge in testing experimental APIs which
 will need to get resolved, but for now I think turning off v3 is the
 right approach.

 +1

 Seems like we should concentrate on v2 tests for now.

 To stop v3 code from regressing, we should be merging and testing
 v2.1, ASAP, using those v2 tests.

Yes, right.
We have already 

[openstack-dev] [UX] Meeting Reminder - Jun 18th, 14:30 UTC

2014-06-17 Thread Jaromir Coufal

Hi UXers,

this is a reminder that our next regular IRC meeting is happening 
tomorrow (Wednesday) June 18th at 14:30 UTC at #openstack-meeting-3.


Agenda: https://wiki.openstack.org/wiki/Meetings/UX
Feel free to add topics which you are interested in.

See you all tomorrow
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Flavio Percoco

On 17/06/14 15:59 +0200, Julien Danjou wrote:

Hi guys,

So I've started to look at the configuration file used by Glance and I
want to switch to one configuration file only.
I stumbled upon this blueprint:

 https://blueprints.launchpad.net/glance/+spec/use-oslo-config



w.r.t using config.generator https://review.openstack.org/#/c/83327/


which fits.

Does not look like I can assign myself to it, but if someone can do so,
go ahead.

So I've started to work on that, and I got it working. My only problem
right now, concerned the [paste_deploy] options that is provided by
Glance. I'd like to remove this section altogether, as it's not possible
to have it and have the same configuration file read by both glance-api
and glance-registry.
My idea is also to unify glance-api-paste.ini and
glance-registry-paste.ini into glance-paste.ini and then have each
server reads their default pipeline (pipeline:glance-api).

Does that sounds reasonable to everyone?


+1, it sounds like a good idea. I don't think we need to maintain 2
separate config files, especially now that the registry service is
optional.

Thanks for working on this.
Flavio

--
@flaper87
Flavio Percoco


pgpN86biMnyui.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute vfsguestfs

2014-06-17 Thread Richard W.M. Jones
On Fri, Jun 13, 2014 at 03:06:25PM +0530, abhishek jain wrote:
 Hi Rich
 
 Can you help me regarding the possible cause for a VM getting stuck in the spawning
 state on an Ubuntu PowerPC compute node in OpenStack using devstack.

Did you solve this one?  It's impossible to debug unless you collect
the full debugging information.  See also:

  http://libguestfs.org/guestfs-faq.1.html#how-do-i-debug-when-using-the-api
  https://bugs.launchpad.net/nova/+bug/1279857

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] GenericDriver cinder volume error during manila create

2014-06-17 Thread Swartzlander, Ben
On Mon, 2014-06-16 at 23:06 +0530, Deepak Shetty wrote:
 I am trying devstack on F20 setup with Manila sources.

 When I am trying to do
 manila create --name cinder_vol_share_using_nfs2 --share-network-id
 36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2 

 I see the below error in c-vol, due to which, even though my service VM is
 started, manila create errors out as the cinder volume is not getting
 exported over iSCSI

 2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
 [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774
 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a
 6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created
 as raw with specification: {'status': u'creating', 'volume_size': 2,
 'volume_name': u'volume-8bfd
 424d-9877-4c20-a9d1-058c06b9bdda'}
 2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
 [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774
 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a6d5c
 795095] Running cmd (subprocess): sudo
 cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n
 volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from
 (pid=4
 623)
 execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
 2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
 [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774
 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a
 6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 (8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
 2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped
 when creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2',
 'tenant': u'b65a066f32df4aca80
 fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2
 b65a066f32df4aca80fa9a6d5c795095 - - -'}
 2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095]
 Volume 
 8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623)
 initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
 2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095]
 Creat
 ing iscsi_target for: volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095]
 Crea
 ted volume
 path 
 /opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
 content: 
 target
 iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 lld iscsi
 IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
 /target
 from (pid=4623)
 create_iscsi_target /opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
 2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
 795095] Running cmd (subprocess): sudo
 cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
 iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
 a from (pid=4623)
 execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
 2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
 795095] Result was 107 from (pid=4623)
 execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
 2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Fa
 iled to create iscsi target for volume
 id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
 running command.
 Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin
 --update
 iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 Exit code: 107
 Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target
 --tid 1 -T
 iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
 \nexited with code: 107.\n'
 Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport
 endpoint is not connected\ntgtadm: failed to send request hdr to tgt
 daemon, Transport endpoint is not connected\ntgtadm: failed to send
 request hdr to tgt daemon, Transport endpoint is not connected
 \ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint
 is not connected\n'
 2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher
 [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095]
 Exception during message handling: Failed to create iscsi target for
 volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda.
 2014-06-16 

Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Kuvaja, Erno
I do not like this idea. As it is now, we have 5 different config files (+ policy 
and schema). One for each (API and Registry) would still be OK, but putting them all 
together would just become messy.

If the *-paste.ini files were migrated to .conf files that would bring the count down, 
but please do not try to mix registry and API configs together.

- Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file
 
 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   https://blueprints.launchpad.net/glance/+spec/use-oslo-config
 
 
 w.r.t using config.generator https://review.openstack.org/#/c/83327/
 
 which fits.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
 So I've started to work on that, and I got it working. My only problem
 right now, concerned the [paste_deploy] options that is provided by
 Glance. I'd like to remove this section altogether, as it's not
 possible to have it and have the same configuration file read by both
 glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
 glance-registry-paste.ini into glance-paste.ini and then have each
 server reads their default pipeline (pipeline:glance-api).
 
 Does that sounds reasonable to everyone?
 
 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.
 
 Thanks for working on this.
 Flavio
 
 --
 @flaper87
 Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-17 Thread Richard W.M. Jones
On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
 Theoretically impossible to reduce disk unless you have some really nasty
 guest additions.

True for live resizing.

For dead resizing, libguestfs + virt-resize can do it.  Although I
wouldn't necessarily recommend it.  In almost all cases where someone
wants to shrink a disk, IMHO it is better to sparsify it instead
(ie. virt-sparsify).

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Brian Rosmaita
I agree with Erno.  I think that the glance registry service being optional is 
a good argument for keeping its config separate rather than munging it into the 
API config.

rosmaita

From: Kuvaja, Erno [kuv...@hp.com]
Sent: Tuesday, June 17, 2014 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Unifying configuration file

I do not like this idea. As now we are on 5 different config files (+ policy 
and schema). One for each (API and Registry) would still be ok, but putting all 
together would just become messy.

If the *-paste.ini will be migrated to .conf files that would bring it down, 
but please do not try to mix reg and API configs together.

- Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file

 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   https://blueprints.launchpad.net/glance/+spec/use-oslo-config
 

 w.r.t using config.generator https://review.openstack.org/#/c/83327/

 which fits.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
 So I've started to work on that, and I got it working. My only problem
 right now, concerned the [paste_deploy] options that is provided by
 Glance. I'd like to remove this section altogether, as it's not
 possible to have it and have the same configuration file read by both
 glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
 glance-registry-paste.ini into glance-paste.ini and then have each
 server reads their default pipeline (pipeline:glance-api).
 
 Does that sounds reasonable to everyone?

 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.

 Thanks for working on this.
 Flavio

 --
 @flaper87
 Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread Stefano Maffulli
Hi Sami,

sorry for this. I suspect there is something going on with gerrit which
throws an inappropriate error message (more below).

On 06/17/2014 12:57 AM, Sami J. Mäkinen wrote:
 For several weeks now, I just always get an error message included below.

In these cases and in general, don't wait several weeks to ask for help:
jump on IRC and ask for help there. People there will point you in the
right direction.

Now, on how to solve the issue:

 ***
 
 Code Review - Error
 Server Error
 Cannot store contact information
 (button) Continue
 
 ***

I and Anita helped another new developer yesterday, on IRC, with a
similar issue. As Russell Bryant suggested make sure you have followed
*to the letter* the instructions on
https://wiki.openstack.org/wiki/HowToContribute#Contributors_License_Agreement,
including

make sure to give on https://www.openstack.org/join/ the same
 E-mail address you'll use for code contributions, since the
 Primary Email Address in your Foundation Profile will need to
 match the Preferred Email you set later in your Gerrit contact
 information.


If you already did that, on
https://review.openstack.org/#/settings/contact make sure that you have
entered valid details for you mailing address, country and phone number.
Valid contact information is required but I suspect that gerrit either
doesn't save it correctly all the time or doesn't throw any error if the
data is left empty. Try filling in the form and save changes. You should
have a warning on that page saying something like Contact information
last updated on $DATE at $TIME.

Hop on IRC to debug this further.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Zhi Yan Liu
Frankly I don't like the idea of using a single configuration file for all
services either. I think it would be cool if we could generate separate
configuration template files automatically for each Glance service. So
besides https://review.openstack.org/#/c/83327/ , I'm actually working
on that idea as well, to allow deployers to generate separate
configuration files on demand, and then probably we could move those
templates out of the code repo.

But I like your idea for the paste.ini template part.

zhiyan

On Tue, Jun 17, 2014 at 10:29 PM, Kuvaja, Erno kuv...@hp.com wrote:
 I do not like this idea. As now we are on 5 different config files (+ policy 
 and schema). One for each (API and Registry) would still be ok, but putting 
 all together would just become messy.

 If the *-paste.ini will be migrated to .conf files that would bring it down, 
 but please do not try to mix reg and API configs together.

 - Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file

 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   https://blueprints.launchpad.net/glance/+spec/use-oslo-config
 

 w.r.t using config.generator https://review.openstack.org/#/c/83327/

 which fits.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
 So I've started to work on that, and I got it working. My only problem
 right now, concerned the [paste_deploy] options that is provided by
 Glance. I'd like to remove this section altogether, as it's not
 possible to have it and have the same configuration file read by both
 glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
 glance-registry-paste.ini into glance-paste.ini and then have each
 server reads their default pipeline (pipeline:glance-api).
 
 Does that sounds reasonable to everyone?

 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.

 Thanks for working on this.
 Flavio

 --
 @flaper87
 Flavio Percoco
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] Do all turbo-hipster jobs fail in stable/havana?

2014-06-17 Thread Matt Riedemann



On 6/16/2014 11:58 PM, Joshua Hesketh wrote:

Hi there,

Very sorry for the mishap. I manually enqueued our zuul to run tests on
changes that turbo-hipster had recently missed and did not pay attention
to the branch they were for.

Turbo-Hipster doesn't run tests on stable or non-master branches so it
should have never attempted to. Because I enqueued the changes manually
it accidentally attempted to run them and didn't know how to handle it
correctly.

I have removed the negative votes. Please let me know if I have missed any.

Sorry again for the trouble.

Cheers,
Josh

On 6/17/14 11:44 AM, wu jiang wrote:

Hi all,

Is turbo-hipster OK for stable/havana?

I found all turbo-hipster jobs after 06/09 failed in stable/havana [1].
And the 'recheck migrations' command didn't trigger a re-examination
by turbo-hipster, but the Jenkins recheck does work.

Thanks.

WingWJ

---

[1]
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/havana,n,z


[2] https://review.openstack.org/#/c/67613/
[3] https://review.openstack.org/#/c/72521/
[4] https://review.openstack.org/#/c/98874/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Yeah I have some on stable/icehouse with -1 votes from t-h:

https://review.openstack.org/#/c/99215/
https://review.openstack.org/#/c/97811/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread John Griffith
On Tue, Jun 17, 2014 at 6:51 AM, Armando M. arma...@gmail.com wrote:

 I wonder what the turnaround of trivial patches actually is, I bet you
 it's very very small, and as Daniel said, the human burden is rather
 minimal (I would be more concerned about slowing them down in the gate, but
 I digress).

 I think that introducing a two-tier level for patch approval can only
 mitigate the problem, but I wonder if we'd need to go a lot further, and
 rather figure out a way to borrow concepts from queueing theory so that
 they can be applied in the context of Gerrit. For instance Little's law [1]
 says:

 The long-term average number of customers (in this context *reviews*) in
 a stable system L is equal to the long-term average effective arrival rate,
 λ, multiplied by the average time a customer spends in the system, W; or
 expressed algebraically: L = λW.

 L can be used to determine the number of core reviewers that a project
 will need at any given time, in order to meet a certain arrival rate and
 average time spent in the queue. If the number of core reviewers is a lot
 less than L then that core team is understaffed and will need to increase.

 If we figured out how to model and measure Gerrit as a queuing system,
 then we could improve its performance a lot more effectively; for instance,
 this idea of privileging trivial patches over longer patches has roots in a
 popular scheduling policy [3] for  M/G/1 queues, but that does not really
 help aging of 'longer service time' patches and does not have a preemption
 mechanism built-in to avoid starvation.

 Just a crazy opinion...
 Armando

 [1] - http://en.wikipedia.org/wiki/Little's_law
 [2] - http://en.wikipedia.org/wiki/Shortest_job_first
 [3] - http://en.wikipedia.org/wiki/M/G/1_queue


 On 17 June 2014 14:12, Matthew Booth mbo...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 17/06/14 12:36, Sean Dague wrote:
  On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
  On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
  We all know that review can be a bottleneck for Nova
  patches.Not only that, but a patch lingering in review, no
  matter how trivial, will eventually accrue rebases which sap
  gate resources, developer time, and will to live.
 
  It occurs to me that there are a significant class of patches
  which simply don't require the attention of a core reviewer.
  Some examples:
 
  * Indentation cleanup/comment fixes * Simple code motion * File
  permission changes * Trivial fixes which are obviously correct
 
  The advantage of a core reviewer is that they have experience
  of the whole code base, and have proven their ability to make
  and judge core changes. However, some fixes don't require this
  level of attention, as they are self-contained and obvious to
  any reasonable programmer.
 
  Without knowing anything of the architecture of gerrit, I
  propose something along the lines of a '+1 (trivial)' review
  flag. If a review gained some small number of these, I suggest
  2 would be reasonable, it would be equivalent to a +2 from a
  core reviewer. The ability to set this flag would be a
  privilege. However, the bar to gaining this privilege would be
  low, and preferably automatically set, e.g. 5 accepted patches.
  It would be removed for abuse.
 
  Is this practical? Would it help?
 
  You are right that some types of fix are so straightforward that
  most reasonable programmers can validate them. At the same time
  though, this means that they also don't really consume
  significant review time from core reviewers.  So having
  non-cores' approve trivial fixes wouldn't really reduce the
  burden on core devs.
 
  The main positive impact would probably be a faster turn around
  time on getting the patches approved because it is easy for the
  trivial fixes to drown in the noise.
 
  IME any non-trivial change to gerrit is just not going to happen
  in any reasonably useful timeframe though. Perhaps an
  alternative strategy would be to focus on identifying which the
  trivial fixes are. If there was an good way to get a list of all
  pending trivial fixes, then it would make it straightforward for
  cores to jump in and approve those simple patches as a priority,
  to avoid them languishing too long.
 
  If would be nice if gerrit had simple keyword tagging so any
  reviewer can tag an existing commit as trivial, but that
  doesn't seem to exist as a concept yet.
 
  So an alternative perhaps submit trivial stuff using a well
  known topic eg
 
  # git review --topic trivial
 
  Then you can just query all changes in that topic to find easy
  stuff to approve.
 
  It could go in the commit message:
 
  TrivialFix
 
  Then could be queried with -
  https://review.openstack.org/#/q/message:TrivialFix,n,z
 
  If a reviewer felt it wasn't a trivial fix, they could just edit
  the commit message inline to drop it out.

 +1. If possible I'd update the query to filter out anything with a -1.

 Where do 

Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-17 Thread Tian, Shuangtai
+1

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com] 
Sent: Saturday, June 14, 2014 6:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews on API 
changes are excellent, and he's been part of the team that has driven the new 
API work we've seen in recent cycles forward. Ken'ichi has also been reviewing 
other parts of the code base, and I think his reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

  
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

  https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

  http://www.stackalytics.com/?module=nova-group&user_id=oomichi

As a reminder, we use the voting process outlined at 
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team.

Thanks,
Michael

--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Kyle Mestery
Also, pop into #openstack-lbaas on Freenode, we have people there
monitoring the channel.

On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net wrote:
 We have an Etherpad going here:
 https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon


 Dustin


 On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman avish...@radware.com
 wrote:

 Hi
 As the LBaaS mid-cycle sprint starts today, is there any way to track and
 understand the progress (without flying to Texas... )

 Thanks

 Avishay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-17 Thread Richard W.M. Jones
On Tue, Jun 17, 2014 at 10:56:36AM -0400, Russell Bryant wrote:
 On 06/17/2014 10:43 AM, Richard W.M. Jones wrote:
  On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
  Theoretically impossible to reduce disk unless you have some really nasty
  guest additions.
  
  True for live resizing.
  
  For dead resizing, libguestfs + virt-resize can do it.  Although I
  wouldn't necessarily recommend it.  In almost all cases where someone
  wants to shrink a disk, IMHO it is better to sparsify it instead
  (ie. virt-sparsify).
 
 FWIW, the resize operation in OpenStack is a dead one.

advert

In = 1.26, `virt-sparsify --in-place' is very fast, doesn't copy, and
doesn't need mountains of temporary space (unlike the copying mode
virt-sparsify).

http://libguestfs.org/virt-sparsify.1.html#in-place-sparsification

/advert
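
For anyone who wants to try it, the in-place mode operates directly on the
image file, e.g. (the image path below is made up purely for illustration):

    import subprocess

    # Sparsify the guest image in place; unlike the copying mode, no large
    # temporary file is created (requires libguestfs >= 1.26).
    subprocess.check_call(['virt-sparsify', '--in-place',
                           '/var/lib/libvirt/images/guest.qcow2'])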

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-17 Thread Russell Bryant
On 06/17/2014 10:43 AM, Richard W.M. Jones wrote:
 On Fri, Jun 13, 2014 at 06:12:16AM -0400, Aryeh Friedman wrote:
 Theoretically impossible to reduce disk unless you have some really nasty
 guest additions.
 
 True for live resizing.
 
 For dead resizing, libguestfs + virt-resize can do it.  Although I
 wouldn't necessarily recommend it.  In almost all cases where someone
 wants to shrink a disk, IMHO it is better to sparsify it instead
 (ie. virt-sparsify).

FWIW, the resize operation in OpenStack is a dead one.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Dustin Lundquist
Actually the channel name is #neutron-lbaas.


On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery mest...@noironetworks.com
wrote:

 Also, pop into #openstack-lbaas on Freenode, we have people there
 monitoring the channel.

 On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  We have an Etherpad going here:
  https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon
 
 
  Dustin
 
 
  On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman avish...@radware.com
 
  wrote:
 
  Hi
  As the LBaaS mid-cycle sprint starts today, is there any way to track
 and
  understand the progress (without flying to Texas... )
 
  Thanks
 
  Avishay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipv6] Do you have problem accessing IRC

2014-06-17 Thread Collins, Sean
On Tue, Jun 17, 2014 at 10:10:26AM EDT, Shixiong Shang wrote:
 Trying to join the weekly meeting, but my IRC client kept complaining…Is it 
 just me?
 
If you have problems, you can always use the web client:

http://webchat.freenode.net/

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-17 Thread Pádraig Brady
On 06/13/2014 02:22 PM, Day, Phil wrote:
 I guess the question I’m really asking here is:  “Since we know resize down 
 won’t work in all cases,
 and the failure if it does occur will be hard for the user to detect,
 should we just block it at the API layer and be consistent across all 
 Hypervisors ?”

+1

There is an existing libvirt blueprint:
  https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
which I've never been in favor of:
  https://bugs.launchpad.net/nova/+bug/1270238/comments/1
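
To illustrate the kind of API-layer guard being suggested (this is only a
sketch with made-up names, not Nova's actual validation code):

    def validate_resize(old_flavor, new_flavor):
        """Reject resizes that would shrink the instance's disks.

        old_flavor/new_flavor are assumed to be dicts exposing root_gb and
        ephemeral_gb, as Nova flavors do; the exception type is illustrative.
        """
        if (new_flavor['root_gb'] < old_flavor['root_gb'] or
                new_flavor['ephemeral_gb'] < old_flavor['ephemeral_gb']):
            raise ValueError("Resizing to a smaller disk is not supported")

Doing this once in the API would give users a clear, immediate error instead
of a hypervisor-dependent failure later on.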

thanks,
Pádraig.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Arnaud Legendre
@ZhiYan: I don't like the idea of removing the sample configuration file(s) 
from the git repository. Many people do not want to have to checkout the entire 
codebase and tox every time they have to verify a variable name in a 
configuration file. I know many people who were really frustrated when they 
realized that the sample config file was gone from the Nova repo.
However, I agree with the fact that it would be better if the sample was 100% 
accurate: so the way I would love to see this working is to generate the sample 
file every time there is a config change (this being totally automated (maybe 
at the gate level...)).

@Julien: I would be interested to understand the value that you see in having 
only one config file. At this point, I don't see why managing one file is more 
complicated than managing several files, especially when they are organized by 
categories. Also, scrolling through the registry settings every time I want to 
modify an API setting seems to add some overhead.

Thanks,
Arnaud


- Original Message -
From: Zhi Yan Liu lzy@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, June 17, 2014 7:47:53 AM
Subject: Re: [openstack-dev] [glance] Unifying configuration file

Frankly, I don't like the idea of using a single configuration file for all
services either. I think it would be better if we could generate separate
configuration template files automatically for each Glance service. So
besides https://review.openstack.org/#/c/83327/ , I'm actually working
on that idea as well, to allow deployers to generate separate
configuration files on demand; then we could probably move those
templates out of the code repo.

But I like your idea for paste.ini template part.

zhiyan

On Tue, Jun 17, 2014 at 10:29 PM, Kuvaja, Erno kuv...@hp.com wrote:
 I do not like this idea. As now we are on 5 different config files (+ policy 
 and schema). One for each (API and Registry) would still be ok, but putting 
 all together would just become messy.

 If the *-paste.ini will be migrated to .conf files that would bring it down, 
 but please do not try to mix reg and API configs together.

 - Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file

 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   
  https://urldefense.proofpoint.com/v1/url?u=https://blueprints.launchpad.net/glance/%2Bspec/use-oslo-configk=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=5wWaXo2oVaivfKLCMyU6Z9UTO8HOfeGCzbGHAT4gZpo%3D%0Am=QTguordmDDZNC%2FRUVedjVKf5cPErz5dhlJAZA56YqWU%3D%0As=ce068ea89b0fbf4260f6f8f18758f99b407536ec391c7c7392a079fc550ba468
 

 w.r.t using config.generator https://review.openstack.org/#/c/83327/

 which fits.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
  So I've started to work on that, and I got it working. My only problem
  right now concerns the [paste_deploy] options provided by
 Glance. I'd like to remove this section altogether, as it's not
 possible to have it and have the same configuration file read by both
 glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
 glance-registry-paste.ini into glance-paste.ini and then have each
  server read its default pipeline (pipeline:glance-api).
 
  Does that sound reasonable to everyone?

 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.
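
To make the proposal concrete, a single glance-paste.ini could look roughly
like this (a sketch only; the filter names and app factory paths are written
from memory and may not match the tree exactly):

    [pipeline:glance-api]
    pipeline = versionnegotiation authtoken context apiv1app

    [pipeline:glance-registry]
    pipeline = authtoken context registryapp

    [app:apiv1app]
    paste.app_factory = glance.api.v1.router:API.factory

    [app:registryapp]
    paste.app_factory = glance.registry.api.v1:API.factory

Each server would then simply load the pipeline named after itself from the
shared file.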

 Thanks for working on this.
 Flavio

 --
 @flaper87
 Flavio Percoco
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Package composition

2014-06-17 Thread McLellan, Steven
Hi,

I'm trying to figure out what we'll be able to support in the near/far term 
with regards to packaging applications. My understanding of the DSL is 
incomplete and I'm trying to fix that :)  This isn't so much a usage question 
as for assistance writing documentation.

Taking Wordpress as the ever-present example, imagine I want to present one 
choice to a user:

* Install it and mysql on one host

* Install it on one host, and mysql on another host

This also implies that I have two Applications defined; one which describes how 
to install Wordpress (and takes a database host/user/password as arguments, as 
well as an instance), and one describing how to install a database (which maybe 
also takes a username and password as arguments).

In terms of packaging, what's the best way to represent this? What would make 
sense to me is to have the database Application defined in a library somewhere, 
and the Wordpress Application and the logic and UI to take input and make 
decisions in their own package.

This kind of composition seems very similar to the SQLServer/ActiveDirectory 
example but it's hard working through the example. It was mentioned at the 
summit that we need some good examples of what can be done above Heat (and the 
reason this is a good example is because there's a direct comparison to Thomas 
Spatzier's softwareconfig example heat templates).

I'm happy to write something up but I'd like some input first.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Tom Fifield

On 17/06/14 23:30, Arnaud Legendre wrote:

@ZhiYan: I don't like the idea of removing the sample configuration file(s) 
from the git repository. Many people do not want to have to checkout the entire 
codebase and tox every time they have to verify a variable name in a 
configuration file. I know many people who were really frustrated where they 
realized that the sample config file was gone from the Nova repo.


For reference, see also the recent discussion around cinder.conf.sample: 
https://review.openstack.org/#/c/96581/ to learn more about ops wishes 
regarding sample configuration files.



However, I agree with the fact that it would be better if the sample was 100% 
accurate: so the way I would love to see this working is to generate the sample 
file every time there is a config change (this being totally automated (maybe 
at the gate level...)).

@Julien: I would be interested to understand the value that you see of having 
only one config file? At this point, I don't see why managing one file is more 
complicated than managing several files especially when they are organized by 
categories. Also, scrolling through the registry settings every time I want to 
modify an api setting seem to add some overhead.

Thanks,
Arnaud


- Original Message -
From: Zhi Yan Liu lzy@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, June 17, 2014 7:47:53 AM
Subject: Re: [openstack-dev] [glance] Unifying configuration file

Frankly I don't like the idea of using single configuration for all
service too, I think it will be cool if we can generate separated
configuration template files automatically for each Glance service. So
besides https://review.openstack.org/#/c/83327/ , actually I'm working
on that idea as well, to allow deployer generates separated
configuration files on demand, and then probably we could move those
templates away from code repo.

But I like your idea for paste.ini template part.

zhiyan

On Tue, Jun 17, 2014 at 10:29 PM, Kuvaja, Erno kuv...@hp.com wrote:

I do not like this idea. As now we are on 5 different config files (+ policy 
and schema). One for each (API and Registry) would still be ok, but putting all 
together would just become messy.

If the *-paste.ini will be migrated to .conf files that would bring it down, 
but please do not try to mix reg and API configs together.

- Erno (jokke) Kuvaja


-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: 17 June 2014 15:19
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Unifying configuration file

On 17/06/14 15:59 +0200, Julien Danjou wrote:

Hi guys,

So I've started to look at the configuration file used by Glance and I
want to switch to one configuration file only.
I stumbled upon this blueprint:

  
https://urldefense.proofpoint.com/v1/url?u=https://blueprints.launchpad.net/glance/%2Bspec/use-oslo-configk=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=5wWaXo2oVaivfKLCMyU6Z9UTO8HOfeGCzbGHAT4gZpo%3D%0Am=QTguordmDDZNC%2FRUVedjVKf5cPErz5dhlJAZA56YqWU%3D%0As=ce068ea89b0fbf4260f6f8f18758f99b407536ec391c7c7392a079fc550ba468



w.r.t using config.generator https://review.openstack.org/#/c/83327/


which fits.

Does not look like I can assign myself to it, but if someone can do so,
go ahead.

So I've started to work on that, and I got it working. My only problem
right now, concerned the [paste_deploy] options that is provided by
Glance. I'd like to remove this section altogether, as it's not
possible to have it and have the same configuration file read by both
glance-api and glance-registry.
My idea is also to unify glance-api-paste.ini and
glance-registry-paste.ini into glance-paste.ini and then have each
server reads their default pipeline (pipeline:glance-api).

Does that sounds reasonable to everyone?


+1, it sounds like a good idea. I don't think we need to maintain 2
separate config files, especially now that the registry service is optional.

Thanks for working on this.
Flavio

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

2014-06-17 Thread Swartzlander, Ben
The Manila core team welcomes Xing Yang! She has been a very active
reviewer and has been consistently involved with the project.

Xing, thank you for all your effort and keep up the great work!

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Julien Danjou
On Tue, Jun 17 2014, Arnaud Legendre wrote:

 @ZhiYan: I don't like the idea of removing the sample configuration file(s)
 from the git repository. Many people do not want to have to checkout the
 entire codebase and tox every time they have to verify a variable name in a
 configuration file. I know many people who were really frustrated where they
 realized that the sample config file was gone from the Nova repo.
 However, I agree with the fact that it would be better if the sample was
 100% accurate: so the way I would love to see this working is to generate
 the sample file every time there is a config change (this being totally
 automated (maybe at the gate level...)).

You're a bit late on this. :)
So what I did these last months (year?) in several project, is to check
at gate time the configuration file that is automatically generated
against what's in the patches.
That turned out to be a real problem because sometimes some options
change in the external modules we rely on (e.g. keystone authtoken or
oslo.messaging). In the end many projects (like Nova) disabled this
check altogether, and therefore removed the generated configuration file
from the git repository.

 @Julien: I would be interested to understand the value that you see of
 having only one config file? At this point, I don't see why managing one
 file is more complicated than managing several files especially when they
 are organized by categories. Also, scrolling through the registry settings
 every time I want to modify an api setting seem to add some overhead.

Because there's no way to automatically generate several configuration
files with each its own set of options using oslo.config.

Glance is (one of?) the last projects in OpenStack to manually write its
sample configuration files, which are obviously not up to date.

So really this is mainly about following what every other projects did
the last year(s).

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd Party] Log retention with 3rd party testing

2014-06-17 Thread Stefano Maffulli
[trimming reply-to to OpenStack Development since I don't think that
infra is the right venue to discuss this]

On 06/13/2014 10:08 AM, trinath.soman...@freescale.com wrote:
 I have two scenarios to deal with.
 
 [1] A change takes two months or more to get merged into the master
 branch.
 
 Here, if CIs delete the logs of the last month, the owner/reviewer
 may not be able to check the old logs.

I think Trinath raises an important point, since merging may take well
more than 1 month. Is he the only one having this issue?

 [2] A change is good and merged into the master branch.
 
 Here, if in the future this change creates a BUG, research on the CI 
 logs might be helpful to resolve this.

I'm less interested about this, honestly, compared to issue #1 you raise
before.

/stef
-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/17/2014 09:52 AM, John Griffith wrote:
 On Tue, Jun 17, 2014 at 6:51 AM, Armando M. arma...@gmail.com
 wrote:
 
 I wonder what the turnaround of trivial patches actually is, I
 bet you it's very very small, and as Daniel said, the human
 burden is rather minimal (I would be more concerned about slowing
 them down in the gate, but I digress).
 
 I think that introducing a two-tier level for patch approval can
 only mitigate the problem, but I wonder if we'd need to go a lot
 further, and rather figure out a way to borrow concepts from
 queueing theory so that they can be applied in the context of
 Gerrit. For instance Little's law [1] says:
 
 The long-term average number of customers (in this context
 *reviews*) in a stable system L is equal to the long-term average
 effective arrival rate, λ, multiplied by the average time a
 customer spends in the system, W; or expressed algebraically: L =
 λW.
 
 L can be used to determine the number of core reviewers that a
 project will need at any given time, in order to meet a certain
 arrival rate and average time spent in the queue. If the number
 of core reviewers is a lot less than L then that core team is
 understaffed and will need to increase.
 
 If we figured out how to model and measure Gerrit as a queuing
 system, then we could improve its performance a lot more
 effectively; for instance, this idea of privileging trivial
 patches over longer patches has roots in a popular scheduling
 policy [3] for  M/G/1 queues, but that does not really help aging
 of 'longer service time' patches and does not have a preemption 
 mechanism built-in to avoid starvation.
 
 Just a crazy opinion... Armando
 
 [1] - http://en.wikipedia.org/wiki/Little's_law [2] -
 http://en.wikipedia.org/wiki/Shortest_job_first [3] -
 http://en.wikipedia.org/wiki/M/G/1_queue
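
As a back-of-the-envelope illustration of applying L = λW to the review queue
(all numbers below are invented purely for illustration):

    # Little's law: L = lambda * W
    arrival_rate = 40.0        # hypothetical: 40 new patch sets per day
    avg_wait_days = 5.0        # hypothetical: 5 days average time in queue

    queue_length = arrival_rate * avg_wait_days    # L = 200 patches in flight

    # If a core reviewer can properly review ~8 patches a day, the core team
    # needed just to keep up with arrivals (ignoring re-reviews) is roughly:
    reviews_per_core_per_day = 8.0
    cores_needed = arrival_rate / reviews_per_core_per_day    # = 5 reviewers

    print('queue length: %d, cores needed: %d' % (queue_length, cores_needed))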
 
 
 On 17 June 2014 14:12, Matthew Booth mbo...@redhat.com wrote:
 
 On 17/06/14 12:36, Sean Dague wrote:
 On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
 On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth
 wrote:
 We all know that review can be a bottleneck for Nova 
  patches. Not only that, but a patch lingering in review,
 no matter how trivial, will eventually accrue rebases
 which sap gate resources, developer time, and will to
 live.
 
 It occurs to me that there are a significant class of
 patches which simply don't require the attention of a
 core reviewer. Some examples:
 
  * Indentation cleanup/comment fixes
  * Simple code motion
  * File permission changes
  * Trivial fixes which are obviously correct
 
 The advantage of a core reviewer is that they have
 experience of the whole code base, and have proven
 their ability to make and judge core changes. However,
 some fixes don't require this level of attention, as
 they are self-contained and obvious to any reasonable
 programmer.
 
 Without knowing anything of the architecture of gerrit,
 I propose something along the lines of a '+1 (trivial)'
 review flag. If a review gained some small number of
 these, I suggest 2 would be reasonable, it would be
 equivalent to a +2 from a core reviewer. The ability to
 set this flag would be a privilege. However, the bar to
 gaining this privilege would be low, and preferably
 automatically set, e.g. 5 accepted patches. It would be
 removed for abuse.
 
 Is this practical? Would it help?
 
 You are right that some types of fix are so
 straightforward that most reasonable programmers can
 validate them. At the same time though, this means that
 they also don't really consume significant review time
 from core reviewers.  So having non-cores' approve
 trivial fixes wouldn't really reduce the burden on core
 devs.
 
 The main positive impact would probably be a faster turn
 around time on getting the patches approved because it is
 easy for the trivial fixes to drown in the noise.
 
 IME any non-trivial change to gerrit is just not going to
 happen in any reasonably useful timeframe though. Perhaps
 an alternative strategy would be to focus on identifying
  which the trivial fixes are. If there was a good way to
 get a list of all pending trivial fixes, then it would
 make it straightforward for cores to jump in and approve
 those simple patches as a priority, to avoid them
 languishing too long.
 
  It would be nice if gerrit had simple keyword tagging so
 any reviewer can tag an existing commit as trivial, but
 that doesn't seem to exist as a concept yet.
 
  So, as an alternative, perhaps submit trivial stuff using a
  well-known topic, e.g.
 
 # git review --topic trivial
 
 Then you can just query all changes in that topic to find
 easy stuff to approve.
 
 It could go in the commit message:
 
 TrivialFix
 
  Then it could be queried with - 
 https://review.openstack.org/#/q/message:TrivialFix,n,z
 
 If a reviewer felt it wasn't a trivial fix, they could just
 edit the commit message inline to drop it out.
 
 +1. If possible I'd update the query to filter out anything with a
 

Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Arnaud Legendre
All the things that you mention here seem to be technical difficulties. 
I don't think technical difficulties should drive the experience of the user.
Also, Zhi Yan seems to be able to make that happen :)

Thanks,
Arnaud

- Original Message -
From: Julien Danjou jul...@danjou.info
To: Arnaud Legendre alegen...@vmware.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, June 17, 2014 8:43:38 AM
Subject: Re: [openstack-dev] [glance] Unifying configuration file

On Tue, Jun 17 2014, Arnaud Legendre wrote:

 @ZhiYan: I don't like the idea of removing the sample configuration file(s)
 from the git repository. Many people do not want to have to checkout the
 entire codebase and tox every time they have to verify a variable name in a
 configuration file. I know many people who were really frustrated where they
 realized that the sample config file was gone from the Nova repo.
 However, I agree with the fact that it would be better if the sample was
 100% accurate: so the way I would love to see this working is to generate
 the sample file every time there is a config change (this being totally
 automated (maybe at the gate level...)).

You're a bit late on this. :)
So what I did these last months (year?) in several project, is to check
at gate time the configuration file that is automatically generated
against what's in the patches.
That turned out to be a real problem because sometimes some options
change in the external modules we rely on (e.g. keystone authtoken or
oslo.messaging). In the end many projects (like Nova) disabled this
check altogether, and therefore removed the generated configuration file
from the git repository.

 @Julien: I would be interested to understand the value that you see of
 having only one config file? At this point, I don't see why managing one
 file is more complicated than managing several files especially when they
 are organized by categories. Also, scrolling through the registry settings
 every time I want to modify an api setting seem to add some overhead.

Because there's no way to automatically generate several configuration
files with each its own set of options using oslo.config.

Glance is (one of?) the last projects in OpenStack to manually write its
sample configuration files, which are obviously not up to date.

So really this is mainly about following what every other projects did
the last year(s).

-- 
Julien Danjou
-- Free Software hacker
-- 
https://urldefense.proofpoint.com/v1/url?u=http://julien.danjou.info/k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=5wWaXo2oVaivfKLCMyU6Z9UTO8HOfeGCzbGHAT4gZpo%3D%0Am=a7BLHSmThzpuZ12zhxZOghcz1HWzlQNCbEAXFoAcFSY%3D%0As=fe3ff048464bdba926f7da2f19834adba8df90b69fdb2ddd63a35f8288e7fed2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd Party] Log retention with 3rd party testing

2014-06-17 Thread Ben Nemec
On 06/17/2014 10:43 AM, Stefano Maffulli wrote:
 [trimming reply-to to OpenStack Development since I don't think that
 infra is the right venue to discuss this]
 
 On 06/13/2014 10:08 AM, trinath.soman...@freescale.com wrote:
 I have two scenarios to deal with.

 [1] A change takes two months or more to get merged into the master
 branch.

 Here, if CI’s, delete the logs of the last month, the owner/reviewer
 may not be able to check the old logs.
 
 I think Trinath raises an important point, since merging may take well
 more than 1 month. Is he the only one having this issue?

I don't see it as a huge problem.  If a patch fails CI, I would expect
the submitter to investigate in less than a month, and even if they
don't, fresh new logs are just a recheck no bug away (we typically
discourage that, but I think the old logs having expired would be a
legitimate reason to use it).
 
 [2] A change is good and merged into the master branch.

 Here, if in future this change creates a BUG, a research on the CI 
 logs might be a helpful to resolve this.
 
 I'm less interested about this, honestly, compared to issue #1 you raise
 before.
 
 /stef
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

2014-06-17 Thread yang, xing
Thanks Ben!  It's my pleasure to join the Manila core team!


Xing



-Original Message-
From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com] 
Sent: Tuesday, June 17, 2014 11:46 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

The Manila core team welcomes Xing Yang! She has been a very active reviewer 
and has been consistently involved with the project.

Xing, thank you for all your effort and keep up the great work!

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-06-17 Thread Matt Riedemann



On 6/10/2014 3:56 PM, Matt Riedemann wrote:



On 6/4/2014 11:02 AM, Day, Phil wrote:

 Matt and I chatted on IRC and have come up with an outlined plan, if
we missed anything please don't hesitate to comment or ask.

 

 https://etherpad.openstack.org/p/quota-classes-goof-up

I added a few thoughts / questions

*From:*Joe Gordon [mailto:joe.gord...@gmail.com]
*Sent:* 02 June 2014 21:52
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova] nova default quotas

On Mon, Jun 2, 2014 at 12:29 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 6/2/2014 12:53 PM, Joe Gordon wrote:




On Thu, May 29, 2014 at 10:46 AM, Matt Riedemann

mrie...@linux.vnet.ibm.com wrote:



 On 5/27/2014 4:44 PM, Vishvananda Ishaya wrote:

 I’m not sure that this is the right approach. We really
have to
 add the old extension back for compatibility, so it
might be
 best to simply keep that extension instead of adding a
new way
 to do it.

 Vish

 On May 27, 2014, at 1:31 PM, Cazzolato, Sergio J
 sergio.j.cazzol...@intel.com wrote:

 I have created a blueprint to add this
functionality to nova.

https://review.openstack.org/#/c/94519/


 -Original Message-
 From: Vishvananda Ishaya
[mailto:vishvana...@gmail.com]
 Sent: Tuesday, May 27, 2014 5:11 PM
 To: OpenStack Development Mailing List (not for
usage questions)
 Subject: Re: [openstack-dev] [nova] nova default
quotas

 Phil,

 You are correct and this seems to be an error. I
don't think
 in the earlier ML thread[1] that anyone remembered
that the
 quota classes were being used for default quotas.
IMO we
 need to revert this removal as we (accidentally)
removed a
 Havana feature with no notification to the
community. I've
 reactivated a bug[2] and marked it critcal.

 Vish

 [1]


http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html

 [2] https://bugs.launchpad.net/nova/+bug/1299517

 On May 27, 2014, at 12:19 PM, Day, Phil
philip@hp.com wrote:

 Hi Vish,

 I think quota classes have been removed from
Nova now.

 Phil


 Sent from Samsung Mobile


  Original message 
 From: Vishvananda Ishaya
 Date:27/05/2014 19:24 (GMT+00:00)
 To: OpenStack Development Mailing List (not
for usage
 questions)
 Subject: Re: [openstack-dev] [nova] nova
default quotas

 Are you aware that there is already a way to do
this
 through the cli using quota-class-update?


http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html
 (near the bottom)

 Are you suggesting that we also add the ability
to use
 just regular quota-update? I'm not sure i see
the need
 for both.

 Vish

 On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J
 sergio.j.cazzol...@intel.com wrote:

 I would to hear your thoughts about an idea
to add a
 way to manage the default quota values
through the API.


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Susanne Balle
We are now on: #openstack-lbaas

The #neutron-lbaas is now deprecated.


On Tue, Jun 17, 2014 at 11:10 AM, Dustin Lundquist dus...@null-ptr.net
wrote:

 Actually the channel name is #neutron-lbaas.


 On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 Also, pop into #openstack-lbaas on Freenode, we have people there
 monitoring the channel.

 On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  We have an Etherpad going here:
  https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon
 
 
  Dustin
 
 
  On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman 
 avish...@radware.com
  wrote:
 
  Hi
  As the LBaaS mid-cycle sprint starts today, is there any way to track
 and
  understand the progress (without flying to Texas... )
 
  Thanks
 
  Avishay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Kyle Mestery
So, we've now moved to #openstack-lbaas, my email was slightly ahead. :)

On Tue, Jun 17, 2014 at 10:10 AM, Dustin Lundquist dus...@null-ptr.net wrote:
 Actually the channel name is #neutron-lbaas.


 On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 Also, pop into #openstack-lbaas on Freenode, we have people there
 monitoring the channel.

 On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  We have an Etherpad going here:
  https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon
 
 
  Dustin
 
 
  On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman
  avish...@radware.com
  wrote:
 
  Hi
  As the LBaaS mid-cycle sprint starts today, is there any way to track
  and
  understand the progress (without flying to Texas... )
 
  Thanks
 
  Avishay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

2014-06-17 Thread D'Angelo, Scott
Congratulations Xing!


-Original Message-
From: yang, xing [mailto:xing.y...@emc.com] 
Sent: Tuesday, June 17, 2014 10:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

Thanks Ben!  It's my pleasure to join the Manila core team!


Xing



-Original Message-
From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com] 
Sent: Tuesday, June 17, 2014 11:46 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

The Manila core team welcomes Xing Yang! She has been a very active reviewer 
and has been consistently involved with the project.

Xing, thank you for all your effort and keep up the great work!

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Joe Gordon
On Tue, Jun 17, 2014 at 3:56 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 A far more effective way to reduce the load of trivial review issues
 on core reviewers is for none-core reviewers to get in there first,
 spot the problems and add a -1 - the trivial issues are then hopefully
 fixed up before a core reviewer even looks at the patch.

 The fundamental problem with review is that there are more people
 submitting than doing regular reviews. If you want the review queue to
 shrink, do five reviews for every one you submit. A -1 from a
 none-core (followed by a +1 when all the issues are fixed) is far,
 far, far more useful in general than a +1 on a new patch.


++

I think this thread is trying to optimize for the wrong types of patches.
 We shouldn't be focusing on making trivial patches land faster, but rather on
more important changes such as bugs and blueprints, since simple code
motion won't directly fix any user's issue such as a bug or a missing feature.





 On 17 June 2014 11:04, Matthew Booth mbo...@redhat.com wrote:
  We all know that review can be a bottleneck for Nova patches. Not only
  that, but a patch lingering in review, no matter how trivial, will
  eventually accrue rebases which sap gate resources, developer time, and
  will to live.
 
  It occurs to me that there are a significant class of patches which
  simply don't require the attention of a core reviewer. Some examples:
 
  * Indentation cleanup/comment fixes
  * Simple code motion
  * File permission changes
  * Trivial fixes which are obviously correct
 
  The advantage of a core reviewer is that they have experience of the
  whole code base, and have proven their ability to make and judge core
  changes. However, some fixes don't require this level of attention, as
  they are self-contained and obvious to any reasonable programmer.
 
  Without knowing anything of the architecture of gerrit, I propose
  something along the lines of a '+1 (trivial)' review flag. If a review
  gained some small number of these, I suggest 2 would be reasonable, it
  would be equivalent to a +2 from a core reviewer. The ability to set
  this flag would be a privilege. However, the bar to gaining this
  privilege would be low, and preferably automatically set, e.g. 5
  accepted patches. It would be removed for abuse.
 
  Is this practical? Would it help?
 
  Matt
  --
  Matthew Booth
  Red Hat Engineering, Virtualisation Team
 
  Phone: +442070094448 (UK)
  GPG ID:  D33C3490
  GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Salvatore Orlando
We've started doing this in a slightly more reasonable way for icehouse.
What we've done is:
- remove unnecessary notifications from the server
- process all port-related events, whether triggered via RPC or via the
monitor, in one place

Obviously there is always a lot of room for improvement, and I agree
something along the lines of what Zang suggests would be more maintainable
and ensure faster event processing as well as making it easier to have some
form of reliability on event processing.

I was considering doing something for the ovs-agent again in Juno, but
since we're moving towards a unified agent, I think any new big-ticket item
should address this effort.

Salvatore


On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 Awesome! Currently we are suffering from lots of bugs in ovs-agent, and we
 also intend to rebuild a more stable, flexible agent.

 Based on the experience of the ovs-agent bugs, I think concurrency is also a
 very important problem: the agent gets lots of events from different
 greenlets (the RPC handlers, the ovs monitor and the main loop).
 I'd suggest serializing all events to a queue, then processing them in
 a dedicated thread. The thread checks the events one by one, in order,
 works out what has changed, and then applies the corresponding
 changes. If any error occurs in the thread, it discards the event currently
 being processed and does a fresh-start event, which resets
 everything and then applies the correct settings.

 The threading model is so important, and may prevent tons of bugs in
 future development, that we should describe it clearly in the
 architecture.
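
A minimal sketch of that single-consumer model (hypothetical names, not the
actual agent code; in the eventlet-based agent the worker would be a dedicated
greenthread):

    import Queue  # Python 2 stdlib queue, as used in this era

    class AgentEventLoop(object):
        def __init__(self):
            self.events = Queue.Queue()

        def enqueue(self, event):
            # Called by the RPC handlers, the ovsdb monitor, the main loop...
            # producers only ever enqueue; they never touch the switch.
            self.events.put(event)

        def run(self):
            while True:
                event = self.events.get()
                try:
                    # Work out the delta implied by the event and apply it.
                    self.process(event)
                except Exception:
                    # Drop the half-processed event and fall back to a full
                    # resync that resets everything to the desired state.
                    self.full_resync()

        def process(self, event):
            raise NotImplementedError

        def full_resync(self):
            raise NotImplementedError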


 On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
 wrote:
  Following the discussions in the ML2 subgroup weekly meetings, I have
 added
  more information on the etherpad [1] describing the proposed architecture
  for modular L2 agents. I have also posted some code fragments at [2]
  sketching the implementation of the proposed architecture. Please have a
  look when you get a chance and let us know if you have any comments.
 
  [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
  [2] https://review.openstack.org/#/c/99187/
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Phillip Toohill
Everyone is migrating to the new channel #openstack-lbaas

From: Dustin Lundquist dus...@null-ptr.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 17, 2014 10:10 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

Actually the channel name is #neutron-lbaas.


On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery 
mest...@noironetworks.com wrote:
Also, pop into #openstack-lbaas on Freenode, we have people there
monitoring the channel.

On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist 
dus...@null-ptr.net wrote:
 We have an Etherpad going here:
 https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon


 Dustin


 On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman 
 avish...@radware.com
 wrote:

 Hi
 As the LBaaS mid-cycle sprint starts today, is there any way to track and
 understand the progress (without flying to Texas... )

 Thanks

 Avishay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Doug Wiegley
Meetup webex:

https://a10networks.webex.com/a10networks/e.php?MTID=m3351a8eb388c2ade866bac44cc272c5b

From: Susanne Balle sleipnir...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 17, 2014 at 11:13 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

We are now on: #openstack-lbaas

The #neutron-lbaas is now deprecated.


On Tue, Jun 17, 2014 at 11:10 AM, Dustin Lundquist 
dus...@null-ptr.net wrote:
Actually the channel name is #neutron-lbaas.


On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery 
mest...@noironetworks.com wrote:
Also, pop into #openstack-lbaas on Freenode, we have people there
monitoring the channel.

On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist 
dus...@null-ptr.net wrote:
 We have an Etherpad going here:
 https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon


 Dustin


 On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman 
 avish...@radware.com
 wrote:

 Hi
 As the LBaaS mid-cycle sprint starts today, is there any way to track and
 understand the progress (without flying to Texas... )

 Thanks

 Avishay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Name proposals

2014-06-17 Thread Radomir Dopieralski
On 06/10/2014 09:18 PM, Radomir Dopieralski wrote:

The name poll is now officially over, and the winner is:

horizon_lib

You can view the results here:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ea99af9511f3f255

I think we won't need to check for trademark issues with this name, so
we can just proceed with the split.

Thank you everyone for your input!
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Craig Tracey
WebEx is here:
https://a10networks.webex.com/a10networks/e.php?MTID=m3351a8eb388c2ade866bac44cc272c5b


On Tue, Jun 17, 2014 at 12:13 PM, Susanne Balle sleipnir...@gmail.com
wrote:

 We are now on: #openstack-lbaas

 The #neutron-lbaas is now deprecated.


 On Tue, Jun 17, 2014 at 11:10 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:

 Actually the channel name is #neutron-lbaas.


 On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 Also, pop into #openstack-lbaas on Freenode, we have people there
 monitoring the channel.

 On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  We have an Etherpad going here:
  https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon
 
 
  Dustin
 
 
  On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman 
 avish...@radware.com
  wrote:
 
  Hi
  As the LBaaS mid-cycle sprint starts today, is there any way to track
 and
  understand the progress (without flying to Texas... )
 
  Thanks
 
  Avishay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.
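
To give a flavour of the difference (everything below is a hypothetical
placeholder, not the actual ovs_lib or OVSDB library API):

    class OVSBridge(object):
        def __init__(self, br_name, ovsdb):
            self.br_name = br_name
            self.ovsdb = ovsdb   # e.g. one long-lived OVSDB connection/IDL

        def add_port(self, port_name):
            # One round trip over the existing connection, instead of forking
            # an ovs-vsctl process per call as ovs_lib does today.
            with self.ovsdb.transaction() as txn:
                txn.add_port(self.br_name, port_name)

        def get_port_tag(self, port_name):
            # Reads could be served from the locally cached database contents
            # rather than by another subprocess invocation.
            return self.ovsdb.lookup('Port', port_name).tag

Besides avoiding the fork/exec overhead, a persistent connection would also
make it easier to get change notifications instead of polling.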

Thanks,
Kyle

On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com wrote:
 We've started doing this in a slightly more reasonable way for icehouse.
 What we've done is:
 - remove unnecessary notification from the server
 - process all port-related events, either trigger via RPC or via monitor in
 one place

 Obviously there is always a lot of room for improvement, and I agree
 something along the lines of what Zang suggests would be more maintainable
 and ensure faster event processing as well as making it easier to have some
 form of reliability on event processing.

 I was considering doing something for the ovs-agent again in Juno, but since
 we've moving towards a unified agent, I think any new big ticket should
 address this effort.

 Salvatore


 On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 Awesome! Currently we are suffering lots of bugs in ovs-agent, also
 intent to rebuild a more stable flexible agent.

 Taking the experience of ovs-agent bugs, I think the concurrency
 problem is also a very important problem, the agent gets lots of event
 from different greenlets, the rpc, the ovs monitor or the main loop.
 I'd suggest to serialize all event to a queue, then process events in
 a dedicated thread. The thread check the events one by one ordered,
 and resolve what has been changed, then apply the corresponding
 changes. If there is any error occurred in the thread, discard the
 current processing event, do a fresh start event, which reset
 everything, then apply the correct settings.

 The threading model is so important and may prevent tons of bugs in
 the future development, we should describe it clearly in the
 architecture


 On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
 wrote:
  Following the discussions in the ML2 subgroup weekly meetings, I have
  added
  more information on the etherpad [1] describing the proposed
  architecture
  for modular L2 agents. I have also posted some code fragments at [2]
  sketching the implementation of the proposed architecture. Please have a
  look when you get a chance and let us know if you have any comments.
 
  [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
  [2] https://review.openstack.org/#/c/99187/
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QoS] Meeting agenda for Today's meeting

2014-06-17 Thread Collins, Sean
Hi, I have posted a preliminary agenda for today. I will be unable to
attend the meeting, Kevin Benton will be chairing the meeting today.

Please do add items to the agenda that you wish to discuss!

https://wiki.openstack.org/wiki/Meetings/NeutronQoS

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Russell Bryant
On 06/17/2014 12:22 PM, Joe Gordon wrote:
 
 
 
 On Tue, Jun 17, 2014 at 3:56 AM, Duncan Thomas duncan.tho...@gmail.com
 mailto:duncan.tho...@gmail.com wrote:
 
 A far more effective way to reduce the load of trivial review issues
 on core reviewers is for none-core reviewers to get in there first,
 spot the problems and add a -1 - the trivial issues are then hopefully
 fixed up before a core reviewer even looks at the patch.
 
 The fundamental problem with review is that there are more people
 submitting than doing regular reviews. If you want the review queue to
 shrink, do five reviews for every one you submit. A -1 from a
 none-core (followed by a +1 when all the issues are fixed) is far,
 far, far more useful in general than a +1 on a new patch.
 
 
 ++
 
 I think this thread is trying to optimize for the wrong types of
 patches.  We shouldn't be focusing on making trivial patches land
 faster, but rather more important changes such as bugs and blueprints.
 Some simple code motion won't directly fix any user's issues, such as
 bugs or missing features.

In fact, landing easier and less important changes causes churn in the
code base and can make the more important bugs and blueprints even *harder*
to get done.

In the end, as others have said, the biggest problem by far is just that
we need more of the right people reviewing code.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
just a provocative thought: If we used the ovsdb connection instead, do we
really need an L2 agent :P?


On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle

 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, whether triggered via RPC or via the
  monitor, in one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
 maintainable
  and ensure faster event processing as well as making it easier to have
 some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
 since
  we're moving towards a unified agent, I think any new big ticket should
  address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Taking the experience of ovs-agent bugs, I think the concurrency
  problem is also a very important problem, the agent gets lots of event
  from different greenlets, the rpc, the ovs monitor or the main loop.
  I'd suggest to serialize all event to a queue, then process events in
  a dedicated thread. The thread check the events one by one ordered,
  and resolve what has been changed, then apply the corresponding
  changes. If there is any error occurred in the thread, discard the
  current processing event, do a fresh start event, which reset
  everything, then apply the correct settings.
 
  The threading model is so important and may prevent tons of bugs in
  the future development, we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
 have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-17 Thread Clint Byrum
Excerpts from Matthew Booth's message of 2014-06-17 01:36:11 -0700:
 On 17/06/14 00:28, Joshua Harlow wrote:
  So this is a reader/write lock then?
  
  I have seen https://github.com/python-zk/kazoo/pull/141 come up in the
  kazoo (zookeeper python library) but there was a lack of a maintainer for
  that 'recipe', perhaps if we really find this needed we can help get that
  pull request 'sponsored' so that it can be used for this purpose?
  
  
  As far as resiliency, the thing I was thinking about was how correct you
  want this lock to be.
  
  If you go with memcached and a locking mechanism built on it, this will not
  be correct, but it might work well enough under normal usage. So that's why
  I was wondering about what level of correctness you want and what you
  want to happen if a server that is maintaining the lock record dies.
  In memcached's case this will literally be one server, even if sharding is
  being used, since a key hashes to one server. So if that one server goes
  down (or a network split happens) then it is possible for two entities to
  believe they own the same lock (and if the network split recovers this
  gets even weirder); so that's what I was wondering about when mentioning
  resiliency and how much incorrectness you are willing to tolerate.
 
 From my POV, the most important things are:
 
 * 2 nodes must never believe they hold the same lock
 * A node must eventually get the lock
 

If these are musts, then memcache is a no-go for locking. memcached is
likely to delete anything it is storing in its RAM, at any time. Also
if you have several memcache servers, a momentary network blip could
lead to acquiring the lock erroneously.

The only thing it is useful for is coalescing, where a broken lock just
means wasted resources, erroneous errors, etc. If consistency is needed,
then you need a consistent backend.
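
For what it's worth, here is a rough sketch of the ZooKeeper-backed approach
via kazoo (mentioned earlier in the thread); the lock path and identifier are
made-up example values. Because the lock is held through an ephemeral znode,
it is released automatically if the holder dies, which is what gives you both
properties above:

from kazoo.client import KazooClient

def do_critical_work():
    pass  # placeholder for the section that must not run concurrently

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

# Lock() blocks until acquired; the underlying znode is ephemeral and
# sequenced, so two holders cannot coexist and the lock is freed if the
# holding process dies.
with zk.Lock('/nova/locks/some-resource', 'compute-host-1'):
    do_critical_work()

zk.stop()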

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-17 Thread Sylvain Afchain
Hi,

+1 for Paris, since a mid-cycle sprint is already being hosted and organised by 
eNovance :)

Sylvain

- Original Message -
 From: Dmitry mey...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, June 15, 2014 3:40:43 PM
 Subject: Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints
 
 +1 for Paris/Lisbon
 
 On Sun, Jun 15, 2014 at 4:27 PM, Gary Kotton gkot...@vmware.com wrote:
 
 
  On 6/14/14, 1:05 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 06/13/2014 05:58 PM, Carlos Gonçalves wrote:
  Let me add to what I've said in my previous email, that Instituto de
 Telecomunicacoes and Portugal Telecom are also available to host and
 organize a mid cycle sprint in Lisbon, Portugal.
 
  Please let me know who may be interested in participating.
 
  Thanks,
  Carlos Goncalves
 
  On 13 Jun 2014, at 10:45, Carlos Gonçalves m...@cgoncalves.pt wrote:
 
  Hi,
 
  I like the idea of arranging a mid cycle for Neutron in Europe
 somewhere in July. I was also considering inviting folks from the
 OpenStack NFV team to meet up for a F2F kick-off.
 
  I did not know about the sprint being hosted and organised by eNovance
 in Paris until just now. I think it is a great initiative from eNovance
 even because it's not focused on a specific OpenStack project.
 So, I'm interested in participating in this sprint for discussing
 Neutron and NFV. Two more people from Instituto de Telecomunicacoes and
 Portugal Telecom have shown interest too.
 
  Neutron and NFV team members: who's interested in meeting in Paris, or,
 if not available on the date set by eNovance, at some other time and place?
 
  Thanks,
  Carlos Goncalves
 
  On 13 Jun 2014, at 08:42, Sylvain Bauza sba...@redhat.com wrote:
 
  On 12/06/2014 15:32, Gary Kotton wrote:
  Hi,
  There is the mid cycle sprint in July for Nova and Neutron. Anyone
 interested in maybe getting one together in Europe/Middle East around
 the same dates? If people are willing to come to this part of the
 world I am sure that we can organize a venue for a few days. Anyone
 interested. If we can get a quorum then I will be happy to try and
 arrange things.
  Thanks
  Gary
 
 
 
  Hi Gary,
 
  Wouldn't it be more interesting to have a mid-cycle sprint *before*
 the Nova one (which is targeted after juno-2), so that we could discuss
 some topics and report status to other folks, allowing for a second run?
 
  There is already a proposal in Paris for hosting some OpenStack
 sprints, see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
 
  -Sylvain
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Neutron already has two sprints scheduled:
 https://wiki.openstack.org/wiki/Sprints
 
  Those sprints are both in the US. It is a very long way to travel. If
  there is a group of people that can get together in Europe then it would
  be great.
 
 
 Thanks,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-17 Thread Clint Byrum
Excerpts from Tomas Sedovic's message of 2014-06-17 04:56:24 -0700:
 On 16/06/14 18:51, Clint Byrum wrote:
  Excerpts from Tomas Sedovic's message of 2014-06-16 09:19:40 -0700:
  All,
 
  After having proposed some changes[1][2] to tripleo-heat-templates[3],
  reviewers suggested adding a deprecation period for the merge.py script.
 
  While TripleO is an official OpenStack program, none of the projects
  under its umbrella (including tripleo-heat-templates) have gone through
  incubation and integration nor have they been shipped with Icehouse.
 
  So there is no implicit compatibility guarantee and I have not found
  anything about maintaining backwards compatibility neither on the
  TripleO wiki page[4], tripleo-heat-template's readme[5] or
  tripleo-incubator's readme[6].
 
  The Release Management wiki page[7] suggests that we follow Semantic
  Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
  According to that wiki, we are using a stronger guarantee where we do
  promise to bump the minor version on incompatible changes -- but this
  again suggests that we do not promise to maintain backwards
  compatibility -- just that we document whenever we break it.
 
  
  I think there are no guarantees, and no promises. I also think that we've
  kept tripleo_heat_merge pretty narrow in surface area since making it
  into a module, so I'm not concerned that it will be incredibly difficult
  to keep those features alive for a while.
  
  According to Robert, there are now downstreams that have shipped things
  (with the implication that they don't expect things to change without a
  deprecation period) so there's clearly a disconnect here.
 
  
  I think it is more of a we will cause them extra work thing. If we
  can make a best effort and deprecate for a few releases (as in, a few
  releases of t-h-t, not OpenStack), they'll likely appreciate that. If
  we can't do it without a lot of effort, we shouldn't bother.
 
 Oh. I did assume we were talking about OpenStack releases, not t-h-t,
 sorry. I have nothing against making a new tht release that deprecates
 the features we're no longer using and dropping them for good in a later
 release.
 
 What do you suggest would be a reasonable waiting period? Say a month or
 so? I think it would be good if we could remove all the deprecated stuff
 before we start porting our templates to HOT.
 
  
  If we do promise backwards compatibility, we should document it
  somewhere and if we don't we should probably make that more visible,
  too, so people know what to expect.
 
  I prefer the latter, because it will make the merge.py cleanup easier
  and every published bit of information I could find suggests that's our
  current stance anyway.
 
  
  This is more about good will than promising. If it is easy enough to
  just keep the code around and have it complain to us if we accidentally
  resurrect a feature, that should be enough. We could even introduce a
  switch to the CLI like --strict that we can run in our gate and that
  won't allow us to keep using deprecated features.
  
  So I'd like to see us deprecate not because we have to, but because we
  can do it with only a small amount of effort.
 
 Right, that's fair enough. I've thought about adding a strict switch,
 too, but I'd like to start removing code from merge.py, not adding more :-).
 

Let's just leave the capability forever. We're not adding things to
merge.py or taking it in any new directions. Keeping the code does not
cost us anything. Some day merge.py won't be used, and then it will be
like we deleted the whole thing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
Not if you use ODL, and we don't want to reinvent that wheel. But by
skipping CLI commands and instead using OVSDB programmatically from
agent to ovs-vswitchd, that's a decent improvement.
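
To make the difference concrete, a rough sketch (illustrative only, not the
actual ovs_lib code) of shelling out per call versus keeping one OVSDB
(RFC 7047) JSON-RPC connection open to ovsdb-server:

import json
import socket
import subprocess

def list_ports_cli(bridge):
    # Current style: fork an ovs-vsctl process for every operation.
    return subprocess.check_output(['ovs-vsctl', 'list-ports', bridge]).split()

class OvsdbConnection(object):
    # One long-lived connection to the OVSDB unix socket, no per-call fork.
    def __init__(self, path='/var/run/openvswitch/db.sock'):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(path)
        self._id = 0

    def call(self, method, params):
        self._id += 1
        self.sock.sendall(json.dumps(
            {'method': method, 'params': params, 'id': self._id}))
        # Naive read; a real client would frame and demultiplex replies.
        return json.loads(self.sock.recv(65536))

conn = OvsdbConnection()
bridges = conn.call('transact', ['Open_vSwitch',
                                 {'op': 'select', 'table': 'Bridge',
                                  'where': [], 'columns': ['name']}])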

On Tue, Jun 17, 2014 at 11:56 AM, Armando M. arma...@gmail.com wrote:
 just a provocative thought: If we used the ovsdb connection instead, do we
 really need an L2 agent :P?


 On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle

 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, either trigger via RPC or via monitor
  in
  one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
  maintainable
  and ensure faster event processing as well as making it easier to have
  some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
  since
  we've moving towards a unified agent, I think any new big ticket
  should
  address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Taking the experience of ovs-agent bugs, I think the concurrency
  problem is also a very important problem, the agent gets lots of event
  from different greenlets, the rpc, the ovs monitor or the main loop.
  I'd suggest to serialize all event to a queue, then process events in
  a dedicated thread. The thread check the events one by one ordered,
  and resolve what has been changed, then apply the corresponding
  changes. If there is any error occurred in the thread, discard the
  current processing event, do a fresh start event, which reset
  everything, then apply the correct settings.
 
  The threading model is so important and may prevent tons of bugs in
  the future development, we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
   have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Recommendations for the granularity of tasks and their stickiness to workers

2014-06-17 Thread Joshua Harlow
Howdy,

Sandy is correct, we aren't doing automatic load-balancing of tasks/jobs
(currently).

Though there is nothing stopping this from being implemented (and I of
course would recommend adding it to taskflow rather than jumping to
gearman, but I am obviously biased),

A feature that appeared recently that can do this kind of load-balancing:

http://docs.openstack.org/developer/taskflow/conductors.html

A conductor 'group' (N-conductor processes for example) can examine and attach
to jobboard[1] and decide how they want to select work (the load-balancing
part). This kind of filtering/balancing has been discussed and can likely
easily be implemented[2]; patches welcome ;)

My idea for jobs is that a job contains a 'large' unit of work (where
'large' is up to the taskflow user to define), the conductor or job
consumer would pick off the job in an atomic manner and then work on the
items in that job (jobs are typically composed of a logbook[3] that itself
contains a reference of flows/tasks to do). The granularity of the work in
the job is up to library users (although as others have stated the balance
needs to be determined by the taskflow user, since granularity affects
resumption and reversion processes).

[1] http://docs.openstack.org/developer/taskflow/jobs.html
[2] https://blueprints.launchpad.net/taskflow/+spec/job-filtering
[3] http://docs.openstack.org/developer/taskflow/persistence.html
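
To make that concrete, a rough sketch of the post/claim/consume cycle. Treat
it as pseudo-code: the method names follow the taskflow docs linked above,
but the exact signatures may differ between releases, and the board name,
job name and worker id are made-up example values.

from taskflow.jobs import backends as job_backends

# Producers and conductors attach to the same (ZooKeeper-backed) jobboard.
board = job_backends.fetch('example-board',
                           {'board': 'zookeeper', 'hosts': '127.0.0.1:2181'})
board.connect()

# Producer side: post one 'large' unit of work as a job.
board.post('evaluate-alarm-batch-42')

# Conductor/consumer side: claim atomically, do the work, then consume.
for job in board.iterjobs(only_unclaimed=True):
    board.claim(job, 'worker-1')
    try:
        # ... run the flows referenced by the job's logbook here ...
        board.consume(job, 'worker-1')
    except Exception:
        # Give the job back so another conductor can pick it up.
        board.abandon(job, 'worker-1')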

-Original Message-
From: Sandy Walsh sandy.wa...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, June 17, 2014 at 5:33 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [taskflow] Recommendations for the
granularity of tasks and their stickiness to workers

On 6/17/2014 7:04 AM, Eoghan Glynn wrote:
 Folks,

 A question for the taskflow ninjas.

 Any thoughts on best practice WRT $subject?

 Specifically I have in mind this ceilometer review[1] which adopts
 the approach of using very fine-grained tasks (at the level of an
 individual alarm evaluation) combined with short-term assignments
 to individual workers.

 But I'm also thinking of future potential usage of taskflow within
 ceilometer, to support partitioning of work over a scaled-out array
 of central agents.

 Does taskflow also naturally support a model whereby more chunky
 tasks (possibly including ongoing periodic work) are assigned to
 workers in a stickier fashion, such that re-balancing of workload
 can easily be triggered when a change is detected in the pool of
 available workers?

I don't think taskflow today is really focused on load balancing of
tasks. Something like gearman [1] might be better suited in the near term?

My understanding is that taskflow is really focused on in-process tasks
(with retry, restart, etc) and later will support distributed tasks. But
my data could be stale too. (jharlow?)

Even still, the decision of smaller tasks vs. chunky ones really comes
down to how much work you want to re-do if there is a failure. I've seen
some uses of taskflow where the breakdown of tasks seemed artificially
small. Meaning, the overhead of going back to the library on an
undo/rewind is greater than the undo itself.

-S

[1] http://gearman.org/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Narasimhan, Vivekanandan


Managing the ports and plumbing logic is today driven by the L2 Agent, with
little assistance from the controller.

If we plan to move that functionality to the controller, the controller has
to be more heavyweight (both hardware and software), since it has to do the
job of the L2 Agent for all the compute servers in the cloud. We would need
to re-verify all scale numbers for the controller when POC'ing such a change.

That said, replacing the CLI with direct OVSDB calls in the L2 Agent is
certainly a good direction.

Today, the OVS Agent invokes flow calls of OVS-Lib but has no idea of, nor
any processing to follow up on, the success or failure of such invocations.
Nor is there any guarantee that all such flow invocations will actually be
executed by the third process that OVS-Lib fires to run the CLI.

When we transition to OVSDB calls, which are more programmatic in nature, we
can enhance the Flow API (OVS-Lib) to provide more fine-grained errors/return
codes (or content), and the ovs-agent (and even other components) can act on
such return state more intelligently/appropriately.
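
As a purely hypothetical sketch of what that finer-grained return state could
look like (none of these names exist in ovs_lib today; they are assumptions
for illustration):

class FlowResult(object):
    # Structured outcome of a single flow operation.
    def __init__(self, success, error_code=None, details=None):
        self.success = success
        self.error_code = error_code   # e.g. 'BRIDGE_MISSING', 'TIMEOUT'
        self.details = details

def add_flow(bridge, flow):
    # Issue the transaction over the persistent OVSDB/OpenFlow connection
    # and inspect its reply instead of fire-and-forget CLI execution.
    return FlowResult(success=True)

result = add_flow('br-int', {'table': 0, 'priority': 1, 'actions': 'normal'})
if not result.success and result.error_code == 'BRIDGE_MISSING':
    # The agent can now react (e.g. trigger a resync) instead of silently
    # losing the flow.
    pass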



--

Thanks,



Vivek





From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, June 17, 2014 10:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture



just a provocative thought: If we used the ovsdb connection instead, do we 
really need an L2 agent :P?



On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle


On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com wrote:
 We've started doing this in a slightly more reasonable way for icehouse.
 What we've done is:
 - remove unnecessary notification from the server
 - process all port-related events, either trigger via RPC or via monitor in
 one place

 Obviously there is always a lot of room for improvement, and I agree
 something along the lines of what Zang suggests would be more maintainable
 and ensure faster event processing as well as making it easier to have some
 form of reliability on event processing.

 I was considering doing something for the ovs-agent again in Juno, but since
 we've moving towards a unified agent, I think any new big ticket should
 address this effort.

 Salvatore


 On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 Awesome! Currently we are suffering lots of bugs in ovs-agent, also
 intent to rebuild a more stable flexible agent.

 Taking the experience of ovs-agent bugs, I think the concurrency
 problem is also a very important problem, the agent gets lots of event
 from different greenlets, the rpc, the ovs monitor or the main loop.
 I'd suggest to serialize all event to a queue, then process events in
 a dedicated thread. The thread check the events one by one ordered,
 and resolve what has been changed, then apply the corresponding
 changes. If there is any error occurred in the thread, discard the
 current processing event, do a fresh start event, which reset
 everything, then apply the correct settings.

 The threading model is so important and may prevent tons of bugs in
 the future development, we should describe it clearly in the
 architecture


 On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
 wrote:
  Following the discussions in the ML2 subgroup weekly meetings, I have
  added
  more information on the etherpad [1] describing the proposed
  architecture
  for modular L2 agents. I have also posted some code fragments at [2]
  sketching the implementation of the proposed architecture. Please have a
  look when you get a chance and let us know if you have any comments.
 
  [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
  [2] https://review.openstack.org/#/c/99187/
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-17 Thread James E. Blair
Eoghan Glynn egl...@redhat.com writes:

 TL;DR: how about we adopt a soft enforcement model, relying 
on sound judgement and good faith within the community?

Thank you very much for bringing this up and proposing it to the TC.  As
others have suggested, having a concrete alternative is very helpful in
revealing both the positive and negative aspects of a proposal.

I think our recent experience has shown that the fundamental problem is
that not all of the members of our community knew what kind of behavior
we expected around elections.  That's understandable -- we had hardly
articulated it.  I think the best solution to that is therefore to
articulate and communicate that.

I believe Anita's proposal starts off by doing a very good job of
exactly that, so I would like to see a final resolution based on that
approach with very similar text to what she has proposed.  That
statement of expected behavior should then be communicated by election
officials to all participants in announcements related to all elections.
Those two simple acts will, I believe, suffice to address the problem we
have seen.

I do agree that a heavy bureaucracy is not necessary for this.  Our
community has a Code of Conduct established and administered by the
Foundation.  I think we should focus on minimizing additional process
and instead try to make this effort slot into the existing framework as
easily as possible by expecting the election officials to forward
potential violations to the Foundation's Executive Director (or
delegate) to handle as they would any other potential CoC violation.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Recommendations for the granularity of tasks and their stickiness to workers

2014-06-17 Thread Joshua Harlow
Back on the distributed subject (since this deserves a different email),

In the newest taskflow release (0.3.x) we have 2 mechanisms for
distribution outside of a process.

One is the job/jobboard[1] and conductor[2] concepts,

These concepts allow for atomic ownership of a 'job' and conductors act as
one way (not the only way) to perform the work in a distributed manner
(conductors consume and perform work). Conductors are pretty new so if
bugs are found please let us know. Jobs have some unique properties that I
think make them attractive[3].

The second method here is what is called the W.B.E. (the worker based
engine),

This is different in that it is a distribution at the engine[4] layer (not
at the job layer), this engine allows for running tasks on remote workers
(and it can be used in combination with the job concept/layer).
Documentation for this can be found @
http://docs.openstack.org/developer/taskflow/workers.html; this WBE is
under development, it does work, but it does have a few limitations that
still need addressing (see docs link).

So that's what exists currently (not just in-process things),

[1] http://docs.openstack.org/developer/taskflow/jobs.html
[2] http://docs.openstack.org/developer/taskflow/conductors.html
[3] http://docs.openstack.org/developer/taskflow/jobs.html#features
[4] http://docs.openstack.org/developer/taskflow/engines.html

-Original Message-
From: Sandy Walsh sandy.wa...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, June 17, 2014 at 5:33 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [taskflow] Recommendations for the
granularity of tasks and their stickiness to workers

On 6/17/2014 7:04 AM, Eoghan Glynn wrote:
 Folks,

 A question for the taskflow ninjas.

 Any thoughts on best practice WRT $subject?

 Specifically I have in mind this ceilometer review[1] which adopts
 the approach of using very fine-grained tasks (at the level of an
 individual alarm evaluation) combined with short-term assignments
 to individual workers.

 But I'm also thinking of future potential usage of taskflow within
 ceilometer, to support partitioning of work over a scaled-out array
 of central agents.

 Does taskflow also naturally support a model whereby more chunky
 tasks (possibly including ongoing periodic work) are assigned to
 workers in a stickier fashion, such that re-balancing of workload
 can easily be triggered when a change is detected in the pool of
 available workers?

I don't think taskflow today is really focused on load balancing of
tasks. Something like gearman [1] might be better suited in the near term?

My understanding is that taskflow is really focused on in-process tasks
(with retry, restart, etc) and later will support distributed tasks. But
my data could be stale too. (jharlow?)

Even still, the decision of smaller tasks vs. chunky ones really comes
down to how much work you want to re-do if there is a failure. I've seen
some uses of taskflow where the breakdown of tasks seemed artificially
small. Meaning, the overhead of going back to the library on an
undo/rewind is greater than the undo itself.

-S

[1] http://gearman.org/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread Sami J. Mäkinen


On 17/06/14 17:40, Stefano Maffulli wrote:

Hi Sami,

sorry for this. I suspect there is something going on with gerrit which
throws the inappropriate error message (more below).


Yay. I really just had to become a new OpenStack Foundation Member.
The error message I got is just not too informative, I could not
help thinking there is something really broken.

To summarize: missing membership of OpenStack Foundation
was my only problem here, it seems. Thanks and sorry for the trouble. :)

-sjm


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread racha
Hi,
Does it also make sense to have the choice between the ovs-ofctl CLI and a
direct OF1.3 connection in the ovs-agent?

Best Regards,
Racha



On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:



 Managing the ports and plumbing logic is today driven by L2 Agent, with
 little assistance

 from controller.



 If we plan to move that functionality to the controller,  the controller
 has to be more

 heavy weight (both hardware and software)  since it has to do the job of
 L2 Agent for all

 the compute servers in the cloud. , We need to re-verify all scale numbers
 for the controller

 on POC’ing of such a change.



 That said, replacing CLI with direct OVSDB calls in the L2 Agent is
 certainly a good direction.



 Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
 processing) to follow up

 on success or failure of such invocations.  Nor there is certain guarantee
 that all such

 flow invocations would be executed by the third-process fired by OVS-Lib
 to execute CLI.



 When we transition to OVSDB calls which are more programmatic in nature,
 we can

 enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
 codes (or content)

 and ovs-agent (and even other components) can act on such return state
 more

 intelligently/appropriately.



 --

 Thanks,



 Vivek





 *From:* Armando M. [mailto:arma...@gmail.com]
 *Sent:* Tuesday, June 17, 2014 10:26 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
 architecture



 just a provocative thought: If we used the ovsdb connection instead, do we
 really need an L2 agent :P?



 On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle


 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, either trigger via RPC or via monitor
 in
  one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
 maintainable
  and ensure faster event processing as well as making it easier to have
 some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
 since
  we've moving towards a unified agent, I think any new big ticket should
  address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Taking the experience of ovs-agent bugs, I think the concurrency
  problem is also a very important problem, the agent gets lots of event
  from different greenlets, the rpc, the ovs monitor or the main loop.
  I'd suggest to serialize all event to a queue, then process events in
  a dedicated thread. The thread check the events one by one ordered,
  and resolve what has been changed, then apply the corresponding
  changes. If there is any error occurred in the thread, discard the
  current processing event, do a fresh start event, which reset
  everything, then apply the correct settings.
 
  The threading model is so important and may prevent tons of bugs in
  the future development, we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
 have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___

Re: [openstack-dev] New Contributor Agreement is not working?

2014-06-17 Thread Stefano Maffulli
On 06/17/2014 10:45 AM, Sami J. Mäkinen wrote:
 Yay. I really just had to become a new OpenStack Foundation Member.
 The error message I got is just not too informative, I could not
 help thinking there is something really broken.

Right, unfortunately the error message from gerrit cannot be changed...
but we know of the problem and think that it will go away once we
deprecate Launchpad OpenID and move authentication to the OpenStack
OpenID provider. A lot of this complexity will go away once we have only
one ID across all our domains.

BTW, we'll discuss the steps and timeline to deprecate Launchpad ID
during today's #infra meeting.

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] agenda for OpenStack Heat meeting 2014-06-18 20:00 UTC

2014-06-17 Thread Mike Spreitzer
https://wiki.openstack.org/wiki/Meetings/HeatAgenda
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Heat+Chatiso=20140618T20p1=211ah=1

Agenda (2014-06-18 2000 UTC)
Review last meeting's actions 
Adding items to the agenda 
Mid-cycle meetup 
Critical issues sync 

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Reducing quota below utilisation

2014-06-17 Thread Tim Bell

We have some projects which are dynamically creating VMs up to their quota. 
Under some circumstances, as cloud administrators, we would like these projects 
to shrink and make room for other higher priority work.

We had investigated setting the project quota below the current utilisation 
(i.e. effectively delete only, no create). This will eventually match the 
desired level of VMs as the dynamic workload leads to old VMs being deleted and 
new ones cannot be created.

However, OpenStack does not allow a quota to be set to below the current usage.

This seems a little restrictive ... any thoughts from others?
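
For illustration, the behaviour today looks like this from the CLI (the
--instances flag is the standard nova quota-update option; tenant id and
numbers are made up):

$ nova quota-update --instances 10 $TENANT_ID

With the project already running, say, 20 instances, nova rejects the update
with an HTTP 400 instead of accepting it as a delete-only limit, which is
the restriction described above.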

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-06-17 Thread W Chan
I figured.  I implemented it in https://review.openstack.org/#/c/97684/.
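
For context, the duplicated options in question are the standard auth_token
middleware settings. Once Mistral defers to the keystoneclient registration,
a deployment configures them once under [keystone_authtoken], along these
lines (sample values only, not taken from that review):

[keystone_authtoken]
auth_protocol = http
auth_host = 127.0.0.1
auth_port = 35357
auth_uri = http://127.0.0.1:5000/v2.0
admin_tenant_name = service
admin_user = mistral
admin_password = secret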


On Mon, Jun 16, 2014 at 9:35 PM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 I don’t think we have them. You can write them I think as a part of what
 you’re doing.

 Renat Akhmerov
 @ Mirantis Inc.



 On 31 May 2014, at 04:26, W Chan m4d.co...@gmail.com wrote:

 Is there an existing unit test for testing enabling keystone middleware in
 pecan (setting cfg.CONF.pecan.auth_enable = True)?  I don't seem to find
 one.  If there's one, it's not obvious.  Can someone kindly point me to it?


 On Wed, May 28, 2014 at 9:53 AM, W Chan m4d.co...@gmail.com wrote:

 Thanks for following up.  I will publish this change as a separate patch
 from my current config cleanup.


 On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
  Regarding config opts for keystone, the keystoneclient middleware
 already
  registers the opts at
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently,
 Mistral
  registers the opts again at
 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
 under a
  different configuration group.  Should we remove the duplicate from
 Mistral and
  refactor the reference to keystone configurations to the
 keystone_authtoken
  group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.

 Thanks guys
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
I don't think so. Once we implement the OVSDB support, we will
deprecate using the CLI commands in ovs_lib.

On Tue, Jun 17, 2014 at 12:50 PM, racha ben...@gmail.com wrote:
 Hi,
 Does it make sense also to have the choice between ovs-ofctl CLI and a
 direct OF1.3 connection too in the ovs-agent?

 Best Regards,
 Racha



 On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan
 vivekanandan.narasim...@hp.com wrote:



 Managing the ports and plumbing logic is today driven by L2 Agent, with
 little assistance

 from controller.



 If we plan to move that functionality to the controller,  the controller
 has to be more

 heavy weight (both hardware and software)  since it has to do the job of
 L2 Agent for all

 the compute servers in the cloud. , We need to re-verify all scale numbers
 for the controller

 on POC’ing of such a change.



 That said, replacing CLI with direct OVSDB calls in the L2 Agent is
 certainly a good direction.



 Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
 processing) to follow up

 on success or failure of such invocations.  Nor there is certain guarantee
 that all such

 flow invocations would be executed by the third-process fired by OVS-Lib
 to execute CLI.



 When we transition to OVSDB calls which are more programmatic in nature,
 we can

 enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
 codes (or content)

 and ovs-agent (and even other components) can act on such return state
 more

 intelligently/appropriately.



 --

 Thanks,



 Vivek





 From: Armando M. [mailto:arma...@gmail.com]
 Sent: Tuesday, June 17, 2014 10:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture



 just a provocative thought: If we used the ovsdb connection instead, do we
 really need an L2 agent :P?



 On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle


 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, either trigger via RPC or via monitor
  in
  one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
  maintainable
  and ensure faster event processing as well as making it easier to have
  some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
  since
  we've moving towards a unified agent, I think any new big ticket
  should
  address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Taking the experience of ovs-agent bugs, I think the concurrency
  problem is also a very important problem, the agent gets lots of event
  from different greenlets, the rpc, the ovs monitor or the main loop.
  I'd suggest to serialize all event to a queue, then process events in
  a dedicated thread. The thread check the events one by one ordered,
  and resolve what has been changed, then apply the corresponding
  changes. If there is any error occurred in the thread, discard the
  current processing event, do a fresh start event, which reset
  everything, then apply the correct settings.
 
  The threading model is so important and may prevent tons of bugs in
  the future development, we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
   have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  

Re: [openstack-dev] [nova] Reducing quota below utilisation

2014-06-17 Thread Robert Collins
Makes sense to me
On 18 Jun 2014 06:18, Tim Bell tim.b...@cern.ch wrote:



 We have some projects which are dynamically creating VMs up to their
 quota. Under some circumstances, as cloud administrators, we would like
 these projects to shrink and make room for other higher priority work.



 We had investigated setting the project quota below the current
 utilisation (i.e. effectively delete only, no create). This will eventually
 match the desired level of VMs as the dynamic workload leads to old VMs
 being deleted and new ones cannot be created.



 However, OpenStack does not allow a quota to be set to below the current
 usage.



 This seems a little restrictive … any thoughts from others ?



 Tim



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Reducing quota below utilisation

2014-06-17 Thread Ravi Jagannathan
So, some questions:

- How can we decide (identify) whether a given VM can indeed be deleted?
- Can we better enforce quota at the app / cluster level?
- Does deleting the older VMs first only work if these are individual VMs
supporting workloads (and not standalone apps)?


On Tue, Jun 17, 2014 at 2:18 PM, Tim Bell tim.b...@cern.ch wrote:



 We have some projects which are dynamically creating VMs up to their
 quota. Under some circumstances, as cloud administrators, we would like
 these projects to shrink and make room for other higher priority work.



 We had investigated setting the project quota below the current
 utilisation (i.e. effectively delete only, no create). This will eventually
 match the desired level of VMs as the dynamic workload leads to old VMs
 being deleted and new ones cannot be created.



 However, OpenStack does not allow a quota to be set to below the current
 usage.



 This seems a little restrictive … any thoughts from others ?



 Tim



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

