[openstack-dev] no tap interface on compute node

2014-06-16 Thread abhishek jain
Hi

I'm not able to get the tap interface up on the compute node when I boot a VM
from the controller node onto the compute node.
Please help with this.




Thanks
Abhishek Jain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-16 Thread Jay Pipes

On 06/13/2014 05:01 AM, Julien Danjou wrote:

On Thu, Jun 12 2014, Jay Pipes wrote:


This is news to me. When was this decided and where can I read about
it?


Originally https://wiki.openstack.org/wiki/Oslo/blueprints/service-sync
was proposed, presented and accepted back at the Icehouse summit in
HKG. That's what led to the creation of tooz and its development since then.


Thanks, Julien, that's a helpful link. Appreciated!
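
For anyone who hasn't looked at tooz yet, grabbing a distributed lock through
it looks roughly like the sketch below. This is a minimal sketch only: it
assumes the memcached backend and tooz's get_coordinator/get_lock API, and the
backend URL, member id and lock name are placeholders.

    # Minimal sketch of a distributed lock via tooz; backend URL, member id
    # and lock name are placeholders.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211', b'node-1')
    coordinator.start()

    lock = coordinator.get_lock(b'my-critical-section')
    if lock.acquire(blocking=True):
        try:
            pass  # work that must not run concurrently on other nodes
        finally:
            lock.release()

    coordinator.stop()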

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (June 17 Tuesday 5:00(AM)UTC-)

2014-06-16 Thread Isaku Yamahata
Hi. This is a reminder mail for the servicevm IRC meeting
June 17, 2014 Tuesdays 5:00(AM)UTC-
#openstack-meeting on freenode
https://wiki.openstack.org/wiki/Meetings/ServiceVM
Maybe some won't make it because of mid-cycle meet up, though.


agenda: (feel free to add your items)
* announcement
* action items from the last week
* project incubation
* NFV meeting follow up
* blueprint follow up
* open discussion
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Layer 7 switching RST document on Gerrit

2014-06-16 Thread Avishay Balderman
Hi
Please review.
https://review.openstack.org/#/c/99709/

Thanks

Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Starting contributing to project

2014-06-16 Thread Gary Kotton
Hi,
I would suggest that you start from the master branch. Good luck.
Thanks
Gary

On 6/15/14, 11:10 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:

Hello,

I want to start contributing to the neutron project. I found a bug which I
want to try to fix:
https://bugs.launchpad.net/neutron/+bug/1204956
and I have a question about the workflow in such a case. Should I clone the
neutron repository from the master branch and base my changes on master, or
should I start my changes from some other branch? And what should I do next
when I have, for example, a patch for this bug?
Thanks in advance for any help and explanation.

-- 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] 2014.1.1 (stable/icehouse) released

2014-06-16 Thread Sergey Lukjanov
Hey folks,

I'm glad to announce the release of the 2014.1.1 (stable/icehouse).

You can find more info at launchpad page:
https://launchpad.net/sahara/+milestone/2014.1.1

The next stable/icehouse release is planned for Aug 7.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Eventletdev] Eventlet 0.15 pre-release testers needed

2014-06-16 Thread Victor Sergeyev
Hello Folks!

AFAIK, the new eventlet version has limited PY3 support. So, unfortunately, it
is not ready to work with Python 3 in production. See issue [1] on GitHub.
Anyway, at least we will be able to install eventlet in a PY3 environment
and (I hope) to test some OpenStack features (or mock eventlet calls).
Everybody who wants to help with the eventlet PY3 porting (I think the python3
migration team should be interested in it) is welcome to contribute.
Please find the eventlet repository on GitHub [2] :)

I've copied this email to the eventlet maintainer - Sergey Shepelev. Maybe
he will add some more details.

[1] https://github.com/eventlet/eventlet/issues/6
[2] https://github.com/eventlet/eventlet

Thanks,
Victor



On Fri, Jun 13, 2014 at 9:59 PM, Chuck Thier cth...@gmail.com wrote:

 Just a FYI for those interested in the next eventlet version.  It also
 looks like they have a python 3 branch ready to start testing with.

 --
 Chuck

 -- Forwarded message --
 From: Sergey Shepelev temo...@gmail.com
 Date: Fri, Jun 13, 2014 at 1:18 PM
 Subject: [Eventletdev] Eventlet 0.15 pre-release testers needed
 To: eventletdev eventlet...@lists.secondlife.com, Noah Glusenkamp 
 n...@empowerengine.com, Victor Sergeyev viktor.serge...@gmail.com,
 ja...@stasiak.at


 Hello, everyone.

 TL;DR: please test these versions in Python2 and Python3:
 pip install URL should work
 (master)

 https://github.com/eventlet/eventlet/archive/6c4823c80575899e98afcb12f84dcf4d54e277cd.zip
 (py3-greenio branch, on top of master)

 https://github.com/eventlet/eventlet/archive/9e666c78086a1eb0c05027ec6892143dfa5c32bd.zip

 I am going to make the Eventlet 0.15 release in the coming week or two, and your
 feedback would be greatly appreciated because it's the first release since
 we started work on Python 3 compatibility. So please try to run your project
 on Python 3 too, if you can.


 ___
 Click here to unsubscribe or manage your list subscription:
 https://lists.secondlife.com/cgi-bin/mailman/listinfo/eventletdev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Thierry Carrez
Robert Collins wrote:
 [...]
 C - If we can't make it harder to get races in, perhaps we can make it
 easier to get races out. We have pretty solid emergent statistics from
 every gate job that is run as check. What if we set a policy that when a
 gate queue gets a race:
  - put a zuul stop all merges and checks on all involved branches
 (prevent further damage, free capacity for validation)
  - figure out when it surfaced
  - determine it's not an external event
  - revert all involved branches back to the point where they looked
 good, as one large operation
- run that through jenkins N (e.g. 458) times in parallel.
- on success land it
  - go through all the merges that have been reverted and either
 twiddle them to be back in review with a new patchset against the
 revert to restore their content, or alternatively generate new reviews
 if gerrit would make that too hard.

One of the issues here is that "gate queue gets a race" is not a binary
state. There are always rare issues; you just can't find all the bugs
that happen 0.1% of the time. You add more such issues, and at some
point they either add up to an unacceptable level, or some other
environmental situation suddenly increases the odds of some old rare
issue happening (think: a new test cluster with slightly different
performance characteristics being thrown into our test resources). There
is no single incident you need to find and fix, and during which you can
clearly escalate to defCon 1. You can't even assume that a gate
situation was created in the set of commits around when it surfaced.

So IMHO it's a continuous process: keep looking into rare issues all
the time, to keep them under the level where they become a problem.
You can't just have a specific process that kicks in when "the gate
queue gets a race".

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-16 Thread Vladimir Kozhukalov
Guys,

First of all, we need to agree on what orchestration is. In terms of Fuel,
orchestration is task management (or scheduling) plus task running. In other
words, an orchestrator needs to be able to get data (yaml, json, etc.),
decide what to do, when and where, and then do it. For task management
we need the kind of logic that is provided by Mistral. For launching,
it just needs the kind of transport that is available when we use
Mcollective, Saltstack or ssh.

As far as I know (I did research on Saltstack a year ago), Saltstack does
not have a mature task management mechanism. What it has is the so-called
overstate mechanism, which allows one to write a script for managing tasks in
multi-node environments, like "launch task-1 on node-1, then launch task-2 on
node-2, and then launch task-3 on node-1 again". It works, but it is
semi-manual. I mean, it is exactly what we already have and call Astute.
The only difference is that Astute is a wrapper around Mcollective.

The only advantages I see in using Saltstack instead of Mcollective are that
it is written in Python (Mcollective still does not have a Python binding)
and that it uses ZeroMQ. Those advantages may not be negligible, but
let's take a careful look.

For example, the fact that Saltstack is written in Python would allow us to use
Saltstack directly from Nailgun. But I am absolutely sure everyone will agree
that would be a serious architectural flaw. If you ask me, Nailgun
has to use an external task management service with a well-defined API,
such as Mistral. Mistral already has plenty of capabilities for that. Do
we really need to implement all that stuff ourselves?

ZeroMQ is a great advantage if you have thousands of nodes. It is highly
scalable. It also allows one to avoid using an additional external
service like RabbitMQ. Minus one point of failure, right? On the other hand,
it brings us into the world of Saltstack with its own bugs, despite its
maturity.

Consequently, my personal preference is to concentrate on splitting puppet
code into independent tasks and using Mistral for resolving task
dependencies. As our transport layer we'll then be able to use whatever we
want (Saltstack, Mcollective, bare ssh, any kind of custom implementation,
etc.)




Vladimir Kozhukalov


On Fri, Jun 13, 2014 at 8:45 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Dmitry,
 please read design doc attached to
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization.
 I think it can serve as a good source of requirements which we have, and
 then we can see what tool is the best.

 Regards,




 On Thu, Jun 12, 2014 at 12:28 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, what we really need from an orchestration tool is the ability to
 orchestrate a large number of tasks across the nodes, with all the complicated
 dependencies, dynamic actions (e.g. what to do on failure and on success)
 and parallel execution, including on nodes that can have no additional
 software installed somewhere deep in the user's infrastructure (e.g. we
 need to send a RESTful request to vCenter). And this is the use case of our
 pluggable architecture. I am wondering if Saltstack can do this.


 On Wed, Jun 11, 2014 at 9:08 PM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 It would be nice to compare Ansible and Salt. They are both Python-based.
 Also, Ansible has a pull model as well. Personally, I am a big fan of
 Ansible because of its simplicity and the speed of playbook development.

 ~Sergii


 On Wed, Jun 11, 2014 at 1:21 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Well, I don't have any comparison chart; I can work on one based on the
 requirements I provided in the initial letter, but:
 - I like Ansible, but it is agentless, and it won't fit well in our
 current model of communication between Nailgun and the orchestrator.
 - Cloudify is a Java-based application; even if it is pluggable with other
 language bindings, we would benefit from an application in Python.
 - Salt has been around for 3-4 years; simply compare the GitHub graphs, it is
 one of the most used and active projects in the Python community:

 https://github.com/stackforge/mistral/graphs/contributors
 https://github.com/saltstack/salt/graphs/contributors


 On Wed, Jun 11, 2014 at 1:04 PM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 There are many mature orchestration applications (Salt, Ansible,
 Cloudify, Mistral). Is there any comparison chart? It would be nice to
 compare them to understand their maturity level. Thanks

 ~Sergii


 On Wed, Jun 11, 2014 at 12:48 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Actually I am proposing Salt as an alternative; the main reason is that Salt
 is a mature, feature-full orchestration solution that is well adopted even
 by our internal teams.


 On Wed, Jun 11, 2014 at 12:37 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 As far as I remember we wanted to replace Astute with Mistral [1],
 do we really want to have some intermediate steps (I mean 

Re: [openstack-dev] [Fuel] Symlinks to new stuff for OpenStack Patching

2014-06-16 Thread Matthew Mosesohn
Hi Igor,

The repo directory itself is too large to fit in a docker container and
work reliably. It is just a symlink inside the repo storage container
from host:/var/www/nailgun to repo-container:/repo. This /repo folder
is shared out to containers, such as nginx, and then symlinks are
created for each subdir in /var/www/nailgun. If you need more links
without rebuilding your environment, you would need to symbolically
link your new repository from /var/www/nailgun/newlink to
/var/lib/docker/devicemapper/mnt/$(docker inspect -f='{{.ID}}'
__containername__)/rootfs/repo (replace __containername__ with the
container you're trying to work on).
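
For what it's worth, a rough Python equivalent of that manual step would be
the sketch below; the container rootfs path is a placeholder, so substitute
the path the docker inspect command above returns for your container.

    # Sketch only: link a new repo directory into the repo container's rootfs.
    # CONTAINER_ROOTFS is a placeholder for the path obtained via docker inspect.
    import os

    CONTAINER_ROOTFS = "/var/lib/docker/devicemapper/mnt/<container-id>/rootfs"
    link_name = "/var/www/nailgun/newlink"
    target = os.path.join(CONTAINER_ROOTFS, "repo")

    if not os.path.islink(link_name):
        os.symlink(target, link_name)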

I hope this helps.

-Matthew

On Sun, Jun 15, 2014 at 2:32 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote:
 Hello fuelers,

 I'm working on OpenStack patching for 5.1 and I've run into some problems.
 The problems are in the repos/puppets installation process.

 The problems are almost the same, so I'll describe them using the repos example.

 The repos data are located in /var/www/nailgun. This folder is mounted
 as /repo into Nginx container. Nginx container has own /var/www/nailgun
 with various symlinks to /repo's content.

 So the problem is that we need to add symlinks to the newest repos in the Nginx
 container. How should this problem be solved? Should our fuel-upgrade
 script add these symlinks, or will we ship new docker containers which
 already contain these symlinks?


 Thanks,
 Igor

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-16 Thread Eoghan Glynn

TL;DR: how about we adopt a soft enforcement model, relying 
   on sound judgement and good faith within the community?


Hi Folks,

I'm concerned that the expected election behaviour review[1]
is not converging on the optimal approach, partially due to the
initial concentration on the procedural aspects of the proposal.

This concentration was natural enough, due to shortcomings such
as the lack of any provision for a right of reply, or the
fuzziness around who was capable of deciding a violation had
occurred and imposing the stiff penalties envisaged.

However, I think we should take a step back at this stage and
reconsider the whole approach. Looking at this in the round, as
I see it, the approach mooted seems to suffer from a fundamental
flaw.

Specifically, it holds individuals to the community code of
conduct[2] in a very *strict* sense (under pain of severe career
damage), when much of that code is written in an aspirational
style and so is not very suitable for use as an *objective*
standard.

The reference to the spirit of the OpenStack ideals is
even more problematic in that sense. Ideals by their nature are
*idealized* versions of reality. So IMHO it's not workable to
infuse an aspiration to meet these laudable ideals, with the
language of abuse, violations, investigation, punishment etc.
In fact it strikes me as a tad Orwellian to do so.

So I wanted to throw an alternative idea out onto the table ...

How about we rely instead on the values and attributes that
actually make our community strong?

Specifically: maturity, honesty, and a self-correcting nature.

How about we simply require that each candidate for a TC or PTL
election gives a simple undertaking in their self-nomination mail,
along the lines of:

I undertake to respect the election process, as required by
the community code of conduct.

I also undertake not to engage in campaign practices that the
community has considered objectionable in the past, including
but not limited to, unsolicited mail shots and private campaign
events.

If my behavior during this election period does not live up to
those standards, please feel free to call me out on it on this
mailing list and/or withhold your vote.

We then rely on:

  (a) the self-policing nature of an honest, open community

and:

  (b) the maturity and sound judgement within that community
  giving us the ability to quickly spot and disregard any
  frivolous reports of mis-behavior

So no need for heavy-weight inquisitions, no need to interrupt the
election process, no need for handing out of stiff penalties such
as termination of membership.

Instead, we simply rely on good faith and sound judgement within
the community.

TBH I think we're pretty good at making ourselves heard when need
be, and also pretty good at filtering through the noise.

So I would trust the electorate to apply their judgement, filter
out those reports of bad practice that they consider frivolous or
tending to make mischief, or conversely to withhold their vote if
they consider the practice reported to be unacceptable.

If someone has already cast their vote when the report of some
questionable behavior surfaces, well so be it. The electorate
has a long memory and most successful candidates end up running
again for subsequent elections (e.g. a follow-on term as PTL,
or for the TC).

The key strength of this alternative approach IMO is that it
directly relies on the *actual* values of the community, as
opposed to attempting to codify those values, a priori.

Just my $0.02 ...

Cheers,
Eoghan

[1] https://review.openstack.org/98675
[2] http://www.openstack.org/legal/community-code-of-conduct

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] replicate messages among servers

2014-06-16 Thread wpf
Hi

   If you are using MongoDB, then you can leverage replica sets; for
example, the URI for MongoDB:

   uri = mongodb://mydb1,mydb2,mydb3:27017/?replicaSet=catalog...

  For more, you can refer to
http://docs.openstack.org/developer/marconi/installing.html
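
If you want to sanity-check such a URI before pointing marconi at it, a short
pymongo snippet is enough; the hosts and the 'catalog' replica set name below
are just the placeholders from the example above, not a real deployment.

    # Sanity-check sketch for the replica set URI above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://mydb1,mydb2,mydb3:27017/?replicaSet=catalog")

    # server_info() forces a round trip and raises if no member of the
    # replica set is reachable.
    print(client.server_info()["version"])
    print("known members:", client.nodes)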


On Sun, Jun 15, 2014 at 6:14 PM, Peng Gu gp_st...@163.com wrote:

 Hi all,
   Is there a mechanism or plan for marconi to support replicating messages
  among servers to improve availability?
  I thought this is a key feature to bring marconi into a production environment.
 Thanks

 Peng Gu



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best wishes!

   王鹏飞
msn: dragoninfi...@hotmail.com
mobile: 13681265240
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-16 Thread thomas.morin
Hi all,

We've just released our implementation of BGP VPN extensions (called
'BaGPipe') under an open-source license:
https://github.com/Orange-OpenSource/bagpipe-bgp

It reuses some code from ExaBGP, but with a dedicated engine for VPN
instance creation through a REST API, and a modular architecture to
drive a dataplane (e.g. Open vSwitch). It is based on an internal
development we did to address IaaS/IP VPN interconnection issues;
although still young, the project was the basis for a few working lab
prototypes. There is more info in the README.

I filled in a column for BaGPipe on the wiki page
( https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison ), to give
an idea of where it stands, and to let people think about how it could address
needs in OpenStack. I also added a line to specify support for
Route-Target-constrained distribution of VPN routes (RFC 4684), because
it is a real need beyond VPNv4/v6 routes for some VPN interconnection
deployments.

Best,

-Thomas


Nachi Ueno :
 Hi folks

 ExaBGP won't suit a BGPVPN implementation because it doesn't support VPNv4.
 Ryu supports it, however it has no internal API for binding a
 Neutron network to a route target,
 so I think Contrail is the only solution for a BGPVPN implementation now.



 2014-05-30 2:22 GMT-07:00 Mathieu Rohon mathieu.ro...@gmail.com:
 Hi,

 I was about to mention ExaBGP too! Can we also consider using those
 BGP speakers for a BGPVPN implementation [1]?
 It would be consistent to have the same BGP speaker used for every
 BGP need inside Neutron.

 [1]https://review.openstack.org/#/c/93329/


 On Fri, May 30, 2014 at 10:54 AM, Jaume Devesa devv...@gmail.com wrote:
 Hello Takashi,

 thanks for doing this! As we have proposed ExaBgp[1] in the Dynamic Routing
 blueprint[2], I've added a new column for this speaker in the wiki page. I
 plan to fill it soon.

 ExaBgp was our first choice because we thought that running something in library
 mode would be much easier to deal with (especially the exceptions and
 corner cases) and the code would be much cleaner. But it seems that Ryu BGP
 can also fit this requirement. And having the help of a Ryu developer
 like you turns it into a promising candidate!

 I'll now start working on a proof of concept to run the agent with these
 implementations and see if we need more requirements to compare the
 speakers.

 [1]: https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
 [2]: https://review.openstack.org/#/c/90833/

 Regards,


 On 29 May 2014 18:42, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:
 as per discussions in the l3 subteam meeting today, i started
 a bgp speakers comparison wiki page for this bp.

 https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

 Artem, can you add other requirements as columns?

 as one of ryu developers, i'm naturally biased to ryu bgp.
 i appreciate if someone provides more info for other bgp speakers.

 YAMAMOTO Takashi

 Good afternoon Neutron developers!

 There has been a discussion about dynamic routing in Neutron for the
 past few weeks in the L3 subteam weekly meetings. I've submitted a review
 request of the blueprint documenting the proposal of this feature:
 https://review.openstack.org/#/c/90833/. If you have any feedback or
 suggestions for improvement, I would love to hear your comments and 
 include
 your thoughts in the document.

 Thank you.

 Sincerely,
 Artem Dmytrenko


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Symlinks to new stuff for OpenStack Patching

2014-06-16 Thread Evgeniy L
Hi,

In the case of OpenStack patching you don't need to create any symlinks or
mount new directories; you can just create a new subdirectory in
/var/www/nailgun with the specific version of OpenStack, like
/var/www/nailgun/5.0.1, and then use it as the repository path for new
OpenStack releases.


On Mon, Jun 16, 2014 at 12:50 PM, Matthew Mosesohn mmoses...@mirantis.com
wrote:

 Hi Igor,

 The repo directory its is too large to fit in a docker container and
 work reliably. It is just a symlink inside the repo storage container
 from host:/var/www/nailgun to repo-container:/repo. This /repo folder
 is shared out to containers, such as nginx, and then symlinks are
 created for each subdir in /var/www/nailgun. If you need more links
 without rebuilding your environment, you would need to symbolically
 link your new repository from /var/www/nailgun/newlink to
 /var/lib/docker/devicemapper/mnt/$(docker nspect -f='{{.ID}}'
 __containername__)/rootfs/repo. (replace __containername__ with the
 container you're trying to work on).

 I hope this helps.

 -Matthew

 On Sun, Jun 15, 2014 at 2:32 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
  Hello fuelers,
 
  I'm working on openstack patching for 5.1 and I've met some problems.
  The problems I've met are in repos/puppets installing process.
 
  The problems are almost same, so I describe it on repos example.
 
  The repos data are located in /var/www/nailgun. This folder is mounted
  as /repo into Nginx container. Nginx container has own /var/www/nailgun
  with various symlinks to /repo's content.
 
  So the problem is that we need to add symlinks to newest repos in Nginx
  container. How this problem should be solved? Should our fuel-upgrade
  script add these symlinks or we'll ship new docker containers which
  already contain these symlinks?
 
 
  Thanks,
  Igor
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] replicate messages among servers

2014-06-16 Thread Flavio Percoco

On 16/06/14 17:06 +0800, wpf wrote:

Hi

   If you are using the mongodb, then you can leverage the replication sets, 
for example ,the uri for mongodb

   uri = mongodb://mydb1,mydb2,mydb3:27017/?replicaSet=catalog...

  for some more, you can refer to http://docs.openstack.org/developer/marconi/
installing.html


On Sun, Jun 15, 2014 at 6:14 PM, Peng Gu gp_st...@163.com wrote:

   Hi all,
     Is there a mechanism or plan for maconi to support replocating messages
   among servers to improve availability? 
   I thought this is a key feature to bring maconi into product environment.
   Thanks
  
   Peng Gu
  



Hey Peng,

As mentioned in the previous reply to your email, it's recommended to
rely on the storage backend's availability features instead of implementing
replication ourselves. If you're using MongoDB, you can configure a
replica set and have several marconi nodes pointing to the same replica set.

Hope that helps,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rethink how we manage projects? (was Gate proposal - drop Postgresql configurations in the gate)

2014-06-16 Thread Thierry Carrez
David Kranz wrote:
 [...]
 There is a different way to do this. We could adopt the same methodology
 we have now around gating, but applied to each project on its own
 branch. These project branches would be integrated into master at some
 frequency or when some new feature in project X is needed by project Y. 
 Projects would want to pull from the master branch often, but the push
 process would be less frequent and run a much larger battery of tests
 than we do now.

So we would basically discover the cross-project bugs when we push to
the "master" master branch. I think you're just delaying discovery of
the most complex issues, and pushing the responsibility to resolve them
onto a nonexistent set of people. Adding integration branches only makes
sense if you have an integration team. We don't have one, so we'd fall
back on the development teams to solve the same issues... with a delay.

In our specific open development setting, delaying is bad because you
don't have a static set of developers that you can assume will be on
call ready to help with what they have written a few months later:
shorter feedback loops are key to us.

 Doing this would have the following advantages:
 
 1. It would be much harder for a race bug to get in. Each commit would
 be tested many more times on its branch before being merged to master
 than at present, including tests specialized for that project. The
 qa/infra teams and others would continue to define acceptance at the
 master level.
 2. If a race bug does get in, projects have at least some chance to
 avoid merging the bad code.
 3. Each project can develop its own gating policy for its own branch
 tailored to the issues and tradeoffs it has. This includes focus on
 spending time running their own tests. We would no longer run a complete
 battery of nova tests on every commit to swift.
 4. If a project branch gets into the situation we are now in:
  a) it does not impact the ability of other projects to merge code
  b) it is highly likely the bad code is actually in the project so
 it is known who should help fix it
  c) those trying to fix it will be domain experts in the area that
 is failing
 5. Distributing the gating load and policy to projects makes the whole
 system much more scalable as we add new projects.
 
 Of course there are some drawbacks:
 
 1. It will take longer, sometimes much longer, for any individual commit
 to make it to master. Of course if a super-serious issue made it to
 master and had to be fixed immediately it could be committed to master
 directly.
 2. Branch management at the project level would be required. Projects
 would have to decide gating criteria, timing of pulls, and coordinate
 around integration to master with other projects.
 3. There may be some technical limitations with git/gerrit/whatever that
 I don't understand but which would make this difficult.
 4. It makes the whole thing more complicated from a process standpoint.

An extra drawback is that you can't really do CD anymore, because your
"master" master branch gets big chunks of new code in one go at push time.

 I have used this model in previous large software projects and it worked
 quite well. This may also be somewhat similar to what the linux kernel
 does in some ways.

Please keep in mind that some techniques which are perfectly valid (and
even recommended) when you have a captive set of developers just can't
work in our open development setting. Some techniques which work
perfectly for a release-oriented product just don't cut it when you also
want the software to be consumable in a continuous delivery fashion. We
certainly can and should learn from other experiences, but we also need
to recognize our challenges are unique and might call for some unique
solution, with its own drawbacks and benefits.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-16 Thread Day, Phil
Beyond what is and isn’t technically possible at the file system level there is 
always the problem that the user may have more data than can fit into the 
reduced disk.

I don’t want to take away useful functionality from folks if there are cases 
where it already works – mostly I just want to improve the user experience, and 
 to me the biggest problem here is the current failure mode where the user 
can’t tell if the request has been tried and failed, or just not happened at 
all for some other reason.

What if we introduced a new state of “Resize_failed” from which the only 
allowed operations are “resize_revert” and delete – so the user can at least 
get some feedback on the cases that can’t be supported ?
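
To make that concrete, the kind of restriction I have in mind is roughly the
following (a purely illustrative sketch, not Nova code; the names are made up):

    # Illustrative sketch: a RESIZE_FAILED state from which the only allowed
    # follow-up actions are revert and delete.
    RESIZE_FAILED = 'resize_failed'

    ALLOWED_ACTIONS = {
        RESIZE_FAILED: {'resize_revert', 'delete'},
    }

    def check_action(vm_state, action):
        allowed = ALLOWED_ACTIONS.get(vm_state)
        if allowed is not None and action not in allowed:
            raise ValueError("action %r is not allowed in state %r"
                             % (action, vm_state))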

From: Aryeh Friedman [mailto:aryeh.fried...@gmail.com]
Sent: 13 June 2014 18:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?

Also, ZFS needs to know what is on the guest. For example bhyve (the only working
hypervisor for BSD currently [vbox kind of also works]) stores the backing store (unless
bare metal) as a single block file. It is impossible to make that non-opaque to
the outside world unless you can run commands on the instance.

On Fri, Jun 13, 2014 at 11:53 AM, Darren J Moffat 
darren.mof...@oracle.com wrote:


On 06/13/14 16:37, Daniel P. Berrange wrote:
The xenapi implementation only works on ext[234] filesystems. That rules
out *BSD, Windows and Linux distributions that don't use ext[234]. RHEL7
defaults to XFS for instance.
Presumably it'll have a hard time if the guest uses LVM for its image
or does luks encryption, or anything else that's more complex than just
a plain FS in a partition.

For example ZFS, which doesn't currently support device removal (except for 
mirror detach) or device size shrink (but does support device grow).  ZFS does 
support file system resize but file systems are just logical things within a 
storage pool (made up of 1 or more devices) so that has nothing to do with the 
block device size.

--
Darren J Moffat


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] what to do with unit test failures from ironic api contract

2014-06-16 Thread Day, Phil

From: David Shrewsbury [mailto:shrewsbury.d...@gmail.com]
Sent: 14 June 2014 02:10
To: OpenStack Development Mailing List (not for usage questions)
Cc: Shrewsbury, David; Van Der Veen, Devananda
Subject: Re: [openstack-dev] [nova][ironic] what to do with unit test failures 
from ironic api contract

Hi!

On Fri, Jun 13, 2014 at 9:30 AM, Day, Phil 
philip@hp.com wrote:
Hi Folks,

A recent change introduced a unit test to “warn/notify developers” when they 
make a change which will break the out of tree Ironic virt driver:   
https://review.openstack.org/#/c/98201

Ok – so my change (https://review.openstack.org/#/c/68942) broke it as it adds 
some extra parameters to the virt driver power_off() method – and so I now feel 
suitably warned and notified – but am not really clear what I'm meant to do 
next.

So far I’ve:

-  Modified the unit test in my Nova patch so it now works

-  Submitted an Ironic patch to add the extra parameters 
(https://review.openstack.org/#/c/99932/)

As far as I can see there’s no way to create a direct dependency from the 
Ironic change to my patch – so I guess it's down to the Ironic folks to wait and 
accept it in the correct sequence ?

Thanks for bringing up this question.

98201 was added at the suggestion of Sean Dague during a conversation
in #openstack-infra to help prevent terrible breakages that affect the gate.
What wasn't discussed, however, is how we should coordinate these changes
going forward.

As for your change, I think what you've done is exactly what we had hoped would
be done. In your particular case, I don't see any need for Nova dev's to not go 
ahead
and approve 68942 *before* 99932 since defaults are added to the arguments. The
question is, how do we coordinate such changes if a change DOES actually break
ironic?
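
For reference, the backward-compatible pattern being relied on here is roughly
the following (a sketch rather than the actual Nova/Ironic code; the parameter
names are illustrative):

    # Sketch of the compatibility pattern: new arguments are added with
    # defaults, so existing callers that pass only the instance keep working
    # and drivers can pick up the extra parameters as they are updated.
    class ComputeDriver(object):
        def power_off(self, instance, timeout=0, retry_interval=0):
            """Power off the specified instance.

            :param timeout: seconds to wait for a graceful shutdown
            :param retry_interval: seconds between shutdown retries
            """
            raise NotImplementedError()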

One suggestion is that if test_ironic_api_contracts.py
(https://review.openstack.org/#/c/68942/15/nova/tests/virt/test_ironic_api_contracts.py)
is ever changed, Nova require
the Ironic PTL (or a core dev) to vote before approving. That seems sensible to 
me.
There might be an easier way of coordinating that I'm overlooking, though.

-Dave
--
David Shrewsbury (Shrews)



Hi Dave,

I agree that co-ordination is the key here – if the Ironic change is approved 
first then Nova and Ironic will continue to work, but there is a risk that the 
Nova change gets blocked / modified after the Ironic commit which would be 
painful.

If the Nova change is committed first then Ironic will of course be broken 
until its change is committed.

I’ll add a pointer and a note to the corresponding change in each of the 
patches.

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FUEL] Authentication in FUEL meeting notes

2014-06-16 Thread Alexander Kislitsky
16.07.2014 meeting results:

Participants:
L. Oles
N. Markov
V. Kramskikh
A. Kislitsky

Acceptance points of authentication in FUEL:

   1. Each project (nailgun, CLI, UI, OSTF) should authenticate in Keystone
      and use the received token in requests to other services.
   2. Authentication can be enabled or disabled by parameters in the settings
      for each project.
   3. We should have test coverage with and without authentication.
   4. We should implement a default strategy (policy) for API requests:
      authentication required, but some requests should be excluded from the
      authentication check (for example adding a node, or checking whether
      authentication is enabled in the nailgun API).
   5. The Nailgun API should handle Keystone tokens.
   6. We should have human-oriented error messages for Keystone and
      authentication errors.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Symlinks to new stuff for OpenStack Patching

2014-06-16 Thread Igor Kalnitsky
Hello,

Thanks for input, guys. So we need to prepare infrastructure for that.

- Igor


On Mon, Jun 16, 2014 at 12:23 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 In case of OpenStack patching you don't need to create any symlinks and
 mount new directories, you can just create new subdirectory in
 /var/www/nailgun with specific version of openstack, like
 /var/www/nailgun/5.0.1. And then use it as a repository path for new
 OpenStack releases.


 On Mon, Jun 16, 2014 at 12:50 PM, Matthew Mosesohn mmoses...@mirantis.com
  wrote:

 Hi Igor,

 The repo directory its is too large to fit in a docker container and
 work reliably. It is just a symlink inside the repo storage container
 from host:/var/www/nailgun to repo-container:/repo. This /repo folder
 is shared out to containers, such as nginx, and then symlinks are
 created for each subdir in /var/www/nailgun. If you need more links
 without rebuilding your environment, you would need to symbolically
 link your new repository from /var/www/nailgun/newlink to
 /var/lib/docker/devicemapper/mnt/$(docker nspect -f='{{.ID}}'
 __containername__)/rootfs/repo. (replace __containername__ with the
 container you're trying to work on).

 I hope this helps.

 -Matthew

 On Sun, Jun 15, 2014 at 2:32 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
  Hello fuelers,
 
  I'm working on openstack patching for 5.1 and I've met some problems.
  The problems I've met are in repos/puppets installing process.
 
  The problems are almost same, so I describe it on repos example.
 
  The repos data are located in /var/www/nailgun. This folder is mounted
  as /repo into Nginx container. Nginx container has own /var/www/nailgun
  with various symlinks to /repo's content.
 
  So the problem is that we need to add symlinks to newest repos in Nginx
  container. How this problem should be solved? Should our fuel-upgrade
  script add these symlinks or we'll ship new docker containers which
  already contain these symlinks?
 
 
  Thanks,
  Igor
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-16 Thread Daniel P. Berrange
On Mon, Jun 16, 2014 at 05:04:51AM -0400, Eoghan Glynn wrote:
 How about we rely instead on the values and attributes that
 actually make our community strong?
 
 Specifically: maturity, honesty, and a self-correcting nature.
 
 How about we simply require that each candidate for a TC or PTL
 election gives a simple undertaking in their self-nomination mail,
 along the lines of:
 
 I undertake to respect the election process, as required by
 the community code of conduct.
 
 I also undertake not to engage in campaign practices that the
 community has considered objectionable in the past, including
 but not limited to, unsolicited mail shots and private campaign
 events.
 
 If my behavior during this election period does not live up to
 those standards, please feel free to call me out on it on this
 mailing list and/or withhold your vote.

I like this proposal because it focuses on the carrot rather than
the stick, which is ultimately better for community cohesiveness
IMHO. It is already part of our community ethos that we can call
people out to publicly debate / stand up & justify any & all
issues affecting the project, whether they be related to the code,
architecture, or non-technical issues such as electioneering
behaviour.

 We then rely on:
 
   (a) the self-policing nature of an honest, open community
 
 and:
 
   (b) the maturity and sound judgement within that community
   giving us the ability to quickly spot and disregard any
   frivolous reports of mis-behavior
 
 So no need for heavy-weight inquisitions, no need to interrupt the
 election process, no need for handing out of stiff penalties such
 as termination of membership.

Before jumping headlong for a big stick to whack people with, I think
I'd expect to see examples of problems we've actually faced (as opposed
to vague hypotheticals), and a clear illustration that a self-policing
approach to the community interaction failed to address them. I've not
personally seen/experienced any problems that are so severe that they'd
suggest we need the ability to kick someone out of the community for
sending email!

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Sean Dague
On 06/16/2014 04:33 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 [...]
 C - If we can't make it harder to get races in, perhaps we can make it
 easier to get races out. We have pretty solid emergent statistics from
 every gate job that is run as check. What if set a policy that when a
 gate queue gets a race:
  - put a zuul stop all merges and checks on all involved branches
 (prevent further damage, free capacity for validation)
  - figure out when it surfaced
  - determine its not an external event
  - revert all involved branches back to the point where they looked
 good, as one large operation
- run that through jenkins N (e.g. 458) times in parallel.
- on success land it
  - go through all the merges that have been reverted and either
 twiddle them to be back in review with a new patchset against the
 revert to restore their content, or alternatively generate new reviews
 if gerrit would make that too hard.
 
 One of the issues here is that gate queue gets a race is not a binary
 state. There are always rare issues, you just can't find all the bugs
 that happen 0.1% of the time. You add more such issues, and at some
 point they either add up to an unacceptable level, or some other
 environmental situation suddenly increases the odds of some old rare
 issue to happen (think: new test cluster with slightly different
 performance characteristics being thrown into our test resources). There
 is no single incident you need to find and fix, and during which you can
 clearly escalate to defCon 1. You can't even assume that a gate
 situation was created in the set of commits around when it surfaced.
 
 So IMHO it's a continuous process : keep looking into rare issues all
 the time, to maintain them under the level where they become a problem.
 You can't just have a specific process that kicks in when the gate
 queue gets a race.

Definitely agree. I also think part of the issue is we get emergent
behavior once we tip past some cumulative failure rate. Much of that
emergent behavior we are coming to understand over time. We've done
corrections like clean check and sliding gate window to impact them.

It's also that a new issue tends to take 12 hrs to see and figure out if
it's a ZOMG issue, and 3 - 5 days to see if it's any lower level of
severity. And given that we merge 50 - 100 patches a day, across 40
projects, across branches, the rollback would be  'interesting'.

-Sean.

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] ML2 support in Fuel deployment risks

2014-06-16 Thread Mike Scherbakov
Fuelers, Andrew,
I've talked to Sergey V. today about ML2 support in Fuel. Our current
approach [1] is to port the upstream puppet module for Neutron, which has
support for ML2; however, our Neutron module has significantly diverged from
the upstream one (at least in its Neutron HA deployment capabilities), as far
as I understand. Basically, there is a risk that we will get an unstable
Neutron deployment in 5.1. Also, until we have ML2, we are blocking others who
rely on it, for example Mellanox.

To mitigate the risk, there is a suggestion to start the work in two
parallel tracks: one is to continue porting the upstream puppet module, and
the other is to port only the ML2 part into the Fuel Neutron puppet module.
This will not take much time, but will allow us to have 5.1 reliable and with
ML2 in case of instability after porting the external module.

Your opinion on this?

[1] https://review.openstack.org/#/c/99807/1/specs/5.1/ml2-neutron.rst
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-16 Thread Chmouel Boudjnah
Sean Dague s...@dague.net writes:

 bashate ftw.

+1 to bashate

Chmouel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar-UI] Two new tests in jenkins

2014-06-16 Thread Peter Belanyi
Hi all,

Just some info, recently two new test jobs have been added to jenkins for 
tuskar-ui:
- selenium for running qunit tests (javascript unit tests)
- jshint for javascript linting

The jobs are currently non-voting, I'll change them to voting if they seem to 
work fine.

-- 
Kind Regards,
Péter Belányi
Software Engineer
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45 Brno, Czech Republic

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] REST API - entity level validation

2014-06-16 Thread Avishay Balderman
Salvatore,
Is Neutron going to follow Nova?
https://blueprints.launchpad.net/nova/+spec/nova-api-validation-fw

Avishay

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Monday, June 16, 2014 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] REST API - entity level validation

Avishay,

what you say here is correct.
However, as we are in the process of moving to Pecan as REST API framework I 
would probably refrain from adding new features to it at this stage.

Therefore, even if far from ideal, this kind of validation should perhaps be 
performed in the DB layer. I think this already happens for several API 
resources.

Salvatore

On 5 June 2014 13:01, Avishay Balderman 
avish...@radware.com wrote:
Hi
With the current REST API engine in neutron we can declare attribute
validations.
We have a rich set of validation functions 
https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py
However we do not have the concept of entity level validation.

Example:
I have an API ‘create-something’ and Something is an entity having 2 attributes:
Something {
  Attribute A
 Attribute B
}
And according to the business logic A must be greater than B


As of today our framework cannot handle this kind of validation, so the call
goes into a lower layer of neutron and must be validated there.
Example: https://review.openstack.org/#/c/93871/9

With this we have validations implemented across multiple layers. I think we
would be better off having the validations in one layer.
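
To illustrate, an entity-level check like that could be as simple as the
following sketch (a hypothetical helper, not an existing Neutron API):

    # Hypothetical entity-level validator: runs after per-attribute validation
    # and checks relationships between attributes of the whole resource body.
    def validate_something(body):
        """Return None if the entity is valid, otherwise an error message."""
        a = body.get('A')
        b = body.get('B')
        if a is None or b is None:
            return "both 'A' and 'B' must be provided"
        if not a > b:
            return "'A' must be greater than 'B'"
        return None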

Thanks

Avishay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] VMware ESX Driver Deprecation

2014-06-16 Thread Davanum Srinivas
+1 to Option #2.

-- dims

On Sun, Jun 15, 2014 at 11:15 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 In the Icehouse cycle it was decided to deprecate the VMware ESX driver. The
 motivation for the decision was:

 The driver is not validated by Minesweeper
 It is not clear if there are actually any users of the driver

 Prior to jumping into the proposal we should take into account that the
 current ESX driver does not work with the following branches:

 Master (Juno)
 Icehouse
 Havana

 The above are due to VC features that were added over the course of these
 cycles.

 On the VC side the ESX can be added to a cluster and the running VM’s will
 continue to run. The problem is how they are tracked and maintained in the
 Nova DB.

 Option 1: Moving the ESX(s) into a nova managed cluster. This would require
 the nova DB entry for the instance running on the ESX to be updated to be
 running on the VC host. If the VC host restarts at point during the above
 then all of the running instances may be deleted (this is due to the fact
 that _destroy_evacuated_instances is invoked when a nova compute is started
 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L673).
 This would be disastrous for a running deployment.

 If we do decide to go for the above option we can perform a cold migration
 of the instances from the ESX hosts to the VC hosts. The fact that the same
 instance will be running on the ESX would require us to have a ‘noop’ for
 the migration. This can be done by configuration variables but that will be
 messy. This option would require code changes.

 Option 2: Provide the administrator with tools that will enable a migration
 of the running VM’s.

 A script that will import OpenStack VM’s into the database – the script will
 detect VM’s running on a VC and import them to the database.
 A script that will delete VM’s running on a specific host

 The admin will use these as follows:

 Invoke the deletion script for the ESX
 Add the ESX to a VC
 Invoke the script for importing the OpenStack VM’s into the database
 Start the nova compute with the VC driver
 Terminate all Nova computes with the ESX driver

 This option requires the addition of the scripts. The advantage is that it
 does not touch any of the running code and is done out of band. A variant of
 option 2 would be to have a script that updates the host for the ESX VM’s to
 the VC host.

 Due to the fact that the code is not being run at the moment I am in favor
 of the external scripts as it will be less disruptive and not be on a
 critical path. Any thoughts or comments?

 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-16 Thread Gordon Sim

On 06/13/2014 02:06 PM, Ihar Hrachyshka wrote:

On 10/06/14 15:40, Alexei Kornienko wrote:

On 06/10/2014 03:59 PM, Gordon Sim wrote:



I think there could be a lot of work required to significantly
improve that driver, and I wonder if that would be better spent
on e.g. the AMQP 1.0 driver which I believe will perform much
better and will offer more choice in deployment.



I agree with you on this. However I'm not sure that we can make such
a decision. If we focus on the amqp driver only we should mention it
explicitly and deprecate the qpid driver completely. There is no point
in keeping a driver that is not really functional.


The driver is functional. It may not be as efficient as the
alternatives, but that's not a valid reason to deprecate it.


The question in my view is what the plan is for ongoing development.

Will the driver get better over time, or is it likely to remain as is at 
best (or even deteriorate)?


Choice is good, but every choice adds to the maintenance burden, in 
testing against regressions if nothing else.


I think an explicit decision about the future is beneficial, whatever 
the decision may be.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Sean Dague
Hacking 0.9 series was released pretty late for Juno. The entire check
queue was flooded this morning with requirements proposals failing pep8
because of it (so at 6am EST we were waiting 1.5 hrs for a check node).

The previous soft policy with pep8 updates was that we set a pep8
version basically release week, and changes stopped being done for style
after first milestone.

I think in the spirit of that we should revert the hacking requirements
update back to the 0.8 series for Juno. We're past milestone 1, so
shouldn't be working on style only fixes at this point.

Proposed review here - https://review.openstack.org/#/c/100231/

I also think in future hacking major releases need to happen within one
week of release, or not at all for that series.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Weekly community meeting - 06/16/2014

2014-06-16 Thread Renat Akhmerov
Hi,

This is a reminder about another community IRC meeting at #openstack-meeting 
today at 16.00 UTC.

The agenda is the usual:
Review action items
Current status (quickly by team members)
Further plans
Open discussion

Looking forward to seeing you there.

[0] https://wiki.openstack.org/wiki/Meetings/MistralAgenda

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help on developing Neutron application to receive messages from Open Day Light

2014-06-16 Thread venkatesh.nag

Hi,

I am new to both OpenStack and OpenDaylight. I have to write an application /
plugin to receive messages from the OpenDaylight Neutron interface. Can you
please provide a reference or an example of how to go about it?

Many thanks,
Venkatesh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Cinder] Review days? (open to ANYBODY and EVERYBODY)

2014-06-16 Thread Kerr, Andrew
+1

Andrew Kerr


On 6/13/14, 10:30 AM, Duncan Thomas duncan.tho...@gmail.com wrote:

Same as Jay, for much the same reasons. Having a fixed calendar time
makes it easy for me to put up a 'do not disturb' sign.

On 13 June 2014 05:10, Jay Bryant jsbry...@electronicjungle.net wrote:
 John,

 +2

 I am guilty of falling behind on reviews. Pulled in to a lot of other
stuff
 since the summit ... and before.

 Having prescribed time on my calendar is a good idea.  Just put it on my
 calendar.

 Jay

 On Jun 12, 2014 10:49 PM, John Griffith john.griff...@solidfire.com
 wrote:

 Hey Everyone,

 So I've been noticing some issues with regards to reviews in Cinder
 lately, namely we're not keeping up very well.  Most of this is a math
 problem (submitters > reviewers).  We're up around 200+ patches in the
 queue, and a large number of them have no negative feedback but have just
 been waiting patiently (some > 2 months).

 Growth is good, new contributors are FANTASTIC... but stale
submissions in
 the queue are BAD, and I hate for people interested in contributing to
 become discouraged and just go away (almost as much as I hate emails
asking
 me to review patches).

 I'd like to propose we consider one or two review days a week for a
while
 to try and work on our backlog.  I'd like to propose that on these
days we
 make an attempt to NOT propose new code (or at least limit it to
bug-fixes
 [real bugs, not features disguised as bugs]) and have an agreement from
 folks to focus on actually doing reviews and using IRC to collaborate
 together and knock some of these out.

 We did this sort of thing over a virtual meetup and it was really
 effective, I'd like to see if we can't do something for a brief
duration
 over IRC.

 I'm thinking we give it a test run, set aside a few hours next Wed
morning
 to start (coinciding with our Cinder weekly meeting since many folks
around
 that morning across TZ's etc) where we all dedicate some time prior to
the
 meeting to focus exclusively on helping each other get some reviews
knocked
 out.  As a reminder Cinder weekly meeting is 16:00 UTC.

 Let me know what you all think, and keep in mind this is NOT limited to
 just current regular Block-Heads but anybody in the OpenStack
community
 that's willing to help out and of course new reviewers are MORE than
 welcome.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Horizon] Exposing and showing associated resource_id within alarms/combined alarms

2014-06-16 Thread Eoghan Glynn


 Hello guys,
 
 I was taking a look at the proposed alarm-page designs [1] for the bp:
 https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page
 and I saw that the alarms table has a column named “Resource Name”. The
 intention of that column is to show the resources associated with an alarm
 (also for the case of more complex alarms or “combined alarm” if you prefer)
 but right now, ceilometer does only retrieve the associated resources id’s
 inside the “query” param (at least, for the threshold alarms, in the case of
 combined alarms you won’t get any resource id).

I don't really understand what you're getting at with ...

 ceilometer does only retrieve the associated resources id’s inside
  the “query” param

But I'll take a guess that you're concerned that ceilometer doesn't
map eagerly from resource ID to resource names?

Ok, let's back up a bit here and level-set ...

* A ceilometer alarm *may* include a resource-based constraint

* The resource-based constraint *may* be based on a resource ID,
  or else on any other aspect of resource metadata that ceilometer
  persists (e.g. in the case of a Heat autoscaling alarm, resource
  ID wouldn't even enter into the equation, instead a group of
  instances is referred to via their common user metadata)

* Either way, the resource constraint isn't evaluated until the
  stats query underpinning the alarm is evaluated

* It has to work this way, as for a start the set of matching
  resources may be different at the time the alarm is evaluated
  as compared to when the alarm was defined

 From the Horizon POV, getting the resource name represents a lot of work and
 a huge impact in performance because, if we choose to show this “Resource
 Name” column, we would need to do, for each alarm retrieved :
 
 1. Check the query parameter and then extract the resource id

Note the resource ID is not a required constraint. Not all alarms will
include this constraint. So it shouldn't be assumed in the UI.
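
To make the distinction concrete, here are two illustrative threshold_rule
payloads (the field names follow the existing alarm API, but the UUID and the
metadata key/value are invented for the example):

    # (a) alarm constrained to a single resource via resource_id
    rule_by_resource = {
        'meter_name': 'cpu_util',
        'comparison_operator': 'gt',
        'threshold': 70.0,
        'query': [{'field': 'resource_id', 'op': 'eq',
                   'value': 'e1d1f9c7-93a4-4b40-9c2b-28f102f35dc6'}],
    }

    # (b) autoscaling-style alarm constrained only by user metadata; no
    # resource_id appears anywhere, so there is no single resource name
    # for the UI to show
    rule_by_metadata = {
        'meter_name': 'cpu_util',
        'comparison_operator': 'gt',
        'threshold': 70.0,
        'query': [{'field': 'metadata.user_metadata.server_group', 'op': 'eq',
                   'value': 'my-scaling-group'}],
    }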
 
 2. Then, depending on the type of resource (because it could be a vm, an
 image, etc.) ask for its name to the appropriate service (not sure if that
 is right way of doing it)
 
 3. Save that name and then show it on the UI
 
 
 
 In the case of combined alarms, not only we’ll have to do all that but also:
 
 · Extract the alarms_ids used for the alarm combination (which can be also
 combined alarms, so take that into account)
 
 · For each threshold alarm, run previous 1-3 steps
 
 o In case of step 3, instead of show the name to the UI, store it into a list
 of resource names that needs to be showed after finishing the processing
 
 
 
 As you can see, for an alarm table of, let’s say 10 combined alarms (which it
 could be a valid use case), we would need to do one call to alarm-list, 20
 calls to alarm-shows and then other 20 calls to each of the service that
 could give us the name of the resources.
 
 
 
 I’m seeing a couple of possible solutions:
 
 1) not rendering the “Resource name” column :) (not actually an option)
 
 a. but I know it is the “coward” solution and I also know that showing and
 filtering alarms based on resources it’s a good and necessary feature.

This seems like the only sane solution IIUC, because as stated above there
is *no assumption* of a 1:1 mapping between alarms and resources.

Let me know if I've misunderstood what you're getting at here,

Cheers,
Eoghan
 
 2) Expose the resource_id as we do with “meter_name”, although we still need
 to ask for the resource name on the horizon side
 
 a. For the combined alarms, expose a list of resources and (why not) a list
 of meter associated with the combined alarms.
 
 3) Save the resource id in the alarm table in a separated column and then
 retrieve it
 
 a. For combined alarms, save a list of resource_id’s?
 
 4) Any solution that you consider :)
 
 
 
 
 
 Any thoughts around this?
 
 
 
 Thanks for all the help!
 
 Christian
 
 
 
 [1]:
 http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] no tap interface on compute node

2014-06-16 Thread Kashyap Chamarthy
On Mon, Jun 16, 2014 at 11:31:50AM +0530, abhishek jain wrote:
 Hi
 
 I'm not able to get tap interface up on compute node when I boot VM
 from controller node onto compute node.  Please help regarding this.

For such usage questions, please post them (with more verbose details,
including your Neutron investigation) on ask.openstack.org.

That said, I use these scripts to debug Neutron:

https://raw.github.com/larsks/neutron-diag/

Neutron requires systematic debugging as there are plenty of network
devices involved.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] no tap interface on compute node

2014-06-16 Thread abhishek jain
Hi

I posted it on ask.openstack.org but I got no response.



On Mon, Jun 16, 2014 at 6:41 PM, Kashyap Chamarthy kcham...@redhat.com
wrote:

 On Mon, Jun 16, 2014 at 11:31:50AM +0530, abhishek jain wrote:
  Hi
 
  I'm not able to get tap interface up on compute node when I boot VM
  from controller node onto compute node.  Please help regarding this.

 For such usage questions, please post them (with more verbose details,
 including your Neutron investigation) on ask.openstack.org.

 That said, I use these scripts to debug Neutron:

 https://raw.github.com/larsks/neutron-diag/

 Neutron requires systematic debugging as there are plenty of network
 devices involved.

 --
 /kashyap

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][messaging] messaging vs. messagingv2

2014-06-16 Thread Ihar Hrachyshka

Hi all,

I'm currently pushing Neutron to oslo.messaging, and while at it, a
question popped up.

So in oslo-rpc, we have the following notification drivers available:
neutron.openstack.common.notifier.log_notifier
neutron.openstack.common.notifier.no_op_notifier
neutron.openstack.common.notifier.rpc_notifier2
neutron.openstack.common.notifier.rpc_notifier
neutron.openstack.common.notifier.test_notifier

And in oslo.messaging, we have:
oslo.messaging.notify._impl_log:LogDriver
oslo.messaging.notify._impl_noop:NoOpDriver
oslo.messaging.notify._impl_messaging:MessagingV2Driver
oslo.messaging.notify._impl_messaging:MessagingDriver
oslo.messaging.notify._impl_test:TestDriver

My understanding is that they map to each other as in [1].

So atm Neutron uses rpc_notifier from oslo-rpc, so I'm going to
replace it with MessagingDriver. So far so good.
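
For what it's worth, here's a rough sketch of the wiring (not Neutron's actual
code; publisher_id and topic are placeholders) showing where the driver name
comes in -- per the setup.cfg mapping in [1], 'messaging' should load
MessagingDriver and 'messagingv2' MessagingV2Driver:

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  publisher_id='network.controller',
                                  driver='messaging',   # or 'messagingv2'
                                  topic='notifications')
    notifier.info({}, 'port.create.end', {'port_id': 'abc123'})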

But then I've checked docstrings for MessagingDriver and
MessagingV2Driver [2], and the following looks suspicious to me. For
MessagingDriver, it's said:

This driver should only be used in cases where there are existing
consumers deployed which do not support the 2.0 message format.

This sounds like MessagingDriver is somehow obsolete, and we want to
use MessagingV2Driver unless forced to. But I don't get what those
consumers are. Are these other projects that interact with us via
the messaging bus?

Another weird thing is that it seems that no other project is actually
using MessagingV2Driver (at least those that I've checked). Is it even
running in the wild?

Can oslo devs elaborate on that topic?

Thanks,
/Ihar

[1]: https://review.openstack.org/#/c/100013/3/setup.cfg
[2]:
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/notify/_impl_messaging.py

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-16 Thread Andrew Laski

+1

On 06/13/2014 06:40 PM, Michael Still wrote:

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews
on API changes are excellent, and he's been part of the team that has
driven the new API work we've seen in recent cycles forward. Ken'ichi
has also been reviewing other parts of the code base, and I think his
reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

   
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

   https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

   http://www.stackalytics.com/?module=nova-group&user_id=oomichi

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Thierry Carrez
Sean Dague wrote:
 Hacking 0.9 series was released pretty late for Juno. The entire check
 queue was flooded this morning with requirements proposals failing pep8
 because of it (so at 6am EST we were waiting 1.5 hrs for a check node).
 
 The previous soft policy with pep8 updates was that we set a pep8
 version basically release week, and changes stopped being done for style
 after first milestone.
 
 I think in the spirit of that we should revert the hacking requirements
 update back to the 0.8 series for Juno. We're past milestone 1, so
 shouldn't be working on style only fixes at this point.
 
 Proposed review here - https://review.openstack.org/#/c/100231/
 
 I also think in future hacking major releases need to happen within one
 week of release, or not at all for that series.

We may also have reached a size where changing style rules is just too
costly, whatever the moment in the cycle. I think it's good that we have
rules to enforce a minimum of common style, but the added value of those
extra rules is limited, while their impact on the common gate grows as
we add more projects.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-16 Thread Mark McLoughlin
On Mon, 2014-06-16 at 10:56 +0100, Daniel P. Berrange wrote:
 On Mon, Jun 16, 2014 at 05:04:51AM -0400, Eoghan Glynn wrote:
  How about we rely instead on the values and attributes that
  actually make our community strong?
  
  Specifically: maturity, honesty, and a self-correcting nature.
  
  How about we simply require that each candidate for a TC or PTL
  election gives a simple undertaking in their self-nomination mail,
  along the lines of:
  
  I undertake to respect the election process, as required by
  the community code of conduct.
  
  I also undertake not to engage in campaign practices that the
  community has considered objectionable in the past, including
  but not limited to, unsolicited mail shots and private campaign
  events.
  
  If my behavior during this election period does not live up to
  those standards, please feel free to call me out on it on this
  mailing list and/or withhold your vote.
 
 I like this proposal because it focuses on the carrot rather than
 the stick, which is ultimately better for community cohesiveness
 IMHO.

I like it too. A slight tweak of that would be to require candidates to
sign the pledge publicly via an online form. We could invite the
community as a whole to sign it too in order to have candidates'
supporters covered.

  It is already part of our community ethos that we can call
 people out to publicly debate / stand up & justify any & all
 issues affecting the project whether they be related to the code,
 architecture, or non-technical issues such as electioneering
 behaviour.
 
  We then rely on:
  
(a) the self-policing nature of an honest, open community
  
  and:
  
(b) the maturity and sound judgement within that community
giving us the ability to quickly spot and disregard any
frivolous reports of mis-behavior
  
  So no need for heavy-weight inquisitions, no need to interrupt the
  election process, no need for handing out of stiff penalties such
  as termination of membership.
 
 Before jumping headlong for a big stick to whack people with, I think
 I'd expect to see examples of problems we've actually faced (as opposed
 to vague hypotheticals), and a clear illustration that a self-policing
 approach to the community interaction failed to address them. I've not
 personally seen/experienced any problems that are so severe that they'd
 suggest we need the ability to kick someone out of the community for
 sending email!

Indeed. This discussion is happening in a vacuum for many people who do
not know the details of the private emails and private campaign events
which happened in the previous cycle.

The only one I know of first hand was a private email where the
recipients quickly responded saying the email was out of line and the
original sender apologized profusely. People can make mistakes in good
faith and if we can deal with it quickly and maturely as a community,
all the better.

In this example, the sender's apology could have been followed up with
look, here's our code of conduct; sign it now, respect it in the
future, and let that be the end of the matter.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Salvatore Orlando
I will probably be unable, as usual, to attend today's CI meeting (falls
right around my dinner time).
I think it's a good idea to start keeping track of the status of the
various CI systems, but I feel the etherpad will not work very well in the
long term.

However, it would be great if we could start devising a solution for having
health reports from the various CI systems.
This report should report the following kind of information:
- timestamp of last run
- timestamp of last vote (a system might start a job which then gets aborted
due to CI infra problems)
- % of success vs failures (not sure how important is that one but provides
a metric of comparison with upstream jenkins)
- % of disagreements with jenkins (this might allow us to quickly spot
those CI systems which are randomly -1'ing patches)

The percentage metrics might be taken over a 48 hours or 7 days interval,
or both.
Does this idea sound reasonable?
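
Just to sketch what I mean (the record shape below is made up -- e.g. as
scraped from Gerrit comments), the per-CI numbers could be as simple as:

    from datetime import datetime, timedelta

    def ci_health(records, ci_name, window_days=7):
        # records: [{'ci': ..., 'timestamp': datetime, 'vote': +1/-1,
        #            'jenkins_vote': +1/-1}, ...]
        cutoff = datetime.utcnow() - timedelta(days=window_days)
        votes = [r for r in records
                 if r['ci'] == ci_name and r['timestamp'] >= cutoff]
        if not votes:
            return {'last_vote': None, 'success': None, 'disagreement': None}
        successes = sum(1 for r in votes if r['vote'] > 0)
        disagree = sum(1 for r in votes if r['vote'] * r['jenkins_vote'] < 0)
        return {
            'last_vote': max(r['timestamp'] for r in votes),
            'success': 100.0 * successes / len(votes),
            'disagreement': 100.0 * disagree / len(votes),
        }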

Also, regarding [1]. I agree that more is always better...  but I would
like a minimum required set of tests to be enforced.
Is this something that can be achieved?

Salvatore

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting




On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

 hi,

  My initial analysis of Neutron 3rd Party CI is here [1]. This was
  somewhat correlated with information from DriverLog [2], which was
  helpful to put this together.

 i updated the etherpad for ofagent.
 currently a single CI system is running tests for both of ofagent and ryu.
 is it ok?

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Specs repo

2014-06-16 Thread Ana Krivokapic

On 06/11/2014 05:22 PM, Jason Rist wrote:
 On Wed 11 Jun 2014 06:54:53 AM MDT, Ana Krivokapic wrote:
 Hi Horizoners,

 A lot of other projects have already adopted and started using a specs
 repo. Do we want to have one for Horizon?

 I'm still not quite clear how this works, but I'm open to it if it 
 makes things a little nicer.


Basically, specs repos are meant to hold specs - design documents of
sorts. They are used to describe and discuss the implementation design
of a particular feature. In other words, we would use the existing
Gerrit infrastructure to review blueprints. For more detailed
information, please see past mailing list threads [1], [2], and e.g.
nova-specs repo description [3].

Here are a few pros and cons with regard to having a specs repo. These
are just the ones that were the most obvious to me; feel free to add
your own.

Pros:
* This process makes sure implementation design for a feature happens
_before_ the actual implementation. This is very important as it
potentially saves developers from wasting a lot of time implementing a
feature in a suboptimal way.
* By making reviewers explicitly focus on the more essential aspects of
a feature design, we make sure that the feature gets a better/more
efficient implementation.
* We will have a feature design recorded for posterity. I don't think I
need to explain how useful this can be.

Cons:
* Shifting effort from actually doing the work to talking about doing
the work. I guess we need to make sure reviews are useful and
constructive and keep the process from becoming endless bike shedding.
* Obviously, it will take more time and effort to get features/patches
merged.

Personally, I am *for* adding a specs repo for Horizon, as long as we
put extra effort to minimize the cons listed above.

Thoughts, comments welcome!


[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030851.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030102.html
[3] https://github.com/openstack/nova-specs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-16 Thread Kyle Mestery
On Mon, Jun 16, 2014 at 2:27 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Salvatore,

 Also - since it seems to me that there is also consensus regarding having
 load balancing move away into a separate project
 To me it seems that there was no such consensus; core team members were
 advocating keeping lbaas within neutron.

In the short term, yes. But longer term, we'll reevaluate the
viability of moving LBaaS out of Neutron into its own incubated
project, likely under the Networking program.

Thanks,
Kyle

 Thanks,
 Eugene.


 On Mon, Jun 16, 2014 at 2:23 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 Thank you Salvatore for your feedback.

 Comments in-line.

 On Sun, 2014-06-15 at 23:26 +0200, Salvatore Orlando wrote:
  Regarding the two approaches outlines in the top post, I found out
  that the bullet This is API versioning done the wrong way appears in
  both approaches.
  Is this a mistake or intentional?

 No it was intentional.  In my opinion they are both the wrong way.  It
 would be best to be able to do a version at the resource layer but we
  can't since lbaas is a part of Neutron and its version is directly tied
 to Neutron's.  Another possibility is to have the resource look like:

 http(s)://neutron.endpoint/v2/lbaas/v2

 This looks very odd to me though and sets a bad precedent.  That is just
 my opinion though.  So I wouldn't call this the right way either.  Thus,
 I do not know of a right way to do this other than choosing the right
 alternative way.

 
 
  From what I gather, the most reasonable approach appears to be
  starting with a clean slate, which means having a new API living side
  by side with the old one.
  I think the naming collision issues should probably be solved using
  distinct namespaces for the two API (the old one has /v2/lbaas as a
  URI prefix I think, I have hardly any idea about what namespace the
  new one should have)
 

 I'm in agreement with you as well. The old one has /v2/lb as the prefix.
 I figured the new one could be /v2/lbaas which I think works out well.

 Another thing to consider that I did not think about in my original
 message is that a whole new load balancing agent will have to be created
 as well since its code is written with the pool being the root object.
 So that should be taken into consideration.  So to be perfectly clear,
 starting with a clean slate would involve the following:

 1. New loadbalancer extension
 2. New loadbalancer plugin
 3. New lbaas_agentscheduler extension
 4. New agent_scheduler plugin.

 Also, I don't believe doing this would allow the two to be deployed at
 the same time.  I believe the setup.cfg file would have to be modified
 to point to the new plugins.  I could be wrong about that though.
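
  For illustration only (the class name, alias and strings below are invented,
  not the actual proposal), the side-by-side extension would roughly take this
  shape, with a new alias so it can coexist with the current lbaas extension:

      from neutron.api import extensions

      class Loadbalancerv2(extensions.ExtensionDescriptor):

          def get_name(self):
              return "LoadBalancing service v2"

          def get_alias(self):
              # a distinct alias lets the old and new extensions coexist
              return "lbaasv2"

          def get_description(self):
              return "Extension for LoadBalancing service v2 (/v2/lbaas)"

          def get_namespace(self):
              return "http://wiki.openstack.org/neutron/LBaaS/API_2.0"

          def get_updated(self):
              return "2014-06-16T10:00:00-00:00"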

 
  Finally, about deprecation - I see it's been agreed to deprecate the
  current API in Juno.
  I think this is not the right way of doing things. The limits of the
  current API are pretty much universally agreed; on the other hand, it
  is generally not advisable to deprecate an old API in favour of the
  new one at the first iteration such API is published. My preferred
  strategy would be to introduce the new API as experimental in the Juno
  release, so that in can be evaluated, apply any feedback and consider
  for promoting in K - and contextually deprecate the old API.
 
 
  As there is quite a radical change between the old and the new model,
  keeping the old API indefinitely is a maintenance burden we probably
  can't afford, and I would therefore propose complete removal one
  release cycle after deprecation. Also - since it seems to me that
  there is also consensus regarding having load balancing move away into
  a separate project so that it would not be tied anymore to the
  networking program, the old API is pretty much just dead weight.
 
  Salvatore

 Good idea on that.  I'll bring this up with everyone at the hackathon
 this week if it is not already on the table.

 Thanks again for your feedback.

 Brandon
 
 
  On 11 June 2014 18:01, Kyle Mestery mest...@noironetworks.com wrote:
  I spoke to Mark McClain about this yesterday, I'll see if I
  can get
  him to join the LBaaS team meeting tomorrow so between he and
  I we can
  close on this with the LBaaS team.
 
  On Wed, Jun 11, 2014 at 10:57 AM, Susanne Balle
  sleipnir...@gmail.com wrote:
   Do we know who has an opinion? If so maybe we can reach out
  to them directly
   and ask them to comment.
  
  
   On Tue, Jun 10, 2014 at 6:44 PM, Brandon Logan
  brandon.lo...@rackspace.com
   wrote:
  
   Well we got a few opinions, but not enough understanding of
  the two
   options to make an informed decision.  It was requested
  that the core
   reviewers respond to this thread with their opinions.
  
   Thanks,
   Brandon
  
 

Re: [openstack-dev] Need help on developing Neutron application to receive messages from Open Day Light

2014-06-16 Thread Kyle Mestery
On Mon, Jun 16, 2014 at 7:17 AM,  venkatesh@wipro.com wrote:


 Hi,



 I am new to both open stack and open daylight. I have to write an
 application / plugin to receive messages

 From open daylight neutron interface, can you  please provide a reference or
 an example of how to go about it ?

Hi Venkatesh, I'm confused as to what you mean. Do you mean you want
to receive OpenDaylight messages in the Neutron MechanismDriver, or is
it the other way around?

Thanks,
Kyle



 Many thanks,

 Venkatesh



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Kyle Mestery
On Mon, Jun 16, 2014 at 8:52 AM, Salvatore Orlando sorla...@nicira.com wrote:
 I will probably be unable, as usual, to attend today's CI meeting (falls
 right around my dinner time).
 I think it's a good idea to starting keeping track of the status of the
 various CI systems, but I feel the etherpad will not work very well in the
 long term.

Agreed. The etherpad was a starting point, I'll move this information
to a wiki page later today.

 However, it would be great if we could start devising a solution for having
 health reports from the various CI systems.
 This report should report the following kind of information:
 - timestamp of last run
 - timestamp of last vote (a system might start job which then get aborted
 for CI infra problems)
 - % of success vs failures (not sure how important is that one but provides
 a metric of comparison with upstream jenkins)
 - % of disagreements with jenkins (this might allow us to quickly spot those
 CI systems which are randomly -1'ing patches)

 The percentage metrics might be taken over a 48 hours or 7 days interval, or
 both.
 Does this idea sound reasonable?

This sounds like a very good idea. Now we just need to find someone
with the time to write this. :)

 Also, regarding [1]. I agree that more is always better...  but I would like
 a minimum required set of tests to be enforced.
 Is this something that can be achieved?

I think the tests that are in there are the minimum tests I'd like to
see run. I'll clarify the language on the wiki page a bit.

 Salvatore

 [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting




 On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

 hi,

  My initial analysis of Neutron 3rd Party CI is here [1]. This was
  somewhat correlated with information from DriverLog [2], which was
  helpful to put this together.

 i updated the etherpad for ofagent.
 currently a single CI system is running tests for both of ofagent and ryu.
 is it ok?

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Chris Dent

On Fri, 13 Jun 2014, Sean Dague wrote:


So if we can't evolve the system back towards health, we need to just
cut a bunch of stuff off until we can.


+1 This is kind of the crux of the biscuit. As things stand there's
so much noise that it's far too easy to think and act like it is
somebody else's problem.

--
Chris Dent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rethink how we manage projects? (was Gate proposal - drop Postgresql configurations in the gate)

2014-06-16 Thread David Kranz

On 06/16/2014 05:33 AM, Thierry Carrez wrote:

David Kranz wrote:

[...]
There is a different way to do this. We could adopt the same methodology
we have now around gating, but applied to each project on its own
branch. These project branches would be integrated into master at some
frequency or when some new feature in project X is needed by project Y.
Projects would want to pull from the master branch often, but the push
process would be less frequent and run a much larger battery of tests
than we do now.

So we would basically discover the cross-project bugs when we push to
the master master branch. I think you're just delaying discovery of
the most complex issues, and push the responsibility to resolve them
onto a inexistent set of people. Adding integration branches only makes
sense if you have an integration team. We don't have one, so we'd call
back on the development teams to solve the same issues... with a delay.
You are assuming that the problem is cross-project bugs. A lot of these 
bugs are not really bugs that
are *caused* by cross-project interaction. Many are project-specific 
bugs that could have been squashed before being integrated if enough 
testing had been done, but since we do all of our testing in a 
fully-integrated environment we often don't know where they came from. I 
am not suggesting this proposal would help much to get out of the 
current jam, just make it harder to get into it again once master is 
 stabilized.


In our specific open development setting, delaying is bad because you
don't have a static set of developers that you can assume will be on
call ready to help with what they have written a few months later:
shorter feedback loops are key to us.
I hope you did not think I was suggesting a few months as a typical 
frequency for a project updating master. That would be unacceptable. But 
there is a continuum between on every commit and months. I was 
thinking of perhaps once a week but it would really depend on a lot of 
things that happen.



Doing this would have the following advantages:

1. It would be much harder for a race bug to get in. Each commit would
be tested many more times on its branch before being merged to master
than at present, including tests specialized for that project. The
qa/infra teams and others would continue to define acceptance at the
master level.
2. If a race bug does get in, projects have at least some chance to
avoid merging the bad code.
3. Each project can develop its own gating policy for its own branch
tailored to the issues and tradeoffs it has. This includes focus on
spending time running their own tests. We would no longer run a complete
battery of nova tests on every commit to swift.
4. If a project branch gets into the situation we are now in:
  a) it does not impact the ability of other projects to merge code
  b) it is highly likely the bad code is actually in the project so
it is known who should help fix it
  c) those trying to fix it will be domain experts in the area that
is failing
5. Distributing the gating load and policy to projects makes the whole
system much more scalable as we add new projects.

Of course there are some drawbacks:

1. It will take longer, sometimes much longer, for any individual commit
to make it to master. Of course if a super-serious issue made it to
master and had to be fixed immediately it could be committed to master
directly.
2. Branch management at the project level would be required. Projects
would have to decide gating criteria, timing of pulls, and coordinate
around integration to master with other projects.
3. There may be some technical limitations with git/gerrit/whatever that
I don't understand but which would make this difficult.
4. It makes the whole thing more complicated from a process standpoint.

An extra drawback is that you can't really do CD anymore, because your
master master branch gets big chunks of new code in one go at push time.
That depends on how big and delayed the chunks are. The question is how 
do we test commits enough to make sure they don't cause new races 
without using vastly more resources than we have, and without it taking 
days to test a patch? I am suggesting an alternative as a possible
least-bad approach, not a panacea. I didn't think that doing CD implied 
literally that the unit of integration was exactly one developer commit.



I have used this model in previous large software projects and it worked
quite well. This may also be somewhat similar to what the linux kernel
does in some ways.

Please keep in mind that some techniques which are perfectly valid (and
even recommended) when you have a captive set of developers just can't
work in our open development setting. Some techniques which work
perfectly for a release-oriented product just don't cut it when you also
want the software to be consumable in a continuous delivery fashion. We
certainly can and should learn from other experiences, but we also need
to recognize our challenges 

Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Salvatore Orlando
On 16 June 2014 15:58, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, Jun 16, 2014 at 8:52 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  I will probably be unable, as usual, to attend today's CI meeting (falls
  right around my dinner time).
  I think it's a good idea to starting keeping track of the status of the
  various CI systems, but I feel the etherpad will not work very well in
 the
  long term.
 
 Agreed. The etherpad was a starting point, I'll move this information
 to a wiki page later today.

  However, it would be great if we could start devising a solution for
 having
  health reports from the various CI systems.
  This report should report the following kind of information:
  - timestamp of last run
  - timestamp of last vote (a system might start job which then get aborted
  for CI infra problems)
  - % of success vs failures (not sure how important is that one but
 provides
  a metric of comparison with upstream jenkins)
  - % of disagreements with jenkins (this might allow us to quickly spot
 those
  CI systems which are randomly -1'ing patches)
 
  The percentage metrics might be taken over a 48 hours or 7 days
 interval, or
  both.
  Does this idea sound reasonable?
 
 This sounds like a very good idea. Now we just need to find someone
 with the time to write this. :)

  Also, regarding [1]. I agree that more is always better...  but I would
 like
  a minimum required set of tests to be enforced.
  Is this something that can be achieved?
 
 I think the tests that are in there are the minimum tests I'd like to
 see run. I'll clarify the language on the wiki page a bit.


If the intention is to have the minimum set of tests we'd like to see run
then it's perfect!
I was trying to say we should impose that set as a minimum requirement...



  Salvatore
 
  [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
 
 
 
 
  On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:
 
  hi,
 
   My initial analysis of Neutron 3rd Party CI is here [1]. This was
   somewhat correlated with information from DriverLog [2], which was
   helpful to put this together.
 
  i updated the etherpad for ofagent.
  currently a single CI system is running tests for both of ofagent and
 ryu.
  is it ok?
 
  YAMAMOTO Takashi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Ilya Shakhat

  However, it would be great if we could start devising a solution for
 having
  health reports from the various CI systems.
  This report should report the following kind of information:
  - timestamp of last run
  - timestamp of last vote (a system might start job which then get aborted
  for CI infra problems)
  - % of success vs failures (not sure how important is that one but
 provides
  a metric of comparison with upstream jenkins)
  - % of disagreements with jenkins (this might allow us to quickly spot
 those
  CI systems which are randomly -1'ing patches)
 
  The percentage metrics might be taken over a 48 hours or 7 days
 interval, or
  both.
  Does this idea sound reasonable?
 
 This sounds like a very good idea. Now we just need to find someone
 with the time to write this. :)


That's exactly what Stackalytics/DriverLog may do! It already collects the
latest CI votes and it wouldn't be hard to calculate metrics based on them.

Ilya


2014-06-16 18:11 GMT+04:00 Salvatore Orlando sorla...@nicira.com:




 On 16 June 2014 15:58, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, Jun 16, 2014 at 8:52 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  I will probably be unable, as usual, to attend today's CI meeting (falls
  right around my dinner time).
  I think it's a good idea to starting keeping track of the status of the
  various CI systems, but I feel the etherpad will not work very well in
 the
  long term.
 
 Agreed. The etherpad was a starting point, I'll move this information
 to a wiki page later today.

  However, it would be great if we could start devising a solution for
 having
  health reports from the various CI systems.
  This report should report the following kind of information:
  - timestamp of last run
  - timestamp of last vote (a system might start job which then get
 aborted
  for CI infra problems)
  - % of success vs failures (not sure how important is that one but
 provides
  a metric of comparison with upstream jenkins)
  - % of disagreements with jenkins (this might allow us to quickly spot
 those
  CI systems which are randomly -1'ing patches)
 
  The percentage metrics might be taken over a 48 hours or 7 days
 interval, or
  both.
  Does this idea sound reasonable?
 
 This sounds like a very good idea. Now we just need to find someone
 with the time to write this. :)

  Also, regarding [1]. I agree that more is always better...  but I would
 like
  a minimum required set of tests to be enforced.
  Is this something that can be achieved?
 
 I think the tests that are in there are the minimum tests I'd like to
 see run. I'll clarify the language on the wiki page a bit.


 If the intention is to have the minimum set of test we'd like to see run
 then it's perfect!
 I was trying to say we should impose that set as a minimum requirement...



  Salvatore
 
  [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
 
 
 
 
  On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:
 
  hi,
 
   My initial analysis of Neutron 3rd Party CI is here [1]. This was
   somewhat correlated with information from DriverLog [2], which was
   helpful to put this together.
 
  i updated the etherpad for ofagent.
  currently a single CI system is running tests for both of ofagent and
 ryu.
  is it ok?
 
  YAMAMOTO Takashi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-16 Thread Doug Hellmann
On Mon, Jun 16, 2014 at 9:41 AM, Mark McLoughlin mar...@redhat.com wrote:
 On Mon, 2014-06-16 at 10:56 +0100, Daniel P. Berrange wrote:
 On Mon, Jun 16, 2014 at 05:04:51AM -0400, Eoghan Glynn wrote:
  How about we rely instead on the values and attributes that
  actually make our community strong?
 
  Specifically: maturity, honesty, and a self-correcting nature.
 
  How about we simply require that each candidate for a TC or PTL
  election gives a simple undertaking in their self-nomination mail,
  along the lines of:
 
  I undertake to respect the election process, as required by
  the community code of conduct.
 
  I also undertake not to engage in campaign practices that the
  community has considered objectionable in the past, including
  but not limited to, unsolicited mail shots and private campaign
  events.
 
  If my behavior during this election period does not live up to
  those standards, please feel free to call me out on it on this
  mailing list and/or withhold your vote.

 I like this proposal because it focuses on the carrot rather than
 the stick, which is ultimately better for community cohesiveness
 IMHO.

 I like it too. A slight tweak of that would be to require candidates to
 sign the pledge publicly via an online form. We could invite the
 community as a whole to sign it too in order to have candidates'
 supporters covered.

+1

I'm less worried about the candidates, since they are in the spotlight
during the election. I'm more worried about supporters getting carried
away in their enthusiasm or not understanding how much (and why) the
community values open participation.


  It is already part of our community ethos that we can call
 people out to publicly debate / stand up & justify any & all
 issues affecting the project whether they be related to the code,
 architecture, or non-technical issues such as electioneering
 behaviour.

  We then rely on:
 
(a) the self-policing nature of an honest, open community
 
  and:
 
(b) the maturity and sound judgement within that community
giving us the ability to quickly spot and disregard any
frivolous reports of mis-behavior
 
  So no need for heavy-weight inquisitions, no need to interrupt the
  election process, no need for handing out of stiff penalties such
  as termination of membership.

 Before jumping headlong for a big stick to whack people with, I think
 I'd expect to see examples of problems we've actually faced (as opposed
 to vague hypotheticals), and a clear illustration that a self-policing
 approach to the community interaction failed to address them. I've not
 personally seen/experienced any problems that are so severe that they'd
 suggest we need the ability to kick someone out of the community for
 sending email!

 Indeed. This discussion is happening in a vacuum for many people who do
 not know the details of the private emails and private campaign events
 which happened in the previous cycle.

 The only one I know of first hand was a private email where the
 recipients quickly responded saying the email was out of line and the
 original sender apologized profusely. People can make mistakes in good
 faith and if we can deal with it quickly and maturely as a community,
 all the better.

 In this example, the sender's apology could have been followed up with
 look, here's our code of conduct; sign it now, respect it in the
 future, and let that be the end of the matter.

I agree that the penalties in the original proposal went too far. I
also think it's a good point that many people don't know the details
from the last cycle, so I think some specific guidance on how to
report issues so they can be addressed formally is an important aspect
of this proposal.

Doug


 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] NFV in OpenStack use cases and context

2014-06-16 Thread Yathiraj Udupi (yudupi)
Hi Sylvain, 

The Smart Scheduler work should work along with Gantt easily, as it is just
another scheduler driver; instead of using the Filters, it will have a
separate mechanism that solves the placement problem all at once.
So, just as it currently works alongside the Nova scheduler, I would imagine
it should easily be integrated into Gantt. I have not looked very closely at
the Gantt work, but if, as you claim, there are no behavior changes in Gantt
and you are continuing to use the FilterScheduler, the SmartScheduler should
fit in too.
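
To be concrete about the driver seam (a loose sketch, assuming the current
Nova scheduler driver contract; the class below is illustrative and not the
actual blueprint code):

    from nova.scheduler import driver

    class SolverScheduler(driver.Scheduler):
        """Hypothetical constraint-solver driver."""

        def select_destinations(self, context, request_spec, filter_properties):
            # Instead of filtering/weighing hosts per request, build one
            # constraint problem over all hosts and solve it in a single shot.
            hosts = self.host_manager.get_all_host_states(context)
            return self._solve(hosts, request_spec, filter_properties)

        def _solve(self, hosts, request_spec, filter_properties):
            raise NotImplementedError  # placeholder for the LP/constraint solver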

I will try to update the nova-spec ([1]
https://review.openstack.org/#/c/96543/) for Smart Scheduler, and let the
review continue sometime soon.

Thanks,
Yathi. 



On 6/13/14, 12:37 AM, Sylvain Bauza sba...@redhat.com wrote:

Hi Yathi,

Le 12/06/2014 20:53, Yathiraj Udupi (yudupi) a écrit :
 Hi Alan, 

 Our Smart (Solver) Scheduler blueprint
 (https://blueprints.launchpad.net/nova/+spec/solver-scheduler ) has been
 in the works in the Nova community since late 2013.  We have demoed at
the
 Hong Kong summit, as well as the Atlanta summit,  use cases using this
 smart scheduler for better, optimized resource placement with complex
 constrained scenarios.  So to let you know this work was started as a
 smart way of doing scheduling, applicable in general and not limited to
 NFV.  Currently we feel NFV is a killer app for driving this blueprint
and
 work ahead, however is applicable for all kinds of resource placement
 scenarios. 

 We will be very interested in finding out more about your blueprints
that
 you are referring to here, and see how it can be integrated as part of
our
 future roadmap. 

Indeed, Smart Scheduler is something that could help NFV use-cases. My
only concern is about the necessary steps for providing such a feature,
with regards to the scheduler breakout that is coming.

Could you please make sure the current nova-spec [1] is taking into
account all other efforts about the scheduler, like scheduler forklift
[2], on-demand resource reporting [3] or others ? It also seems the spec
is not following the defined template, could you please fix it ? It
would be easier to review your proposal.

Gantt and Nova scheduler teams are attending a weekly meeting every
Tuesday at 3pm UTC. Would you have a chance to join, it would be great
to discuss about your proposal and how we can identify all the
milestones for this and potentially track progress on it.

Thanks,
-Sylvain

[1] https://review.openstack.org/#/c/96543/
[2] https://review.openstack.org/82133 and
https://review.openstack.org/89893
[3] https://review.openstack.org/97903



 Thanks,
 Yathi. 


 On 6/12/14, 10:55 AM, Alan Kavanagh alan.kavan...@ericsson.com
wrote:

 Hi Ramki

 Really like the smart scheduler idea, we made a couple of blueprints
that
 are related to ensuring you have the right information to build a
 constrained based scheduler. I do however want to point out that this
is
 not NFV specific but is useful for all applications and services of
which
 NFV is one. 

 /Alan

 -Original Message-
 From: ramki Krishnan [mailto:r...@brocade.com]
 Sent: June-10-14 6:06 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
 Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

 Hi Steve,

 Forgot to mention, the Smart Scheduler (Solver Scheduler) enhancements
 for NFV: Use Cases, Constraints etc. is a good example of an NFV use
 case deep dive for OpenStack.

 
 https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit#heading=h.wlbclagujw8c

 Thanks,
 Ramki

 -Original Message-
 From: ramki Krishnan
 Sent: Tuesday, June 10, 2014 3:01 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
 Subject: RE: NFV in OpenStack use cases and context

 Hi Steve,

 We are have OpenStack gap analysis documents in ETSI NFV under member
 only access. I can work on getting public version of the documents (at
 least a draft) to fuel the kick start.

 Thanks,
 Ramki

 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: Tuesday, June 10, 2014 12:06 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Chris Wright; Nicolas Lemieux
 Subject: Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and
 context

 - Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: Stephen Wong stephen.kf.w...@gmail.com

 - Original Message -
 From: Stephen Wong stephen.kf.w...@gmail.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
 OpenStack Development Mailing List (not for 

Re: [openstack-dev] An alternative approach to enforcing expected election behaviour

2014-06-16 Thread Eoghan Glynn


 On Mon, 2014-06-16 at 10:56 +0100, Daniel P. Berrange wrote:
  On Mon, Jun 16, 2014 at 05:04:51AM -0400, Eoghan Glynn wrote:
   How about we rely instead on the values and attributes that
   actually make our community strong?
   
   Specifically: maturity, honesty, and a self-correcting nature.
   
   How about we simply require that each candidate for a TC or PTL
   election gives a simple undertaking in their self-nomination mail,
   along the lines of:
   
   I undertake to respect the election process, as required by
   the community code of conduct.
   
   I also undertake not to engage in campaign practices that the
   community has considered objectionable in the past, including
   but not limited to, unsolicited mail shots and private campaign
   events.
   
   If my behavior during this election period does not live up to
   those standards, please feel free to call me out on it on this
   mailing list and/or withhold your vote.
  
  I like this proposal because it focuses on the carrot rather than
  the stick, which is ultimately better for community cohesiveness
  IMHO.
 
 I like it too. A slight tweak of that would be to require candidates to
 sign the pledge publicly via an online form. We could invite the
 community as a whole to sign it too in order to have candidates'
 supporters covered.

Fair point, that would work for me also.

   It is already part of our community ethos that we can call
  people out to publicly debate / stand up & justify any & all
  issues affecting the project whether they be related to the code,
  architecture, or non-technical issues such as electioneering
  behaviour.
  
   We then rely on:
   
 (a) the self-policing nature of an honest, open community
   
   and:
   
 (b) the maturity and sound judgement within that community
 giving us the ability to quickly spot and disregard any
 frivolous reports of mis-behavior
   
   So no need for heavy-weight inquisitions, no need to interrupt the
   election process, no need for handing out of stiff penalties such
   as termination of membership.
  
  Before jumping headlong for a big stick to whack people with, I think
  I'd expect to see examples of problems we've actually faced (as opposed
  to vague hypotheticals), and a clear illustration that a self-policing
  approach to the community interaction failed to address them. I've not
  personally seen/experianced any problems that are so severe that they'd
  suggest we need the ability to kick someone out of the community for
  sending email !
 
 Indeed. This discussion is happening in a vacuum for many people who do
 not know the details of the private emails and private campaign events
 which happened in the previous cycle.
 
 The only one I know of first hand was a private email where the
 recipients quickly responded saying the email was out of line and the
 original sender apologized profusely. People can make mistakes in good
 faith and if we can deal with it quickly and maturely as a community,
 all the better.

Exactly.

Most realistic missteps that I can imagine could be dealt with
by a simple calling out of the error, then moving on quickly.

Simple, lightweight, a teachable moment.

No need for heavy-handed inquisitions IMHO if we trust our own
instincts as a community.

Cheers,
Eoghan

 In this example, the sender's apology could have been followed up with
 look, here's our code of conduct; sign it now, respect it in the
 future, and let that be the end of the matter.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Clarification of policy for qa-specs around adding new tests

2014-06-16 Thread David Kranz
I have been reviewing some of these specs and sense a lack of clarity 
around what is expected. In the pre-qa-specs world we did not want 
tempest blueprints to be used by projects to track their tempest test 
submissions because the core review team did not want to have to spend a 
lot of time dealing with that. We said that each project could have one 
tempest blueprint that would point to some other place (project 
blueprints, spreadsheet, etherpad, etc.) that would track specific tests 
to be added. I'm not sure what aspect of the new qa-spec process would 
make us feel differently about this. Has this policy changed? We should 
spell out the expectation in any event. I will update the README when we 
have a conclusion.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-16 Thread Eoghan Glynn

Apologies for the top-posting, but just wanted to call out some
potential confusion that arose on the #os-ceilometer channel earlier
today.

TL;DR: the UI shouldn't assume a 1:1 mapping between alarms and
   resources, since this mapping does not exist in general

Background: See ML post[1]

Discussion: See IRC log [2]
Ctrl+F: Let's see what the UI guys think about it

Cheers,
Eoghan

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037788.html
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2014-06-16.log


- Original Message -
 Hi all,
 
 Thanks again for the great comments on the initial cut of wireframes. I’ve
 updated them a fair amount based on feedback in this e-mail thread along
 with the feedback written up here:
 https://etherpad.openstack.org/p/alarm-management-page-design-discussion
 
 Here is a link to the new version:
 http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf
 
 And a quick explanation of the updates that I made from the last version:
 
 1) Removed severity.
 
 2) Added Status column. I also added details around the fact that users can
 enable/disable alerts.
 
 3) Updated Alarm creation workflow to include choosing the project and user
 (optionally for filtering the resource list), choosing resource, and
 allowing for choose of amount of time to monitor for alarming.
  -Perhaps we could be even more sophisticated for how we let users filter
  down to find the right resources that they want to monitor for alarms?
 
 4) As for notifying users…I’ve updated the “Alarms” section to be “Alarms
 History”. The point here is to show any Alarms that have occurred to notify
 the user. Other notification ideas could be to allow users to get notified
 of alerts via e-mail (perhaps a user setting?). I’ve added a wireframe for
 this update in User Settings. Then the Alarms Management section would just
 be where the user creates, deletes, enables, and disables alarms. Do you
 still think we don’t need the “alarms” tab? Perhaps this just becomes
 iteration 2 and is left out for now as you mention in your etherpad.
 
 5) Question about combined alarms…currently I’ve designed it so that a user
 could create multiple levels in the “Alarm When…” section. They could
 combine these with AND/ORs. Is this going far enough? Or do we actually need
 to allow users to combine Alarms that might watch different resources?
 
 6) I updated the Actions column to have the “More” drop down which is
 consistent with other tables in Horizon.
 
 7) Added in a section in the “Add Alarm” workflow for “Actions after Alarm”.
 I’m thinking we could have some sort of If State is X, do X type selections,
 but I’m looking to understand more details about how the backend works for
 this feature. Eoghan gave examples of logging and potentially scaling out
 via Heat. Would simple drop downs support these events?
 
 8) I can definitely add in a “scheduling” feature with respect to Alarms. I
 haven’t added it in yet, but I could see this being very useful in future
 revisions of this feature.
 
 9) Another thought is that we could add in some padding for outlier data as
 Eoghan mentioned. Perhaps a setting for “This has happened 3 times over the
 last minute, so now send an alarm.”?
 
 A new round of feedback is of course welcome :)
 
 Best,
 Liz
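
[An aside on points (5) and (7) above: purely as an illustration, not a
design decision, here is a rough sketch of how they map onto the alarm
definitions Ceilometer's v2 API already accepts -- individual threshold
alarms combined with and/or via a combination alarm, plus alarm_actions
fired on the transition into the alarm state. All names and IDs below are
made up.]

    cpu_alarm = {
        "name": "cpu_high",
        "type": "threshold",
        "threshold_rule": {
            "meter_name": "cpu_util",
            "statistic": "avg",
            "comparison_operator": "gt",
            "threshold": 70.0,
            "period": 60,
            "evaluation_periods": 3,   # point (9): require 3 periods in breach
            "query": [{"field": "resource_id", "op": "eq",
                       "value": "INSTANCE_UUID"}],
        },
        "alarm_actions": ["log://"],   # point (7): action on state -> alarm
    }

    combined_alarm = {
        "name": "cpu_and_disk_high",
        "type": "combination",
        "combination_rule": {
            "operator": "and",         # the AND/OR chosen in "Alarm When..."
            "alarm_ids": ["ALARM_UUID_1", "ALARM_UUID_2"],
        },
        "alarm_actions": ["log://"],
    }

[So the UI's AND/OR grouping could be expressed today by creating the
individual threshold alarms first and then a combination alarm referencing
their IDs.]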
 
 On Jun 4, 2014, at 1:27 PM, Liz Blanchard lsure...@redhat.com wrote:
 
  Thanks for the excellent feedback on these, guys! I’ll be working on making
  updates over the next week and will send a fresh link out when done.
  Anyone else with feedback, please feel free to fire away.
  
  Best,
  Liz
  On Jun 4, 2014, at 12:33 PM, Eoghan Glynn egl...@redhat.com wrote:
  
  
  Hi Liz,
  
  Two further thoughts occurred to me after hitting send on
  my previous mail.
  
  First, is the concept of alarm dimensioning; see my RDO Ceilometer
  getting started guide[1] for an explanation of that notion.
  
  A key associated concept is the notion of dimensioning which defines the
  set of matching meters that feed into an alarm evaluation. Recall that
  meters are per-resource-instance, so in the simplest case an alarm might
  be defined over a particular meter applied to all resources visible to a
  particular user. More useful however would the option to explicitly
  select which specific resources we're interested in alarming on. On one
  particular user. More useful however would be the option to explicitly
  would have only a single target (identified by resource ID). On the other
  extreme, we'd have widely dimensioned alarms where this selection
  identifies many resources over which the statistic is aggregated, for
  example all instances booted from a particular image or all instances
  with matching user metadata (the latter is how Heat identifies
  autoscaling groups).
  
  We'd have to think about how that concept is captured in the
  UX for alarm 

Re: [openstack-dev] [qa] Clarification of policy for qa-specs around adding new tests

2014-06-16 Thread Matthew Treinish
On Mon, Jun 16, 2014 at 10:46:51AM -0400, David Kranz wrote:
 I have been reviewing some of these specs and sense a lack of clarity around
 what is expected. In the pre-qa-specs world we did not want tempest
 blueprints to be used by projects to track their tempest test submissions
 because the core review team did not want to have to spend a lot of time
 dealing with that. We said that each project could have one tempest
 blueprint that would point to some other place (project blueprints,
 spreadsheet, etherpad, etc.) that would track specific tests to be added.
 I'm not sure what aspect of the new qa-spec process would make us feel
 differently about this. Has this policy changed? We should spell out the
 expectation in any event. I will update the README when we have a
 conclusion.
 

The policy has not changed. There should be 1 BP (or maybe 2 or 3 if they want
to split the effort a bit more granularly for tracking different classes of
tests, but still 1 BP series) for improving project tests. For individual tests
part of a bigger effort should be tracked outside of the Tempest LP. IMO after
it's approved the spec/BP for tracking test additions is only really useful to
have a unified topic to use for review classification.

Also, to be fair I'm not sure things were clear before we moved to specs. When I
cleaned up the BP list at the beginning of the cycle there were a couple of BPs
on the list which didn't conform to this. 


-Matt Treinish


pgp7KB5jXHLqU.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-16 Thread Aaron Rosen
+1

On Monday, June 16, 2014, Andrew Laski andrew.la...@rackspace.com wrote:

 +1

 On 06/13/2014 06:40 PM, Michael Still wrote:

 Greetings,

 I would like to nominate Ken'ichi Ohmichi for the nova-core team.

 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.

 Please respond with +1s or any concerns.

 References:

https://review.openstack.org/#/q/owner:ken1ohmichi%
 2540gmail.com+status:open,n,z

https://review.openstack.org/#/q/reviewer:ken1ohmichi%
 2540gmail.com,n,z

http://www.stackalytics.com/?module=nova-groupuser_id=oomichi

 As a reminder, we use the voting process outlined at
 https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
 core team.

 Thanks,
 Michael



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

2014-06-16 Thread Buraschi, Andres
Hi Jorge, thanks for your reply! You are right about summarizing too much. The 
idea is to identify which kinds of data could be retrieved in a summarized way 
without losing detail (i.e.: uptime can be better described with start-end 
timestamps than with lots of samples with up/down status) or simply to provide 
different levels of granularity and let the user decide (yes, it can be 
sometimes dangerous).
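For illustration only, here is a minimal sketch (with a made-up event format)
of how pool up time could be derived from state-transition events rather than
from a high sampling rate of up/down statuses:

    from datetime import datetime

    def uptime_hours(transitions, until=None):
        # transitions: time-ordered list of (timestamp, status) tuples,
        # status being 'UP' or 'DOWN'; only the changes are recorded.
        until = until or datetime.utcnow()
        total = 0.0
        up_since = None
        for ts, status in transitions:
            if status == 'UP' and up_since is None:
                up_since = ts
            elif status == 'DOWN' and up_since is not None:
                total += (ts - up_since).total_seconds()
                up_since = None
        if up_since is not None:
            total += (until - up_since).total_seconds()
        return total / 3600.0

The trade-off then moves from sampling frequency to how reliably the
transitions themselves are captured.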
Having said this, how could we share the current metrics intended to be 
exposed? Is there a document or should I follow the Requirements around 
statistics and billing thread?

Thank you!
Andres

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Thursday, June 12, 2014 6:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hey Andres,

In my experience with usage gathering consolidating statistics at the root 
layer is usually a bad idea. The reason is that you lose potentially useful 
information once you consolidate data. When it comes to troubleshooting issues 
(such as billing) this lost information can cause problems since there is no 
way to replay what had actually happened. That said, there is no free lunch 
and keeping track of huge amounts of data can be a huge engineering challenge. 
We have a separate thread on what kinds of metrics we want to expose from the 
LBaaS service so perhaps it would be nice to understand these in more detail.

Cheers,
--Jorge

From: Buraschi, Andres andres.buras...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 3:34 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi, we have been struggling with getting a meaningful set of metrics from LB 
stats thru ceilometer, and from a discussion about module responsibilities for 
providing data, an interesting idea came up. (Thanks Pradeep!)
The proposal is to consolidate some kinds of metrics, such as pool up time (hours) 
and average or historic response times of VIPs and listeners, to avoid having 
ceilometer query for the state so frequently. There is a trade-off between 
fast response time (high sampling rate) and reasonable* amount of cumulative 
samples.
The next step in order to give more detail to the idea is to work on a use 
cases list to better explain / understand the benefits of this kind of data 
grouping.

What do you think about this?
Do you find it will be useful to have some processed metrics on the 
loadbalancer side instead of the ceilometer side?
Do you identify any measurements about the load balancer that could not be 
obtained/calculated from ceilometer?
Perhaps this could be the base for other stats gathering solutions that may be 
under discussion?

Andres
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Anita Kuno
On 06/16/2014 10:25 AM, Ilya Shakhat wrote:

 However, it would be great if we could start devising a solution for
 having
 health reports from the various CI systems.
 This report should report the following kind of information:
 - timestamp of last run
 - timestamp of last vote (a system might start job which then get aborted
 for CI infra problems)
 - % of success vs failures (not sure how important is that one but
 provides
 a metric of comparison with upstream jenkins)
 - % of disagreements with jenkins (this might allow us to quickly spot
 those
 CI systems which are randomly -1'ing patches)

 The percentage metrics might be taken over a 48 hours or 7 days
 interval, or
 both.
 Does this idea sound reasonable?

 This sounds like a very good idea. Now we just need to find someone
 with the time to write this. :)
 
 
 That's exactly what Stackalytics/DriverLog may do! It already collects the
 latest CI votes and it wouldn't be hard to calculate metrics based on them.
 
 Ilya
Hi Ilya:

Can you add an agenda item to today's Third Party meeting agenda so that
you can outline how you address this currently, any bugs you are aware
of and your current direction of development.

Thanks,
Anita.
 
 
 2014-06-16 18:11 GMT+04:00 Salvatore Orlando sorla...@nicira.com:
 



 On 16 June 2014 15:58, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, Jun 16, 2014 at 8:52 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
 I will probably be unable, as usual, to attend today's CI meeting (falls
 right around my dinner time).
 I think it's a good idea to starting keeping track of the status of the
 various CI systems, but I feel the etherpad will not work very well in
 the
 long term.

 Agreed. The etherpad was a starting point, I'll move this information
 to a wiki page later today.

 However, it would be great if we could start devising a solution for
 having
 health reports from the various CI systems.
 This report should report the following kind of information:
 - timestamp of last run
 - timestamp of last vote (a system might start job which then get
 aborted
 for CI infra problems)
 - % of success vs failures (not sure how important is that one but
 provides
 a metric of comparison with upstream jenkins)
 - % of disagreements with jenkins (this might allow us to quickly spot
 those
 CI systems which are randomly -1'ing patches)

 The percentage metrics might be taken over a 48 hours or 7 days
 interval, or
 both.
 Does this idea sound reasonable?

 This sounds like a very good idea. Now we just need to find someone
 with the time to write this. :)

 Also, regarding [1]. I agree that more is always better...  but I would
 like
 a minimum required set of tests to be enforced.
 Is this something that can be achieved?

 I think the tests that are in there are the minimum tests I'd like to
 see run. I'll clarify the language on the wiki page a bit.


 If the intention is to have the minimum set of test we'd like to see run
 then it's perfect!
 I was trying to say we should impose that set as a minimum requirement...



 Salvatore

 [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting




 On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

 hi,

 My initial analysis of Neutron 3rd Party CI is here [1]. This was
 somewhat correlated with information from DriverLog [2], which was
 helpful to put this together.

 i updated the etherpad for ofagent.
 currently a single CI system is running tests for both of ofagent and
 ryu.
 is it ok?

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-16 Thread Anita Kuno
On 06/16/2014 09:52 AM, Salvatore Orlando wrote:
 I will probably be unable, as usual, to attend today's CI meeting (falls
 right around my dinner time).
I'm sorry to hear that, since I value your input in these matters. Your
CI system is doing an excellent job and I use it as an example for others.

Maybe we need to get you speaking somewhere so you can be in a different
time zone for the meeting? :D

Thanks Salvatore,
Anita.
 I think it's a good idea to starting keeping track of the status of the
 various CI systems, but I feel the etherpad will not work very well in the
 long term.
 
 However, it would be great if we could start devising a solution for having
 health reports from the various CI systems.
 This report should report the following kind of information:
 - timestamp of last run
 - timestamp of last vote (a system might start job which then get aborted
 for CI infra problems)
 - % of success vs failures (not sure how important is that one but provides
 a metric of comparison with upstream jenkins)
 - % of disagreements with jenkins (this might allow us to quickly spot
 those CI systems which are randomly -1'ing patches)
 
 The percentage metrics might be taken over a 48 hours or 7 days interval,
 or both.
 Does this idea sound reasonable?
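
(For concreteness, a minimal sketch of how such a report might be computed,
assuming per-CI vote records of the sort DriverLog already collects; the
field names below are hypothetical:)

    from datetime import datetime, timedelta

    def ci_health(votes, window=timedelta(days=7)):
        # votes: list of dicts with 'started', 'voted_at', 'result' (+1/-1,
        # or None if the run was aborted) and 'jenkins_result' for the
        # upstream vote on the same patch set.
        now = datetime.utcnow()
        recent = [v for v in votes if now - v['started'] <= window]
        voted = [v for v in recent if v['result'] is not None]

        def pct(n, d):
            return (100.0 * n / d) if d else None

        return {
            'last_run': max(v['started'] for v in recent) if recent else None,
            'last_vote': max(v['voted_at'] for v in voted) if voted else None,
            'success_pct': pct(sum(1 for v in voted if v['result'] > 0),
                               len(voted)),
            'jenkins_disagreement_pct': pct(
                sum(1 for v in voted if v['result'] != v['jenkins_result']),
                len(voted)),
        }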
 
 Also, regarding [1]. I agree that more is always better...  but I would
 like a minimum required set of tests to be enforced.
 Is this something that can be achieved?
 
 Salvatore
 
 [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
 
 
 
 
 On 16 June 2014 07:07, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:
 
 hi,

 My initial analysis of Neutron 3rd Party CI is here [1]. This was
 somewhat correlated with information from DriverLog [2], which was
 helpful to put this together.

 i updated the etherpad for ofagent.
 currently a single CI system is running tests for both of ofagent and ryu.
 is it ok?

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Meeting Monday June 16 at 20:00 UTC

2014-06-16 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday June 16th, at
20:00 UTC in #openstack-meeting-alt

Meeting agenda is available here
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcome
to add agenda items.

You can check this link
http://time.is/0800PM_16_Jun_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
if you need to figure out what 20:00 UTC means in your time.

-Douglas Mendizábal





smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Tomas Sedovic
All,

After having proposed some changes[1][2] to tripleo-heat-templates[3],
reviewers suggested adding a deprecation period for the merge.py script.

While TripleO is an official OpenStack program, none of the projects
under its umbrella (including tripleo-heat-templates) have gone through
incubation and integration nor have they been shipped with Icehouse.

So there is no implicit compatibility guarantee and I have not found
anything about maintaining backwards compatibility neither on the
TripleO wiki page[4], tripleo-heat-template's readme[5] or
tripleo-incubator's readme[6].

The Release Management wiki page[7] suggests that we follow Semantic
Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
According to that wiki, we are using a stronger guarantee where we do
promise to bump the minor version on incompatible changes -- but this
again suggests that we do not promise to maintain backwards
compatibility -- just that we document whenever we break it.

According to Robert, there are now downstreams that have shipped things
(with the implication that they don't expect things to change without a
deprecation period) so there's clearly a disconnect here.

If we do promise backwards compatibility, we should document it
somewhere and if we don't we should probably make that more visible,
too, so people know what to expect.

I prefer the latter, because it will make the merge.py cleanup easier
and every published bit of information I could find suggests that's our
current stance anyway.

Tomas

[1]: https://review.openstack.org/#/c/99384/
[2]: https://review.openstack.org/#/c/97939/
[3]: https://github.com/openstack/tripleo-heat-templates
[4]: https://wiki.openstack.org/wiki/TripleO
[5]:
https://github.com/openstack/tripleo-heat-templates/blob/master/README.md
[6]: https://github.com/openstack/tripleo-incubator/blob/master/README.rst
[7]: https://wiki.openstack.org/wiki/TripleO/ReleaseManagement
[8]: http://semver.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FWD] KVM Forum 2014 Call for Participation

2014-06-16 Thread Daniel P. Berrange
- Forwarded message from Paolo Bonzini pbonz...@redhat.com -

 Date: Mon, 16 Jun 2014 18:08:17 +0200
 From: Paolo Bonzini pbonz...@redhat.com
 To: qemu-devel qemu-de...@nongnu.org
 Subject: [Qemu-devel] KVM Forum 2014 Call for Participation

=
KVM Forum 2014: Call For Participation
October 14-16, 2014 - Congress Centre Düsseldorf - Düsseldorf, Germany

(All submissions must be received before midnight July 27, 2014)
=

KVM is an industry leading open source hypervisor that provides an ideal
platform for datacenter virtualization, virtual desktop infrastructure,
and cloud computing.  Once again, it's time to bring together the
community of developers and users that define the KVM ecosystem for
our annual technical conference.  We will discuss the current state of
affairs and plan for the future of KVM, its surrounding infrastructure,
and management tools.  Mark your calendar and join us in advancing KVM.
http://events.linuxfoundation.org/events/kvm-forum/

Once again we are colocated with the Linux Foundation's LinuxCon Europe,
CloudOpen Europe, Embedded Linux Conference (ELC) Europe, and this year, the
Linux Plumbers Conference. KVM Forum attendees will be able to attend
LinuxCon + CloudOpen + ELC for a discounted rate.
http://events.linuxfoundation.org/events/kvm-forum/attend/register

We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2014.
http://events.linuxfoundation.org/cfp

Suggested topics:

 KVM/Kernel
 - Scaling and optimizations
 - Nested virtualization
 - Linux kernel performance improvements
 - Resource management (CPU, I/O, memory)
 - Hardening and security
 - VFIO: SR-IOV, GPU, platform device assignment
 - Architecture ports

 QEMU
 - Management interfaces: QOM and QMP
 - New devices, new boards, new architectures
 - Scaling and optimizations
 - Desktop virtualization and SPICE
 - Virtual GPU
 - virtio and vhost, including non-Linux or non-virtualized uses
 - Hardening and security
 - New storage features
 - Live migration and fault tolerance
 - High availability and continuous backup
 - Real-time guest support
 - Emulation and TCG
 - Firmware: ACPI, UEFI, coreboot, u-Boot, etc.
 - Testing

 Management and infrastructure
 - Managing KVM: Libvirt, OpenStack, oVirt, etc.
 - Storage: glusterfs, Ceph, etc.
 - Software defined networking: Open vSwitch, OpenDaylight, etc.
 - Network Function Virtualization
 - Security
 - Provisioning
 - Performance tuning


===
SUBMITTING YOUR PROPOSAL
===
Abstracts due: July 27, 2014

Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes.  Also include in your
proposal
the proposal type -- one of:
- technical talk
- end-user talk

Submit your proposal here:
http://events.linuxfoundation.org/cfp
Please only use the categories presentation and panel discussion

You will receive a notification whether or not your presentation proposal
was accepted by Aug 20th.

Speakers will receive a complimentary pass for the event. In the instance
that your submission has multiple presenters, only the primary speaker for a
proposal will receive a complementary event pass. For panel discussions, all
panelists will receive a complimentary event pass.

TECHNICAL TALKS

A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, it should
focus on the work that needs to be done or the difficulties that haven't yet
been solved.  Summarizing recent developments is okay but it should
not be more than a small portion of the overall talk.

END-USER TALKS

One of the big challenges as developers is to know what, where and how
people actually use our software.  We will reserve a few slots for end
users talking about their deployment challenges and achievements.

If you are using KVM in production you are encouraged submit a speaking
proposal.  Simply mark it as an end-user talk.  As an end user, this is a
unique opportunity to get your input to developers.

HANDS-ON / BOF SESSIONS

We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups. This time can also be used for hands-on hacking
sessions if you have concrete code problems to solve.

These sessions will be announced during the event. If you are interested
in organizing such a session, please add it to the list at

  http://www.linux-kvm.org/page/KVM_Forum_2014_BOF

Let people you think might be interested know about it, and encourage
them to add their names to the wiki page as well. Please try to
add your ideas to the list before KVM Forum starts.

PANEL DISCUSSIONS

If you are proposing a panel discussion, please make sure that you list 

Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Jason Rist
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon 16 Jun 2014 10:19:40 AM MDT, Tomas Sedovic wrote:
 All,
 
 After having proposed some changes[1][2] to
 tripleo-heat-templates[3], reviewers suggested adding a deprecation
 period for the merge.py script.
 
 While TripleO is an official OpenStack program, none of the
 projects under its umbrella (including tripleo-heat-templates) have
 gone through incubation and integration nor have they been shipped
 with Icehouse.
 
 So there is no implicit compatibility guarantee and I have not
 found anything about maintaining backwards compatibility neither on
 the TripleO wiki page[4], tripleo-heat-template's readme[5] or 
 tripleo-incubator's readme[6].
 
 The Release Management wiki page[7] suggests that we follow
 Semantic Versioning[8], under which prior to 1.0.0 (t-h-t is )
 anything goes. According to that wiki, we are using a stronger
 guarantee where we do promise to bump the minor version on
 incompatible changes -- but this again suggests that we do not
 promise to maintain backwards compatibility -- just that we
 document whenever we break it.
 
 According to Robert, there are now downstreams that have shipped
 things (with the implication that they don't expect things to
 change without a deprecation period) so there's clearly a
 disconnect here.
 
 If we do promise backwards compatibility, we should document it 
 somewhere and if we don't we should probably make that more
 visible, too, so people know what to expect.
 
 I prefer the latter, because it will make the merge.py cleanup
 easier and every published bit of information I could find suggests
 that's our current stance anyway.
 
 Tomas
 
 [1]: https://review.openstack.org/#/c/99384/ [2]:
 https://review.openstack.org/#/c/97939/ [3]:
 https://github.com/openstack/tripleo-heat-templates [4]:
 https://wiki.openstack.org/wiki/TripleO [5]: 
 https://github.com/openstack/tripleo-heat-templates/blob/master/README.md

 
[6]: https://github.com/openstack/tripleo-incubator/blob/master/README.rst
 [7]: https://wiki.openstack.org/wiki/TripleO/ReleaseManagement [8]:
 http://semver.org/
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I'm going to have to agree with Tomas here.  There doesn't seem to be
any reasonable expectation of backwards compatibility for the reasons
he outlined, despite some downstream releases that may be impacted.

- -J
- -- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTnxutAAoJEMqxYTi6t2f4Mh4H+gOF3aUZwIxY9FSEW/Hj1EjJ
eDSDDPuOwWSlMn8VNmEq44ks7KNgGDU/qpjaDUjAJ8BCEm4cXi8JtS7zYsPJJeY3
t3z/cFPkyhWLgg0qQYOB03rbqYGX58G43xa8lFjvVi7GyfqCSKJ3AxauF2bDkx+b
FoQztiaHvU09dw77JmvTxPJ2CpsvBHGaLkG3NneVIBNA8WtnBqKUQrT63ZnP8K++
G98SoMSNpXlztVEnElFMZoi+Lr7rO/37kP9CvqYtXBaDgL2Wbj6B+21Pn5OUVcXd
MTy0CcvvpM/P08DNFW9+coHJcQOKJSeAYuDxn8z0+bpyJkAiSf9o4zlWOWtavfU=
=qXmp
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Joe Gordon
On Sat, Jun 14, 2014 at 3:46 AM, Sean Dague s...@dague.net wrote:

 On 06/13/2014 06:47 PM, Joe Gordon wrote:
 
 
 
  On Thu, Jun 12, 2014 at 7:18 PM, Dan Prince dpri...@redhat.com
  mailto:dpri...@redhat.com wrote:
 
  On Thu, 2014-06-12 at 09:24 -0700, Joe Gordon wrote:
  
   On Jun 12, 2014 8:37 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
   
On 06/12/2014 10:38 AM, Mike Bayer wrote:

 On 6/12/14, 8:26 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:

 That's not cacthable in unit or functional tests?
 Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one
 day
   be useful
 is something we just don't have the liberty to do any more.
   We've not
 seen an idle node in zuul in 2 days... and we're only at j-1.
   j-3 will
 be at least +50% of this load.
 Sure, I'm not saying we don't have a problem. I'm just saying
   it's not a
 good solution to fix that problem IMHO.

 Just my 2c without having a full understanding of all of
   OpenStack's CI
 environment, Postgresql is definitely different enough that
 MySQL
 strict mode could still allow issues to slip through quite
   easily, and
 also as far as capacity issues, this might be longer term but
 I'm
   hoping
 to get database-related tests to be lots faster if we can move
 to
   a
 model that spends much less time creating databases and
 schemas.
   
This is what I mean by functional testing. If we were directly
   hitting a
real database on a set of in tree project tests, I think you
 could
discover issues like this. Neutron was headed down that path.
   
But if we're talking about a devstack / tempest run, it's not
 really
applicable.
   
If someone can point me to a case where we've actually found this
   kind
of bug with tempest / devstack, that would be great. I've just
   *never*
seen it. I was the one that did most of the fixing for pg
 support in
Nova, and have helped other projects as well, so I'm relatively
   familiar
with the kinds of fails we can discover. The ones that Julien
   pointed
really aren't likely to be exposed in our current system.
   
Which is why I think we're mostly just burning cycles on the
   existing
approach for no gain.
  
   Given all the points made above, I think dropping PostgreSQL is the
   right choice; if only we had infinite cloud that would be another
   story.
  
   What about converting one of our existing jobs (grenade partial
 ncpu,
   large ops, regular grenade, tempest with nova network etc.) into a
   PostgreSQL only job? We could get some level of PostgreSQL testing
   without any additional jobs, although this is a tradeoff obviously.
 
  I'd be fine with this tradeoff if it allows us to keep PostgreSQL in
 the
  mix.
 
 
  Here is my proposed change to how we handle postgres in the gate:
 
  https://review.openstack.org/#/c/100033
 
 
  Merge postgres and neutron jobs in integrated-gate template
 
 
 
 
  Instead of having a separate job for postgres and neutron, combine them.
  In the integrated-gate we will only test postgres+neutron and not
 
 
  neutron/mysql or nova-network/postgres.
 
  * neutron/mysql is still tested in integrated-gate-neutron
  * nova-network/postgres is tested in nova

 Because neutron only runs smoke jobs, this actually drops all the
 interesting testing of pg. The things I've actually seen catch
 differences are the nova negative tests, which basically aren't run in
 this job.


I forgot about the smoke test only part when I originally proposed this.
From a cursory look, neutron-full appears to be fairly stable, so if we
move over to neutron-full in the near future that should address your
concerns. Are there plans to move over to neutron-full in the near future?



 So I think that's kind of the worst of all possible worlds, because it
 would make people think the thing is tested interestingly, when it's not.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-06-16 Thread John Griffith
On Thu, Mar 27, 2014 at 10:45 AM, Shlomi Sasson shlo...@mellanox.com
wrote:

  Of course I’m aware of that.. I’m the one who pushed it there in the
 first place :)

 But it was not the best way to handle this.. I think that the right/better
 approach is as suggested.



 I’m planning to remove the existing ISERDriver code, this will eliminate
 significant code and class duplication, and will work with all the iSCSI
 vendors who supports both tcp and rdma without the need to modify their
 plug-in drivers.





 *From:* John Griffith [mailto:john.griff...@solidfire.com]
 *Sent:* Wednesday, March 26, 2014 22:47

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to
 support other iSCSI transports besides TCP







 On Wed, Mar 26, 2014 at 12:18 PM, Eric Harney ehar...@redhat.com wrote:

 On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

  I am not sure what will be the right approach to handle this, I already
 have the code, should I open a bug or blueprint to track this issue?
 
  Best Regards,
  Shlomi
 
 

 A blueprint around this would be appreciated.  I have had similar
 thoughts around this myself, that these should be options for the LVM
 iSCSI driver rather than different drivers.

 These options also mirror how we can choose between tgt/iet/lio in the
 LVM driver today.  I've been assuming that RDMA support will be added to
 the LIO driver there at some point, and this seems like a nice way to
 enable that.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 I'm open to improving this, but I am curious you know there's an ISER
 subclass in iscsi for Cinder currently right?

 http://goo.gl/kQJoDO

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Hi Shlomi,

I started working on something similar to this that decouples data-path and
control-plane for the backend devices and threw some ideas out in Cinder
channel this morning.  Luckily some folks remembered this post and pointed
me to it :)

Wondering did you ever post anything here?  Would you be interested in
collaborating on this and sharing ideas?​  I'm planning to have a prototype
ready to share later this week, but it sounds like you may already have a
lot of this done so it would be cool if we worked together on it.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Duncan Thomas
On 16 June 2014 17:30, Jason Rist jr...@redhat.com wrote:
 I'm going to have to agree with Tomas here.  There doesn't seem to be
 any reasonable expectation of backwards compatibility for the reasons
 he outlined, despite some downstream releases that may be impacted.


Backward compatibility is a hard habit to get into, and easy to put
off. If you're not making any guarantees now, when are you going to
start making them? How much breakage can users expect? Without wanting
to look entirely like a troll, should TripleO be dropped as an
official project until it can start making such guarantees? I think every
other official OpenStack project has a stable API policy of some kind,
even if they don't entirely match...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/16/2014 08:37 AM, Thierry Carrez wrote:
 Sean Dague wrote:
 Hacking 0.9 series was released pretty late for Juno. The entire
 check queue was flooded this morning with requirements proposals
 failing pep8 because of it (so at 6am EST we were waiting 1.5 hrs
 for a check node).
 
 The previous soft policy with pep8 updates was that we set a
 pep8 version basically release week, and changes stopped being
 done for style after first milestone.
 
 I think in the spirit of that we should revert the hacking
 requirements update back to the 0.8 series for Juno. We're past
 milestone 1, so shouldn't be working on style only fixes at this
 point.
 
 Proposed review here - https://review.openstack.org/#/c/100231/
 
 I also think in future hacking major releases need to happen
 within one week of release, or not at all for that series.
 
 We may also have reached a size where changing style rules is just
 too costly, whatever the moment in the cycle. I think it's good
 that we have rules to enforce a minimum of common style, but the
 added value of those extra rules is limited, while their impact on
 the common gate grows as we add more projects.

A few thoughts:

1) I disagree with the proposition that hacking updates can only
happen in the first week after release.  I get that there needs to be
a cutoff, but I don't think one week is reasonable.  Even if we
release in the first week, you're still going to be dealing with
hacking updates for the rest of the cycle as projects adopt the new
rules at their leisure.  I don't like retroactively applying milestone
1 as a cutoff either, although I could see making that the policy
going forward.

2) Given that most of the changes involved in fixing the new failures
are trivial, I think we should encourage combining the fixes into one
commit.  We _really_ don't need separate commits to fix H305 and H307.
 This doesn't help much with the reviewer load, but it should reduce
the gate load somewhat.  It violates the one change-one commit rule,
but A foolish consistency...

3) We should start requiring specs for all new hacking rules to make
sure we have consensus (I think oslo-specs is the place for this).  2
+2's doesn't really accomplish that.  We also may need to raise the
bar for inclusion of new rules - while I agree with all of the new
ones added in hacking .9, I wonder if some of them are really necessary.

4) I don't think we're at a point where we should freeze hacking
completely, however.  The import grouping and long line wrapping
checks in particular are things that reviewers have to enforce today,
and that has a significant, if less well-defined, cost too.  If we're
really going to say those rules can't be enforced by hacking then we
need to remove them from our hacking guidelines and start the long
process of educating reviewers to stop requiring them.  I'd rather
just deal with the pain of adding them to hacking one time and never
have to think about them again.  I'm less convinced the other two that
were added in .9 are necessary, but in any case these are discussions
that should happen in spec reviews going forward.

5) We may want to come up with some way to globally disable pep8
checks we don't want to enforce, since we don't have any control over
that but probably don't want to just stop updating pep8.  That could
make the pain of these updates much less.

I could probably come up with a few more, but this is already too
wall-of-texty for my tastes. :-)
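
(As a concrete example of the import grouping mentioned in (4) -- this is the
three-group ordering reviewers currently police by hand, sketched from memory
rather than quoted from the hacking docs:)

    # standard library first
    import os
    import sys

    # third-party libraries second
    import six

    # project imports last, each group alphabetized
    from nova import exception
    from nova import utils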

- -Ben
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTnx7wAAoJEDehGd0Fy7uqoYAH/0KmxmR873Qn2Kti7LIEUNp4
1FJaBOX09ItxkkvyNRpcsIQu4fWycm60CckSOLfB7rgxgIjgsVkiZ9puE6oCmj2o
Lhe5DhjYA2ROu9h8i0vmzYDnAKeu/WuRGtgyLSElUXeuiLpSrBcEA/03GpkCGiAP
1muAkVgv2oxDDwsaLwL7MmFrlZ1MPTP97lAfsfHbwbsOM5YMuPrRz9PirgHPBtTV
59UyofCGEBTtJKmJRLzRDZyDwTux5xrrc/cefer5GFLQH0ZbxOU1HHFESyc5wFVJ
tI/3nPlbFpqCUtgmnQc8k3lX3d2H1Qr9UfCvYlJFTN1TmPmHmK378ioi81HoAVo=
=tqtf
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Arkady_Kanevsky
Why is OOO being singled out for backwards compatibility?

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Monday, June 16, 2014 11:42 AM
To: jr...@redhat.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [TripleO] Backwards compatibility policy for our 
projects

On 16 June 2014 17:30, Jason Rist jr...@redhat.com wrote:
 I'm going to have to agree with Tomas here.  There doesn't seem to be 
 any reasonable expectation of backwards compatibility for the reasons 
 he outlined, despite some downstream releases that may be impacted.


Backward compatibility is a hard habit to get into, and easy to put off. If 
you're not making any guarantees now, when are you going to start making them? 
How much breakage can users expect? Without wanting to look entirely like a 
troll, should TripleO be dropped as an official until it can start making such 
guarantees? I think every other official OpenStack project has a stable API 
policy of some kind, even if they don't entirely match...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Clint Byrum
Excerpts from Tomas Sedovic's message of 2014-06-16 09:19:40 -0700:
 All,
 
 After having proposed some changes[1][2] to tripleo-heat-templates[3],
 reviewers suggested adding a deprecation period for the merge.py script.
 
 While TripleO is an official OpenStack program, none of the projects
 under its umbrella (including tripleo-heat-templates) have gone through
 incubation and integration nor have they been shipped with Icehouse.
 
 So there is no implicit compatibility guarantee and I have not found
 anything about maintaining backwards compatibility neither on the
 TripleO wiki page[4], tripleo-heat-template's readme[5] or
 tripleo-incubator's readme[6].
 
 The Release Management wiki page[7] suggests that we follow Semantic
 Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
 According to that wiki, we are using a stronger guarantee where we do
 promise to bump the minor version on incompatible changes -- but this
 again suggests that we do not promise to maintain backwards
 compatibility -- just that we document whenever we break it.
 

I think there are no guarantees, and no promises. I also think that we've
kept tripleo_heat_merge pretty narrow in surface area since making it
into a module, so I'm not concerned that it will be incredibly difficult
to keep those features alive for a while.

 According to Robert, there are now downstreams that have shipped things
 (with the implication that they don't expect things to change without a
 deprecation period) so there's clearly a disconnect here.
 

I think it is more of a "we will cause them extra work" thing. If we
can make a best effort and deprecate for a few releases (as in, a few
releases of t-h-t, not OpenStack), they'll likely appreciate that. If
we can't do it without a lot of effort, we shouldn't bother.

 If we do promise backwards compatibility, we should document it
 somewhere and if we don't we should probably make that more visible,
 too, so people know what to expect.
 
 I prefer the latter, because it will make the merge.py cleanup easier
 and every published bit of information I could find suggests that's our
 current stance anyway.
 

This is more about good will than promising. If it is easy enough to
just keep the code around and have it complain to us if we accidentally
resurrect a feature, that should be enough. We could even introduce a
switch to the CLI like --strict that we can run in our gate and that
won't allow us to keep using deprecated features.

So I'd like to see us deprecate not because we have to, but because we
can do it with only a small amount of effort.
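
To make that concrete, a rough sketch -- not the actual merge.py code, and
--strict here is only the hypothetical switch mentioned above:

    import argparse
    import warnings

    # DeprecationWarning is hidden by default, so make sure it is shown.
    warnings.simplefilter('default', DeprecationWarning)

    def deprecated_feature(name, strict=False):
        # Complain loudly, or fail outright when the gate runs in strict mode.
        msg = "%s is deprecated and will be removed in a future release" % name
        if strict:
            raise SystemExit("strict mode: " + msg)
        warnings.warn(msg, DeprecationWarning)

    parser = argparse.ArgumentParser()
    parser.add_argument('--strict', action='store_true',
                        help='treat deprecated template features as errors')
    args, _ = parser.parse_known_args()

    # e.g., wherever a soon-to-be-removed template feature is detected:
    # deprecated_feature('some-legacy-merge-feature', strict=args.strict)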

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-06-16 09:41:49 -0700:
 On 16 June 2014 17:30, Jason Rist jr...@redhat.com wrote:
  I'm going to have to agree with Tomas here.  There doesn't seem to be
  any reasonable expectation of backwards compatibility for the reasons
  he outlined, despite some downstream releases that may be impacted.
 
 
 Backward compatibility is a hard habit to get into, and easy to put
 off. If you're not making any guarantees now, when are you going to
 start making them? How much breakage can users expect? Without wanting
 to look entirely like a troll, should TripleO be dropped as an
 official until it can start making such guarantees? I think every
 official project until it can start making such guarantees? I think every
 even if they don't entirely match...
 

I actually agree with the sentiment of your statement, which is backward
compatibility matters.

However, there is one thing that is inaccurate in your statements:

TripleO is not a project, it is a program. These tools are products
of that program's mission which is to deploy OpenStack using itself as
much as possible. Where there are holes, we fill them with existing
tools or we write minimal tools such as the tripleo_heat_merge Heat
template pre-processor.

This particular tool is marked for death as soon as Heat grows the
appropriate capabilities to allow that. This tool never wants to
be integrated into the release. So it is a little hard to justify
bending over backwards for BC. But I don't think that is what anybody
is requesting.

We're not looking for this tool to remain super agile and grow, thus
making any existing code and interfaces a burden. So I think it is pretty
easy to just start marking features as deprecated and raising deprecation
warnings when they're used.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Sean Dague
On 06/16/2014 12:44 PM, Ben Nemec wrote:
 On 06/16/2014 08:37 AM, Thierry Carrez wrote:
 Sean Dague wrote:
 Hacking 0.9 series was released pretty late for Juno. The entire
 check queue was flooded this morning with requirements proposals
 failing pep8 because of it (so at 6am EST we were waiting 1.5 hrs
 for a check node).

 The previous soft policy with pep8 updates was that we set a
 pep8 version basically release week, and changes stopped being
 done for style after first milestone.

 I think in the spirit of that we should revert the hacking
 requirements update back to the 0.8 series for Juno. We're past
 milestone 1, so shouldn't be working on style only fixes at this
 point.

 Proposed review here - https://review.openstack.org/#/c/100231/

 I also think in future hacking major releases need to happen
 within one week of release, or not at all for that series.
 
 We may also have reached a size where changing style rules is just
 too costly, whatever the moment in the cycle. I think it's good
 that we have rules to enforce a minimum of common style, but the
 added value of those extra rules is limited, while their impact on
 the common gate grows as we add more projects.
 
 A few thoughts:
 
 1) I disagree with the proposition that hacking updates can only
 happen in the first week after release.  I get that there needs to be
 a cutoff, but I don't think one week is reasonable.  Even if we
 release in the first week, you're still going to be dealing with
 hacking updates for the rest of the cycle as projects adopt the new
 rules at their leisure.  I don't like retroactively applying milestone
 1 as a cutoff either, although I could see making that the policy
 going forward.
 
 2) Given that most of the changes involved in fixing the new failures
 are trivial, I think we should encourage combining the fixes into one
 commit.  We _really_ don't need separate commits to fix H305 and H307.
  This doesn't help much with the reviewer load, but it should reduce
 the gate load somewhat.  It violates the one change-one commit rule,
 but A foolish consistency...

The challenge is that hacking updates are basically giant merge conflict
engines. If there is any significant amount of code outstanding in a
project, landing hacking only changes basically means requiring much of
the outstanding code to rebase.

So it's actually expensive in a way that doesn't jump out immediately.
The cost of landing hacking isn't just the code of reviewing the hacking
patches, it's also the cost of the extra roundtrips on outstanding patches.

 3) We should start requiring specs for all new hacking rules to make
 sure we have consensus (I think oslo-specs is the place for this).  2
 +2's doesn't really accomplish that.  We also may need to raise the
 bar for inclusion of new rules - while I agree with all of the new
 ones added in hacking .9, I wonder if some of them are really necessary.

 4) I don't think we're at a point where we should freeze hacking
 completely, however.  The import grouping and long line wrapping
 checks in particular are things that reviewers have to enforce today,
 and that has a significant, if less well-defined, cost too.  If we're
 really going to say those rules can't be enforced by hacking then we
 need to remove them from our hacking guidelines and start the long
 process of educating reviewers to stop requiring them.  I'd rather
 just deal with the pain of adding them to hacking one time and never
 have to think about them again.  I'm less convinced the other two that
 were added in .9 are necessary, but in any case these are discussions
 that should happen in spec reviews going forward.

I think both of those cases are really nits to the point that they
aren't worth enforcing. They won't change the correctness of the code.
And barely change the readability.

There are differences with things like the is None checks, or python 3
checks, which change correctness, or prevent subtle bugs. But I think
we're now getting to a level of cleanliness enforcement that trumps
functionally working.

 5) We may want to come up with some way to globally disable pep8
 checks we don't want to enforce, since we don't have any control over
 that but probably don't want to just stop updating pep8.  That could
 make the pain of these updates much less.

Actually, if you look at python novaclient, the doc string checks, which
are really niggly, and part of hacking are the biggest fails. It's much
less upstream that's doing this to us.

 I could probably come up with a few more, but this is already too
 wall-of-texty for my tastes. :-)
 
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

[openstack-dev] [Manila] GenericDriver cinder volume error during manila create

2014-06-16 Thread Deepak Shetty
I am trying devstack on F20 setup with Manila sources.

When I am trying to do
*manila create --name cinder_vol_share_using_nfs2 --share-network-id
36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2 *

I see the below error in c-vol due to which even tho' my service VM is
started, manila create errors out as cinder volume is not getting exported
as iSCSI

2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created as
raw with specification: {'status': u'creating', 'volume_size': 2,
'volume_name': u'volume-8bfd
424d-9877-4c20-a9d1-058c06b9bdda'}
2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf lvcreate -n
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from (pid=4
623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
(8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped when
creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2', 'tenant':
u'b65a066f32df4aca80
fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095 - - -'}
2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Volume
8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623)
initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Creat
ing iscsi_target for: volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Crea
ted volume path
/opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
content:
<target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda>
    backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
    lld iscsi
    IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
</target>
from (pid=4623) create_iscsi_target
/opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
a from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Result was 107 from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:167
2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095]
Failed to create iscsi target for volume
id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
Exit code: 107
Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1
-T
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited
with code: 107.\n'
Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport
endpoint is not connected\ntgtadm: failed to send request hdr to tgt
daemon, Transport endpoint is not connected\ntgtadm: failed to send request
hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to
send request hdr to tgt daemon, Transport endpoint is not connected\n'
2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed
to create iscsi target for volume
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda.
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback (most
recent call last):
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher 

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Mac Innes, Kiall
On Thu, 2014-06-12 at 11:36 -0400, Sean Dague wrote:
 If someone can point me to a case where we've actually found this kind
 of bug with tempest / devstack, that would be great. I've just *never*
 seen it. I was the one that did most of the fixing for pg support in
 Nova, and have helped other projects as well, so I'm relatively
 familiar
 with the kinds of fails we can discover. The ones that Julien pointed
 really aren't likely to be exposed in our current system.
 
 Which is why I think we're mostly just burning cycles on the existing
 approach for no gain.
 
 -Sean

I don't have links handy - but Designate has hit a couple of bugs that
prevented our database migrations from succeeding on PostgreSQL. Maybe we
need a new re-usable test slave type for exercising database migrations and
database interface code against real databases? These would be much quicker
to run than a full devstack/tempest gate.
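
(Illustrative only, not Designate's actual test code: the kind of lightweight
check such a slave type could run is roughly the following, assuming an
alembic-based project and a PostgreSQL URL handed to the job through an
environment variable; the variable name and the openstack_citest credentials
below are assumptions following the usual CI convention.)

import os

from alembic import command
from alembic.config import Config

# Connection URL supplied by the (hypothetical) migration-test job.
DB_URL = os.environ.get(
    "MIGRATION_TEST_DB_URL",
    "postgresql://openstack_citest:openstack_citest@localhost/openstack_citest")


def test_walk_migrations_on_postgres():
    """Walk migrations up and back down against a real PostgreSQL."""
    cfg = Config("alembic.ini")
    cfg.set_main_option("sqlalchemy.url", DB_URL)
    command.upgrade(cfg, "head")
    command.downgrade(cfg, "base")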

Thanks,
Kiall
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Duncan Thomas
Hi Clint

This looks like special pleading - all OpenStack projects (or
'programs' if you prefer - I'm honestly not seeing a difference) have
bits that they've written quickly and would rather not have to
maintain, but they have to do that work in order to allow people to
make use of them downstream. Ask the cinder team about how much I try to stay
on top of any back-compat issues.

If TripleO is not ready to take up that burden, then IMO it shouldn't
be an official project. If the bits that make it up are too immature
to actually be maintained with reasonable guarantees that they won't
just pull the rug out from any consumers, then their use needs to be
re-thought. Currently, tripleO enjoys huge benefits from its official
status, but isn't delivering to that standard. No other project has a
hope of coming in as an official deployment tool while tripleO holds
that niche. Despite this, tripleO is barely usable, and doesn't seem
to be maturing towards taking up the responsibilities that other
projects have had forced upon them. If it isn't ready for that, should
it go back to incubation and give some other team or technology a fair
chance to step up to the plate?

I don't want to look like I'm specifically beating on tripleO here,
but it is the first openstack component I've worked with that seems to
have this little concern for downstream users *and* no apparent plans
to fix it.

That's without going into all of the other difficulties myself and
fellow developers have had trying to get involved with tripleO, which
I'll go into at some other point.

It is possible there are other places with similar problems, but this
is the first I've run into - I'll call out any others I run into,
since I think it is important, and discussing it publicly keeps
everyone honest. If I've got the wrong expectations, I'd at least like
to have the correction on record.

Regards


-- 
Duncan Thomas




On 16 June 2014 17:58, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Duncan Thomas's message of 2014-06-16 09:41:49 -0700:
 On 16 June 2014 17:30, Jason Rist jr...@redhat.com wrote:
  I'm going to have to agree with Tomas here.  There doesn't seem to be
  any reasonable expectation of backwards compatibility for the reasons
  he outlined, despite some downstream releases that may be impacted.


 Backward compatibility is a hard habit to get into, and easy to put
 off. If you're not making any guarantees now, when are you going to
 start making them? How much breakage can users expect? Without wanting
 to look entirely like a troll, should TripleO be dropped as an
 official until it can start making such guarantees? I think every
 other official OpenStack project has a stable API policy of some kind,
 even if they don't entirely match...


 I actually agree with the sentiment of your statement, which is backward
 compatibility matters.

 However, there is one thing that is inaccurate in your statements:

 TripleO is not a project, it is a program. These tools are products
 of that program's mission which is to deploy OpenStack using itself as
 much as possible. Where there are holes, we fill them with existing
 tools or we write minimal tools such as the tripleo_heat_merge Heat
 template pre-processor.

 This particular tool is marked for death as soon as Heat grows the
 appropriate capabilities to allow that. This tool never wants to
 be integrated into the release. So it is a little hard to justify
 bending over backwards for BC. But I don't think that is what anybody
 is requesting.

 We're not looking for this tool to remain super agile and grow, thus
 making any existing code and interfaces a burden. So I think it is pretty
 easy to just start marking features as deprecated and raising deprecation
 warnings when they're used.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Clint Byrum
Excerpts from Sean Dague's message of 2014-06-16 05:15:54 -0700:
 Hacking 0.9 series was released pretty late for Juno. The entire check
 queue was flooded this morning with requirements proposals failing pep8
 because of it (so at 6am EST we were waiting 1.5 hrs for a check node).
 
 The previous soft policy with pep8 updates was that we set a pep8
 version basically release week, and changes stopped being done for style
 after first milestone.
 
 I think in the spirit of that we should revert the hacking requirements
 update back to the 0.8 series for Juno. We're past milestone 1, so
 shouldn't be working on style only fixes at this point.
 
 Proposed review here - https://review.openstack.org/#/c/100231/
 
 I also think in future hacking major releases need to happen within one
 week of release, or not at all for that series.
 

+1. Hacking is supposed to help us avoid redundant nit-picking in
reviews. If it places any large burden on developers, whether by causing
merge conflicts or by backing up CI, it is a failure IMO.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Heat integration

2014-06-16 Thread Nikhil Manchanda

Denis Makogon writes:

 Good day, Stackers, Trove community.


 I'd like to start thread related to orchestration based resource
 management. At current state Heat support in Trove is nothing else than
 experimental. Trove should be able to fully support Trove as resource
 management driver.

I assume you mean that Trove should support Heat as a way to
provision resources. I do agree with this sentiment, and Trove
today does have two ways of provisioning instances (i.e. with or without
Heat, depending on configuration).


 Why is it so important?

 Because Trove should not do what it does now (cloud service orchestration
 is not the part of the OS Database Program). Trove should delegate all
 tasks to Cloud Orchestration Service (Heat).


Agreed that Trove should delegate provisioning, and orchestration tasks
to Heat. Tasks like taking backups, configuring an instance, promoting
it to master, etc are database specific tasks, and I'm not convinced
that Heat is the route to take for them.


 [...]

 Resource management interface

 What is management interface – abstract class that describes the required
 tasks to accomplish. From Trove-taskmanager perspective, management
 interface is nothing else than RPC service manager that being used at
 service start [1].


Why is a separate RPC interface needed for this? It would be prudent,
imho, to keep the same TaskManager RPC API, since that works as a
generic API across both Heat and Natives (nova, cinder, etc) and because
that's what's being deployed in production today. Changing it would mean
figuring out what to do about backwards compatibility, and I'm not sure
that changing it would actually buy us anything.


 Why is it needed?

 The first answer is: To split-out two completely different resource
 management engines. Nova/Cinder/Neutron engine etc. called “NATIVES” and
 Heat engine called “ORCHESTRATOR”.

 As you can all know they cannot work together, because they are acting with
 resources in their own manners. But both engines are sharing more than
 enough common code inside the Trove.

So this was something that was discussed in one of our meetings. There's
nothing today that prevents orchestrating the creation tasks via heat,
and later querying a native API for read-only information (for
eg. query nova to check the flavor of a provisioned instance). In this
case, Heat is being used for orchestration, but we're still working with
native API calls where it makes sense. So is there a real need or
requirement to move away from this model?
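
(To make the 'one RPC API, multiple engines underneath' point concrete, here
is a purely illustrative sketch - not Trove's actual classes or option names:
the taskmanager keeps its existing RPC surface and a config option picks the
provisioning strategy behind it.)

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class ProvisioningStrategy(object):
    """Illustrative only - not Trove code."""

    @abc.abstractmethod
    def create_instance(self, context, instance_id, flavor, volume_size):
        """Provision compute/volume/networking for a database instance."""


class NativeStrategy(ProvisioningStrategy):
    def create_instance(self, context, instance_id, flavor, volume_size):
        # Talk to the nova/cinder/neutron clients directly.
        pass


class HeatStrategy(ProvisioningStrategy):
    def create_instance(self, context, instance_id, flavor, volume_size):
        # Build a template and create/adopt a stack via heatclient.
        pass


def get_strategy(conf):
    # e.g. a hypothetical CONF.provisioning_driver = "heat" or "native"
    if conf.provisioning_driver == "heat":
        return HeatStrategy()
    return NativeStrategy()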


 Is it backward compatible?

 Here comes the third (mixed) manager called “MIGRATION”. It allows to work
 with previously provisioned instances through NATIVES engine (resizes,
 migration, deletion) but new instances which would be provisioned in future
 will be provisioned withing stacks through ORCHESTRATOR.

 So, there are three valid options:

-

use NATIVES if there's no available Heat;
-

use ORCHESTRATOR to work with Heat only;
-

use MIGRATION to work with mixed manager;

This does seem a bit overly complex. Steve mentioned the idea of stack
adopt (Thanks!), and I think that would be quite a bit simpler. I think
it behooves us to investigate that as a mechanism for creating a stack
from existing resources, rather than having something like a mixed
migration manager that has been proposed.


 [...]

implement instance resize; Done

 https://github.com/openstack/heat/blob/master/heat/engine/resources/instance.py#L564-L648
-

implement volume resize; Done

 https://github.com/openstack/heat/commit/34e215c3c930b3b79bc3795dca3b5a73678f2a36


IIRC we did have an open issue and were trying to work with heat devs to
expose a callback to trove in the case of the VERIFY_RESIZE during
instance resize. Is that now done?



 Testing environment

 In terms of this topic i’d like to propose two new experimental gates for
 Trove:

-

gate-trove-heat-integration (integration testing)
-

This should probably be a higher priority. I'd be more inclined to get
the gate testing all scenarios of the current heat workflow that we
already have in trove today (configuration based), rather than work on a
resource management interface. We really need to make sure our QA and
test coverage around the heat scenarios is better than what we have
today. This would probably be my highest priority over any of the other
work mentioned above.


gate-trove-heat-integration-faked (testing based upon fake
implementation of heat).


We don't need a separate gate job for this, and the current tox unit
test gate would suffice. We could re-run the fake tests with the heat
configuration enabled as part of the same unit test job.


 Review process

 For the next BP review meeting (Trove) i will revisit this BP since all
 required tasks were done.

 For the bug-reports, i’d like to ask Trove-core team to take a look at
 them. And review/approve/merge them as soon as possible to start 

Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-16 Thread Stefano Maffulli
On Fri 13 Jun 2014 10:51:24 AM PDT, Stangel, Dan wrote:
 You can also refer to the example of Stackalytics, who have created
 their own hierarchy and groupings for metrics reporting:
 https://github.com/stackforge/stackalytics/blob/master/etc/default_data.json

It's a very neat grouping. It seems to me that the clients are grouped 
with their parent git/gerrit repo (nova with python-novaclient, under 
'Compute' program) and Nova is shown alone. I don't see the Python 
clients' individual repositories, either on their own or grouped: is that 
correct?

For the quarterly reports I will need granularity because I believe 
that clients have different dynamics than their parent project (and if 
that proves not to be the case, we can remove this complexity later and 
merge data).
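
(Purely hypothetical structure, not Stackalytics' actual schema, just to show 
the kind of granularity I mean:)

# Hypothetical grouping for the quarterly reports: servers and their
# clients tracked separately, so their dynamics can be compared.
REPO_GROUPS = {
    "Compute": ["openstack/nova"],
    "Compute clients": ["openstack/python-novaclient"],
    "Networking": ["openstack/neutron"],
    "Networking clients": ["openstack/python-neutronclient"],
}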

can you share a concrete example of how you group things?

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-16 Thread Dmitry Borodaenko
Mistral doesn't have to be married to RabbitMQ, there's a ZeroMQ
driver in oslo.messaging, so in theory Mistral should be able to make
use of that.

On Mon, Jun 16, 2014 at 1:42 AM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
 Guys,

 First of all we need to agree about what orchestration is. In terms of Fuel,
 orchestration is task management (or scheduling) + task running. In other
 words, an orchestrator needs to be able to get data (yaml, json, etc.), to
 decide what to do, when and where, and then to do it. For task management we
 need the kind of logic that Mistral provides. For launching tasks it just
 needs the kind of transport that is available when we use mcollective,
 saltstack or ssh.

 As far as I know (I did research on Saltstack a year ago), Saltstack does
 not have a mature task management mechanism. What it has is the so-called
 overstate mechanism, which allows one to write a script for managing tasks in
 multi-node environments, like launch task-1 on node-1, then launch task-2 on
 node-2 and then launch task-3 on node-1 again. It works, but it is
 semi-manual. I mean it is exactly what we already have and call Astute.
 The only difference is that Astute is a wrapper around Mcollective.

 The only advantages I see in using Saltstack instead of Mcollective are that
 it is written in Python (Mcollective still does not have a Python binding) and
 that it uses ZeroMQ. Maybe those advantages are not so subtle, but let's
 take a careful look.

 For example, the fact that Saltstack is written in Python allows us to use
 Saltstack directly from Nailgun. But I am absolutely sure that everyone will
 agree that would be a serious architectural flaw. If you ask me, Nailgun has
 to use an external task management service with a clearly defined API, such as
 Mistral. Mistral already has plenty of capabilities for that. Do we really
 need to implement all that stuff?

 ZeroMQ is a great advantage if you have thousands of nodes. It is highly
 scalable. It also allows one to avoid using an additional external
 service like Rabbit. Minus one point of failure, right? On the other hand,
 it brings us into the world of Saltstack with its own bugs, despite its
 maturity.

 Consequently, my personal preference is to concentrate on splitting puppet
 code into independent tasks and using Mistral for resolving task
 dependencies. As our transport layer we'll then be able to use whatever we
 want (Saltstack, Mcollective, bare ssh, any kind of custom implementation,
 etc.)




 Vladimir Kozhukalov


 On Fri, Jun 13, 2014 at 8:45 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Dmitry,
 please read design doc attached to
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization.
 I think it can serve as a good source of requirements which we have, and
 then we can see what tool is the best.

 Regards,




 On Thu, Jun 12, 2014 at 12:28 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, what we really need from an orchestration tool is the ability to
 orchestrate a large number of tasks across the nodes, with all the complicated
 dependencies, dynamic actions (e.g. what to do on failure and on success)
 and parallel execution, including tasks that can have no additional software
 installed somewhere deep in the user's infrastructure (e.g. we need to send
 a RESTful request to vCenter). And this is the use case for our pluggable
 architecture. I am wondering if saltstack can do this.


 On Wed, Jun 11, 2014 at 9:08 PM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:

 Hi,

 That would be nice to compare Ansible and Salt. They are both Python
 based. Also, Ansible has pull model also. Personally, I am big fan of
 Ansible because of its simplicity and speed of playbook development.

 ~Sergii


 On Wed, Jun 11, 2014 at 1:21 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Well, I don't have any comparison chart. I can work on one based on the
 requirements I've provided in the initial letter, but:
 I like Ansible, but it is agentless, and it won't fit well in our
 current model of communication between nailgun and the orchestrator.
 Cloudify is a Java-based application; even if it is pluggable with other
 language bindings, we will benefit from an application in Python.
 Salt has been around for 3-4 years; simply compare the GitHub graphs, it is
 one of the most used and active projects in the Python community.

 https://github.com/stackforge/mistral/graphs/contributors
 https://github.com/saltstack/salt/graphs/contributors


 On Wed, Jun 11, 2014 at 1:04 PM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:

 Hi,

 There are many mature orchestration applications (Salt, Ansible,
 Cloudify, Mistral). Is there any comparison chart? That would be nice to
 compare them to understand the maturity level. Thanks

 ~Sergii


 On Wed, Jun 11, 2014 at 12:48 PM, Dmitriy Shulyak
 dshul...@mirantis.com wrote:

 Actually I am proposing Salt as an alternative; the main reason is that Salt
 is a mature, feature-full orchestration 

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Robert Collins
On 16 Jun 2014 20:33, Thierry Carrez thie...@openstack.org wrote:

 Robert Collins wrote:
  [...]
  C - If we can't make it harder to get races in, perhaps we can make it
  easier to get races out. We have pretty solid emergent statistics from
  every gate job that is run as check. What if set a policy that when a
  gate queue gets a race:
   - put a zuul stop all merges and checks on all involved branches
  (prevent further damage, free capacity for validation)
   - figure out when it surfaced
   - determine its not an external event
   - revert all involved branches back to the point where they looked
  good, as one large operation
 - run that through jenkins N (e.g. 458) times in parallel.
 - on success land it
   - go through all the merges that have been reverted and either
  twiddle them to be back in review with a new patchset against the
  revert to restore their content, or alternatively generate new reviews
  if gerrit would make that too hard.

 One of the issues here is that gate queue gets a race is not a binary
 state. There are always rare issues, you just can't find all the bugs
 that happen 0.1% of the time. You add more such issues, and at some
 point they either add up to an unacceptable level, or some other
 environmental situation suddenly increases the odds of some old rare
 issue to happen (think: new test cluster with slightly different
 performance characteristics being thrown into our test resources). There
 is no single incident you need to find and fix, and during which you can
 clearly escalate to defCon 1. You can't even assume that a gate
 situation was created in the set of commits around when it surfaced.

 So IMHO it's a continuous process : keep looking into rare issues all
 the time, to maintain them under the level where they become a problem.
 You can't just have a specific process that kicks in when the gate
 queue gets a race

You seem to be drawing different conclusions here but the emergent
behaviour is a shared model that we both have. In no part of my mail did I
suggest ignoring issues until we hit Defcon one. I suggested that what we
are doing is not working, and put forward a model to explain why it's not
working ... one which to me seems to fit the evidence. And finally
suggested a few different things which might help.

For the specific scenario you raise that might not fit... Adding a test
cluster is a change to our test config and certainly something we could
revert. That's the benefit of configuration as code.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Kyle Mestery
On Mon, Jun 16, 2014 at 11:38 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Sat, Jun 14, 2014 at 3:46 AM, Sean Dague s...@dague.net wrote:

 On 06/13/2014 06:47 PM, Joe Gordon wrote:
 
 
 
  On Thu, Jun 12, 2014 at 7:18 PM, Dan Prince dpri...@redhat.com
  mailto:dpri...@redhat.com wrote:
 
  On Thu, 2014-06-12 at 09:24 -0700, Joe Gordon wrote:
  
   On Jun 12, 2014 8:37 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
   
On 06/12/2014 10:38 AM, Mike Bayer wrote:

 On 6/12/14, 8:26 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:

 That's not cacthable in unit or functional tests?
 Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one
  day
   be useful
 is something we just don't have the liberty to do any more.
   We've not
 seen an idle node in zuul in 2 days... and we're only at
  j-1.
   j-3 will
 be at least +50% of this load.
 Sure, I'm not saying we don't have a problem. I'm just saying
   it's not a
 good solution to fix that problem IMHO.

 Just my 2c without having a full understanding of all of
   OpenStack's CI
 environment, Postgresql is definitely different enough that
  MySQL
 strict mode could still allow issues to slip through quite
   easily, and
 also as far as capacity issues, this might be longer term but
  I'm
   hoping
 to get database-related tests to be lots faster if we can move
  to
   a
 model that spends much less time creating databases and
  schemas.
   
This is what I mean by functional testing. If we were directly
   hitting a
real database on a set of in tree project tests, I think you
  could
discover issues like this. Neutron was headed down that path.
   
But if we're talking about a devstack / tempest run, it's not
  really
applicable.
   
If someone can point me to a case where we've actually found
  this
   kind
of bug with tempest / devstack, that would be great. I've just
   *never*
seen it. I was the one that did most of the fixing for pg
  support in
Nova, and have helped other projects as well, so I'm relatively
   familiar
with the kinds of fails we can discover. The ones that Julien
   pointed
really aren't likely to be exposed in our current system.
   
Which is why I think we're mostly just burning cycles on the
   existing
approach for no gain.
  
   Given all the points made above, I think dropping PostgreSQL is
  the
   right choice; if only we had infinite cloud that would be another
   story.
  
   What about converting one of our existing jobs (grenade partial
  ncpu,
   large ops, regular grenade, tempest with nova network etc.) Into a
   PostgreSQL only job? We could get some level of PostgreSQL testing
   without any additional jobs, although this is  tradeoff obviously.
 
  I'd be fine with this tradeoff if it allows us to keep PostgreSQL in
  the
  mix.
 
 
  Here is my proposed change to how we handle postgres in the gate:
 
  https://review.openstack.org/#/c/100033
 
 
  Merge postgres and neutron jobs in integrated-gate template
 
 
 
 
  Instead of having a separate job for postgres and neutron, combine them.
  In the integrated-gate we will only test postgres+neutron and not
 
 
  neutron/mysql or nova-network/postgres.
 
  * neutron/mysql is still tested in integrated-gate-neutron
  * nova-network/postgres is tested in nova

 Because neutron only runs smoke jobs, this actually drops all the
 interesting testing of pg. The things I've actually seen catch
 differences are the nova negative tests, which basically aren't run in
 this job.


 I forgot about the smoke test only part when I originally proposed this.
 From a cursory look, neutron-full appears to be fairly stable, so if we move
 over to neutron-full in the near future that should address your concerns.
 Are there plans to move over to neutron-full in the near future?

This is on my radar for Juno-2. I'll syncup with some folks in-channel
on what the next steps would be to make this happen.

Kyle



 So I think that's kind of the worst of all possible worlds, because it
 would make people think the thing is tested interestingly, when it's not.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev 

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Robert Collins
On 16 Jun 2014 22:33, Sean Dague s...@dague.net wrote:

 On 06/16/2014 04:33 AM, Thierry Carrez wrote:
  Robert Collins wrote:
  [...]
  C - If we can't make it harder to get races in, perhaps we can make it
  easier to get races out. We have pretty solid emergent statistics from
  every gate job that is run as check. What if set a policy that when a
  gate queue gets a race:
   - put a zuul stop all merges and checks on all involved branches
  (prevent further damage, free capacity for validation)
   - figure out when it surfaced
   - determine its not an external event
   - revert all involved branches back to the point where they looked
  good, as one large operation
 - run that through jenkins N (e.g. 458) times in parallel.
 - on success land it
   - go through all the merges that have been reverted and either
  twiddle them to be back in review with a new patchset against the
  revert to restore their content, or alternatively generate new reviews
  if gerrit would make that too hard.
 
  One of the issues here is that gate queue gets a race is not a binary
  state. There are always rare issues, you just can't find all the bugs
  that happen 0.1% of the time. You add more such issues, and at some
  point they either add up to an unacceptable level, or some other
  environmental situation suddenly increases the odds of some old rare
  issue to happen (think: new test cluster with slightly different
  performance characteristics being thrown into our test resources). There
  is no single incident you need to find and fix, and during which you can
  clearly escalate to defCon 1. You can't even assume that a gate
  situation was created in the set of commits around when it surfaced.
 
  So IMHO it's a continuous process : keep looking into rare issues all
  the time, to maintain them under the level where they become a problem.
  You can't just have a specific process that kicks in when the gate
  queue gets a race.

 Definitely agree. I also think part of the issue is we get emergent
 behavior once we tip past some cumulative failure rate. Much of that
 emergent behavior we are coming to understand over time. We've done
 corrections like clean check and sliding gate window to impact them.

 It's also that a new issue tends to take 12 hrs to see and figure out if
 it's a ZOMG issue, and 3 - 5 days to see if it's any lower level of
 severity. And given that we merge 50 - 100 patches a day, across 40
 projects, across branches, the rollback would be  'interesting'.

So a ZOMG issue shows up within roughly 50 runs, and lower-severity issues
within 150 to 500 test runs. That's fitting my model pretty well for the
ballpark failure rate and margin I was using. That is, it sounds like the
model isn't too far out from reality.
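
(For anyone who wants to play with the numbers, here is the back-of-the-envelope
model in code form; the race rates and queue depths below are assumptions, not
actual gate data.)

# Chance that a gate queue of the given depth merges with no reset when
# each patch's test run independently hits a race with probability p.
def clean_merge_probability(p, depth):
    return (1.0 - p) ** depth


for p in (0.005, 0.01, 0.02):        # assumed per-run race failure rate
    for depth in (10, 20, 40):       # assumed gate queue depth
        print("p=%.3f depth=%2d -> %2.0f%% chance of no reset"
              % (p, depth, 100 * clean_merge_probability(p, depth)))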

Yes, revert would be hard... But what do you think of the model? Is it
wrong? It implies several different points where we can try to fix things, and
I would love to know what folks think of the other possibilities I've raised,
or to have them raise some themselves.

-Rob
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to install fuel on a esxi vm

2014-06-16 Thread Andrey Danin
Hi, Wang.
Could you provide a network topology of your installation (incliding
hardware and virtual L2/L3 parts of it)?
Also, please, check this instruction
http://vbyron.com/blog/deploy-openstack-on-vsphere-with-fuel/


On Thu, Jun 12, 2014 at 8:20 AM, Wang Liming wan...@certusnet.com.cn
wrote:

  Hi all:
  I installed OpenStack with Fuel (version 5.0) on a VMware ESXi VM.
 The network is configured correctly, but when OpenStack was deployed, none of
 the services got installed.
 Is there anything I did wrong?

 Best Regards
 Wang Liming



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-06-16 10:46:12 -0700:
 Hi Clint
 
 This looks like a special pleading here - all OpenStack projects (or
 'program' if you prefer - I'm honestly not seeing a difference) have
 bits that they've written quickly and would rather not have to
 maintain, but in order to allow people to make use of them downstream
 have to do that work. Ask the cinder team about how much I try to stay
 on top of any back-compat issues.
 

I don't just prefer program. It is an entirely different thing:

https://wiki.openstack.org/wiki/Programs

https://wiki.openstack.org/wiki/Governance/NewProjects

 If TripleO is not ready to take up that burden, then IMO it shouldn't
 be an official project. If the bits that make it up are too immature
 to actually be maintained with reasonable guarantees that they won't
 just pull the rug out from any consumers, then their use needs to be
 re-thought. Currently, tripleO enjoys huge benefits from its official
 status, but isn't delivering to that standard. No other project has a
 hope of coming in as an official deployment tool while tripleO holds
 that niche. Despite this, tripleO is barely usable, and doesn't seem
 to be maturing towards taking up the responsibilities that other
 projects have had forced upon them. If it isn't ready for that, should
 it go back to incubation and give some other team or technology a fair
 chance to step up to the plate?
 

TripleO _isn't_ an official project. It is a program to make OpenStack
deploy itself. This is the same as the infra program, which has a
mission to support development. We're not calling for Zuul to be
integrated into the release, we are just expecting it to keep supporting
the goals of the infra program and OpenStack in general.

What is the official deployment tool you mention? There isn't one.
The tool we've been debating is something that enables OpenStack to
be deployed using its own component, Heat, but that is sort of like
oslo-incubator.. it is driving a proof of concept for inclusion into an
official project.

Ironic was spun out very early on because it was clear there was a need
for an integrated project to manage baremetal. This is an example where
pieces used for TripleO have been pushed into the integrated release.

However, Heat already exists, and that is where the responsibility lies
to orchestrate applications. We are driving quite a bit into Heat right
now, with a massive refactor of the core to be more resilient to the
types of challenges a datacenter environment will present. The features
we get from the tripleo_heat_merge pre-processor that is in question
will be the next thing to go into Heat. Expecting us to commit resources
to both of those efforts doesn't make much sense. The program is driving
its mission, and the tools will be incubated and integrated when that
makes sense.

Meanwhile, it turns out OpenStack _is not_ currently able to deploy
itself. Users have to bolt things on; whether it is our tools or
salt/puppet/chef/ansible artifacts, users cannot use just what is in
OpenStack to deploy OpenStack. But we need to be able to test from one
end to the other while we get things landed in OpenStack... and so, we
use the pre-release model while we get to a releasable thing.

 I don't want to look like I'm specifically beating on tripleO here,
 but it is the first openstack component I've worked with that seems to
 have this little concern for downstream users *and* no apparent plans
 to fix it.
 

Which component specifically are you referring to? Our plan, nay,
our mission, is to fix it by pushing the necessary features into the
relevant projects.

Also, we actually take on a _higher_ burden of backward compatibility with
some of our tools that we do want to release. They're not integrated, and
we intend to keep them working with all releases of OpenStack because we
intend to keep their interfaces stable for as long as those interfaces
are relevant. diskimage-builder, os-apply-config, os-collect-config,
os-refresh-config, are all stable, and don't need to be integrated into
the OpenStack release because they're not even OpenStack specific.

 That's without going into all of the other difficulties myself and
 fellow developers have had trying to get involved with tripleO, which
 I'll go into at some other point.
 

I would be quite interested in any feedback you can give us on how
hard it might be to join the effort. It is a large effort, and I know
new contributors can often get lost in a sea of possibilities if we,
the long time contributors, aren't careful to get them bootstrapped.

 It is possible there are other places with similar problems, but this
 is the first I've run into - I'll call out any others I run into,
 since I think it is important, and discussing it publicly keeps
 everyone honest. If I've got the wrong expectations, I'd at least like
 to have the correction on record.

I do think that there is a misunderstanding that TripleO is some kind
of tool. 

Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Joe Gordon
On Jun 16, 2014 9:44 AM, Ben Nemec openst...@nemebean.com wrote:


 On 06/16/2014 08:37 AM, Thierry Carrez wrote:
  Sean Dague wrote:
  Hacking 0.9 series was released pretty late for Juno. The entire
  check queue was flooded this morning with requirements proposals
  failing pep8 because of it (so at 6am EST we were waiting 1.5 hrs
  for a check node).

This is a general issue with global requirements, the number of jobs we run
and the number of available nodes. Let's solve the general case.

 
  The previous soft policy with pep8 updates was that we set a
  pep8 version basically release week, and changes stopped being
  done for style after first milestone.
 
  I think in the spirit of that we should revert the hacking
  requirements update back to the 0.8 series for Juno. We're past
  milestone 1, so shouldn't be working on style only fixes at this
  point.
 
  Proposed review here - https://review.openstack.org/#/c/100231/
 
  I also think in future hacking major releases need to happen
  within one week of release, or not at all for that series.
 
  We may also have reached a size where changing style rules is just
  too costly, whatever the moment in the cycle. I think it's good
  that we have rules to enforce a minimum of common style, but the
  added value of those extra rules is limited, while their impact on
  the common gate grows as we add more projects.

 A few thoughts:

 1) I disagree with the proposition that hacking updates can only
 happen in the first week after release.  I get that there needs to be
 a cutoff, but I don't think one week is reasonable.  Even if we
 release in the first week, you're still going to be dealing with
 hacking updates for the rest of the cycle as projects adopt the new
 rules at their leisure.  I don't like retroactively applying milestone
 1 as a cutoff either, although I could see making that the policy
 going forward.


++

 2) Given that most of the changes involved in fixing the new failures
 are trivial, I think we should encourage combining the fixes into one
 commit.  We _really_ don't need separate commits to fix H305 and H307.
  This doesn't help much with the reviewer load, but it should reduce
 the gate load somewhat.  It violates the one change-one commit rule,
 but A foolish consistency...


++

 3) We should start requiring specs for all new hacking rules to make
 sure we have consensus (I think oslo-specs is the place for this).  2
 +2's doesn't really accomplish that.  We also may need to raise the
 bar for inclusion of new rules - while I agree with all of the new
 ones added in hacking .9, I wonder if some of them are really necessary.


I would rather just have more folks review hacking patches than add a specs
repo. A specs repo is overkill, IMHO; hacking doesn't have that many
patches per cycle. In general when adding a rule to hacking it has to
already be in HACKING.rst and/or needs a ML post.

 4) I don't think we're at a point where we should freeze hacking
 completely, however.  The import grouping and long line wrapping
 checks in particular are things that reviewers have to enforce today,
 and that has a significant, if less well-defined, cost too.  If we're
 really going to say those rules can't be enforced by hacking then we
 need to remove them from our hacking guidelines and start the long
 process of educating reviewers to stop requiring them.  I'd rather
 just deal with the pain of adding them to hacking one time and never
 have to think about them again.  I'm less convinced the other two that
 were added in .9 are necessary, but in any case these are discussions
 that should happen in spec reviews going forward.

 5) We may want to come up with some way to globally disable pep8
 checks we don't want to enforce, since we don't have any control over
 that but probably don't want to just stop updating pep8.  That could
 make the pain of these updates much less.

 I could probably come up with a few more, but this is already too
 wall-of-texty for my tastes. :-)

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Joe Gordon
On Jun 16, 2014 10:02 AM, Sean Dague s...@dague.net wrote:

 On 06/16/2014 12:44 PM, Ben Nemec wrote:
  On 06/16/2014 08:37 AM, Thierry Carrez wrote:
  Sean Dague wrote:
  Hacking 0.9 series was released pretty late for Juno. The entire
  check queue was flooded this morning with requirements proposals
  failing pep8 because of it (so at 6am EST we were waiting 1.5 hrs
  for a check node).
 
  The previous soft policy with pep8 updates was that we set a
  pep8 version basically release week, and changes stopped being
  done for style after first milestone.
 
  I think in the spirit of that we should revert the hacking
  requirements update back to the 0.8 series for Juno. We're past
  milestone 1, so shouldn't be working on style only fixes at this
  point.
 
  Proposed review here - https://review.openstack.org/#/c/100231/
 
  I also think in future hacking major releases need to happen
  within one week of release, or not at all for that series.
 
  We may also have reached a size where changing style rules is just
  too costly, whatever the moment in the cycle. I think it's good
  that we have rules to enforce a minimum of common style, but the
  added value of those extra rules is limited, while their impact on
  the common gate grows as we add more projects.
 
  A few thoughts:
 
  1) I disagree with the proposition that hacking updates can only
  happen in the first week after release.  I get that there needs to be
  a cutoff, but I don't think one week is reasonable.  Even if we
  release in the first week, you're still going to be dealing with
  hacking updates for the rest of the cycle as projects adopt the new
  rules at their leisure.  I don't like retroactively applying milestone
  1 as a cutoff either, although I could see making that the policy
  going forward.
 
  2) Given that most of the changes involved in fixing the new failures
  are trivial, I think we should encourage combining the fixes into one
  commit.  We _really_ don't need separate commits to fix H305 and H307.
   This doesn't help much with the reviewer load, but it should reduce
  the gate load somewhat.  It violates the one change-one commit rule,
  but A foolish consistency...

 The challenge is that hacking updates are basically giant merge conflict
 engines. If there is any significant amount of code outstanding in a
 project, landing hacking only changes basically means requiring much of
 the outstanding code to rebase.

 So it's actually expensive in a way that doesn't jump out immediately.
 The cost of landing hacking isn't just the code of reviewing the hacking
 patches, it's also the cost of the extra roundtrips on outstanding
patches.

When a project chooses to enforce rules is decoupled from when hacking is
released. So this sounds like a related yet different issue.


  3) We should start requiring specs for all new hacking rules to make
  sure we have consensus (I think oslo-specs is the place for this).  2
  +2's doesn't really accomplish that.  We also may need to raise the
  bar for inclusion of new rules - while I agree with all of the new
  ones added in hacking .9, I wonder if some of them are really necessary.

  4) I don't think we're at a point where we should freeze hacking
  completely, however.  The import grouping and long line wrapping
  checks in particular are things that reviewers have to enforce today,
  and that has a significant, if less well-defined, cost too.  If we're
  really going to say those rules can't be enforced by hacking then we
  need to remove them from our hacking guidelines and start the long
  process of educating reviewers to stop requiring them.  I'd rather
  just deal with the pain of adding them to hacking one time and never
  have to think about them again.  I'm less convinced the other two that
  were added in .9 are necessary, but in any case these are discussions
  that should happen in spec reviews going forward.

 I think both of those cases are really nits to the point that they
 aren't worth enforcing. They won't change the correctness of the code.
 And barely change the readability.

 There are differences with things like the is None checks, or python 3
 checks, which change correctness, or prevent subtle bugs. But I think
 we're now getting to a level of cleanliness enforcement that trumps
 functionally working.

  5) We may want to come up with some way to globally disable pep8
  checks we don't want to enforce, since we don't have any control over
  that but probably don't want to just stop updating pep8.  That could
  make the pain of these updates much less.

 Actually, if you look at python novaclient, the doc string checks, which
 are really niggly, and part of hacking are the biggest fails. It's much
 less upstream that's doing this to us.

  I could probably come up with a few more, but this is already too
  wall-of-texty for my tastes. :-)
 
  -Ben
 
  ___
  OpenStack-dev mailing list
  

Re: [openstack-dev] Rethink how we manage projects? (was Gate proposal - drop Postgresql configurations in the gate)

2014-06-16 Thread Chris Friesen

On 06/16/2014 03:33 AM, Thierry Carrez wrote:

David Kranz wrote:

[...]
There is a different way to do this. We could adopt the same methodology
we have now around gating, but applied to each project on its own
branch. These project branches would be integrated into master at some
frequency or when some new feature in project X is needed by project Y.
Projects would want to pull from the master branch often, but the push
process would be less frequent and run a much larger battery of tests
than we do now.


So we would basically discover the cross-project bugs when we push to
the master master branch. I think you're just delaying discovery of
the most complex issues, and push the responsibility to resolve them
onto a inexistent set of people. Adding integration branches only makes
sense if you have an integration team. We don't have one, so we'd call
back on the development teams to solve the same issues... with a delay.

In our specific open development setting, delaying is bad because you
don't have a static set of developers that you can assume will be on
call ready to help with what they have written a few months later:
shorter feedback loops are key to us.


On the other hand, I've had fairly trivial changes wait for a week to be 
merged because they failed multiple separate test cases that were totally 
unrelated to the change I was making.


If I'm making a change that is entirely contained within nova, it seems 
really unfortunate that a buggy commit in neutron or cinder can block my 
commit from being merged.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Heat integration

2014-06-16 Thread Zane Bitter

On 16/06/14 13:56, Nikhil Manchanda wrote:


Denis Makogon writes:

Because Trove should not do what it does now (cloud service orchestration
is not the part of the OS Database Program). Trove should delegate all
tasks to Cloud Orchestration Service (Heat).



Agreed that Trove should delegate provisioning, and orchestration tasks
to Heat. Tasks like taking backups, configuring an instance, promoting
it to master, etc are database specific tasks, and I'm not convinced
that Heat is the route to take for them.


I don't think anyone disagrees with you on this, although in the future 
Mistral might be helpful for some of the task running aspects (as we 
discussed at the design summit).



Here comes the third (mixed) manager called “MIGRATION”. It allows to work
with previously provisioned instances through NATIVES engine (resizes,
migration, deletion) but new instances which would be provisioned in future
will be provisioned withing stacks through ORCHESTRATOR.

So, there are three valid options:

-

use NATIVES if there's no available Heat;
-

use ORCHESTRATOR to work with Heat only;
-

use MIGRATION to work with mixed manager;


This does seem a bit overly complex. Steve mentioned the idea of stack
adopt (Thanks!), and I think that would be quite a bit simpler. I think
it behooves us to investigate that as a mechanism for creating a stack
from existing resources, rather than having something like a mixed
migration manager that has been proposed.


+1 for the stack adopt, this is an ideal use case for it IMO.


[...]

implement instance resize; Done

https://github.com/openstack/heat/blob/master/heat/engine/resources/instance.py#L564-L648
-

implement volume resize; Done

https://github.com/openstack/heat/commit/34e215c3c930b3b79bc3795dca3b5a73678f2a36



IIRC we did have an open issue and were trying to work with heat devs to
expose a callback to trove in the case of the VERIFY_RESIZE during
instance resize. Is that now done?


No, this remains an outstanding issue. There are, however, plans to 
address it:


https://blueprints.launchpad.net/heat/+spec/stack-lifecycle-plugpoint

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-16 Thread Liz Blanchard

On Jun 16, 2014, at 10:56 AM, Eoghan Glynn egl...@redhat.com wrote:

 
 Apologies for the top-posting, but just wanted to call out some
 potential confusion that arose on the #os-ceilometer channel earlier
 today.
 
 TL;DR: the UI shouldn't assume a 1:1 mapping between alarms and
   resources, since this mapping does not exist in general

Thanks for the clarification on this, Eoghan. After reading the IRC chat and 
e-mail thread I’m now understanding that there are alarms that can be created 
for things like “Alarm me when a new instance is created” that have nothing to 
do with monitoring a specific instance. Am I correct? Are there other cases we 
should consider here? I’ve updated the latest version of the wireframes to 
reflect an example of an alarm like this (see Alarm 4 in the tables). Also, I 
got rid of the required mark on Resource in the Add Alarm modal. I will be 
sending a link to these updated wireframes, along with feedback on Christian’s 
latest comments, in the next few minutes...
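
(To put the mapping issue in data terms - these are approximate, illustrative
payloads rather than exact Ceilometer API bodies: a threshold alarm may carry
a query that pins it to one resource, or no such query at all, which is why
the UI can't assume a 1:1 alarm-to-resource mapping.)

# Approximate, illustrative ceilometer-style alarm definitions.

# Scoped to a single instance via a query on resource_id:
per_instance_alarm = {
    "name": "cpu_high_on_instance_X",
    "type": "threshold",
    "threshold_rule": {
        "meter_name": "cpu_util",
        "statistic": "avg",
        "comparison_operator": "gt",
        "threshold": 70.0,
        "period": 60,
        "evaluation_periods": 3,
        "query": [{"field": "resource_id", "op": "eq",
                   "value": "INSTANCE_UUID"}],
    },
}

# No resource_id constraint: the alarm evaluates over whatever matches the
# meter, so there is no single resource to hang it off in the UI.
fleet_wide_alarm = dict(per_instance_alarm, name="any_instance_cpu_high")
fleet_wide_alarm["threshold_rule"] = dict(
    per_instance_alarm["threshold_rule"], query=[])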

Best,
Liz

 
 Background: See ML post[1]
 
 Discussion: See IRC log [2]
Ctrl+F: Let's see what the UI guys think about it
 
 Cheers,
 Eoghan
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037788.html
 [2] 
 http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2014-06-16.log
 
 
 - Original Message -
 Hi all,
 
 Thanks again for the great comments on the initial cut of wireframes. I’ve
 updated them a fair amount based on feedback in this e-mail thread along
 with the feedback written up here:
 https://etherpad.openstack.org/p/alarm-management-page-design-discussion
 
 Here is a link to the new version:
 http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf
 
 And a quick explanation of the updates that I made from the last version:
 
 1) Removed severity.
 
 2) Added Status column. I also added details around the fact that users can
 enable/disable alerts.
 
 3) Updated Alarm creation workflow to include choosing the project and user
 (optionally for filtering the resource list), choosing the resource, and
 allowing the user to choose the amount of time to monitor for alarming.
 -Perhaps we could be even more sophisticated about how we let users filter
 down to find the right resources that they want to monitor for alarms?
 
 4) As for notifying users…I’ve updated the “Alarms” section to be “Alarms
 History”. The point here is to show any Alarms that have occurred to notify
 the user. Other notification ideas could be to allow users to get notified
 of alerts via e-mail (perhaps a user setting?). I’ve added a wireframe for
 this update in User Settings. Then the Alarms Management section would just
 be where the user creates, deletes, enables, and disables alarms. Do you
 still think we don’t need the “alarms” tab? Perhaps this just becomes
 iteration 2 and is left out for now as you mention in your etherpad.
 
 5) Question about combined alarms…currently I’ve designed it so that a user
 could create multiple levels in the “Alarm When…” section. They could
 combine these with AND/ORs. Is this going far enough? Or do we actually need
 to allow users to combine Alarms that might watch different resources?
 
 6) I updated the Actions column to have the “More” drop down which is
 consistent with other tables in Horizon.
 
 7) Added in a section in the “Add Alarm” workflow for “Actions after Alarm”.
 I’m thinking we could have some sort of If State is X, do X type selections,
 but I’m looking to understand more details about how the backend works for
 this feature. Eoghan gave examples of logging and potentially scaling out
 via Heat. Would simple drop downs support these events?
 
 8) I can definitely add in a “scheduling” feature with respect to Alarms. I
 haven’t added it in yet, but I could see this being very useful in future
 revisions of this feature.
 
 9) Another thought is that we could add in some padding for outlier data, as
 Eoghan mentioned. Perhaps a setting for “This has happened 3 times over the
 last minute, so now send an alarm.”?
 
 A new round of feedback is of course welcome :)
 
 Best,
 Liz
 
 On Jun 4, 2014, at 1:27 PM, Liz Blanchard lsure...@redhat.com wrote:
 
 Thanks for the excellent feedback on these, guys! I’ll be working on making
 updates over the next week and will send a fresh link out when done.
 Anyone else with feedback, please feel free to fire away.
 
 Best,
 Liz
 On Jun 4, 2014, at 12:33 PM, Eoghan Glynn egl...@redhat.com wrote:
 
 
 Hi Liz,
 
 Two further thoughts occurred to me after hitting send on
 my previous mail.
 
 First, is the concept of alarm dimensioning; see my RDO Ceilometer
 getting started guide[1] for an explanation of that notion.
 
 A key associated concept is the notion of dimensioning which defines the
 set of matching meters that feed into an alarm evaluation. Recall that
 meters are per-resource-instance, so in the simplest case an alarm might
 be defined over a particular 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-16 Thread Stephen Balukoff
I would like to see something more sophisticated than a simple counter
(it's so easy for a counter to get off when dealing with non-atomic
asynchronous commands). But a counter is a good place to start.
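
To make the register/unregister idea concrete, here is a rough sketch of the
kind of bookkeeping being discussed. It is purely illustrative: the /consumers
sub-resource and its payload fields are invented for the sake of discussion,
not an API Barbican exposes today.

    # Illustrative only: the /consumers sub-resource and its fields are
    # assumptions for discussion, not an existing Barbican API.
    import json
    import requests

    def register_consumer(container_ref, service_name, resource_ref, token):
        """Record that a service resource depends on this container."""
        return requests.post(
            container_ref + "/consumers",
            data=json.dumps({"name": service_name, "URL": resource_ref}),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"})

    def unregister_consumer(container_ref, service_name, resource_ref, token):
        """Drop the dependency so a later delete of the container can succeed."""
        return requests.delete(
            container_ref + "/consumers",
            data=json.dumps({"name": service_name, "URL": resource_ref}),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"})

Tracking (service, resource) pairs rather than a bare integer is exactly what
addresses the non-atomicity worry: bumping a counter twice on a retry leaves it
permanently wrong, whereas registering the same consumer twice is naturally
idempotent.
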
On Jun 13, 2014 6:54 AM, Jain, Vivek vivekj...@ebay.com wrote:

  +2. I totally agree with your comments Doug. It defeats the purpose if
 Barbican does not want to deal with consumers of its service.

  Barbican can simply have a counter field on each container to signify
 how many consumers are using it. Every time a consumer uses a container, it
 increases the counter using the Barbican API.  If the counter is 0, the container is
 safe to delete.

  —vivek

   From: Doug Wiegley do...@a10networks.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 2:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

   Of what use is a database that randomly deletes rows?  That is, in
 effect, what you’re allowing.

  The secrets are only useful when paired with a service.  And unless I’m
 mistaken, there’s no undo.  So you’re letting users shoot themselves in the
 foot, for what reason, exactly?  How do you expect openstack to rely on a
 data store that is fundamentally random at the whim of users?  Every single
 service that uses Barbican will now have to hack in a defense mechanism of
 some kind, because they can’t trust that the secret they rely on will still
 be there later.  Which defeats the purpose of this mission statement:  
 Barbican
 is a ReST API designed for the secure storage, provisioning and management
 of secrets.”

  (And I don’t think anyone is suggesting that blind refcounts are the
 answer.  At least, I hope not.)

  Anyway, I hear this has already been decided, so, so be it.  Sounds like
 we’ll hack around it.

  Thanks,
  doug


   From: Douglas Mendizabal douglas.mendiza...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

   I think that having Barbican decide whether the user is or isn’t
 allowed to delete a secret that they own based on a reference count that is
 not directly controlled by them is unacceptable.   This is indeed policy
 enforcement, and we’d rather not go down that path.

  I’m opposed to the idea of reference counting altogether, but a couple
 of other Barbican-core members are open to it, as long as it does not
 affect the delete behaviors.

  -Doug M.

   From: Adam Harwell adam.harw...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 4:17 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

   Doug: Right, we actually have a blueprint draft for EXACTLY this, but
 the Barbican team gave us a flat “not happening, we reject this change” on
 causing a delete to fail. The shadow-copy solution I proposed only came
 about because the option you are proposing is not possible. :(

  I also realized that really, this whole thing is an issue for the
 backend, not really for the API itself — the LBaaS API will be retrieving
 the key/cert from Barbican and passing it to the backend, and the backend
 it what's responsible for handling it from that point (F5, Stingray etc
 would never actually call back to Barbican). So, really, the Service-VM
 solution we're architecting is where the shadow-copy solution needs to
 live, at which point it no longer is really an issue we'd need to discuss
 on this mailing list, I think. Stephen, does that make sense to you?
  --Adam

  https://keybase.io/rm_you


   From: Doug Wiegley do...@a10networks.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
 Integration Ideas

   A third option, that is neither shadow copying nor policy enforcement:

  Ask the Barbican team to put in a small api that is effectively, “hey,
 I’m using this container” and “hey, I’m done with this container”, and the
 have their delete fail if someone is still using it.  This isn’t calling
 into other services, it’s simply getting informed of who’s using what, and
 not stomping it.  That seems pretty core to me, and the workarounds if we
 can’t trust 

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-16 Thread Joe Gordon
On Jun 14, 2014 11:12 AM, Robert Collins robe...@robertcollins.net
wrote:

 You know it's bad when you can't sleep because you're redesigning gate
 workflows in your head so I apologise that this email is perhaps
 not as rational, nor as organised, as usual - but , . :)

 Obviously this is very important to address, and if we can come up
 with something systemic I'm going to devote my time both directly, and
 via resource-hunting within HP, to address it. And accordingly I'm
 going to feel free to say 'zuul this' with no regard for existing
 features. We need to get ahead of the problem and figure out how to
 stay there, and I think below I show why the current strategy just
 won't do that.

 On 13 June 2014 06:08, Sean Dague s...@dague.net wrote:

  We're hitting a couple of inflection points.
 
  1) We're basically at capacity for the unit work that we can do. Which
  means it's time to start making decisions if we believe everything we
  currently have running is more important than the things we aren't
  currently testing.
 
  Everyone wants multinode testing in the gate. It would be impossible to
  support that given current resources.

 How much of our capacity problem is due to waste - such as:
  - tempest runs of code the author knows is broken
  - tempest runs of code that doesn't pass unit tests
  - tempest runs while the baseline is unstable - to expand on this
 one, if master only passes one commit in 4, no check job can have a
 higher success rate overall.

 Versus how much is an indication of the sheer volume of development being
 done?

  2) We're far past the inflection point of people actually debugging jobs
  when they go wrong.
 
  The gate is backed up (currently to 24hrs) because there are bugs in
  OpenStack. Those are popping up at a rate much faster than the number of
  people who are willing to spend any time on them. And often they are
  popping up in configurations that we're not all that familiar with.

 So, I *totally* appreciate that people fixing the jobs is the visible
 expendable resource, but I'm not sure its the bottleneck. I think the
 bottleneck is our aggregate ability to a) detect the problem and b)
 resolve it.

 For instance - strawman - if when the gate goes bad, after a check for
 external issues like new SQLAlchemy releases etc, what if we just
 rolled trunk of every project that is in the integrated gate back to
 before the success rate nosedived ? I'm well aware of the DVCS issues
 that implies, but from a human debugging perspective that would
 massively increase the leverage we get from the folk that do dive in
 and help. It moves from 'figure out that there is a problem and it
 came in after X AND FIX IT' to 'figure out it came in after X'.

 Reverting is usually much faster and more robust than rolling forward,
 because rolling forward has more unknowns.

 I think we have a systematic problem, because this situation happens
 again and again. And the root cause is that our time to detect
 races/nondeterministic tests is a probability function, not a simple
 scalar. Sometimes we catch such tests within one patch in the gate,
 sometimes they slip through. If we want to land hundreds or thousands
 of patches a day, and we don't want this pain to happen, I don't see
 any way other than *either*:
 A - not doing this whole gating CI process at all
 B - making detection a whole lot more reliable (e.g. we want
 near-certainty that a given commit does not contain a race)
 C - making repair a whole lot faster (e.g. we want <= one test cycle
 in the gate to recover once we have determined that some commit is
 broken.

 Taking them in turn:
 A - yeah, no. We have lots of experience with the axiom that that
 which is not tested is broken. And that's the big concern about
 removing things from our matrix - when they are not tested, we can be
 sure that they will break and we will have to spend neurons fixing
 them - either directly or as reviews from people fixing it.

 B - this is really hard. Say we want to be quite sure that there are no
 new races that will occur with more than some probability in a given
 commit, and we assume that race codepaths might be run just once in
 the whole test matrix. A single test run can never tell us that - it
 just tells us it worked. What we need is some N trials where we don't
 observe a new race (but may observe old races), given a maximum risk
 of the introduction of a (say) 5% failure rate into the gate. [check
 my stats]
 (1-max risk)^trials = margin-of-error
 0.95^N = 0.01
 log(0.01, base=0.95) = N
 N ~= 90

 So if we want to stop 5% races landing, and we may exercise any given
 possible race code path a minimum of 1 time in the test matrix, we
 need to exercise the whole test matrix 90 times to be sure, within that
 1% margin of error, that we saw it. Raise that to a 1% race:
 log(0.01, base=0.99) = 458
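
 As a quick aside, for anyone who wants to poke at those numbers the
 arithmetic is a one-liner, nothing OpenStack-specific about it:

     import math

     def runs_needed(race_rate, margin):
         # N such that (1 - race_rate)**N = margin: the chance that a race
         # firing with probability race_rate per full test-matrix run slips
         # through all N runs is at most margin. Round up for whole runs.
         return math.log(margin) / math.log(1.0 - race_rate)

     print(round(runs_needed(0.05, 0.01)))  # ~90 runs to catch a 5% race
     print(round(runs_needed(0.01, 0.01)))  # ~458 runs to catch a 1% race
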
 That's a lot of test runs. I don't think we can do that for each commit
 with our current resources - and I'm not at all sure that asking for
 

Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-16 Thread Eoghan Glynn


 On Jun 16, 2014, at 10:56 AM, Eoghan Glynn egl...@redhat.com wrote:
 
  
  Apologies for the top-posting, but just wanted to call out some
  potential confusion that arose on the #os-ceilometer channel earlier
  today.
  
  TL;DR: the UI shouldn't assume a 1:1 mapping between alarms and
resources, since this mapping does not exist in general
 
 Thanks for the clarification on this Eoghan. After reading the IRC chat and
 e-mail thread I’m now understanding that there are alarms that can be
 created for things like “Alarm me when a new instance is created” that have
 nothing to do with monitoring instances. Am I correct?

More something like:

 Alarm me when the average CPU util throughout all instances in an
  autoscaling group suggests that the group is under-scaled

In that case, the alarm may map onto zero resources initially, then
N actually-existing resources at any given point in time (where N
lies between some high and low water marks, but is not constant in
time).

That's an example of an 1:N mapping between alarm and resource names,
but where the set of N resource names is potentially constantly varying
(or apparently static, if the load on the autoscaling group is relatively
constant).
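
To make that concrete, the alarm definition itself (what gets POSTed to the
ceilometer v2 API, and what any UI would ultimately be assembling) looks
roughly like the following. Treat it as a sketch: the field names are from
memory, and the group-matching query key in particular is an assumption about
how the scaling group tags its member instances.

    # Rough shape of a threshold alarm definition; not copied verbatim from
    # the API reference.
    alarm = {
        "name": "asg-cpu-high",
        "type": "threshold",
        "threshold_rule": {
            "meter_name": "cpu_util",
            "statistic": "avg",
            "comparison_operator": "gt",
            "threshold": 70.0,
            "period": 600,
            "evaluation_periods": 3,
            # This query is the "dimensioning": it selects which resources'
            # samples feed the evaluation, i.e. whatever instances are
            # currently tagged as members of one scaling group (the exact
            # metadata key here is an assumption).
            "query": [{"field": "metadata.user_metadata.groupname",
                       "op": "eq",
                       "value": "my-scaling-group"}],
        },
        "alarm_actions": ["log://"],
    }

Note that there is no resource_id anywhere in that definition, which is
precisely the 1:N (and time-varying) mapping described above, and why the UI
shouldn't require a resource field.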

 Are there other cases we should consider here? 

Another example would be:

 Alarm me when the number of instances owned by a particular tenant
  exceeds some threshold

(... actually, that would require an update to the alarm API to
 accommodate the new selectable cardinality aggregate, but would
 be easy to do) 

Well, I'd recommend removing the concept of alarms and resources being
*directly* tied to each other.

Cheers,
Eoghan

 I’ve updated the latest version of wireframes to
 reflect an example of an alarm like this (See Alarm 4 in tables). Also, I
 got rid of the required mark on Resource in the Add Alarm modal. I will be
 sending a link to these updated wireframes along with feedback on Christian’s
 latest comments in the next few minutes...
 
 Best,
 Liz
 
  
  Background: See ML post[1]
  
  Discussion: See IRC log [2]
 Ctrl+F: Let's see what the UI guys think about it
  
  Cheers,
  Eoghan
  
  [1]
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/037788.html
  [2]
  http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2014-06-16.log
  
  
  - Original Message -
  Hi all,
  
  Thanks again for the great comments on the initial cut of wireframes. I’ve
  updated them a fair amount based on feedback in this e-mail thread along
  with the feedback written up here:
  https://etherpad.openstack.org/p/alarm-management-page-design-discussion
  
  Here is a link to the new version:
  http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf
  
  And a quick explanation of the updates that I made from the last version:
  
  1) Removed severity.
  
  2) Added Status column. I also added details around the fact that users
  can
  enable/disable alerts.
  
  3) Updated Alarm creation workflow to include choosing the project and
  user
  (optionally for filtering the resource list), choosing a resource, and
  allowing for a choice of the amount of time to monitor for alarming.
  -Perhaps we could be even more sophisticated for how we let users
  filter
  down to find the right resources that they want to monitor for alarms?
  
  4) As for notifying users…I’ve updated the “Alarms” section to be “Alarms
  History”. The point here is to show any Alarms that have occurred to
  notify
  the user. Other notification ideas could be to allow users to get notified
  of alerts via e-mail (perhaps a user setting?). I’ve added a wireframe for
  this update in User Settings. Then the Alarms Management section would
  just
  be where the user creates, deletes, enables, and disables alarms. Do you
  still think we don’t need the “alarms” tab? Perhaps this just becomes
  iteration 2 and is left out for now as you mention in your etherpad.
  
  5) Question about combined alarms…currently I’ve designed it so that a
  user
  could create multiple levels in the “Alarm When…” section. They could
  combine these with AND/ORs. Is this going far enough? Or do we actually
  need
  to allow users to combine Alarms that might watch different resources?
  
  6) I updated the Actions column to have the “More” drop down which is
  consistent with other tables in Horizon.
  
  7) Added in a section in the “Add Alarm” workflow for “Actions after
  Alarm”.
  I’m thinking we could have some sort of “If state is X, do Y” type
  selections,
  but I’m looking to understand more details about how the backend works for
  this feature. Eoghan gave examples of logging and potentially scaling out
  via Heat. Would simple drop downs support these events?
  
  8) I can definitely add in a “scheduling” feature with respect to Alarms.
  I
  haven’t added it in yet, but I could see this being very useful in future
  revisions of this feature.
  
  9) Another 

[openstack-dev] [nova][vmwareapi] Spawn Refactor progress

2014-06-16 Thread Tracy Jones
Our phase 1 of spawn refactor merged a week or 2 ago and we are hard at work on 
phase 2 and 3.  The patch set has been posted.  Here is the list in order to 
review for your convenience

Not a refactor, but a trivial fix to clean up some code before the refactor:
https://review.openstack.org/#/c/99238

Phase 2 - these are ready for review
https://review.openstack.org/#/c/98285 (DatastorePath class)
https://review.openstack.org/#/c/99427 (Datastore class)
https://review.openstack.org/#/c/87002 (get_image_properties)

Phase 3 - This set is still undergoing some further decomposition, but early
(even just high-level) comments on the approach and the extent/granularity of the
unit testing will be most welcome.
Also, trying to break the big patch into smaller self-contained ones is turning
out to be quite a challenge. Recommendations on how we can do this sanely are most
appreciated as well.
https://review.openstack.org/#/c/98322 (image fetching/processing/use)


Related review -
https://review.openstack.org/#/c/98529/ (somewhat orthogonal, more like a bit
of new feature; what came out of the get_image_properties work is the
descriptor-based validation of fields in the VMwareImage object)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hypervisors allow disk reduction as part of resize ?

2014-06-16 Thread Eric Brown

This is a fix in flight for the vmware driver.  It also throws an exception on 
disk size reduction.

https://review.openstack.org/#/c/85804/
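
For what it's worth, if the consensus is to move this up a layer the check
itself is tiny. A rough sketch of an API-level guard (illustrative only, not
actual nova code, and the exception type is just a stand-in):

    # Sketch of an API-layer guard on the resize path; not actual nova code.
    from nova import exception

    def _reject_disk_shrink(current_flavor, new_flavor):
        # Refuse the resize up front instead of letting each virt driver
        # silently ignore (or only partially check) the request.
        if new_flavor['root_gb'] < current_flavor['root_gb']:
            raise exception.ResizeError(
                reason="new flavor has a smaller root disk")
        if new_flavor['ephemeral_gb'] < current_flavor['ephemeral_gb']:
            raise exception.ResizeError(
                reason="new flavor has a smaller ephemeral disk")

The wrinkle, as the table below shows, is that xen deliberately allows
resizing the root disk down, so an unconditional API-level check would be a
behaviour change for that driver.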



On Jun 13, 2014, at 3:02 AM, Day, Phil philip@hp.com wrote:

 Hi Folks,
  
 I was looking at the resize code in libvirt, and it has checks which raise an 
 exception if the target root or ephemeral disks are smaller than the current 
 ones – which seems fair enough I guess (you can’t drop arbitrary disk content 
 on resize), except that because the check is in the virt driver the 
 effect is to just ignore the request (the instance remains active rather than 
 going to resize-verify).
  
 It made me wonder if there were any hypervisors that actually allow this, and 
 if not wouldn’t it be better to move the check to the API layer so that the 
 request can be failed rather than silently ignored ?
  
 As far as I can see:
  
 baremetal: Doesn’t support resize
  
 hyperv: Checks only for root disk 
 (https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
   )
  
 libvirt: fails for a reduction of either root or ephemeral  
 (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
  )
  
 vmware:   doesn’t seem to check at all ?
  
 xen: Allows resize down for root but not for ephemeral 
 (https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
  )
  
  
 It feels kind of clumsy to have such a wide variation of behavior across the 
 drivers, and to have the check performed only in the driver ?
  
 Phil
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-16 Thread Ilya Shakhat
Let me explain how Stackalytics grouping works.

Most of the groups are created from the official programs.yaml. Every program
turns into an item in the module list (colored in violet), for example
'Nova Compute' is a group containing 'nova', 'python-novaclient' and
'nova-specs'. Every type of repo (integrated, incubated and others) turns into
a project type, for example the 'integrated' type would contain all modules
for a chosen release.

Also Stackalytics has a few custom project types
https://github.com/stackforge/stackalytics/blob/master/etc/default_data.json#L7833-L7879,
for example 'infra' is every project under 'openstack-infra' git, or
'documentation' which is the group 'documentation' from programs.yaml.
Custom module groups
https://github.com/stackforge/stackalytics/blob/master/etc/default_data.json#L7749-L7778
are also possible, but actually used for stackforge projects only.
Currently there's no group for python clients, but it would be very easy to
add such a group.
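
For illustration, a fragment along these lines added to the module_groups
section would do it (the key names are quoted from memory, so please
double-check them against the existing entries in default_data.json):

    {
        "module_group_name": "python-clients",
        "modules": ["python-novaclient", "python-neutronclient",
                    "python-cinderclient", "python-glanceclient",
                    "python-keystoneclient", "python-ceilometerclient",
                    "python-heatclient", "python-swiftclient"]
    }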

Thanks,
Ilya

2014-06-16 21:57 GMT+04:00 Stefano Maffulli stef...@openstack.org:

 On Fri 13 Jun 2014 10:51:24 AM PDT, Stangel, Dan wrote:
  You can also refer to the example of Stackalytics, who have created
  their own hierarchy and groupings for metrics reporting:
 
 https://github.com/stackforge/stackalytics/blob/master/etc/default_data.json

 It's a very neat grouping. It seems to me that the clients are grouped
 with their parent git/gerrit repo (nova with python-novaclient, under
 'Compute' program) and Nova is shown alone. I don't see the python
 clients as individual repositories or as their own group: is that correct?

 For the quarterly reports I will need granularity because I believe
 that clients have different dynamics than their parent project (and if
 that proves not to be the case, we can remove this complexity later and
 merge data).

 can you share a concrete example of how you group things?

 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] locked instances and snaphot

2014-06-16 Thread melanie witt
Hi all,

Recently a nova bug [1] was opened where the user describes a scenario where an 
instance that is locked is still able to be snapshotted (create image and 
backup). In the case of Trove, instances are locked ...to ensure integrity and 
protect secrets which are needed by the resident Trove Agent. However, the 
end-user can still take a snapshot of the instance to create an image while 
it's locked, and restore the image later. The end-user then has access to the 
restored image.

During the patch review, a reviewer raised a concern about the purpose of 
instance locking and whether prevention of snapshot while an instance is locked 
is appropriate. From what we understand, instance lock is meant to prevent 
unwanted modification of an instance. Is snapshotting considered a logical 
modification of an instance? That is, if an instance is locked to a user, they 
take a snapshot, create another instance using that snapshot, and modify the 
instance, have they essentially modified the original locked instance?

I wanted to get input from the ML on whether it makes sense to disallow
snapshotting while an instance is locked.
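
For reference, the mechanics are small either way: nova already guards most
mutating API calls with a lock check along the lines of the sketch below, and
the question is really just whether the snapshot/backup entry points should be
wrapped too (rough sketch, not the literal nova code).

    # Rough sketch of the style of lock guard used in nova's compute API;
    # not the literal code.
    import functools

    from nova import exception

    def check_instance_lock(function):
        @functools.wraps(function)
        def inner(self, context, instance, *args, **kwargs):
            # Admins may override the lock; regular users may not.
            if instance['locked'] and not context.is_admin:
                raise exception.InstanceIsLocked(
                    instance_uuid=instance['uuid'])
            return function(self, context, instance, *args, **kwargs)
        return inner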

Thanks,
melanie

[1] https://bugs.launchpad.net/nova/+bug/1314741
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-16 Thread John Griffith
On Thu, Jun 12, 2014 at 3:58 PM, John Griffith john.griff...@solidfire.com
wrote:




 On Wed, Jun 11, 2014 at 3:16 PM, D'Angelo, Scott scott.dang...@hp.com
 wrote:

  During the June 11 #openstack-cinder meeting we discussed a mid-cycle
 meetup. The agenda is To be Determined.

 I have inquired and HP in Fort Collins, CO has room and network
 connectivity available. There were some dates that worked well for
 reserving a nice room:

 July 14,15,17,18, 21-25, 27-Aug 1

 But a room could be found regardless.

 Virtual connectivity would also be available.



 Some of the open questions are:

 Are developers interested in a mid-cycle meetup?

 What dates are Not Good (Blackout dates)?

 What dates are Good?

 Who might be able to be physically present in Ft Collins, CO?

 Are there alternative locations to be considered?



 Someone had mentioned a Google Survey. Would someone like to create that?
 Which questions should be asked?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I've put together a basic Google Form to get some input and to try and
 nail down some dates.


 https://docs.google.com/forms/d/1k0QsOtNR2-Q2S1YETyUHyFyt6zg0u41b_giz6byJBXA/viewform

 Thanks,
  John


All,

There are a number of folks that have asked that we do this the week of
Aug 11-15 due to some travel restrictions etc.  All of the respondents to
the survey have indicated this will work.

Scott,
Is the HP site in Fort Collins available during this week?

John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)

2014-06-16 Thread Joe Gordon
On Sat, Jun 14, 2014 at 7:33 AM, Eoghan Glynn egl...@redhat.com wrote:



 - Original Message -
  On 11 June 2014 20:07, Joe Gordon joe.gord...@gmail.com wrote:
   On Wed, Jun 11, 2014 at 11:38 AM, Matt Riedemann
   mrie...@linux.vnet.ibm.com wrote:
   On 6/11/2014 10:01 AM, Eoghan Glynn wrote:
   Thanks for bringing this to the list Matt, comments inline ...
  
   tl;dr: some pervasive changes were made to nova to enable polling in
   ceilometer which broke some things and in my opinion shouldn't have
 been
   merged as a bug fix but rather should have been a blueprint.
  
   ===
  
   The detailed version:
  
   I opened bug 1328694 [1] yesterday and found that came back to some
   changes made in ceilometer for bug 1262124 [2].
  
   Upon further inspection, the original ceilometer bug 1262124 made
 some
   changes to the nova os-floating-ips API extension and the database
 API
   [3], and changes to python-novaclient [4] to enable ceilometer to
 use
   the new API changes (basically pass --all-tenants when listing
 floating
   IPs).
  
   The original nova change introduced bug 1328694 which spams the
 nova-api
   logs due to the ceilometer change [5] which does the polling, and
 right
   now in the gate ceilometer is polling every 15 seconds.
  
  
   IIUC that polling cadence in the gate is in the process of being
 reverted
   to the out-of-the-box default of 600s.
  
   I pushed a revert in ceilometer to fix the spam bug and a separate
 patch
   was pushed to nova to fix the problem in the network API.
  
  
   Thank you for that. The revert is just now approved on the ceilometer
   side,
   and is wending its merry way through the gate.
  
   The bigger problem I see here is that these changes were all made
 under
   the guise of a bug when I think this is actually a blueprint.  We
 have
   changes to the nova API, changes to the nova database API, CLI
 changes,
   potential performance impacts (ceilometer can be hitting the nova
   database a lot when polling here), security impacts (ceilometer
 needs
   admin access to the nova API to list floating IPs for all tenants),
   documentation impacts (the API and CLI changes are not documented),
 etc.
  
   So right now we're left with, in my mind, two questions:
  
   1. Do we just fix the spam bug 1328694 and move on, or
   2. Do we revert the nova API/CLI changes and require this goes
 through
   the nova-spec blueprint review process, which should have happened
 in
   the first place.
  
  
   So just to repeat the points I made on the unlogged #os-nova IRC
 channel
   earlier, for posterity here ...
  
   Nova already exposed an all_tenants flag in multiple APIs (servers,
   volumes,
   security-groups etc.) and these would have:
  
  (a) generally pre-existed ceilometer's usage of the corresponding
 APIs
  
   and:
  
  (b) been tracked and proposed at the time via straight-forward LP
   bugs,
  as  opposed to being considered blueprint material
  
   So the manner of the addition of the all_tenants flag to the
 floating_ips
   API looks like it just followed existing custom  practice.
  
   Though that said, the blueprint process and in particular the
 nova-specs
   aspect, has been tightened up since then.
  
   My preference would be to fix the issue in the underlying API, but
 to use
   this as a teachable moment ... i.e. to require more oversight (in
 the
   form of a reviewed  approved BP spec) when such API changes are
 proposed
   in the future.
  
   Cheers,
   Eoghan
  
   Are there other concerns here?  If there are no major objections to
 the
   code that's already merged, then #2 might be excessive but we'd
 still
   need docs changes.
  
   I've already put this on the nova meeting agenda for tomorrow.
  
   [1] https://bugs.launchpad.net/ceilometer/+bug/1328694
   [2] https://bugs.launchpad.net/nova/+bug/1262124
   [3] https://review.openstack.org/#/c/81429/
   [4] https://review.openstack.org/#/c/83660/
   [5] https://review.openstack.org/#/c/83676/
  
   --
  
   Thanks,
  
   Matt Riedemann
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   While there is precedent for --all-tenants with some of the other
 APIs,
   I'm concerned about where this stops.  When ceilometer wants polling
 on
   some
   other resources that the nova API exposes, will it need the same
 thing?
   Doing all of this polling for resources in all tenants in nova puts an
   undue
   burden on the nova API and the database.
  
   Can we do something with notifications here instead?  That's where the
   nova-spec process would have probably caught this.
  
   ++ to notifications and not polling.
 
 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-16 Thread Clint Byrum
Excerpts from Doug Wiegley's message of 2014-06-10 14:41:29 -0700:
 Of what use is a database that randomly deletes rows?  That is, in effect, 
 what you’re allowing.
 
 The secrets are only useful when paired with a service.  And unless I’m 
 mistaken, there’s no undo.  So you’re letting users shoot themselves in the 
 foot, for what reason, exactly?  How do you expect openstack to rely on a 
 data store that is fundamentally random at the whim of users?  Every single 
 service that uses Barbican will now have to hack in a defense mechanism of 
 some kind, because they can’t trust that the secret they rely on will still 
 be there later.  Which defeats the purpose of this mission statement:
 “Barbican is a ReST API designed for the secure storage, provisioning and 
 management of secrets.”
 
 (And I don’t think anyone is suggesting that blind refcounts are the answer.  
 At least, I hope not.)
 
 Anyway, I hear this has already been decided, so, so be it.  Sounds like 
 we’ll hack around it.
 


Doug, nobody is calling Barbican a database. It is a place to store
secrets.

The idea is to loosely couple things, and if you need more assurances,
use something like Heat to manage the relationships.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Specific example NFV use case for a data plane app

2014-06-16 Thread Steve Gordon
- Original Message -
 From: Calum Loudon calum.lou...@metaswitch.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hello all
 
 At Wednesday's meeting I promised to supply specific examples to help
 illustrate the NFV use cases and also show how they map to some of the
 blueprints.  Here's my first example - info on our session border
 controller, which is a data plane app.  Please let me know if this is
 the sort of example and detail the group are looking for, then I can
 add it into the wiki and send out info on the second, a vIMS core.

 Use case example
 
 
 Perimeta Session Border Controller, Metaswitch Networks.  Sits on the
 edge of a service provider's network and polices SIP and RTP (i.e. VoIP)
 control and media traffic passing over the access network between
 end-users and the core network or the trunk network between the core and
 another SP.
 
 Characteristics relevant to NFV/OpenStack
 -
 
 Fast & guaranteed performance:
 - fast = performance of order of several million VoIP packets (~64-220
 bytes depending on codec) per second per core (achievable on COTS hardware)
 - guaranteed via SLAs.
 
 Fully HA, with no SPOFs and service continuity over software and hardware
 failures.
 
 Elastically scalable by adding/removing instances under the control of the
 NFV orchestrator.
 
 Ideally, ability to separate traffic from different customers via VLANs.
 
 Requirements and mapping to blueprints
 --
 
 Fast & guaranteed performance - implications for network:
 
 - the packets per second target - either SR-IOV or an accelerated
   DPDK-like data plane
   -   maps to the SR-IOV and accelerated vSwitch blueprints:
   -   SR-IOV Networking Support
   
 (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov)
   -   Open vSwitch to use patch ports
   
 (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use)
   -   userspace vhost in ovs vif bindings
   
 (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost)
   -   Snabb NFV driver
   
 (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver)
   -   VIF_SNABB 
 (https://blueprints.launchpad.net/nova/+spec/vif-snabb)
 
 Fast & guaranteed performance - implications for compute:
 
 - to optimize data rate we need to keep all working data in L3 cache
   - need to be able to pin cores
   -   Virt driver pinning guest vCPUs to host pCPUs
   (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning)
 
 - similarly to optimize data rate need to bind to NIC on host CPU's bus
   -   I/O (PCIe) Based NUMA Scheduling
   
 (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling)
 
 - to offer guaranteed performance as opposed to 'best efforts' we need
   to control placement of cores, minimise TLB misses and get accurate
   info about core topology (threads vs. hyperthreads etc.); maps to the
   remaining blueprints on NUMA & vCPU topology:
   -   Virt driver guest vCPU topology configuration
   (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology)
   -   Virt driver guest NUMA node placement  topology
   (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement)
   -   Virt driver large page allocation for guest RAM
   (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages)
 
 - may need support to prevent 'noisy neighbours' stealing L3 cache -
   unproven, and no blueprint we're aware of.
 
 HA:
 - requires anti-affinity rules to prevent active/passive being
   instantiated on same host - already supported, so no gap.
 
 Elastic scaling:
 - similarly readily achievable using existing features - no gap.
 
 VLAN trunking:
 - maps straightforwardly to VLAN trunking networks for NFV
 (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).
 
 Other:
 - being able to offer apparent traffic separation (e.g. service
   traffic vs. application management) over a single network is also
   useful in some cases
   -   Support two interfaces from one VM attached to the same 
 network
   (https://blueprints.launchpad.net/nova/+spec/2-if-1-net)
 
 regards
 
 Calum

Hi Calum,

Thanks for contributing this, I think as a concrete example it's very helpful 
and I like the breakdown. I've taken the liberty of adding it to the Wiki for 
further editing/discussion (it appears Chris has not split the pages as yet so 
 for now it is on the meetings page):

https://wiki.openstack.org/wiki/Teams/NFV#Session_Border_Controller

The content itself seems fine, I think however for brevity - particularly as we 

Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread James Slagle
On Mon, Jun 16, 2014 at 12:19 PM, Tomas Sedovic tsedo...@redhat.com wrote:
 All,

 After having proposed some changes[1][2] to tripleo-heat-templates[3],
 reviewers suggested adding a deprecation period for the merge.py script.

 While TripleO is an official OpenStack program, none of the projects
 under its umbrella (including tripleo-heat-templates) have gone through
 incubation and integration nor have they been shipped with Icehouse.

 So there is no implicit compatibility guarantee and I have not found
 anything about maintaining backwards compatibility on the
 TripleO wiki page[4], tripleo-heat-templates' readme[5] or
 tripleo-incubator's readme[6].

 The Release Management wiki page[7] suggests that we follow Semantic
 Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
 According to that wiki, we are using a stronger guarantee where we do
 promise to bump the minor version on incompatible changes -- but this
 again suggests that we do not promise to maintain backwards
 compatibility -- just that we document whenever we break it.

 According to Robert, there are now downstreams that have shipped things
 (with the implication that they don't expect things to change without a
 deprecation period) so there's clearly a disconnect here.

 If we do promise backwards compatibility, we should document it
 somewhere and if we don't we should probably make that more visible,
 too, so people know what to expect.

 I prefer the latter, because it will make the merge.py cleanup easier
 and every published bit of information I could find suggests that's our
 current stance anyway.

 Tomas

 [1]: https://review.openstack.org/#/c/99384/
 [2]: https://review.openstack.org/#/c/97939/
 [3]: https://github.com/openstack/tripleo-heat-templates
 [4]: https://wiki.openstack.org/wiki/TripleO
 [5]:
 https://github.com/openstack/tripleo-heat-templates/blob/master/README.md
 [6]: https://github.com/openstack/tripleo-incubator/blob/master/README.rst
 [7]: https://wiki.openstack.org/wiki/TripleO/ReleaseManagement
 [8]: http://semver.org/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Tomas,

By and large, I think you are correct in your conclusions about the
current state of backwards compatibility in TripleO.

Much of this is the reason why I pushed for the stable branches that
we cut for icehouse. I'm not sure what downstreams that have shipped
things are being referred to, but perhaps those needs could be served
by the stable/icehouse branches that exist today?  I know at least for
the RDO downstream, the packages are being built off of releases done
from the stable branches. So, honestly, I'm not that concerned about
your proposed changes to rip stuff out without any deprecation from
that point of view :).

That being said, just because TripleO has taken the stance that
backwards compatibility is not guaranteed, I agree with some of the
other sentiments in this thread: that we should at least try if there
are easy things we can do.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-16 Thread Joe Gordon
On Fri, Jun 13, 2014 at 4:18 AM, Christopher Yeoh cbky...@gmail.com wrote:

 Hi Phil,

 On Fri, 13 Jun 2014 09:28:30 +
 Day, Phil philip@hp.com wrote:
 
  The documentation is NOT the canonical source for the behaviour of
  the API, currently the code should be seen as the reference. We've
  run into issues before where people have tried to align code to the
  fit the documentation and made backwards incompatible changes
  (although this is not one).
 
  I’ve never seen this defined before – is this published as official
  Openstack  or Nova policy ?

 It's not published, but not following this guideline has got us into
 trouble before because we end up making backwards incompatible changes
 to force the code to match the docs rather than the other way around.


Here is a patch to publish this policy.

https://review.openstack.org/#/c/100335/

The published results of this file live at:

http://docs.openstack.org/developer/nova/devref/policies.html




 The documentation historically has been generated manually by people
 looking at the code and writing up what they think it does. NOTE: this
 is not a reflection on the docs team - they have done an EXCELLENT job
 based on what we've been giving them (just the api samples and the
 code). It's very easy to get it wrong and it's most often someone who has
 not written the code who is writing the documentation and not familiar
 with what was actually merged.

  Personally I think we should be putting as much effort into reviewing
  the API docs as we do API code so that we can say that the API docs
  are the canonical source for behavior.Not being able to fix bugs
  in say input validation that escape code reviews because they break
  backwards compatibility seems to be a weakness to me.

 +1 for people to go back through the v2 api docs and fix the
 documentation where it is incorrect.

 So our medium term goal (and this is one of the reasons behind wanting the
 new v2.1/v3 infrastructure with json schema etc) is to be able to
 properly automate the production of the documentation from the code. So
 there is no contradiction between the two.
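
 To make that concrete: under the v2.1 validation framework each API method
 carries a JSON-Schema description of its request body, and that schema is
 the sort of artifact the docs can eventually be generated from. A rough
 illustration of the shape (the specific fields below are made up for the
 example, not copied from nova):

     # Illustrative request-body schema in the style used by the v2.1 API
     # validation framework; the fields are invented for the example.
     create_thing = {
         'type': 'object',
         'properties': {
             'createThing': {
                 'type': 'object',
                 'properties': {
                     'name': {'type': 'string', 'minLength': 1,
                              'maxLength': 255},
                     'count': {'type': 'integer', 'minimum': 0},
                 },
                 'required': ['name'],
                 'additionalProperties': False,
             },
         },
         'required': ['createThing'],
         'additionalProperties': False,
     }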

 I agree we need to be able to fix bugs that result in backwards
 incompatible changes. v2.1 microversions should allow us to do that as
 cleanly as possible.

 Chris

 
 
  Phil
 
 
 
  From: Christopher Yeoh [mailto:cbky...@gmail.com]
  Sent: 13 June 2014 04:00
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova] Review guidelines for API patches
 
  On Fri, Jun 13, 2014 at 11:28 AM, Matt Riedemann
  mrie...@linux.vnet.ibm.com wrote:
 
 
  On 6/12/2014 5:58 PM, Christopher Yeoh wrote:
  On Fri, Jun 13, 2014 at 8:06 AM, Michael Still
  mi...@stillhq.com wrote:
 
  In light of the recent excitement around quota classes and the
  floating ip pollster, I think we should have a conversation about
  the review guidelines we'd like to see for API changes proposed
  against nova. My initial proposal is:
 
- API changes should have an associated spec
 
 
  +1
 
- API changes should not be merged until there is a tempest
  change to test them queued for review in the tempest repo
 
 
  +1
 
  Chris
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  We do have some API change guidelines here [1].  I don't want to go
  overboard on every change and require a spec if it's not necessary,
  i.e. if it falls into the 'generally ok' list in that wiki.  But if
  it's something that's not documented as a supported API (so it's
  completely new) and is pervasive (going into novaclient so it can be
  used in some other service), then I think that warrants some spec
  consideration so we don't miss something.
 
  To compare, this [2] is an example of something that is updating an
  existing API but I don't think warrants a blueprint since I think it
  falls into the 'generally ok' section of the API change guidelines.
 
  So really I see this as a new feature, not a bug fix. Someone thought
  that detail was supported when writing the documentation but it never
  was. The documentation is NOT the canonical source for the behaviour
  of the API, currently the code should be seen as the reference. We've
  run into issues before where people have tried to align code to the
  fit the documentation and made backwards incompatible changes
  (although this is not one).
 
  Perhaps we need a streamlined queue for very simple API changes, but
  I do think API changes should get more than the usual review because
  we have to live with them for so long (short of an emergency revert
  if we catch it in time).
 
  [1] https://wiki.openstack.org/wiki/APIChangeGuidelines
  [2] 
