Re: [openstack-dev] [neutron]Performance of security group

2014-08-21 Thread Miguel Angel Ajo Pelayo
Thank you, shihanzhang!

I can't believe I didn't realize the ipset spec was accepted; I live
in my own bubble... I will be reviewing and testing/helping on that part
too during the next few days. I was too focused on the RPC part.


Best regards,

- Original Message -
 Hi Neutron folks!
 My patch for the BP
 https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
 needs ipset installed in devstack, so I have submitted the patch
 https://review.openstack.org/#/c/113453/. Who can help me review it? Thanks
 very much!
 
 Best regards,
 shihanzhang
 
 
 
 
 At 2014-08-21 10:47:59, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
 
 
 
 +1 NFTablesDriver!
 
 Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example:
 https://home.regit.org/2014/02/suricata-and-nftables/
 
 Then, I'm wondering here... what benefits might come to OpenStack Nova /
 Neutron if it shipped with an NFTables driver instead of the current
 IPTables one?!
 
 * Efficient Security Group design?
 * Better FWaaS, maybe with NAT(44/66) support?
 * Native support for IPv6, with the infamous NAT66 built in, plus a simpler
 Floating IP implementation for both v4 and v6 networks under a single
 implementation? ( I don't like NAT66; I prefer a `routed Floating IPv6`
 approach )
 * Metadata over IPv6, still using NAT(66) ( again, I don't like NAT66 ), in a
 single implementation?
 * Suricata-as-a-Service?!
 
 It sounds pretty cool! :-)
 
 
 On 20 August 2014 23:16, Baohua Yang  yangbao...@gmail.com  wrote:
 
 
 
 Great!
 We met similar problems.
 The current mechanisms produce too many iptables rules, and it's hard to
 debug.
 Really look forward to seeing a more efficient security group design.
 
 
 On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery  mest...@noironetworks.com 
 wrote:
 
 
 
 On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang  ayshihanzh...@126.com  wrote:
  
  With the deployment 'nova + neutron + openvswitch', when we bulk-create
  about 500 VMs with a default security group, the CPU usage of neutron-server
  and the openvswitch agent is very high; in particular the openvswitch agent
  reaches 100% CPU, which causes VM creation to fail.
  
  With the method discussed in mailist:
  
  1) ipset optimization ( https://review.openstack.org/#/c/100761/ )
  
  3) sg rpc optimization (with fanout)
  ( https://review.openstack.org/#/c/104522/ )
  
  I have implemented these two schemes in my deployment; when we again
  bulk-create about 500 VMs with a default security group, the CPU usage of
  the openvswitch agent drops to 10% or even lower, so I think the
  improvement from these two options is very significant.
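To illustrate why the ipset part helps (a hypothetical sketch with made-up helper and chain names, not the actual agent code from the review linked above): without ipset the agent renders one iptables rule per remote security group member, so every membership change rewrites a large rule set, while with ipset the iptables rules stay fixed and only cheap set membership updates are needed:

```python
def render_rules(member_ips, port=22):
    """Sketch: per-IP iptables rules vs. one ipset-based rule.

    Hypothetical helper; the real logic lives in the Neutron security
    group agent (see the ipset review linked in this thread).
    """
    # Without ipset: one iptables rule per member IP -- N rules that must
    # all be re-rendered whenever group membership changes.
    per_ip = ['-A sg-chain -s %s -p tcp --dport %d -j RETURN' % (ip, port)
              for ip in member_ips]

    # With ipset: a single iptables rule referencing a named set; member
    # churn only touches the set via cheap 'ipset add/del' calls.
    set_updates = ['ipset add sg-members %s' % ip for ip in member_ips]
    via_ipset = ['-A sg-chain -m set --match-set sg-members src '
                 '-p tcp --dport %d -j RETURN' % port]
    return per_ip, set_updates, via_ipset
```

With 500 VMs sharing one default group, `per_ip` grows to 500 rules per port while `via_ipset` stays at a single rule, which is consistent with the CPU drop reported above.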
  
  Who can help us to review our spec?
  
 This is great work! These are on my list of things to review in detail
 soon, but given the Neutron sprint this week, I haven't had time yet.
 I'll try to remedy that by the weekend.
 
 Thanks!
 Kyle
 
  Best regards,
  shihanzhang
  
  
  
  
  
  At 2014-07-03 10:08:21, Ihar Hrachyshka  ihrac...@redhat.com  wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
  
 Oh, so you have the enhancement implemented? Great! Any numbers that
 shows how much we gain from that?
  
 /Ihar
  
 On 03/07/14 02:49, shihanzhang wrote:
  Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready. Today
  I will modify my spec; when the spec is approved, I will commit the
  code as soon as possible!
  
  
  
  
  
  At 2014-07-02 10:12:34, Miguel Angel Ajo  majop...@redhat.com 
  wrote:
  
  Nice Shihanzhang,
  
  Do you mean the ipset implementation is ready, or just the
  spec?
  
  
  For the SG group refactor, I don't worry about who does it, or
  who takes the credit, but I believe it's important we address
  this bottleneck during Juno trying to match nova's scalability.
  
  Best regards, Miguel Ángel.
  
  
  On 07/02/2014 02:50 PM, shihanzhang wrote:
  hi Miguel Ángel and Ihar Hrachyshka, I agree with you that we should
  split the work into several specs. I have finished the work on (1)
  ipset optimization; you can do (2) 'sg rpc optimization (without
  fanout)'. As for the third part, (3) 'sg rpc optimization (with
  fanout)', I think we need to talk about it, because just using ipset
  to optimize the security group agent code does not bring the best
  results!
  
  Best regards, shihanzhang.
  
  
  
  
  
  
  
  
  At 2014-07-02 04:43:24, Ihar Hrachyshka  ihrac...@redhat.com 
  wrote:
  On 02/07/14 10:12, Miguel Angel Ajo wrote:
  
  Shihazhang,
  
  I really believe we need the RPC refactor done this cycle,
  especially given the close deadlines we have (July 10 for spec
  submission and July 20 for spec approval).
  
  Don't you think it's going to be better to split the work in
  several specs?
  
  1) ipset optimization (you) 2) sg rpc optimization (without
  fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
  , me)
  
  
  This way we increase the chances of having part of this in the
  Juno cycle. If we go for something too complicated, it's going to
  take more time to get approved.
  
 

Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread Nejc Saje



On 08/21/2014 07:50 AM, Osanai, Hisashi wrote:


Folks,

I wrote the following BP regarding repackaging ceilometer and ceilometerclient.

https://blueprints.launchpad.net/ceilometer/+spec/repackaging-ceilometerclient

I need to install the ceilometer package when the swift_middleware middleware
is used.
And the ceilometer package has dependencies with the following:

- requirements.txt in the ceilometer package
...
python-ceilometerclient>=1.0.6
python-glanceclient>=0.13.1
python-keystoneclient>=0.9.0
python-neutronclient>=2.3.5,<3
python-novaclient>=2.17.0
python-swiftclient>=2.0.2
...

From a maintenance point of view, these dependencies are undesirable. What do
you think?



I don't think there's any way the modules you mention in the BP can be 
moved into ceilometerclient. I think the best approach to resolve this 
would be to rewrite swift middleware to use oslo.messaging 
notifications, as discussed here: 
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041628.html
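In that approach the middleware would only build a notification payload and hand it to an oslo.messaging notifier, instead of importing ceilometer's pipeline in-process. A rough stdlib-only sketch of the payload side (the function name and field names are illustrative assumptions, not ceilometer's actual schema):

```python
def storage_request_notification(method, path, bytes_received, bytes_sent):
    """Build the event a swift middleware could emit over oslo.messaging.

    Hypothetical sketch: the returned dict would be passed to an
    oslo.messaging Notifier, and the collector on the ceilometer side
    would turn it into samples, so the middleware no longer needs to
    import ceilometer.pipeline/sample/service at all.
    """
    return {
        'event_type': 'objectstore.http.request',
        'payload': {
            'method': method,
            'path': path,
            'bytes_received': bytes_received,
            'bytes_sent': bytes_sent,
        },
    }
```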


Cheers,
Nejc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-21 Thread loy wolfe
On Thu, Aug 21, 2014 at 12:28 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Hi all,

 I've read the proposal for incubator as described at [1], and I have
 several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation that does
 not alienate parts of the community from Neutron is good. That way, we
 may relax review rules and quicken turnaround for preview features
 without losing too much control over those features.

 Though the way it's to be implemented leaves several concerns, as follows:

 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with a
 single tarball instead of two. Meaning, it would be better to keep the
 code in the same tree.

 I know that we're afraid of shipping code for which some users may
 expect the usual level of support, stability and compatibility.
 This can be solved by making it explicit that the incubated code is
 unsupported and used at the user's own risk. 1) The experimental code
 probably wouldn't be installed unless explicitly requested, and 2) it
 would be put in a separate namespace (like 'preview', 'experimental',
 or 'staging', as they call it in the Linux kernel world [2]).

 This would facilitate keeping the commit history instead of losing it
 during graduation.

 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project. Well,
 there are lots of EXPERIMENTAL features in Linux kernel that we
 actively use (for example, btrfs is still considered experimental by
 Linux kernel devs, while being exposed as a supported option to RHEL7
 users), so I don't see how that naming concern is significant.


 I think this is the whole point of the discussion around the incubator and
 the reason for which, to the best of my knowledge, no proposal has been
 accepted yet.


 2. If those 'extras' are really moved into a separate repository and
 tarballs, this will raise questions on whether packagers even want to
 cope with it before graduation. When it comes to supporting another
 build manifest for a piece of code of unknown quality, this is not the
 same as just cutting part of the code into a separate
 experimental/labs package. So unless I'm explicitly asked to package
 the incubator, I wouldn't probably touch it myself. This is just too
 much effort (btw the same applies to moving plugins out of the tree -
 once it's done, distros will probably need to reconsider which plugins
 they really want to package; at the moment, those plugins do not
 require lots of time to ship them, but having ~20 separate build
 manifests for each of them is just too hard to handle without clear
 incentive).


 One reason instead for moving plugins out of the main tree is allowing
 their maintainers to have full control over them.
 If there was a way with gerrit or similars to give somebody rights to
 merge code only on a subtree I probably would not even consider the option
 of moving plugins and drivers away. From my perspective it's not that I
 don't want them in the main tree, it's that I don't think it's fair for
 core team reviewers to take responsibility for approving code that they
 can't fully test (3rd party CI helps, but is still far from providing a
 decent level of coverage).


It's also unfair that core team reviewers are forced to spend time on 3rd-party
plugins and drivers under the existing process. There are so many 3rd-party
networking backend technologies, from hardware to controllers; anyone can
submit plugins and drivers to the tree, and on the principle of neutrality
we can't accept some and refuse others' review requests. Then reviewers'
time slots fill up with this 3rd-party backend work, leaving less time for
the most important and urgent thing: improving the Neutron core architecture
to the same level of maturity as Nova as soon as possible.





 3. The fact that neutron-incubator is not going to maintain any stable
 branches for security fixes and major failures concerns me too. In
 downstream, we don't generally ship the latest and greatest from PyPI.
 Meaning, we'll need to maintain our own downstream stable branches for
 major fixes. [BTW we already do that for python clients.]


 This is a valid point. We need to find an appropriate trade off. My
 thinking was that incubated projects could be treated just like client
 libraries from a branch perspective.



 4. Another unclear part of the proposal is that notion of keeping
 Horizon and client changes required for incubator features in
 neutron-incubator. AFAIK the repo will be governed by Neutron Core
 team, and I doubt the team is ready to review Horizon changes (?). I
 think I don't understand how we're going to handle that. Can we 

Re: [openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-21 Thread Stephen Balukoff
Hi Michael!

Just to give others some background on this: The current proposal (by me)
is to have each Listener object, (as defined in the Neutron LBaaS v2 code
base) correspond with one haproxy process on the Octavia VM in the
currently proposed Octavia design document. Michael's proposal is to have
each Loadbalancer object correspond with one haproxy process (which would
have multiple front-end sections in it to service each Listener on the
Loadbalancer).

Anyway, we thought it would be useful to discuss this on the mailing list
so that we could give others a chance to register their opinions, and
justify the same.

That being said, my responses to your points are in-line below, followed by
my reasoning for wanting 1 haproxy process = 1 listener in the
implementation:


On Wed, Aug 20, 2014 at 12:34 PM, Michael Johnson johnso...@gmail.com
wrote:

 I am proposing that Octavia should support deployment models that
 enable multiple listeners to be configured inside the HAProxy
 instance.

 The model I am proposing is:

 1. One or more VIP per Octavia VM (propose one VIP in 0.5 release)
 2. One or more HAProxy instance per Octavia VM
 3. One or more listeners on each HAProxy instance


This is where our proposals differ. I propose 1 listener per haproxy
instance.


 4. Zero or more pools per listener (shared pools should be supported
 as a configuration render optimization, but propose support post 0.5
 release)
 5. One or more members per pool


I would also propose zero or more members per pool. A pool with zero
members in it has been (is being) used by some of our customers to
blacklist certain client IP addresses. These customers want to respond to
the blacklisted IPs with an error 503 page (which can be done by haproxy)
instead of simply not responding to packets (if the blacklist were done at
the firewall).
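To make the zero-member case concrete, here is a minimal haproxy sketch (addresses, ACL, and section names are invented for illustration, not output of any actual Octavia driver): a backend with no servers makes haproxy answer 503 for matched clients instead of silently dropping their packets.

```
frontend listener_http
    bind 203.0.113.10:80
    # Hypothetical blacklist ACL; matched clients get the empty pool.
    acl blacklisted src 198.51.100.0/24
    use_backend blackhole_pool if blacklisted
    default_backend member_pool

backend blackhole_pool
    # Zero members: haproxy returns "503 Service Unavailable" here.

backend member_pool
    server member1 10.0.0.11:80 check
```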


  This provides flexibility to the operator to support multiple
  deployment models, including active-active and hot-standby Octavia
  VMs. Without the flexibility to have multiple listeners per HAProxy
  instance we are limiting the operators' deployment models.


I don't think your conclusion follows logically from your justification
here.  Specifically, active-active and hot standby Octavia VMs are equally
supported by a one-process-per-listener model. Further, for reasons I'll
get into below, I think the one-process-per-listener model actually
provides more flexibility to the operators and users in how services are
deployed. Therefore, the conclusion I come to is the exact opposite of
yours: By insisting that all listeners on a given loadbalancer share a
single haproxy process, we actually limit flexibility in deployment models
(as well as introduce some potential operational problems we otherwise
wouldn't encounter).


I am advocating for multiple listeners per HAProxy instance because I
 think it provides the following advantages:

 1. It reduces memory overhead due to running multiple HAProxy
 instances on one Octavia VM.  Since the Octavia constitution states
 that Octavia is for large operators where this memory overhead could
 have a financial impact we should allow alternate deployment options.

2. It reduces host CPU overhead due to reduced context switching that
 would occur between HAProxy instances.  HAProxy is event driven and
 will mostly be idle waiting for traffic, where multiple instances of
 HAProxy will require context switching between the processes which
 increases the VM’s CPU load.  Since the Octavia constitution states
 that we are designing for large operators, anything we can do to
 reduce the host CPU load reduces the operator’s costs.


So these two points might be the only compelling reason I see to follow the
approach you suggest. However, I would like to see the savings here
justified via benchmarks. If benchmarks don't show a significant difference
in performance running multiple haproxy instances to service different
listeners over running a single haproxy instance servicing the same
listeners, then I don't think these points are sufficient justification. I
understand your team (HP) is going to be working on these, hopefully in
time for next week's Octavia meeting.

Please also understand that memory and CPU usage are just two factors in
determining overall cost of the solution. Slowing progress on delivering
features, increasing faults and other problems by having a more complicated
configuration, and making problems more difficult to isolate and
troubleshoot are also factors that affect the cost of a solution (though
they aren't as easy to quantify). Therefore it does not necessarily
logically follow that anything we can do to reduce CPU load decreases the
operator's costs.

Keep in mind, also, that for large operators the scaling strategy is to
ensure services can be scaled horizontally (meaning the CPU / memory
footprint of a single process isn't very important for a large load that
will be spread across many machines anyway), and any costs for delivering
the 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Chris Friesen

On 08/20/2014 09:54 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of
doing things, otherwise it's going to be *really hard* for someone to
bring up an OpenStack installation.


Why can't there be multiple recommended ways of setting up an OpenStack
installation? Matter of fact, in reality, there already are multiple
recommended ways of setting up an OpenStack installation, aren't there?

There's multiple distributions of OpenStack, multiple ways of doing
bare-metal deployment, multiple ways of deploying different message
queues and DBs, multiple ways of establishing networking, multiple open
and proprietary monitoring systems to choose from, etc. And I don't
really see anything wrong with that.



This is an argument for loosely coupling things, rather than tightly
integrating things. You will almost always win my vote with that sort of
movement, and you have here. +1.


I mostly agree, but I think we should distinguish between things that 
are possible, and things that are supported.  Arguably, anything 
that is supported should be tested as part of the core infrastructure 
and documented in the core OpenStack documentation.



We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said Only MySQL will be supported, that would not
mean that the greater OpenStack community would be served better. It
would just unnecessarily take options away from deployers.


On the other hand, if the community says explicitly we only test with 
sqlite and MySQL then that sends a signal that anyone wanting to use 
something else should plan on doing additional integration testing.


I've stumbled over some of these issues, and it's no fun. (There's still 
an open bug around the fact that sqlite behaves differently than MySQL 
with respect to regex.)
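The regex divergence is easy to demonstrate with a small stdlib-only snippet: MySQL ships a built-in REGEXP operator, while SQLite only gains one if the application registers a function for it (the table and data below are made up for illustration):

```python
import re
import sqlite3

conn = sqlite3.connect(':memory:')
# SQLite has no built-in REGEXP operator (unlike MySQL); a query using it
# raises sqlite3.OperationalError unless the application supplies one.
# Note SQLite calls regexp(pattern, value) for "value REGEXP pattern".
conn.create_function(
    'regexp', 2, lambda pat, s: re.search(pat, s) is not None)

conn.execute('CREATE TABLE hosts (name TEXT)')
conn.execute("INSERT INTO hosts VALUES ('compute-01'), ('network-01')")
rows = conn.execute(
    "SELECT name FROM hosts WHERE name REGEXP 'compute-'").fetchall()
```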



IMO, OpenStack should be about choice. Choice of hypervisor, choice of
DB and MQ infrastructure, choice of operating systems, choice of storage
vendors, choice of networking vendors.



Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.


I agree.

If there are too many choices without enough documentation as to why 
someone would choose one over the other, or insufficient testing such 
that some choices are theoretically valid but broken in practice, then 
it's less useful for the end users.


Chris



Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread Osanai, Hisashi

Thank you for your quick response.

On Thursday, August 21, 2014 3:12 PM, Nejc Saje wrote:
 I don't think there's any way the modules you mention in the BP can be
 moved into ceilometerclient. I think the best approach to resolve this
 would be to rewrite swift middleware to use oslo.messaging
 notifications, as discussed here:
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041628.
 html

I understand your point; that resolves most of the unnecessary dependencies. I
would like to make sure about the dependencies on context and timeutils that
remain after the rewriting. Does the rewriting include removing those
dependencies as well?

=== copy from the BP ===
- swift_middleware.py
from ceilometer.openstack.common import context
from ceilometer.openstack.common import timeutils
from ceilometer import pipeline
from ceilometer import sample
from ceilometer import service

On the other hand, I'm really interested in the mail thread you pointed out:D
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg30880.html

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-21 Thread Angus Lees
On Wed, 20 Aug 2014 05:03:51 PM Clark Boylan wrote:
 On Mon, Aug 18, 2014, at 01:59 AM, Ihar Hrachyshka wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
  
  On 17/08/14 02:09, Angus Lees wrote:
   On 16 Aug 2014 06:09, Doug Hellmann d...@doughellmann.com
   
   mailto:d...@doughellmann.com wrote:
   On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka
   ihrac...@redhat.com
   
   mailto:ihrac...@redhat.com wrote:
   Signed PGP part Some updates on the matter:
   
   - oslo-spec was approved with narrowed scope which is now
   'enabled mysqlconnector as an alternative in gate' instead of
   'switch the default db driver to mysqlconnector'. We'll revisit
   the switch part the next cycle once we have the new driver
   running in gate and real benchmarking is heavy-lifted.
   
   - there are several patches that are needed to make devstack
   and tempest passing deployment and testing. Those are collected
   under the hood of: https://review.openstack.org/#/c/114207/ Not
   much of them.
   
   - we'll need a new oslo.db release to bump versions (this is
   needed to set raise_on_warnings=False for the new driver, which
   was incorrectly set to True in sqlalchemy till very recently).
   This is expected to be released this month (as per Roman
   Podoliaka).
   
   This release is currently blocked on landing some changes in
   projects
   
   using the library so they don?t break when the new version starts
   using different exception classes. We?re tracking that work in
   https://etherpad.openstack.org/p/sqla_exceptions_caught
   
   It looks like we?re down to 2 patches, one for cinder
   
   (https://review.openstack.org/#/c/111760/) and one for glance
   (https://review.openstack.org/#/c/109655). Roman, can you verify
   that those are the only two projects that need changes for the
   exception issue?
   
   - once the corresponding patch for sqlalchemy-migrate is
   merged, we'll also need a new version released for this.
   
   So we're going for a new version of sqlalchemy?  (We have a
   separate workaround for raise_on_warnings that doesn't require the
   new sqlalchemy release if this brings too many other issues)
  
  Wrong. We're going for a new version of *sqlalchemy-migrate*. Which is
  the code that we inherited from Mike and currently track in stackforge.
  
   - on PyPI side, no news for now. The last time I've heard from
   Geert (the maintainer of MySQL Connector for Python), he was
   working on this. I suspect there are some legal considerations
   running inside Oracle. I'll update once I know more about
   that.
   
   If we don?t have the new package on PyPI, how do we plan to
   include it
   
   in the gate? Are there options to allow an exception, or to make
   the mirroring software download it anyway?
   
   We can test via devstack without waiting for pypi, since devstack
   will install via rpms/debs.
  
  I expect that it will be settled. I have no indication that the issue
  is unsolvable, it will just take a bit more time than we're accustomed
  to. :)
  
  At the moment, we install MySQLdb from distro packages for devstack.
  Same applies to new driver. It will be still great to see the package
  published on PyPI so that we can track its version requirements
  instead of relying on distros to package it properly. But I don't see
  it as a blocker.
  
  Also, we will probably be able to run with other drivers supported by
  SQLAlchemy once all the work is done.
 
 So I got bored last night and decided to take a stab at making PyMySQL
 work since I was a proponent of it earlier. Thankfully it did just
 mostly work like I thought it would.
 https://review.openstack.org/#/c/115495/ is the WIP devstack change to
 test this out.

Thanks!

 Postgres tests fail because it was applying the pymysql driver to the
 postgres connection string. We can clean this up later in devstack
 and/or devstack-gate depending on how we need to apply this stuff.
 Bashate failed because I had to monkeypatch in a fix for a ceilometer
 issue loading sqlalchemy drivers. The tempest neutron full job fails on
 one test occasionally. Not sure yet if that is normal neutron full
 failure mode or if a new thing from PyMySQL. The regular tempest job
 passes just fine.
 
 There are also some DB related errors in the logs that will need to be
 cleaned up but overall it just works. So I would like to repropose that
 we stop focusing all of this effort on the hard thing and use the easy
 thing if it meets our needs. We can continue to make alternatives work,
 but that is a different problem that we can solve at a different pace. I
 am not sure how to test the neutron thing that Gus was running into
 though so we should also check that really quickly.

TL;DR: pymysql passes my test case.
I'm perfectly happy to move towards using mysql+pymysql in gate tests.  (The 
various changes I've been submitting are to support _any_ non-default driver).
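For reference, the switch being tested amounts to selecting a different SQLAlchemy dialect in the connection URL; a hedged sketch (the helper name is mine, not from the devstack change):

```python
def use_pymysql(connection_url):
    """Rewrite a SQLAlchemy URL to use the pure-Python PyMySQL driver.

    Only mysql:// URLs are touched; as noted above, blindly applying the
    driver to a postgresql:// string is exactly what broke the postgres
    jobs in the WIP devstack change.
    """
    if connection_url.startswith('mysql://'):
        return connection_url.replace('mysql://', 'mysql+pymysql://', 1)
    return connection_url
```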

If anyone cares, my test case is in 

[openstack-dev] [openstack][DOUBT]Please clarify

2014-08-21 Thread Sharath V
Dear Friends, I have a doubt; please clarify it for me! When I started
learning OpenStack, I read that there are three nodes: a) controller node,
b) compute node, c) network node.
i) As I understand it, the controller node contains all the components, like
nova, neutron, cinder, glance, swift, Horizon, etc.

ii) The compute node runs nova and neutron, but not all components.

iii) The network node runs nova and neutron.

When reading the docs, they mention OpenStack Compute (Controller Services)
and OpenStack Network Services (Cloud Controller). Can you please clarify: does
each and every component of OpenStack have a controller and a client? [like Nova
Service (Controller) - Nova Client, Neutron Service (Controller) - neutron
client, cinder controller - cinder client] Or is it (nova controller for compute,
nova-network for cloud controller)? Is Nova the only controller? If Nova is
the only controller, it must act as an orchestrator, right? If yes, then why do
we have to use Heat for orchestration?

If anything is wrong, please correct me.

If you have any document or guide, please route it to me.

Thank you in advance,

-- 
Best Regards,
Sharath


Re: [openstack-dev] [Neutron] How to handle blocking bugs/changes in Neutron 3rd party CI

2014-08-21 Thread Kevin Benton
I'm not sure if this is possible with a Zuul setup, but once we identify a
failure-causing commit, we change the reported job status to (skipped)
for any patches that contain the commit but not the fix. It's a relatively
straightforward way to communicate that the CI system is still operational
but voting was intentionally bypassed. This logic is handled in the script
that determines and posts the results of the tests to Gerrit.
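The decision described above can be sketched as a small pure function (names are hypothetical; the real logic lives in the result-posting script):

```python
def report_status(change_commits, breaking_commit, fix_commit, tests_passed):
    """Decide what the CI posts to Gerrit for one tested change.

    change_commits: set of commit ids contained in the checked-out change.
    """
    if breaking_commit in change_commits and fix_commit not in change_commits:
        # CI is operational, but this change cannot pass through no fault
        # of its own: report a neutral, intentionally bypassed vote.
        return '(skipped)'
    return 'SUCCESS' if tests_passed else 'FAILURE'
```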


On Wed, Aug 20, 2014 at 3:27 PM, Dane Leblanc (leblancd) lebla...@cisco.com
 wrote:

 Preface: I posed this problem on the #openstack-infra IRC, and they
 couldn't offer an easy or obvious solution, and suggested that I get some
 consensus from the Neutron community as to how we want to handle this
 situation. So I'd like to bounce this around, get some ideas, and maybe
 bring this up in the 3rd party CI IRC.

 The challenge is this: Occasionally, a blocking bug is introduced which
 causes our 3rd party CI tests to consistently fail on every change set that
 we're testing against. We can develop a fix for the problem, but until that
 fix gets merged upstream, tests against all other change sets are seen to
 fail.

 (Note that we have a similar situation whenever we introduce a completely
 new plugin with its associated 3rd party CI... until the plugin code, or an
 enabling subset of that plugin code is merged upstream, then typically
 all other commits would fail on that CI setup.)

 In the past, we've tried dynamically patching the fix(es) on top of the
 fetched code being reviewed, but this isn't always reliable due to merge
 conflicts, and we've had to monkey patch DevStack to apply the fixes after
 cloning Neutron but before installing Neutron.

 So we'd prefer to enter a throttled or filtering CI mode when we hit
 this situation, where we're (temporarily) only testing against commits
 related to our plugin/driver which contain (or have a dependency on) the
 fix for the blocking bug until the fix is merged.

 In an ideal world, for the sake of transparency, we would love to be able
 to have Jenkins/Zuul report back to Gerrit with a descriptive test result
 such as N/A, Not tested, or even Aborted for all other change sets,
 letting the committer know that, Yeah, we see your review, but we're
 unable to test it at the moment. Zuul does have the ability to report
 Aborted status to Gerrit, but this is sent e.g. when Zuul decides to
 abort change set 'N' for a review when change set 'N+1' has just been
 submitted, or when a Jenkins admin manually aborts a Jenkins job.
 Unfortunately, this type of status is not available programmatically within
 a Jenkins job script; the only outcomes are pass (zero RC) or fail
 (non-zero RC). (Note that we can't directly filter at the Zuul level in our
 topology, since we have one Zuul server servicing multiple 3rd party CI
 setups.)

 As a second option, we'd like to not run any tests for the other changes,
 and report NOTHING to Gerrit, while continuing to run against changes
 related to our plugin (as required for the plugin changes to be approved).
 This was the favored approach discussed in the Neutron IRC on Monday. But
 herein lies the rub. By the time our Jenkins job script discovers that the
 change set that is being tested is not in a list of preferred/allowed
 change sets, the script has 2 options: pass or fail. With the current
 Jenkins, there is no programmatic way for a Jenkins script to signal to
 Gearman/Zuul that the job should be aborted.

 There was supposedly a bug filed with Jenkins to allow it to interpret
 different exit codes from job scripts as different result values, but this
 hasn't made any progress.

 There may be something that can be changed in Zuul to allow it to
 interpret different result codes other than success/fail, or maybe to allow
 Zuul to do change ID filtering on a per Jenkins job basis, but this would
 require the infra team to make changes to Zuul.

 The bottom line is that based on the current Zuul/Jenkins infrastructure,
 whenever our 3rd party CI is blocked by a bug, I'm struggling with the
 conflicting requirements:
 * Continue testing against change sets for the blocking bug (or plugin
 related changes)
 * Don't report anything to Gerrit for all other change sets, since these
 can't be meaningfully tested against the CI hardware

 Let me know if I'm missing a solution to this. I appreciate any
 suggestions!

 -Dane


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-21 Thread Endre Karlson
Why pymysql over mysql-python?

Endre Karlson
On 21 Aug 2014 09:05, Angus Lees g...@inodes.org wrote:

 On Wed, 20 Aug 2014 05:03:51 PM Clark Boylan wrote:
  On Mon, Aug 18, 2014, at 01:59 AM, Ihar Hrachyshka wrote:
  
   On 17/08/14 02:09, Angus Lees wrote:
On 16 Aug 2014 06:09, Doug Hellmann d...@doughellmann.com
   
mailto:d...@doughellmann.com wrote:
On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka
ihrac...@redhat.com
   
mailto:ihrac...@redhat.com wrote:
Signed PGP part Some updates on the matter:
   
- oslo-spec was approved with narrowed scope which is now
'enabled mysqlconnector as an alternative in gate' instead of
'switch the default db driver to mysqlconnector'. We'll revisit
the switch part the next cycle once we have the new driver
running in gate and real benchmarking is heavy-lifted.
   
- there are several patches that are needed to make devstack
and tempest passing deployment and testing. Those are collected
under the hood of: https://review.openstack.org/#/c/114207/ Not
much of them.
   
- we'll need a new oslo.db release to bump versions (this is
needed to set raise_on_warnings=False for the new driver, which
was incorrectly set to True in sqlalchemy till very recently).
This is expected to be released this month (as per Roman
Podoliaka).
   
This release is currently blocked on landing some changes in
projects
   
 using the library so they don't break when the new version starts
 using different exception classes. We're tracking that work in
https://etherpad.openstack.org/p/sqla_exceptions_caught
   
It looks like we?re down to 2 patches, one for cinder
   
(https://review.openstack.org/#/c/111760/) and one for glance
(https://review.openstack.org/#/c/109655). Roman, can you verify
that those are the only two projects that need changes for the
exception issue?
   
- once the corresponding patch for sqlalchemy-migrate is
merged, we'll also need a new version released for this.
   
So we're going for a new version of sqlalchemy?  (We have a
separate workaround for raise_on_warnings that doesn't require the
new sqlalchemy release if this brings too many other issues)
  
   Wrong. We're going for a new version of *sqlalchemy-migrate*. Which is
   the code that we inherited from Mike and currently track in stackforge.
  
- on PyPI side, no news for now. The last time I've heard from
Geert (the maintainer of MySQL Connector for Python), he was
working on this. I suspect there are some legal considerations
running inside Oracle. I'll update once I know more about
that.
   
 If we don't have the new package on PyPI, how do we plan to
include it
   
in the gate? Are there options to allow an exception, or to make
the mirroring software download it anyway?
   
We can test via devstack without waiting for pypi, since devstack
will install via rpms/debs.
  
   I expect that it will be settled. I have no indication that the issue
   is unsolvable, it will just take a bit more time than we're accustomed
   to. :)
  
   At the moment, we install MySQLdb from distro packages for devstack.
   Same applies to new driver. It will be still great to see the package
   published on PyPI so that we can track its version requirements
   instead of relying on distros to package it properly. But I don't see
   it as a blocker.
  
   Also, we will probably be able to run with other drivers supported by
   SQLAlchemy once all the work is done.
 
  So I got bored last night and decided to take a stab at making PyMySQL
  work since I was a proponent of it earlier. Thankfully it did just
  mostly work like I thought it would.
  https://review.openstack.org/#/c/115495/ is the WIP devstack change to
  test this out.
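For anyone curious what the swap amounts to at the SQLAlchemy level: the `+driver` suffix in the connection URL selects the DBAPI, and a bare `mysql://` defaults to MySQLdb. A rough illustrative helper (not the actual devstack change) might look like:

```python
def use_pymysql(url: str) -> str:
    """Rewrite a SQLAlchemy MySQL URL to use the PyMySQL driver.

    SQLAlchemy picks the DBAPI from the '+driver' suffix on the scheme;
    a bare 'mysql://' defaults to MySQLdb (mysql-python). Non-MySQL URLs
    (e.g. postgresql://) must be left untouched.
    """
    if url.startswith("mysql://"):
        return "mysql+pymysql://" + url[len("mysql://"):]
    return url
```

The postgres job failure described in this thread is exactly what happens when a rewrite like this is applied to a non-MySQL connection string, hence the scheme check.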

 Thanks!

  Postgres tests fail because it was applying the pymysql driver to the
  postgres connection string. We can clean this up later in devstack
  and/or devstack-gate depending on how we need to apply this stuff.
  Bashate failed because I had to monkeypatch in a fix for a ceilometer
  issue loading sqlalchemy drivers. The tempest neutron full job fails on
  one test occasionally. Not sure yet if that is normal neutron full
  failure mode or if a new thing from PyMySQL. The regular tempest job
  passes just fine.
 
  There are also some DB related errors in the logs that will need to be
  cleaned up but overall it just works. So I would like to repropose that
  we stop focusing all of this effort on the hard thing and use the easy
  thing if it meets our needs. We can continue to make alternatives work,
  but that is a different problem that we can solve at a different pace. I
  am not sure how to test the neutron thing that Gus was running into
  though so we should also check that really quickly.

 TL;DR: pymysql passes my test case.
 I'm perfectly happy to move 

Re: [openstack-dev] [Glance][Heat] Murano split discussion

2014-08-21 Thread Thierry Carrez
Georgy Okrokvertskhov wrote:
 During last Atlanta summit there were couple discussions about
 Application Catalog and Application space projects in OpenStack. These
 cross-project discussions occurred as a result of Murano incubation
 request [1] during Icehouse cycle.  On the TC meeting devoted to Murano
 incubation there was an idea about splitting the Murano into parts which
 might belong to different programs[2].
 
 
 Today, I would like to initiate a discussion about potential splitting
 of Murano between two or three programs.
 [...]

I think the proposed split makes a lot of sense. Let's wait for the
feedback of the affected programs to see if it's compatible with their
own plans.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread Chris Dent

On Thu, 21 Aug 2014, Nejc Saje wrote:

More riffing: we are moving away from per-sample specific data with Gnocchi. 
I don't think we should store this per-sample, since the user doesn't 
actually care about which agent the sample came from. The user cares about 
which *resource* it came from.


I'm thinking from a debugging and auditing standpoint it is useful
to know the hops an atom of data has taken on its way to its final
destination. Under normal circumstances that info isn't needed, but
under extraordinary circumstances it could be useful.

I could see this going into an agent's log. On each polling cycle, we could
log which *resources* we are responsible for (not which samples).


If it goes in the agent's log how do you associate a particular
sample with that log? From the sample (or resource metadata or what
have you) you can know the time window of the resource. Now you need
to go looking around all the agents to find out which one was
satisfying that resource within that time window.

If there are two agents, no big deal, if there are 2000, problem.

And besides: Consider integration testing scenarios, making the data
a bit more meaningful will make it possible to do more flexible
testing.

I appreciate that searching through endless log files is a common
task in OpenStack but that doesn't make it the best way.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread Eoghan Glynn


 One of the outcomes from Juno will be horizontal scalability in the
 central agent and alarm evaluator via partitioning[1]. The compute
 agent will get the same capability if you choose to use it, but it
 doesn't make quite as much sense.
 
 I haven't investigated the alarm evaluator side closely yet, but one
 concern I have with the central agent partitioning is that, as far
 as I can tell, it will result in stored samples that give no
 indication of which (of potentially very many) central-agent it came
 from.
 
 This strikes me as a debugging nightmare when something goes wrong
 with the content of a sample that makes it all the way to storage.
 We need some way, via the artifact itself, to narrow the scope of
 our investigation.
 
 a) Am I right that no indicator is there?
 
 b) Assuming there should be one:
 
 * Where should it go? Presumably it needs to be an attribute of
   each sample because as agents leave and join the group, where
   samples are published from can change.
 
 * How should it be named? The never-ending problem.
 
 Thoughts?


Probably best to keep the bulk of this discussion on Gerrit, but
FWIW here's my riff just commented there ...

Cheers,
Eoghan


WRT marking each sample with an indication of the originating agent.

First, IIUC, true provenance would require that the full chain-of-
ownership could be reconstructed for the sample, so we'd need to
also record the individual collector that persisted each sample.
So let's assume that we're only talking here about associating the
originating agent with the sample.  For most classes of bugs/issues
that could impact on an agent, we'd expect an equivalent impact on
all agents. However, I guess there would be a subset of issues, e.g.
an agent being left behind after an upgrade, that could be localized.

So in the classic ceilometer approach to metadata, one could imagine
the agent identity being recorded in the sample itself. However this
would become a lot more problematic, I think, after a shift to pure
timeseries data. In which case, I don't think we'd necessarily want
to pollute the limited number of dimensions that can be efficiently
associated with a datapoint with additional information purely related
to the implementation/architecture of ceilometer.

So how about turning the issue on its head, and putting the onus on
the agent to record its allocated resources for each cycle? The
obvious way to do that would be via logging.

Then in order to determine which agent was responsible for polling a
particular resource at a particular time, the problem would collapse
down to a distributed search over the agent log files for that period
(perhaps aided by whatever log retention scheme is in use, e.g. logstash).
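A sketch of what that per-cycle logging might look like (the function and names here are illustrative, not ceilometer's real agent API):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ceilometer.agent")

def run_polling_cycle(agent_id, allocated_resources, poll):
    """One polling cycle that first records which resources this agent
    owns, so a later search across agent logs can attribute any sample
    to its originating agent for the relevant time window."""
    log.info("agent %s polling resources: %s",
             agent_id, sorted(allocated_resources))
    polled = []
    for resource in sorted(allocated_resources):
        poll(resource)          # gather samples for this resource
        polled.append(resource)
    return polled
```

With a log line like that emitted every cycle, the "which agent owned resource R at time T" question becomes a plain search over agent logs.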

 [1] https://review.openstack.org/#/c/113549/
 [2] https://review.openstack.org/#/c/115237/



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Thierry Carrez
Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 This all has created a world where you need to be*in*  OpenStack to
 matter, or to justify the investment. This has created a world where
 everything and everyone wants to be in the OpenStack integrated
 release. This has created more pressure to add new projects, and less
 pressure to fix and make the existing projects perfect. 4 years in, we
 might want to inflect that trajectory and take steps to fix this world.
 
 We should certainly consider this possibility, that we've set up
 perverse incentives leading to failure. But what if it's just because we
 haven't yet come even close to satisfying all of our users' needs? I
 mean, AWS has more than 30 services that could be considered equivalent
 in scope to an OpenStack project... if anything our scope is increasing
 more _slowly_ than the industry at large. I'm slightly shocked that
 nobody in this thread appears to have even entertained the idea that
 *this is what success looks like*.
 
 The world is not going to stop because we want to get off, take a
 breather, do a consolidation cycle.

That's an excellent counterpoint, thank you for voicing it so eloquently.

Our challenge is to improve our structures so that we can follow the
rhythm the world imposes on us. It's a complex challenge, especially in
an open collaboration experiment where you can't rely that much on past
experiences or traditional methods. So it's always tempting to slow
things down, to rate-limit our success to make that challenge easier.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread Julien Danjou
On Wed, Aug 20 2014, Chris Dent wrote:

 a) Am I right that no indicator is there?

Yes.

 b) Assuming there should be one:

* Where should it go? Presumably it needs to be an attribute of
  each sample because as agents leave and join the group, where
  samples are published from can change.

I guess that depends if we should/want to store it.
If so, I'd say include it in the sample; otherwise, just let the
publisher indicate the provenance, or the receiver indicate where it
received the sample from.

* How should it be named? The never-ending problem.

generated_by?
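If it were stored per-sample, it might look something like this (illustrative sketch only; `generated_by` is just the candidate name above, and the dict layout is a stand-in, not ceilometer's real sample schema):

```python
# A sample annotated with the identity of the agent that produced it.
sample = {
    "name": "cpu_util",
    "volume": 42.0,
    "resource_id": "instance-0001",
    "generated_by": "central-agent-3",   # originating agent identity
}

def provenance(s):
    # Samples written before the field existed fall back to "unknown".
    return s.get("generated_by", "unknown")
```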

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-21 Thread Julien Danjou
On Wed, Aug 20 2014, Vishvananda Ishaya wrote:

 This may be slightly off-topic but it is worth mentioning that the use of 
 threading.Lock[1]
 which was included to make the locks thread safe seems to be leading to a 
 deadlock in eventlet[2].
 It seems like we have rewritten this too many times in order to fix minor 
 pain points and are
 adding risk to a very important component of the system.

Indeed, it looks slightly off-topic, as it actually looks more like the
never-ending nightmare of eventlet monkey patching that we're trying to
get rid of… :(
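For readers following along: one classic way a plain `threading.Lock` deadlocks is re-acquisition on the same thread of control, and once eventlet monkey-patches the world, a single greenthread can hit this easily. Whether this is the exact deadlock in [2] is an assumption; the stdlib-only sketch below just demonstrates the mechanism, using a timeout so it doesn't hang:

```python
import threading

lock = threading.Lock()  # non-reentrant: a second acquire on the same
                         # thread of control never succeeds

def nested_acquire():
    with lock:
        # With a plain blocking acquire() this would hang forever; the
        # timeout turns the would-be deadlock into an observable False.
        return lock.acquire(timeout=0.1)
```

A reentrant `threading.RLock` avoids this particular failure mode, at the cost of masking genuine double-acquire bugs.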

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




[openstack-dev] [nova] nova unable associate floating ip

2014-08-21 Thread Li Tianqing
Hello,
   https://bugs.launchpad.net/nova/+bug/1316621
   Is there someone has a solution for that bug?





--

Best
Li Tianqing


[openstack-dev] [Congress] Ramp-up strategy

2014-08-21 Thread Madhu Mohan
Hi,

For the past few weeks I have been trying to get a hold on the Congress
code base and understand the flow.

Here is a brief summary of what I am trying out:

I prepared a dummy client to send policy strings to congress_server
listening at the path /policies (this has now changed to v1/policies),
using a POST request to send the policy string to the server.

The call to the server somehow seems to get converted to an action named
create_policies, so I added a new API create_policies in the API model
policy_model.py, which receives the policy string in params.

I am able to call compile.parse() and runtime.initialize() functions from
this API.
The compilation produces a result in the format below:


Rule(head=[Literal(table=u'error', arguments=[Variable(name=u'vm')],
negated=False)], body=[Literal(table=u'nova:virtual_machine',
arguments=[Variable(name=u'vm')], ...
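To make that output easier to read, here is a toy reconstruction of the structures it names (`Variable`/`Literal`/`Rule` are stand-ins mirroring the printed names, not Congress's actual classes):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Variable:
    name: str

@dataclass
class Literal:
    table: str                # e.g. "error" or "nova:virtual_machine"
    arguments: List[Variable]
    negated: bool = False

@dataclass
class Rule:
    head: List[Literal]       # conclusions
    body: List[Literal]       # conditions

# Roughly the Datalog rule "error(vm) :- nova:virtual_machine(vm)":
rule = Rule(
    head=[Literal("error", [Variable("vm")])],
    body=[Literal("nova:virtual_machine", [Variable("vm")])],
)
```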
I am not really sure how to proceed from here to see the policies
actually getting applied and monitored.

Any resources or instructions on working through the code flow would be
of great help in proceeding further.

Thanks in Advance,
Madhu Mohan


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-21 Thread Ihar Hrachyshka

On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.
 
 Salvatore
 
 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com 
 mailto:ihrac...@redhat.com wrote:
 
 Hi all,
 
 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.
 
 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing too much control over those features.
 
 Though the way it's to be implemented leaves several concerns, as 
 follows:
 
 1. From packaging perspective, having a separate repository and 
 tarballs seems suboptimal. As a packager, I would rather deal with
 a single tarball than two. Meaning, it would be better to
 keep the code in the same tree.
 
 I know that we're afraid of shipping the code for which some users
 may expect the usual level of support and stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used on your user's risk. 1) The
 experimental code wouldn't probably be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as the call it in Linux
 kernel world [2]).
 
 This would facilitate keeping commit history instead of losing it
 during graduation.
 
 Yes, I know that people don't like to be called experimental or 
 preview or incubator... And maybe neutron-labs repo sounds more 
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.
 
 
 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.
 

I wonder where discussion around the proposal is running. Is it public?

 
 2. If those 'extras' are really moved into a separate repository
 and tarballs, this will raise questions on whether packagers even
 want to cope with it before graduation. When it comes to supporting
 another build manifest for a piece of code of unknown quality, this
 is not the same as just cutting part of the code into a separate 
 experimental/labs package. So unless I'm explicitly asked to
 package the incubator, I wouldn't probably touch it myself. This is
 just too much effort (btw the same applies to moving plugins out of
 the tree - once it's done, distros will probably need to reconsider
 which plugins they really want to package; at the moment, those
 plugins do not require lots of time to ship them, but having ~20
 separate build manifests for each of them is just too hard to
 handle without clear incentive).
 
 
 One reason instead for moving plugins out of the main tree is
 allowing their maintainers to have full control over them. If
 there was a way with gerrit or similars to give somebody rights
 to merge code only on a subtree I probably would not even
 consider the option of moving plugin and drivers away. From my
 perspective it's not that I don't want them in the main tree,
 it's that I don't think it's fair for core team reviewers to take
 responsibility for approving code that they can't fully test
 (3rd-party CI helps, but is still far from having a decent level of
 coverage).
 

I agree with that. I actually think that moving vendor plugins outside
the main tree AND rearranging review permissions and obligations
should be extremely beneficial to the community. I'm totally for that,
as quickly as possible (Kilo, please!). Reviewers waste their time
reviewing plugins that in most cases are interesting to only a tiny
fraction of operators. Let the ones primarily interested in the good
quality of that code (vendors) drive its development. And if some
plugins become garbage, it's bad news for specific vendors; but if
Neutron suffers from a lack of concentration on core features and open
source plugins, everyone is doomed.

Of course, splitting vendor plugins into separate repositories will
make life of packagers a bit harder, but the expected benefits from
such move are huge, so - screw packagers on this one. :)

Though the way incubator is currently described in that proposal on
the wiki doesn't clearly imply similar benefits for the project, hence
concerns.

 
 
 3. The fact that neutron-incubator is not going to maintain any
 stable branches for security fixes and major failures concerns me
 too. In downstream, we don't generally ship the latest and greatest
 from PyPI. Meaning, we'll need to maintain our own downstream
 stable branches for major fixes. [BTW we already do that for python
 clients.]
 

Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-21 Thread Ihar Hrachyshka

On 21/08/14 08:33, loy wolfe wrote:
 It's also unfair that core team reviewers are forced to spend
 time on 3rd-party plugins and drivers under the existing process.
 There are so many 3rd-party networking backend technologies, from
 hardware to controllers; anyone can submit plugins and drivers to
 the tree, and on the principle of neutrality we can't accept some
 and refuse others' review requests. Then reviewers' time slots are
 full of this 3rd-party backend related work, leaving less time for
 the most important and urgent thing: improving the Neutron core
 architecture to the same mature level as Nova as soon as
 possible.
 

Don't get me wrong on this. I'm totally in favour of splitting plugins
into separate repos with dedicated (vendor?) core teams.

/Ihar



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-21 Thread Ihar Hrachyshka

On 21/08/14 02:03, Clark Boylan wrote:
 On Mon, Aug 18, 2014, at 01:59 AM, Ihar Hrachyshka wrote: On
 17/08/14 02:09, Angus Lees wrote:
 
 On 16 Aug 2014 06:09, Doug Hellmann d...@doughellmann.com
  mailto:d...@doughellmann.com wrote:
 
 
 On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka 
 ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Signed PGP part Some updates on the matter:
 
 - oslo-spec was approved with narrowed scope which is
 now 'enabled mysqlconnector as an alternative in gate'
 instead of 'switch the default db driver to
 mysqlconnector'. We'll revisit the switch part the next
 cycle once we have the new driver running in gate and
 real benchmarking is heavy-lifted.
 
 - there are several patches that are needed to make
 devstack and tempest passing deployment and testing.
 Those are collected under the hood of:
 https://review.openstack.org/#/c/114207/ Not much of
 them.
 
 - we'll need a new oslo.db release to bump versions (this
 is needed to set raise_on_warnings=False for the new
 driver, which was incorrectly set to True in sqlalchemy
 till very recently). This is expected to be released this
 month (as per Roman Podoliaka).
 
 This release is currently blocked on landing some changes
 in projects
 using the library so they don't break when the new version
 starts using different exception classes. We're tracking that
 work in 
 https://etherpad.openstack.org/p/sqla_exceptions_caught
 
 It looks like we?re down to 2 patches, one for cinder
 (https://review.openstack.org/#/c/111760/) and one for glance
  (https://review.openstack.org/#/c/109655). Roman, can you
 verify that those are the only two projects that need changes
 for the exception issue?
 
 
 - once the corresponding patch for sqlalchemy-migrate is 
 merged, we'll also need a new version released for this.
 
 So we're going for a new version of sqlalchemy?  (We have a 
 separate workaround for raise_on_warnings that doesn't
 require the new sqlalchemy release if this brings too many
 other issues)
 
 Wrong. We're going for a new version of *sqlalchemy-migrate*. Which
 is the code that we inherited from Mike and currently track in
 stackforge.
 
 
 - on PyPI side, no news for now. The last time I've heard
 from Geert (the maintainer of MySQL Connector for
 Python), he was working on this. I suspect there are some
 legal considerations running inside Oracle. I'll update
 once I know more about that.
 
 If we don't have the new package on PyPI, how do we plan
 to include it
 in the gate? Are there options to allow an exception, or to
 make the mirroring software download it anyway?
 
 We can test via devstack without waiting for pypi, since
 devstack will install via rpms/debs.
 
 I expect that it will be settled. I have no indication that the
 issue is unsolvable, it will just take a bit more time than we're
 accustomed to. :)
 
 At the moment, we install MySQLdb from distro packages for
 devstack. Same applies to new driver. It will be still great to see
 the package published on PyPI so that we can track its version
 requirements instead of relying on distros to package it properly.
 But I don't see it as a blocker.
 
 Also, we will probably be able to run with other drivers supported
 by SQLAlchemy once all the work is done.
 
 So I got bored last night and decided to take a stab at making
 PyMySQL work since I was a proponent of it earlier. Thankfully it
 did just mostly work like I thought it would. 
 https://review.openstack.org/#/c/115495/ is the WIP devstack
 change to test this out.

Great!

 
 Postgres tests fail because it was applying the pymysql driver to
 the postgres connection string. We can clean this up later in
 devstack and/or devstack-gate depending on how we need to apply
 this stuff. Bashate failed because I had to monkeypatch in a
 fix for a ceilometer issue loading sqlalchemy drivers. The
 tempest neutron full job fails on one test occasionally. Not sure
 yet if that is normal neutron full failure mode or if a new thing
 from PyMySQL. The regular tempest job passes just fine.
 
 There are also some DB related errors in the logs that will need
 to be cleaned up but overall it just works. So I would like to
 repropose that we stop focusing all of this effort on the hard
 thing and use the easy thing if it meets our needs. We can
 continue to make alternatives work, but that is a different
 problem that we can solve at a different pace. I am not sure how
 to test the neutron thing that Gus was running into though so we
 should also check that really quickly.

In our patches throughout the projects, we're actually not focusing on
any specific driver, even though the original spec is focused on MySQL
Connector. I still think we should achieve MySQL Connector working in
gate in the very near future. The current progress can be tracked at:

https://review.openstack.org/#/c/114207/

 
 Also, the tests themselves don't seem to run any faster or slower
 than 

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-21 Thread Ihar Hrachyshka

On 21/08/14 09:42, Endre Karlson wrote:
 Why pymysql over mysql-python?
 

http://specs.openstack.org/openstack/oslo-specs/specs/juno/enable-mysql-connector.html#problem-description



Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-21 Thread Thierry Carrez
Tim Bell wrote:
 Michael has been posting very informative blogs on the summary of the
 mid-cycle meetups for Nova. The one on the Nova Network to Neutron
 migration was of particular interest to me as it raises a number of
 potential impacts for the CERN production cloud. The blog itself is at
 http://www.stillhq.com/openstack/juno/14.html
 
 I would welcome suggestions from the community on the approach to take
 and areas that the nova/neutron team could review to limit the impact on
 the cloud users.
 
 For some background, CERN has been running nova-network in flat DHCP
 mode since our first Diablo deployment. We moved to production for our
 users in July last year and are currently supporting around 70,000
 cores, 6 cells, 100s of projects and thousands of VMs. Upgrades
 generally involve disabling the API layer while allowing running VMs to
 carry on without disruption. Within the time scale of the migration to
 Neutron (M release at the latest), these numbers are expected to double.

Thanks for bringing your concerns here. To start this discussion, it's
worth adding some context on the currently-proposed cold migration
path. During the Icehouse and Juno cycles the TC reviewed the gaps
between the integration requirements we now place on new entrants and
the currently-integrated projects. That resulted in a number of
identified gaps that we asked projects to address ASAP, ideally within
the Juno cycle.

Most of the Neutron gaps revolved around its failure to be a full
nova-network replacement -- some gaps around supporting basic modes of
operation, and a gap in providing a basic migration path. Neutron devs
promised to close that in Juno, but after a bit of discussion we
considered that a cold migration path was all we'd require them to
provide in Juno.

That doesn't mean a hot or warm migration path can't be worked on.
There are two questions to solve: how can we technically perform that
migration with a minimal amount of downtime, and is it reasonable to
mark nova-network deprecated until we've solved that issue.

On the first question, migration is typically an operational problem,
and operators could really help to design one that would be acceptable
to them. They may require developers to add features in the code to
support that process, but we seem to not even be at this stage. Ideally
I would like ops and devs to join to solve that technical challenge.

The answer to the second question lies in the multiple dimensions of
"deprecated".

On one side it means "no longer in our future plans; new usage is now
discouraged, new development is stopped, explore your options to migrate
out of it". I think it's extremely important that we do that as early as
possible, to reduce duplication of effort and set expectations correctly.

On the other side it means "will be removed in release X" (not
necessarily the next release, but you set a countdown). To do that, you
need to be pretty confident that you'll have your ducks in a row at
removal date, and don't set up operators for a nightmare migration.

 For us, the concerns we have with the ‘cold’ approach would be on the
 user impact and operational risk of such a change. Specifically,
 
 1.  A big-bang approach of shutting down the cloud, upgrading and then
 resuming it would cause significant user disruption
 
 2.  The risks involved with a cloud of this size and the open source
 network drivers would be difficult to mitigate through testing and could
 lead to site wide downtime
 
 3.  Rebooting VMs may be possible to schedule in batches but would
 need to be staggered to maintain availability levels

What minimal level of "hot" would be acceptable to you?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-21 Thread Giulio Fidente

On 08/20/2014 07:35 PM, Gregory Haynes wrote:

Excerpts from Derek Higgins's message of 2014-08-20 09:06:48 +:

On 19/08/14 20:58, Gregory Haynes wrote:

Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:

One last comment, maybe a bit OT but I'm raising it here to see what is
the other people opinion: how about we modify the -ha job so that at
some point we actually kill one of the controllers and spawn a second
user image?


I think this is a great long-term goal, but IMO performing an update
isn't really the type of verification we want for this kind of test. We
really should have some minimal tempest testing in place first so we can
verify that when these types of failures occur our cloud remains in a
functioning state.


Greg, you said "performing an update"; did you mean killing a controller
node?

If so I agree: verifying our cloud is still in working order with
tempest would get us more coverage than spawning a node. So once we have
tempest in place we can add a test to kill a controller node.



Ah, I misread the original message a bit, but it sounds like we're all on
the same page.


I don't see why we should wait for tempest to be added before
introducing the node-kill step.


I understand that tempest is the tool we need for a view of the overall
status, but today we rely on a small, short scenario: we boot a guest
from a volume and assign it a floating IP.


I think we can continue to rely on this and also introduce the node kill 
step, without interfering with the work needed to put tempest in the cycle.


--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-21 Thread Daniel P. Berrange
On Wed, Aug 20, 2014 at 03:17:40PM +, Tim Bell wrote:
 Michael has been posting very informative blogs on the summary of
 the mid-cycle meetups for Nova. The one on the Nova Network to
 Neutron migration was of particular interest to me as it raises a
 number of potential impacts for the CERN production cloud. The blog
 itself is at http://www.stillhq.com/openstack/juno/14.html

FWIW, I do *not* support the following policy statement written
there

  The current plan is to go forward with a cold upgrade path,
   unless a user comes forward with an absolute hard requirement
   for a live upgrade, and a plan to fund developers to work on it.

I think that saying that our users are responsible for providing or
identifying funding for live upgrades is user-hostile & unacceptable.
If we as a dev team want to take away major features that our users
currently rely on in production, and the users then determine (& tell
us) that the proposed upgrade path is not practical, then it is the
*dev team's* responsibility to figure out how to address that, not the
users'.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Thierry Carrez
Jay Pipes wrote:
 I don't believe the Programs are needed, as they are currently
 structured. I don't really believe they serve any good purposes, and
 actually serve to solidify positions of power, slanted towards existing
 power centers, which is antithetical to a meritocratic community.

Let me translate that, considering programs are just teams of people...
You're still OK with the concept of teams of people working toward a
common goal, but you don't think blessing some teams serves any good
purpose. Is that right? (if yes, see below for more on what that
actually means).

 [...]
 If we want to follow your model, we probably would have to dissolve
 programs as they stand right now, and have blessed categories on one
 side, and teams on the other (with projects from some teams being
 blessed as the current solution).
 
 Why do we have to have blessed categories at all? I'd like to think of
 a day when the TC isn't picking winners or losers at all. Level the
 playing field and let the quality of the projects themselves determine
 the winner in the space. Stop the incubation and graduation madness and
 change the role of the TC to instead play an advisory role to upcoming
 (and existing!) projects on the best ways to integrate with other
 OpenStack projects, if integration is something that is natural for the
 project to work towards.

I'm still trying to wrap my head around what you actually propose here.
Do you just want to get rid of incubation? Or do you want to get rid of
the whole "integrated release" concept: the idea that we collectively
apply effort around a limited set of projects to make sure they are
delivered in an acceptable fashion (on a predictable schedule, following
roughly the same rules, with some amount of integrated features, some
amount of test coverage, some amount of documentation...)?

Because I still think there is a whole lot of value in that. I don't
think our mission is to be the SourceForge of cloud projects. Our
mission is to *produce* the ubiquitous Open Source Cloud Computing
platform. There must be some amount of opinionated choices there.

Everything else in our structure derives from that. If we have an
integrated release, we need to bless a set of projects that will be part
of it (graduation). We need to single out promising projects so that we
mentor them on the common rules they will have to follow there (incubation).

Now there are bad side-effects we need to solve, like the idea that
incubation and integration are steps on an OpenStack ecosystem "holy
ladder" that every project should aspire to climb.

 That would leave the horizontal programs like Docs, QA or Infra,
 where the team and the category are the same thing, as outliers again
 (like they were before we did programs).
 
 What is the purpose of having these programs, though? If it's just to
 have a PTL, then I think we need to reconsider the whole concept of
 Programs. [...]

The main purpose of programs (or official teams) is that being part of
one of them gives you the right to participate in electing the Technical
Committee, and as a result places you under its authority. Both parties
have to agree to be placed under that contract, which is why teams have
to apply (we can't force them), and the TC has to accept (they can't
force us).

Programs have *nothing* to do with PTLs, which are just a convenient way
to solve potential decision deadlocks in teams (insert your favorite
dysfunctional free software project example here). We could get rid of
the PTL concept (to replace them for example with a set of designated
liaisons) and we would still have programs (teams) and projects (the
code repos that team is working on).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Sean Dague
On 08/20/2014 02:37 PM, Jay Pipes wrote:
 On 08/20/2014 11:41 AM, Zane Bitter wrote:
 On 19/08/14 10:37, Jay Pipes wrote:

 By graduating an incubated project into the integrated release, the
 Technical Committee is blessing the project as the OpenStack way to do
 some thing. If there are projects that are developed *in the OpenStack
 ecosystem* that are actively being developed to serve the purpose that
 an integrated project serves, then I think it is the responsibility of
 the Technical Committee to take another look at the integrated project
 and answer the following questions definitively:

   a) Is the Thing that the project addresses something that the
 Technical Committee believes the OpenStack ecosystem benefits from by
 the TC making a judgement on what is the OpenStack way of addressing
 that Thing.

 and IFF the decision of the TC on a) is YES, then:

   b) Is the Vision and Implementation of the currently integrated
 project the one that the Technical Committee wishes to continue to
 bless as the OpenStack way of addressing the Thing the project
 does.

 I disagree with part (b); projects are not code - projects, like Soylent
 Green, are people.
 
 Hey! Don't steal my slide content! :P
 
 http://bit.ly/navigating-openstack-community (slide 3)
 
 So it's not critical that the implementation is the
 one the TC wants to bless; what's critical is that the right people are
 involved to get to an implementation that the TC would be comfortable
 blessing over time. For example, everyone agrees that Ceilometer has
 room for improvement, but any implication that the Ceilometer is not
 interested in or driving towards those improvements (because of NIH or
 whatever) is, as has been pointed out, grossly unfair to the Ceilometer
 team.
 
 I certainly have not made such an implication about Ceilometer. What I
 see in the Ceilometer space, though, is that there are clearly a number
 of *active* communities of OpenStack engineers developing code that
 crosses similar problem spaces. I think the TC blessing one of those
 communities before the market has had a chance to do a bit more
 natural filtering of quality is a barrier to innovation. I think having
 all of those separate teams able to contribute code to an openstack/
 code namespace and naturally work to resolve differences and merge
 innovation is a better fit for a meritocracy.

I think the other thing that's been discovered in the metering space is
it's not just an engineering problem with the bulk of the hard stuff
already figured out. This problem actually is really hard to get right,
especially when performance and overhead are key.

By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team. That has a
trade off cost. It means if we believe that Ceilometer is fundamentally
the right architecture but just needs a bit of polish, that's the right
call. It's telling people to just get with the program. But it seems
right now we don't think that's the case. And "we" includes a bunch of
folks in Ceilometer. As evidenced by a bunch of rearchitecture going on.
Which is fine, it's a hard problem, as evidenced by the fact that there
are a ton of open source projects in the general area.

But by blessing a team, and saddling them with an existing architecture
that no one loves, we're actually making it a lot harder to come up with
a final best in class thing in this slot in the OpenStack universe. The
Ceilometer team has to live within the upgrade constraints, for
instance. They have API stability requirements applied to them. The
entire set of requirements of a project once integrated does impose a
tax on the rate the team can change the project so that stable contracts
are kept up.

Honestly, I don't want this to be about the stigma of kicking something
out, but more about opening up the freedom and flexibility to explore
this space, which has shown itself to be a hard one. I don't want to question
that anyone isn't working hard here, because I absolutely think the
teams doing this are. But I also think that cracking this nut of high
performance metering on a large scale is tough, and only made tougher by
having to go after that solution while also staying within the bounds of
acceptable integrated project evolution.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [neutron]Performance of security group

2014-08-21 Thread Édouard Thuleau
Nice job! That's awesome.

Thanks,
Édouard.


On Thu, Aug 21, 2014 at 8:02 AM, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

 Thank you shihanzhang!,

 I can't believe I didn't realize the ipset part spec was accepted I live
 on my own bubble... I will be reviewing and testing/helping on that part
 too during the next few days,  I was too concentrated in the RPC part.


 Best regards,

 - Original Message -
  hi neutroner!
  my patch about BP:
 
 https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
  needs ipset installed in devstack; I have committed the patch:
  https://review.openstack.org/#/c/113453/, who can help me review it,
 thanks
  very much!
 
  Best regards,
  shihanzhang
 
 
 
 
  At 2014-08-21 10:47:59, Martinx - ジェームズ thiagocmarti...@gmail.com
 wrote:
 
 
 
  +1 NFTablesDriver!
 
  Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example:
  https://home.regit.org/2014/02/suricata-and-nftables/
 
  Then, I'm wondering here... What benefits might come for OpenStack Nova /
  Neutron, if it comes with a NFTables driver, instead of the current
  IPTables?!
 
  * Efficient Security Group design?
  * Better FWaaS, maybe with NAT(44/66) support?
  * Native support for IPv6, with the defamed NAT66 built-in, simpler
 Floating
  IP implementation, for both v4 and v6 networks under a single
  implementation ( I don't like NAT66, I prefer a `routed Floating IPv6`
  version ) ?
  * Metadata over IPv6 still using NAT(66) ( I don't like NAT66 ), single
  implementation?
  * Suricata-as-a-Service?!
 
  It sounds pretty cool! :-)
 
 
  On 20 August 2014 23:16, Baohua Yang  yangbao...@gmail.com  wrote:
 
 
 
  Great!
  We met similar problems.
  The current mechanisms produce too many iptables rules, and it's hard to
  debug.
  Really look forward to seeing a more efficient security group design.
 
 
  On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery 
 mest...@noironetworks.com 
  wrote:
 
 
 
  On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang  ayshihanzh...@126.com 
 wrote:
  
   With the deployment 'nova + neutron + openvswitch', when we bulk create
   about 500 VMs with a default security group, the CPU usage of
   neutron-server and the openvswitch agent is very high; the openvswitch
   agent in particular reaches 100% CPU, and this causes VM creation to fail.
  
   With the methods discussed on the mailing list:
  
   1) ipset optimization ( https://review.openstack.org/#/c/100761/ )
  
   3) sg rpc optimization (with fanout)
   ( https://review.openstack.org/#/c/104522/ )
  
   I have implemented these two schemes in my deployment. When we again
   bulk create about 500 VMs with a default security group, the CPU usage
   of the openvswitch agent drops to 10% or even lower, so I think the
   improvement from these two options is significant.
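
  For concreteness, here is a rough sketch of what the ipset optimization
  changes at the iptables level. The chain and set names below are
  illustrative, not Neutron's actual ones, and the commands require root:

```shell
# Before: one iptables rule per remote security-group member -- hundreds
# of rules for a large group, all re-rendered on every membership change.
iptables -A sg-chain-demo -s 10.0.0.5/32 -j RETURN
iptables -A sg-chain-demo -s 10.0.0.6/32 -j RETURN
# ... one rule per member ...

# After: a single rule matching a named ipset; a membership change is a
# cheap "ipset add/del" instead of a full iptables rebuild.
ipset create sg-members-demo hash:ip
ipset add sg-members-demo 10.0.0.5
ipset add sg-members-demo 10.0.0.6
iptables -A sg-chain-demo -m set --match-set sg-members-demo src -j RETURN
```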
  
   Who can help us to review our spec?
  
  This is great work! These are on my list of things to review in detail
  soon, but given the Neutron sprint this week, I haven't had time yet.
  I'll try to remedy that by the weekend.
 
  Thanks!
  Kyle
 
   Best regards,
   shihanzhang
  
  
  
  
  
   At 2014-07-03 10:08:21, Ihar Hrachyshka  ihrac...@redhat.com 
 wrote:
  
  Oh, so you have the enhancement implemented? Great! Any numbers that
  shows how much we gain from that?
  
  /Ihar
  
  On 03/07/14 02:49, shihanzhang wrote:
   Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
   I will modify my spec; when the spec is approved, I will commit the
   code as soon as possible!
  
  
  
  
  
   At 2014-07-02 10:12:34, Miguel Angel Ajo  majop...@redhat.com 
   wrote:
  
   Nice Shihanzhang,
  
   Do you mean the ipset implementation is ready, or just the
   spec?
  
  
   For the SG group refactor, I don't worry about who does it, or
   who takes the credit, but I believe it's important we address
   this bottleneck during Juno trying to match nova's scalability.
  
   Best regards, Miguel Ángel.
  
  
   On 07/02/2014 02:50 PM, shihanzhang wrote:
   hi Miguel Ángel and Ihar Hrachyshka, I agree with you that we should
   split the work into several specs. I have finished the work
   (ipset optimization); you can do 'sg rpc optimization (without
   fanout)'. As for the third part (sg rpc optimization (with fanout)),
   I think we need to talk about it, because just using ipset to
   optimize the security group agent code does not bring the best
   results!
  
   Best regards, shihanzhang.
  
  
  
  
  
  
  
  
   At 2014-07-02 04:43:24, Ihar Hrachyshka  ihrac...@redhat.com 
   wrote:
   On 02/07/14 10:12, Miguel Angel Ajo wrote:
  
   Shihazhang,
  
   I really believe we need the RPC refactor done for this cycle,
   given the close deadlines we have (July 10 for spec submission
   and July 20 for spec approval).
  
   Don't you think it's going to be better to split the work in
   several specs?
  
   1) ipset optimization (you) 2) sg rpc optimization (without
   

[openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Daniel P. Berrange
Tagged with '[nova]' but this might be relevant data / idea for other
teams too.

With my code contributor hat on, one of the things that I find most
frustrating about the Nova code review process is that a patch can get a +2
vote from one core team member and then sit around for days, weeks, even
months without getting a second +2 vote, even if it has no negative
feedback at all and is a simple & important bug fix.

If a patch is good enough to have received one +2 vote, then compared to
the open patches as a whole, this patch is much more likely to be one
that is ready for approval & merge. It will likely be easier to review,
since it can be assumed other reviewers have already caught the majority
of the silly / tedious / time consuming bugs.

Letting these patches languish with a single +2 for too long makes it very
likely that, when a second core reviewer finally appears, there will be a
merge conflict or other bit-rot that will cause it to have to undergo yet
another rebase & re-review. This wastes the time of both our contributors
and our review team.

On this basis I suggest that core team members should consider patches
that already have a +2 to be high(er) priority items to review than open
patches as a whole.

Currently Nova has (on master branch)

  - 158 patches which have at least one +2 vote, and are not approved
  - 122 patches which have at least one +2 vote, are not approved and
don't have any -1 code review votes.

So that's 122 patches that should be easy candidates for merging right
now. Another 30 can possibly be merged depending on whether the core
reviewer agrees with the -1 feedback given or not.

That is way more patches than we should have outstanding in that state.
It is not unreasonable to say that once a patch has a single +2 vote, we
should aim to get either a second +2 vote or further -1 review feedback
in a matter of days, and certainly no longer than a week.

If everyone on the core team looked at the list of potentially approvable
patches each day I think it would significantly improve our throughput.
It would also decrease the amount of review work overall by reducing the
chance that patches bitrot & need a rebase for merge conflicts. And most
importantly of all it will give our code contributors a better impression
that we care about them.

As an added carrot, working through this list will be an effective way
to improve your rankings [1] against other core reviewers, not that I
mean to suggest we should care about rankings over review quality ;-P

The next version of gerrymander[2] will contain a new command to allow
core reviewers to easily identify these patches

   $ gerrymander todo-approvable -g nova --branch master

This will of course filter out patches which you yourself own since you
can't approve your own work. It will also filter out patches which you
have given feedback on already. What's left will be a list of patches
where you are able to apply the casting +2 vote to get to +A state.
If the '--strict' arg is added it will also filter out any patches which
have a -1 code review comment.
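
The filtering described above can be sketched in a few lines. The dict
layout here is an assumption made for illustration; gerrymander's real
data model and Gerrit queries differ:

```python
def potentially_approvable(patches, strict=False):
    """Pick open patches carrying at least one +2 code review but no
    approval; with strict=True, also drop anything with a -1 review."""
    picked = []
    for patch in patches:
        if patch["approved"]:
            continue                      # already has +A, nothing to do
        if 2 not in patch["votes"]:
            continue                      # no +2 yet, not a priority pick
        if strict and -1 in patch["votes"]:
            continue                      # --strict: skip contested patches
        picked.append(patch["id"])
    return picked

patches = [
    {"id": 101, "votes": [2], "approved": False},      # easy candidate
    {"id": 102, "votes": [2, -1], "approved": False},  # contested
    {"id": 103, "votes": [1], "approved": False},      # no +2 yet
    {"id": 104, "votes": [2, 2], "approved": True},    # already approved
]
print(potentially_approvable(patches))               # [101, 102]
print(potentially_approvable(patches, strict=True))  # [101]
```

The two calls mirror the default and '--strict' behaviours of the
command shown above.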

Regards,
Daniel

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] 
https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Chris Dent

On Thu, 21 Aug 2014, Sean Dague wrote:


By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team.


This is a big part of this conversation that really confuses me. Who is
that one team?

I don't think it is that team that is being blessed, it is that
project space. That project space ought, if possible, to have a team
made up of anyone who is interested. Within that umbrella both
the competition and cooperation that everyone wants can happen.

You're quite right Sean, there is a lot of gravity that comes from
needing to support and slowly migrate the existing APIs. That takes
up quite a lot of resources. It doesn't mean, however, that other
resources can't work on substantial improvements in cooperation with
the rest of the project. Gnocchi and the entire V3 concept in
ceilometer are a good example of this. Some folk are working on that
and some folk are working on maintaining and improving the old
stuff.

Some participants in this thread seem to be saying "give someone else a
chance". Surely nobody needs to be given the chance, they just need
to join the project and make some contributions? That is how this is
supposed to work isn't it?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Sean Dague
FWIW, this is one of my normal morning practices, and the reason that
that query is part of most of the gerrit dashboards -
https://github.com/stackforge/gerrit-dash-creator/blob/master/dashboards/compute-program.dash

On 08/21/2014 06:57 AM, Daniel P. Berrange wrote:
 Tagged with '[nova]' but this might be relevant data / idea for other
 teams too.
 
 With my code contributor hat on, one of the things that I find most
 frustrating about the Nova code review process is that a patch can get a +2
 vote from one core team member and then sit around for days, weeks, even
 months without getting a second +2 vote, even if it has no negative
 feedback at all and is a simple & important bug fix.
 
 If a patch is good enough to have received one +2 vote, then compared to
 the open patches as a whole, this patch is much more likely to be one
 that is ready for approval & merge. It will likely be easier to review,
 since it can be assumed other reviewers have already caught the majority
 of the silly / tedious / time consuming bugs.
 
 Letting these patches languish with a single +2 for too long makes it very
 likely that, when a second core reviewer finally appears, there will be a
 merge conflict or other bit-rot that will cause it to have to undergo yet
 another rebase & re-review. This wastes the time of both our contributors
 and our review team.
 
 On this basis I suggest that core team members should consider patches
 that already have a +2 to be high(er) priority items to review than open
 patches as a whole.
 
 Currently Nova has (on master branch)
 
   - 158 patches which have at least one +2 vote, and are not approved
   - 122 patches which have at least one +2 vote, are not approved and
 don't have any -1 code review votes.
 
 So that's 122 patches that should be easy candidates for merging right
 now. Another 30 can possibly be merged depending on whether the core
 reviewer agrees with the -1 feedback given or not.
 
 That is way more patches than we should have outstanding in that state.
 It is not unreasonable to say that once a patch has a single +2 vote, we
 should aim to get either a second +2 vote or further -1 review feedback
 in a matter of days, and certainly no longer than a week.
 
 If everyone on the core team looked at the list of potentially approvable
 patches each day I think it would significantly improve our throughput.
 It would also decrease the amount of review work overall by reducing the
 chance that patches bitrot & need a rebase for merge conflicts. And most
 importantly of all it will give our code contributors a better impression
 that we care about them.
 
 As an added carrot, working through this list will be an effective way
 to improve your rankings [1] against other core reviewers, not that I
 mean to suggest we should care about rankings over review quality ;-P
 
 The next version of gerrymander[2] will contain a new command to allow
 core reviewers to easily identify these patches
 
$ gerrymander todo-approvable -g nova --branch master
 
 This will of course filter out patches which you yourself own since you
 can't approve your own work. It will also filter out patches which you
 have given feedback on already. What's left will be a list of patches
 where you are able to apply the casting +2 vote to get to +A state.
 If the '--strict' arg is added it will also filter out any patches which
 have a -1 code review comment.
 
 Regards,
 Daniel
 
 [1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
 [2] 
 https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7
 


-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Neutron] How to handle blocking bugs/changes in Neutron 3rd party CI

2014-08-21 Thread Dane Leblanc (leblancd)
That makes sense for setups that don’t use Zuul.

But for setups using Zuul/Jenkins, and for a vendor who is introducing a new 
plugin which has initial hardware-enabling commits which haven’t been merged 
yet, I don’t see how we can meet Neutron 3rd party testing requirements. The 
requirements and the tools just seem to be at odds in this situation.

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Thursday, August 21, 2014 3:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] How to handle blocking bugs/changes in 
Neutron 3rd party CI

I'm not sure if this is possible with a Zuul setup, but once we identify a
failure-causing commit, we change the reported job status to "(skipped)" for
any patches that contain the commit but not the fix. It's a relatively 
straightforward way to communicate that the CI system is still operational but 
voting was intentionally bypassed. This logic is handled in the script that 
determines and posts the results of the tests to gerrit.
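
The decision described here boils down to a tiny helper in the script
that posts results to Gerrit. A minimal sketch, assuming made-up commit
hashes and an illustrative function shape (not any real CI tool's API):

```python
# Known-bad commit that breaks the 3rd-party CI, and the commit that
# fixes it. Both hashes are invented for illustration.
BAD_COMMIT = "deadbeef"
FIX_COMMIT = "cafef00d"

def job_status(commits_in_patch, tests_passed):
    """Decide what to report to Gerrit for one tested patch."""
    if BAD_COMMIT in commits_in_patch and FIX_COMMIT not in commits_in_patch:
        # CI is operational, but voting is intentionally bypassed.
        return "(skipped)"
    return "SUCCESS" if tests_passed else "FAILURE"

print(job_status(["deadbeef"], tests_passed=True))              # (skipped)
print(job_status(["deadbeef", "cafef00d"], tests_passed=True))  # SUCCESS
```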

On Wed, Aug 20, 2014 at 3:27 PM, Dane Leblanc (leblancd) 
lebla...@cisco.com wrote:
Preface: I posed this problem on the #openstack-infra IRC, and they couldn't 
offer an easy or obvious solution, and suggested that I get some consensus from 
the Neutron community as to how we want to handle this situation. So I'd like 
to bounce this around, get some ideas, and maybe bring this up in the 3rd party 
CI IRC.

The challenge is this: Occasionally, a blocking bug is introduced which causes 
our 3rd party CI tests to consistently fail on every change set that we're 
testing against. We can develop a fix for the problem, but until that fix gets 
merged upstream, tests against all other change sets are seen to fail.

(Note that we have a similar situation whenever we introduce a completely new 
plugin with its associated 3rd party CI... until the plugin code, or an 
enabling subset of that plugin code is merged upstream, then typically all 
other commits would fail on that CI setup.)

In the past, we've tried dynamically patching the fix(es) on top of the fetched 
code being reviewed, but this isn't always reliable due to merge conflicts, and 
we've had to monkey patch DevStack to apply the fixes after cloning Neutron but 
before installing Neutron.

So we'd prefer to enter a "throttled" or "filtering" CI mode when we hit this
situation, where we're (temporarily) only testing against commits related to 
our plugin/driver which contain (or have a dependency on) the fix for the 
blocking bug until the fix is merged.

In an ideal world, for the sake of transparency, we would love to be able to
have Jenkins/Zuul report back to Gerrit with a descriptive test result such as
"N/A", "Not tested", or even "Aborted" for all other change sets, letting the
committer know: "Yeah, we see your review, but we're unable to test it at
the moment." Zuul does have the ability to report "Aborted" status to Gerrit,
but this is sent e.g. when Zuul decides to abort change set 'N' for a review 
when change set 'N+1' has just been submitted, or when a Jenkins admin manually 
aborts a Jenkins job.  Unfortunately, this type of status is not available 
programmatically within a Jenkins job script; the only outcomes are pass (zero 
RC) or fail (non-zero RC). (Note that we can't directly filter at the Zuul 
level in our topology, since we have one Zuul server servicing multiple 3rd 
party CI setups.)

As a second option, we'd like to not run any tests for the other changes, and 
report NOTHING to Gerrit, while continuing to run against changes related to 
our plugin (as required for the plugin changes to be approved).  This was the 
favored approach discussed in the Neutron IRC on Monday. But herein lies the 
rub. By the time our Jenkins job script discovers that the change set that is 
being tested is not in a list of preferred/allowed change sets, the script has 
2 options: pass or fail. With the current Jenkins, there is no programmatic way 
for a Jenkins script to signal to Gearman/Zuul that the job should be aborted.
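
To make the bind concrete, here is a minimal sketch of the allow-list
check a job script can do today. ZUUL_CHANGE is the variable Zuul
exports for the change under test; the file name and change numbers are
illustrative stand-ins:

```shell
# Simulate the allow-list check a 3rd-party CI job script can do today.
# Whichever branch runs, the script can only hand Gearman/Zuul exit 0
# (reported as pass) or non-zero (reported as fail) -- there is no exit
# code meaning "aborted" or "not tested".
printf '%s\n' 113453 100761 > allowed_changes.txt   # changes we can test
ZUUL_CHANGE=104522                                  # change under review

if grep -qx "$ZUUL_CHANGE" allowed_changes.txt; then
    echo "RUN: execute the real test suite and exit with its result"
else
    echo "SKIP wanted, but only a false pass (exit 0) or false fail (exit 1) is possible"
fi
```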

There was supposedly a bug filed with Jenkins to allow it to interpret 
different exit codes from job scripts as different result values, but this 
hasn't made any progress.

There may be something that can be changed in Zuul to allow it to interpret 
different result codes other than success/fail, or maybe to allow Zuul to do 
change ID filtering on a per Jenkins job basis, but this would require the 
infra team to make changes to Zuul.

The bottom line is that based on the current Zuul/Jenkins infrastructure, 
whenever our 3rd party CI is blocked by a bug, I'm struggling with the 
conflicting requirements:
* Continue testing against change sets for the blocking bug (or plugin related 
changes)
* Don't report anything to Gerrit for all other change sets, since these can't 
be meaningfully tested against the CI hardware

Let me know if I'm missing a solution to this. I 

Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-21 Thread Tim Bell

On 21 Aug 2014, at 12:38, Thierry Carrez thie...@openstack.org wrote:

 Tim Bell wrote:
 Michael has been posting very informative blogs on the summary of the
 mid-cycle meetups for Nova. The one on the Nova Network to Neutron
 migration was of particular interest to me as it raises a number of
 potential impacts for the CERN production cloud. The blog itself is at
 http://www.stillhq.com/openstack/juno/14.html
 
 I would welcome suggestions from the community on the approach to take
 and areas that the nova/neutron team could review to limit the impact on
 the cloud users.
 
 For some background, CERN has been running nova-network in flat DHCP
 mode since our first Diablo deployment. We moved to production for our
 users in July last year and are currently supporting around 70,000
 cores, 6 cells, 100s of projects and thousands of VMs. Upgrades
 generally involve disabling the API layer while allowing running VMs to
 carry on without disruption. Within the time scale of the migration to
 Neutron (M release at the latest), these numbers are expected to double.
 
 Thanks for bringing your concerns here. To start this discussion, it's
 worth adding some context on the currently-proposed cold migration
 path. During the Icehouse and Juno cycles the TC reviewed the gaps
 between the integration requirements we now place on new entrants and
 the currently-integrated projects. That resulted in a number of
 identified gaps that we asked projects to address ASAP, ideally within
 the Juno cycle.
 
 Most of the Neutron gaps revolved around its failure to be a full
 nova-network replacement -- some gaps around supporting basic modes of
 operation, and a gap in providing a basic migration path. Neutron devs
 promised to close that in Juno, but after a bit of discussion we
 considered that a cold migration path was all we'd require them to
 provide in Juno.
 
 That doesn't mean a hot or warm migration path can't be worked on.
 There are two questions to solve: how can we technically perform that
 migration with a minimal amount of downtime, and is it reasonable to
 mark nova-network deprecated until we've solved that issue.
 
 On the first question, migration is typically an operational problem,
 and operators could really help to design one that would be acceptable
 to them. They may require developers to add features in the code to
 support that process, but we seem to not even be at this stage. Ideally
 I would like ops and devs to join to solve that technical challenge.
 
 The answer to the second question lies in the multiple dimensions of
 "deprecated".
 
 On one side it means "is no longer in our future plans, new usage is now
 discouraged, new development is stopped, explore your options to migrate
 out of it". I think it's extremely important that we do that as early as
 possible, to reduce duplication of effort and set expectations correctly.
 
 On the other side it means "will be removed in release X" (not
 necessarily the next release, but you set a countdown). To do that, you
 need to be pretty confident that you'll have your ducks in a row at
 removal date, and don't set up operators for a nightmare migration.
 
 For us, the concerns we have with the 'cold' approach would be on the
 user impact and operational risk of such a change. Specifically,
 
 1.  A big bang approach of shutting down the cloud, upgrading, and then
 resuming the cloud would cause significant user disruption
 
 2.  The risks involved with a cloud of this size and the open source
 network drivers would be difficult to mitigate through testing and could
 lead to site wide downtime
 
 3.  Rebooting VMs may be possible to schedule in batches but would
 need to be staggered to keep availability levels
 
 What minimal level of hot would be acceptable to you ?
 

I am wary of using phrases like "not acceptable" as they tend to lead to very 
binary discussions :-)

We could consider rebooting VMs. We would much rather not have to. Rebooting 
all at once would cause major difficulties.

Staggering the VM migrations would allow us to significantly reduce the risk as 
we could pause in the event of an operational issue. My assumption is that 
rollback would be a major development effort so I prefer a way to progress with 
caution.

Renumbering IPs of VMs would be painful also.

I think, as you say, a small team of developers and operators with this need 
can sit down to find the right balance between a simple migration and an 
implementation which does not require infinite development effort.

Since there is an upcoming Ops meet up next week in San Antonio (Michael S 
thought he would attend), I can suggest to Tom that he gets some volunteers and 
then we discuss further in Paris.

I'm all in favour of early announcements of deprecations so that we can start 
to work this through with the community. I'd also like to not leave it too late 
as we are adding new VMs and hypervisors all the time and so the scale 
challenges will increase.


Re: [openstack-dev] oslo.db 0.4.0 released

2014-08-21 Thread Doug Hellmann

On Aug 20, 2014, at 7:38 AM, Victor Sergeyev vserge...@mirantis.com wrote:

 Hello Folks!
 
 Oslo team is pleased to announce the new Oslo database handling library 
 release - oslo.db 0.4.0
 Thanks all for contributions to this release.
 
 Feel free to report issues using the launchpad tracker: 
 https://bugs.launchpad.net/oslo and mark them with ``db`` tag.
 
 See the full list of changes:
 
 $ git log --oneline --no-merges 0.3.0..0.4.0
 ee176a8 Implement a dialect-level function dispatch system
 6065b21 Move to oslo.utils
 deeda38 Restore correct source file encodings
 4dde38b Handle DB2 SmallInteger type for change_deleted_column_type_to_boolean
 4c18fca Imported Translations from Transifex
 69f16bf Fixes comments to pass E265 check.
 e1dbd31 Fixes indentations to pass E128 check.
 423c17e Uses keyword params for i18n string to pass H703
 3cb5927 Adds empty line to multilines docs to pass H405
 0996c5d Updates one line docstring with dot to pass H402
 a3ca010 Changes import orders to pass H305 check
 584a883 Fixed DeprecationWarning in exc_filters
 fc2fc90 Imported Translations from Transifex
 3b17365 oslo.db.exceptions module documentation
 c919585 Updated from global requirements
 4685631 Extension of DBDuplicateEntry exception
 7cb512c oslo.db.options module documentation
 c0d9f36 oslo.db.api module documentation
 93d95d4 Imported Translations from Transifex
 e83e4ca Use SQLAlchemy cursor execute events for tracing
 d845a16 Remove sqla_07 from tox.ini
 9722ab6 Updated from global requirements
 3bf8941 Specify raise_on_warnings=False for mysqlconnector
 1814bf8 Make MySQL regexes generic across MySQL drivers
 62729fb Allow tox tests with complex OS_TEST_DBAPI_CONNECTION URLs
 a9e3af2 Raise DBReferenceError on foreign key violation
 b69899e Add host argument to get_connect_string()
 9a6aa50 Imported Translations from Transifex
 f817555 Don't drop pre-existing database before tests
 4499da7 Port _is_db_connection_error check to exception filters
 9d5ab2a Integrate the ping listener into the filter system.
 cbae81e Add disconnect modification support to exception handling
 0a6c8a8 Implement new exception interception and filtering layer
 69a4a03 Implement the SQLAlchemy ``handle_error()`` event.
 f96deb8 Remove moxstubout.py from oslo.db
 7d78e3e Added check for DB2 deadlock error
 2df7e88 Bump hacking to version 0.9.2
 c34c32e Opportunistic migration tests
 108e2bd Move all db exception to exception.py
 35afdf1 Enable skipped tests from test_models.py
 e68a53b Use explicit loops instead of list comprehensions
 44e96a8 Imported Translations from Transifex
 817fd44 Allow usage of several iterators on ModelBase
 baf30bf Add DBDuplicateEntry detection for mysqlconnector driver
 4796d06 Check for mysql_sql_mode is not None in create_engine()
 01b916c remove definitions of Python Source Code Encoding
 
 Thanks,
 Victor

I want to point out one of the changes in that long long list for special 
attention: "Implement new exception interception and filtering layer". This and 
a few related changes mean that the new version of oslo.db uses a consistent 
set of exceptions, no matter which database backend is in use. That means 
applications no longer need different imports or logic for catching 
exceptions from MySQL, PostgreSQL, SQLite, etc. 

The team held back from releasing the library until they could update a few 
places in applications to catch the new exceptions to ensure that the new 
library release didn't break anyone.
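The backend-agnostic exception pattern Doug highlights looks like this from an application's point of view. A self-contained sketch (the DBDuplicateEntry name matches the changelog above, but the class here is a local stand-in so the example runs without oslo.db installed, and create_user is invented for illustration):

```python
# Stand-in for oslo.db.exception.DBDuplicateEntry, defined locally so the
# sketch runs without oslo.db; with the real library, the filtering layer
# raises this same class for duplicate-key errors on every backend.
class DBDuplicateEntry(Exception):
    def __init__(self, columns=None):
        super().__init__("duplicate entry")
        self.columns = columns or []


def create_user(name, existing):
    # Hypothetical application-level insert: MySQL's IntegrityError 1062,
    # PostgreSQL's unique_violation and SQLite's "UNIQUE constraint failed"
    # would all surface here as the one DBDuplicateEntry exception.
    if name in existing:
        raise DBDuplicateEntry(columns=["name"])
    existing.add(name)
    return name


users = set()
create_user("alice", users)
try:
    create_user("alice", users)
except DBDuplicateEntry as e:
    # One except clause, regardless of which database backend is in use.
    result = "duplicate on columns: %s" % ",".join(e.columns)
print(result)  # duplicate on columns: name
```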

Nice work, everyone!

Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-21 Thread Doug Hellmann

On Aug 20, 2014, at 2:43 PM, Pendergrass, Eric eric.pendergr...@hp.com wrote:

 Hi Ryan,
 
 We tried globally applying the hook but could not get execution to enter the
 hook class. 
 
 Perhaps we made a mistake, but we concluded the Controller still had to
 inherit from HookController using the project-wide method.  Otherwise we
 would have been satisfied applying it project-wide.

Ceilometer already has a couple of other hooks installed (for example, to 
provide the database handle) and those do work. I assume you followed them as 
examples?

Doug

 
 Thanks,
 Eric
 
 Eric,
 
 
 Doug's correct - this looks like a bug in pecan that occurs when you
 subclass both rest.RestController and hooks.HookController.  I'm working on
 a bug fix as we speak.  In the meantime, have you tried applying hooks at a
 global application level?  This approach should still work.
 
 On 08/14/14 04:38 PM, Pendergrass, Eric wrote:
 Sure, Doug.  We want the ability to selectively apply policies to 
 certain Ceilometer API methods based on user/tenant roles.
 
 For example, we want to restrict the ability to execute Alarm deletes 
 to admins and user/tenants who have a special role, say domainadmin.
 
 The policy file might look like this:
  {
      "context_is_admin": [["role:admin"]],
      "admin_and_matching_project_domain_id": [["role:domainadmin"]],
      "admin_or_cloud_admin": [["rule:context_is_admin"],
                               ["rule:admin_and_matching_project_domain_id"]],
      "telemetry:delete_alarms": [["rule:admin_or_cloud_admin"]]
  }
 
 The current acl.py and _query_to_kwargs access control setup either 
 sets project_id scope to None (do everything) or to the project_id in 
 the request header 'X-Project-Id'.  This allows for admin or project 
 scope, but nothing in between.
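For readers unfamiliar with the list-of-lists policy syntax quoted above: the outer list is an OR and each inner list is an AND. A toy evaluator (illustrative only, not oslo's actual policy engine, and handling just the role: and rule: checks used in the example) makes the semantics concrete:

```python
# Evaluate the legacy list-of-lists policy language: the outer list matches
# if ANY inner clause matches; an inner clause matches only if ALL of its
# checks do.  Only "role:X" and "rule:Y" checks are supported here.
def check(expr, rules, creds):
    kind, _, value = expr.partition(":")
    if kind == "rule":
        return matches(rules[value], rules, creds)
    if kind == "role":
        return value in creds.get("roles", [])
    return False

def matches(or_of_ands, rules, creds):
    return any(all(check(e, rules, creds) for e in clause)
               for clause in or_of_ands)

rules = {
    "context_is_admin": [["role:admin"]],
    "admin_and_matching_project_domain_id": [["role:domainadmin"]],
    "admin_or_cloud_admin": [["rule:context_is_admin"],
                             ["rule:admin_and_matching_project_domain_id"]],
    "telemetry:delete_alarms": [["rule:admin_or_cloud_admin"]],
}

rule = rules["telemetry:delete_alarms"]
print(matches(rule, rules, {"roles": ["domainadmin"]}))  # True
print(matches(rule, rules, {"roles": ["member"]}))       # False
```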
 
 We tried hooks.  Unfortunately we can't seem to turn the API 
 controllers into HookControllers just by adding HookController to the 
 Controller class definition.  It causes infinite recursion on API 
 startup.  For example, this doesn't work because ceilometer-api will 
 not start with it:
class MetersController(rest.RestController, HookController):
 
 If there was a way to use hooks with the v2. API controllers that 
 might work really well.
 
 So we are left using the @secure decorator and deriving the method 
 name from the request environ PATH_INFO and REQUEST_METHOD values.  
 This is how we determine the wrapped method within the class 
 (REQUEST_METHOD + PATH_INFO = telemetry:delete_alarms with some 
 munging).  We need the method name in order to selectively apply access 
 control to certain methods.
 
 Deriving the method this way isn't ideal but it's the only thing we've 
 gotten working between hooks, @secure, and regular decorators.
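The munging Eric describes, mapping REQUEST_METHOD plus PATH_INFO onto a rule name like telemetry:delete_alarms, could look roughly like this (the verb table, helper name, and path handling are assumptions for illustration, not Ceilometer's actual code):

```python
# Derive a policy rule name from the WSGI environ, in the spirit of
# "REQUEST_METHOD + PATH_INFO = telemetry:delete_alarms with some munging".
_VERBS = {"GET": "get", "POST": "create", "PUT": "update", "DELETE": "delete"}

def rule_from_environ(environ):
    verb = _VERBS.get(environ["REQUEST_METHOD"], "get")
    # e.g. PATH_INFO "/v2/alarms" -> resource "alarms"
    resource = environ["PATH_INFO"].rstrip("/").split("/")[-1]
    return "telemetry:%s_%s" % (verb, resource)

print(rule_from_environ({"REQUEST_METHOD": "DELETE",
                         "PATH_INFO": "/v2/alarms"}))
# telemetry:delete_alarms
```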
 
 I submitted a WIP BP here: https://review.openstack.org/#/c/112137/3.  
 It is slightly out of date but should give you a better idea of our
 goals.
 
 Thanks
 
 Eric,
 
 If you can give us some more information about your end goal, 
 independent
 of the implementation, maybe we can propose an alternate technique to 
 achieve the same thing.
 
 Doug
 
 On Aug 12, 2014, at 6:21 PM, Ryan Petrello 
 ryan.petre...@dreamhost.com
 wrote:
 
 Yep, you're right, this doesn't seem to work.  The issue is that 
 security is enforced at routing time (while the controller is 
 still actually being discovered).  In order to do this sort of 
 thing with the `check_permissions`, we'd probably need to add a
 feature to pecan.
 
 On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
 Sure, here's the decorated method from v2.py:
 
   class MetersController(rest.RestController):
       """Works on meters."""

       @pecan.expose()
       def _lookup(self, meter_name, *remainder):
           return MeterController(meter_name), remainder

       @wsme_pecan.wsexpose([Meter], [Query])
       @secure(RBACController.check_permissions)
       def get_all(self, q=None):
 
 and here's the decorator called by the secure tag:
 
   class RBACController(object):
       global _ENFORCER
       if not _ENFORCER:
           _ENFORCER = policy.Enforcer()

       @classmethod
       def check_permissions(cls):
           # do some stuff
 
  In check_permissions I'd like to know the class and method with the
  @secure tag that caused check_permissions to be invoked. In this
  case, that would be MetersController.get_all.
 
 Thanks
 
 
  Can you share some code?  What do you mean by "is there a way for the
  decorator code to know it was called by MetersController.get_all"?
 
 On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
 Thanks Ryan, but for some reason the controller attribute is
 None:
 
  (Pdb) from pecan.core import state
  (Pdb) state.__dict__
  {'hooks': [<ceilometer.api.hooks.ConfigHook object at 0x31894d0>,
  <ceilometer.api.hooks.DBHook object at 0x3189650>,
  <ceilometer.api.hooks.PipelineHook object at 0x39871d0>,
  <ceilometer.api.hooks.TranslationHook object at 0x3aa5510>],
  'app': <pecan.core.Pecan object at 0x2e76390>,
  'request': <Request at 0x3ed7390 GET 

Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-21 Thread Doug Hellmann
+1 to going back to file locks, too. We can keep the current scheme under a 
different API in the module for anyone that wants to use it explicitly, but I 
think at this point it's better to have something that works reliably when 
configured properly as the default.

I hope we can switch that to tooz/zookeeper in a future release, but we'll need 
some more discussion before making that change.

Doug

On Aug 20, 2014, at 3:29 PM, Davanum Srinivas dava...@gmail.com wrote:

 Ben, +1 to the plan you outlined.
 
 -- dims
 
 On Wed, Aug 20, 2014 at 4:13 PM, Ben Nemec openst...@nemebean.com wrote:
 On 08/20/2014 01:03 PM, Vishvananda Ishaya wrote:
 This may be slightly off-topic but it is worth mentioning that the use of 
 threading.Lock[1]
 which was included to make the locks thread safe seems to be leading to a 
 deadlock in eventlet[2].
 It seems like we have rewritten this too many times in order to fix minor 
 pain points and are
 adding risk to a very important component of the system.
 
 [1] https://review.openstack.org/#/c/54581
 [2] https://bugs.launchpad.net/nova/+bug/1349452
 
 This is pretty much why I'm pushing to just revert to the file locking
 behavior we had up until a couple of months ago, rather than
 implementing some new shiny lock thing that will probably cause more
 subtle issues in the consuming projects.  It's become clear to me that
 lockutils is too deeply embedded in the other projects, and there are
 too many implementation details that they rely on, to make significant
 changes to its default code path.
 
 
 On Aug 18, 2014, at 2:05 PM, Pádraig Brady p...@draigbrady.com wrote:
 
 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:
 
 Hi Yuriy,
 
 […]
 
 Looking forward to your opinions.
 
 This looks like a good summary of the situation.
 
 I've added a solution E based on pthread, but didn't get very far about
 it for now.
 
 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.
 
 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.
 
 Pádraig.
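For readers following along, the fcntl approach Pádraig recommends boils down to something like this sketch (the class name and with-statement API are illustrative, not lockutils' actual code; fcntl locks are POSIX-only):

```python
import fcntl
import os
import tempfile

# The lock is tied to the file descriptor, so it is released automatically
# when the fd is closed or the process dies (no stale locks to clean up),
# and it coordinates across separate processes sharing the same lock file.
class InterProcessLock(object):
    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.lockf(self.fd, fcntl.LOCK_EX)   # blocks until acquired
        return self

    def __exit__(self, *exc):
        fcntl.lockf(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None

lock_path = os.path.join(tempfile.gettempdir(), "demo.lock")
with InterProcessLock(lock_path) as lock:
    held = lock.fd is not None
print(held)  # True
```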
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Davanum Srinivas :: http://davanum.wordpress.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-21 Thread Ladislav Smola

On 08/20/2014 11:30 AM, Dougal Matthews wrote:

- Original Message -

From: Derek Higgins der...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, 20 August, 2014 10:15:51 AM
Subject: Re: [openstack-dev] [TripleO] Change of meeting time

On 24/05/14 01:21, James Polley wrote:

Following a lengthy discussion under the subject "Alternating meeting
time for more TZ friendliness", the TripleO meeting now alternates
between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
better coverage across Australia, India, China, Japan, and the other
parts of the world that found it impossible to get to our previous
meeting time.

Raising a point that came up on this morning's IRC meeting:

A lot (most?) of the people at this morning's meeting were based in
western Europe, getting up earlier than usual for the meeting (me
included). When daylight saving kicks in it might push them past the
threshold. Would an hour later (0800 UTC) work better for people, or is
the current time what fits best?

I'll try to make the meeting regardless of whether it's moved or not, but an
hour later would certainly make it a little more palatable.

+1, I don't have a strong preference, but an hour later would make it a
bit easier, particularly when DST kicks in.

Dougal


+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Matt Riedemann



On 8/21/2014 7:09 AM, Sean Dague wrote:

FWIW, this is one of my normal morning practices, and the reason that
that query is part of most of the gerrit dashboards -
https://github.com/stackforge/gerrit-dash-creator/blob/master/dashboards/compute-program.dash

On 08/21/2014 06:57 AM, Daniel P. Berrange wrote:

Tagged with '[nova]' but this might be relevant data / idea for other
teams too.

With my code contributor hat on, one of the things that I find most
frustrating about the Nova code review process is that a patch can get a +2
vote from one core team member and then sit around for days, weeks, even
months without getting a second +2 vote, even if it has no negative
feedback at all and is a simple & important bug fix.

If a patch is good enough to have received one +2 vote, then compared to
the open patches as a whole, this patch is much more likely to be one
that is ready for approval & merge. It will likely be easier to review,
since it can be assumed other reviewers have already caught the majority
of the silly / tedious / time consuming bugs.

Letting these patches languish with a single +2 for too long makes it very
likely that, when a second core reviewer finally appears, there will be a
merge conflict or other bit-rot that will cause it to have to undergo yet
another rebase & re-review. This is wasting the time of both our contributors
and our review team.

On this basis I suggest that core team members should consider patches
that already have a +2 to be high(er) priority items to review than open
patches as a whole.

Currently Nova has (on master branch)

   - 158 patches which have at least one +2 vote, and are not approved
   - 122 patches which have at least one +2 vote, are not approved and
 don't have any -1 code review votes.

So that's 122 patches that should be easy candidates for merging right
now. Another 30 can possibly be merged depending on whether the core
reviewer agrees with the -1 feedback given or not.

That is way more patches than we should have outstanding in that state.
It is not unreasonable to say that once a patch has a single +2 vote, we
should aim to get either a second +2 vote or further -1 review feedback
in a matter of days, and certainly no longer than a week.
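The filter behind those counts is straightforward to express. A sketch over illustrative review records (the dict shape is a stand-in for real Gerrit query output, and --strict mirrors gerrymander's flag described below):

```python
# From open reviews, pick those with at least one +2 code review and no
# approval; with strict=True, also drop anything carrying a -1.
def approvable(reviews, strict=False):
    out = []
    for r in reviews:
        votes = r["code_review_votes"]
        if 2 not in votes or r["approved"]:
            continue
        if strict and -1 in votes:
            continue
        out.append(r["id"])
    return out

reviews = [
    {"id": "I1", "code_review_votes": [2], "approved": False},
    {"id": "I2", "code_review_votes": [2, -1], "approved": False},
    {"id": "I3", "code_review_votes": [2], "approved": True},
]
print(approvable(reviews))                # ['I1', 'I2']
print(approvable(reviews, strict=True))   # ['I1']
```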

If everyone on the core team looked at the list of potentially approvable
patches each day I think it would significantly improve our throughput.
It would also decrease the amount of review work overall by reducing
chance that patches bitrot & need rebase for merge conflicts. And most
importantly of all it will give our code contributors a better impression
that we care about them.

As an added carrot, working through this list will be an effective way
to improve your rankings [1] against other core reviewers, not that I
mean to suggest we should care about rankings over review quality ;-P

The next version of gerrymander[2] will contain a new command to allow
core reviewers to easily identify these patches

$ gerrymander todo-approvable -g nova --branch master

This will of course filter out patches which you yourself own since you
can't approve your own work. It will also filter out patches which you
have given feedback on already. What's left will be a list of patches
where you are able to apply the casting +2 vote to get to +A state.
If the '--strict' arg is added it will also filter out any patches which
have a -1 code review comment.

Regards,
Daniel

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] 
https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah:

https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Jay Pipes

On 08/21/2014 07:58 AM, Chris Dent wrote:

On Thu, 21 Aug 2014, Sean Dague wrote:


By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team.


This is a big part of this conversation that really confuses me. Who is
that one team?

I don't think it is that team that is being blessed, it is that
project space. That project space ought, if possible, have a team
made up of anyone who is interested. Within that umbrella both
the competition and cooperation that everyone wants can happen.

You're quite right Sean, there is a lot of gravity that comes from
needing to support and slowly migrate the existing APIs. That takes
up quite a lot of resources. It doesn't mean, however, that other
resources can't work on substantial improvements in cooperation with
the rest of the project. Gnocchi and the entire V3 concept in
ceilometer are a good example of this. Some folk are working on that
and some folk are working on maintaining and improving the old
stuff.

Some participants in this thread seem to be saying "give someone else a
chance". Surely nobody needs to be given the chance; they just need
to join the project and make some contributions? That is how this is
supposed to work isn't it?


Specifically for Ceilometer, many of the folks working on alternate 
implementations have contributed or are actively contributing to 
Ceilometer. Some have stopped contributing because of fundamental 
disagreements about the appropriateness of the Ceilometer architecture. 
Others have begun working on Gnocchi to address design issues, and 
others have joined efforts on Monasca, and others have continued work on 
Stacktach. Eoghan has done an admirable job of informing the TC about 
goings on in the Ceilometer community and being forthright about the 
efforts around Gnocchi. And there isn't any perceived animosity between 
the aforementioned contributor subteams. The point I've been making is 
that by the TC continuing to bless only the Ceilometer project as the 
OpenStack Way of Metering, I think we do a disservice to our users by 
picking a winner in a space that is clearly still unsettled.


Specifically for Triple-O, by making the Deployment program == Triple-O, 
the TC has picked the disk-image-based deployment of an undercloud 
design as The OpenStack Way of Deployment. And as I've said previously 
in this thread, I believe that the deployment space is similarly 
unsettled, and that it would be more appropriate to let the Chef 
cookbooks and Puppet modules currently sitting in the stackforge/ code 
namespace live in the openstack/ code namespace.


I recommended getting rid of the formal Program concept because I didn't 
think it was serving any purpose other than solidifying existing power 
centers and was inhibiting innovation by sending the signal of blessed 
teams/projects, instead of sending a signal of inclusion.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-21 Thread Nejc Saje

Yesterday, doc builds started failing sporadically in Ceilometer gate.

http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkVSUk9SOiBJbnZvY2F0aW9uRXJyb3I6ICcvaG9tZS9qZW5raW5zL3dvcmtzcGFjZS9nYXRlLWNlaWxvbWV0ZXItZG9jcy8udG94L3ZlbnYvYmluL3B5dGhvbiBzZXR1cC5weSBidWlsZF9zcGhpbngnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDg2MjQwMjcyMDF9

Can someone with more Sphinx-fu than me figure out why Sphinx is using a 
wsme extension where there is no wsme code? (The build fails when 
processing the ceilometer.alarm module; example of a successful build: 
https://jenkins02.openstack.org/job/gate-ceilometer-docs/4412/consoleFull)


Cheers,
Nejc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 08:26:29AM -0500, Matt Riedemann wrote:
 
 
 On 8/21/2014 7:09 AM, Sean Dague wrote:
 FWIW, this is one of my normal morning practices, and the reason that
 that query is part of most of the gerrit dashboards -
 https://github.com/stackforge/gerrit-dash-creator/blob/master/dashboards/compute-program.dash
 
 
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

That should really sort the changes so the oldest one is shown first rather
than the most recently changed one; otherwise stuff that is waiting the
longest is least likely to be seen & processed - particularly as it is
truncating the list at 50 changes and we have 100+ pending. It ought to
filter out ones with a +A on them already too

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Sylvain Bauza


On 21/08/2014 13:57, Daniel P. Berrange wrote:

Tagged with '[nova]' but this might be relevant data / idea for other
teams too.

With my code contributor hat on, one of the things that I find most
frustrating about the Nova code review process is that a patch can get a +2
vote from one core team member and then sit around for days, weeks, even
months without getting a second +2 vote, even if it has no negative
feedback at all and is a simple & important bug fix.

If a patch is good enough to have received one +2 vote, then compared to
the open patches as a whole, this patch is much more likely to be one
that is ready for approval & merge. It will likely be easier to review,
since it can be assumed other reviewers have already caught the majority
of the silly / tedious / time consuming bugs.

Letting these patches languish with a single +2 for too long makes it very
likely that, when a second core reviewer finally appears, there will be a
merge conflict or other bit-rot that will cause it to have to undergo yet
another rebase & re-review. This is wasting the time of both our contributors
and our review team.

On this basis I suggest that core team members should consider patches
that already have a +2 to be high(er) priority items to review than open
patches as a whole.

Currently Nova has (on master branch)

   - 158 patches which have at least one +2 vote, and are not approved
   - 122 patches which have at least one +2 vote, are not approved and
 don't have any -1 code review votes.

So that's 122 patches that should be easy candidates for merging right
now. Another 30 can possibly be merged depending on whether the core
reviewer agrees with the -1 feedback given or not.

That is way more patches than we should have outstanding in that state.
It is not unreasonable to say that once a patch has a single +2 vote, we
should aim to get either a second +2 vote or further -1 review feedback
in a matter of days, and certainly no longer than a week.

If everyone on the core team looked at the list of potentially approvable
patches each day I think it would significantly improve our throughput.
It would also decrease the amount of review work overall by reducing
chance that patches bitrot & need rebase for merge conflicts. And most
importantly of all it will give our code contributors a better impression
that we care about them.

As an added carrot, working through this list will be an effective way
to improve your rankings [1] against other core reviewers, not that I
mean to suggest we should care about rankings over review quality ;-P

The next version of gerrymander[2] will contain a new command to allow
core reviewers to easily identify these patches

$ gerrymander todo-approvable -g nova --branch master

This will of course filter out patches which you yourself own since you
can't approve your own work. It will also filter out patches which you
have given feedback on already. What's left will be a list of patches
where you are able to apply the casting +2 vote to get to +A state.
If the '--strict' arg is added it will also filter out any patches which
have a -1 code review comment.
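The filtering logic behind 'todo-approvable' can be pictured roughly as
follows. This is a sketch over a simplified review structure; the field
layout is illustrative, not gerrymander's internals or Gerrit's real JSON
schema.

```python
# Rough sketch of the "todo-approvable" filter described above, over a
# simplified review structure. The field layout is illustrative, not
# gerrymander's internals or Gerrit's real JSON schema.

def todo_approvable(changes, me, strict=False):
    """Return the changes on which I can cast the deciding +2."""
    result = []
    for change in changes:
        if change["owner"] == me:
            continue                      # can't approve my own work
        votes = change["code_review"]     # {reviewer: vote}
        if me in votes:
            continue                      # I already gave feedback
        if 2 not in votes.values():
            continue                      # needs an existing +2
        if change["approved"]:
            continue                      # already has +A
        if strict and -1 in votes.values():
            continue                      # drop contested patches
        result.append(change["id"])
    return result

changes = [
    {"id": "I1", "owner": "alice", "code_review": {"bob": 2}, "approved": False},
    {"id": "I2", "owner": "me", "code_review": {"bob": 2}, "approved": False},
    {"id": "I3", "owner": "carol", "code_review": {"bob": 2, "dan": -1},
     "approved": False},
    {"id": "I4", "owner": "carol", "code_review": {"bob": 1}, "approved": False},
]

print(todo_approvable(changes, "me"))               # ['I1', 'I3']
print(todo_approvable(changes, "me", strict=True))  # ['I1']
```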

Regards,
Daniel

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] 
https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7


Strong +1 here, for 2 reasons:
 - We make sure the direction taken is agreed on by at least 2 people
 - When it goes to the gate, there is less risk of a merge failure

That thread makes me think about a blog post I just read about code
reviews, really worth reading:

http://swreflections.blogspot.fr/2014/08/dont-waste-time-on-code-reviews.html

-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread gordon chung
 b) Assuming there should be one:

 * Where should it go? Presumably it needs to be an attribute of
   each sample because as agents leave and join the group, where
   samples are published from can change.

is this just for debugging purposes or auditing? from an audit standpoint,
whenever an event/meter/whatever is handled within a system, it should be
captured. so in CADF[1] and i assume any other auditing standard out there,
when a resource such as a publisher in the pipeline creates the sample, it
should add a reporter attribute noting that it was who created it, and that
would be captured in the final sample/event.

[1] http://docs.openstack.org/developer/pycadf/

cheers,
gord
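The reporter chain gord describes could look roughly like this: every
component that handles a sample appends an entry identifying itself, so the
final sample records where it was published from even as agents join and
leave the group. Field names here are illustrative, not the real
pycadf/ceilometer schema.

```python
# Sketch of the reporter chain idea: each handler appends an entry
# identifying itself to the sample. Field names are illustrative, not
# the real pycadf/ceilometer schema.
import datetime

def add_reporter(sample, agent_id, role):
    sample.setdefault("reporterchain", []).append({
        "reporter_id": agent_id,
        "role": role,                      # e.g. "creator" or "publisher"
        "reported_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return sample

sample = {"meter": "cpu_util", "volume": 42.0, "resource_id": "vm-1"}
add_reporter(sample, "compute-agent-host1", "creator")
add_reporter(sample, "pipeline-publisher-0", "publisher")

print([r["reporter_id"] for r in sample["reporterchain"]])
# ['compute-agent-host1', 'pipeline-publisher-0']
```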


[openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-21 Thread Dmitry Pyzhov
Fuelers,

Our upgrade tarball for 5.1 is more than 4.5 GB. We can reduce its size by
2 GB with the lrzip tool (ticket
https://bugs.launchpad.net/fuel/+bug/1356813, change
in build system https://review.openstack.org/#/c/114201/, change in docs
https://review.openstack.org/#/c/115331/), but it will dramatically
increase unpacking time. I've run unpack on my virtualbox environment and
got this result:
[root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
Decompressing...
100%7637.48 /   7637.48 MB
Average DeCompression Speed:  8.014MB/s
[OK] - 8008478720 bytes
Total time: 00:15:52.93

My suggestion is to reject this change, release 5.1 with the big tarball, and
find another solution in the next release. Any objections?
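A back-of-envelope check using the numbers above (2 GB saved vs ~16 minutes
of extra unpacking; it ignores the time a plain tar extraction itself takes,
so it slightly favours lrzip):

```python
# Back-of-envelope from the numbers above: the lrzip tarball downloads
# ~2 GB less but takes ~16 minutes longer to unpack.
saved_bytes = 2 * 1024**3            # ~2 GB smaller download
extra_unpack_s = 15 * 60 + 52.93     # reported decompression time

breakeven_MBps = saved_bytes / extra_unpack_s / 1024**2
print(f"break-even download speed: {breakeven_MBps:.1f} MB/s")
```

By that rough estimate, on any link faster than about 2 MB/s (~17 Mbit/s)
the big tarball is quicker end-to-end, which supports rejecting the change;
on slower links the smaller download still wins.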


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-21 Thread Mike Scherbakov
What are other possible solutions to this issue?


On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Fuelers,

 Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce it size by
 2Gb with lrzip tool (ticket https://bugs.launchpad.net/fuel/+bug/1356813,
 change in build system https://review.openstack.org/#/c/114201/, change
 in docs https://review.openstack.org/#/c/115331/), but it will
 dramatically increase unpacking time. I've run unpack on my virtualbox
 environment and got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

 My suggestion is to reject this change, release 5.1 with big tarball and
 find another solution in next release. Any objections?





-- 
Mike Scherbakov
#mihgen


[openstack-dev] [Neutron] Use public IP address as instance fixed IP

2014-08-21 Thread Bao Wang
I have a very complex OpenStack deployment for NFV. It cannot be deployed
as flat networking and will have a lot of isolated private networks. Some
interfaces of a group of VM instances will need a bridged network, using
their fixed IP addresses to communicate with the outside world, while other
interfaces on the same VMs should stay isolated with genuinely private
fixed IP addresses. What happens if we use public IP addresses directly as
fixed IPs on those interfaces? Will this work with OpenStack Neutron
networking? Will OpenStack do NAT automatically on those?

Overall, the requirement is to use the fixed/public IP to communicate with
the outside world directly on some interfaces of some VM instances while
keeping others private. Floating IPs are not an option here.


[openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Madhu Mohan
Hi,

I am quite new to Congress and OpenStack as well, and this question may
seem very trivial and basic.

I am trying to figure out the policy enforcement logic.

Can somebody help me understand how exactly a policy enforcement action
is taken.

From the example policy, there is an action defined as:

    action(disconnect_network)
    nova:network-(vm, network) :- disconnect_network(vm, network)

I assume that this statement, when applied, would translate to the deletion
of an entry in the database.

But how does this affect the actual setup? That is, how is this database
update translated into an actual disconnection of the VM from the network?
How does Nova know that it has to disconnect the VM from the network?
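One way to picture the missing link being asked about, as a toy model
rather than Congress's actual machinery: the policy engine computes which
rows the policy removed from the nova:network table and invokes the bound
action for each one; the action body is where a real engine would call the
Nova API.

```python
# Toy model of policy enforcement: deleting a row from the table triggers
# the bound disconnect_network action. This is a sketch, not Congress's
# actual machinery; the Nova call is replaced by appending to a list.

def disconnect_network(vm, network, calls):
    calls.append(("disconnect", vm, network))      # stand-in for a Nova call

def enforce(current_rows, desired_rows, calls):
    for vm, network in current_rows - desired_rows:
        disconnect_network(vm, network, calls)
    return desired_rows

current = {("vm1", "netA"), ("vm2", "netB")}
desired = {("vm2", "netB")}                        # policy: vm1 must disconnect
calls = []
current = enforce(current, desired, calls)
print(calls)   # [('disconnect', 'vm1', 'netA')]
```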

Thanks and Regards,
Madhu Mohan


Re: [openstack-dev] [neutron]Performance of security group

2014-08-21 Thread Kyle Mestery
This is great work! Looking forward to seeing this get reviewed and
merged in Juno!

Kyle

On Thu, Aug 21, 2014 at 6:49 AM, Édouard Thuleau thul...@gmail.com wrote:
 Nice job! That's awesome.

 Thanks,
 Édouard.


 On Thu, Aug 21, 2014 at 8:02 AM, Miguel Angel Ajo Pelayo
 mangel...@redhat.com wrote:

 Thank you shihanzhang!,

 I can't believe I didn't realize the ipset part spec was accepted I live
 on my own bubble... I will be reviewing and testing/helping on that part
 too during the next few days,  I was too concentrated in the RPC part.


 Best regards,

 - Original Message -
  hi neutroner!
  my patch about BP:
 
  https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
  need install ipset in devstack, I have commit the patch:
  https://review.openstack.org/#/c/113453/, who can help me review it,
  thanks
  very much!
 
  Best regards,
  shihanzhang
 
 
 
 
  At 2014-08-21 10:47:59, Martinx - ジェームズ thiagocmarti...@gmail.com
  wrote:
 
 
 
  +1 NFTablesDriver!
 
  Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example:
  https://home.regit.org/2014/02/suricata-and-nftables/
 
  Then, I'm wondering here... What benefits might come for OpenStack Nova
  /
  Neutron, if it comes with a NFTables driver, instead of the current
  IPTables?!
 
  * Efficient Security Group design?
  * Better FWaaS, maybe with NAT(44/66) support?
  * Native support for IPv6, with the defamed NAT66 built-in, simpler
  Floating
  IP implementation, for both v4 and v6 networks under a single
  implementation ( I don't like NAT66, I prefer a `routed Floating IPv6`
  version ) ?
  * Metadata over IPv6 still using NAT(66) ( I don't like NAT66 ), single
  implementation?
  * Suricata-as-a-Service?!
 
  It sounds pretty cool! :-)
 
 
  On 20 August 2014 23:16, Baohua Yang  yangbao...@gmail.com  wrote:
 
 
 
  Great!
  We met similar problems.
  The current mechanisms produce too many iptables rules, and it's hard to
  debug.
  Really look forward to seeing a more efficient security group design.
 
 
  On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery 
  mest...@noironetworks.com 
  wrote:
 
 
 
  On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang  ayshihanzh...@126.com 
  wrote:
  
   With the deployment 'nova + neutron + openvswitch', when we bulk
   create
   about 500 VM with a default security group, the CPU usage of
   neutron-server
   and openvswitch agent is very high, especially the CPU usage of
   openvswitch
   agent will be 100%, this will cause creating VMs failed.
  
   With the method discussed in mailist:
  
   1) ipset optimization ( https://review.openstack.org/#/c/100761/ )
  
   3) sg rpc optimization (with fanout)
   ( https://review.openstack.org/#/c/104522/ )
  
   I have implement these two scheme in my deployment, when we again bulk
   create about 500 VM with a default security group, the CPU usage of
   openvswitch agent will reduce to 10%, even lower than 10%, so I think
   the
   improvement from these two options is very significant.
  
   Who can help us to review our spec?
  
  This is great work! These are on my list of things to review in detail
  soon, but given the Neutron sprint this week, I haven't had time yet.
  I'll try to remedy that by the weekend.
 
  Thanks!
  Kyle
 
   Best regards,
   shihanzhang
  
  
  
  
  
   At 2014-07-03 10:08:21, Ihar Hrachyshka  ihrac...@redhat.com 
   wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
  
  Oh, so you have the enhancement implemented? Great! Any numbers that
  shows how much we gain from that?
  
  /Ihar
  
  On 03/07/14 02:49, shihanzhang wrote:
   Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
   I will modify my spec, when the spec is approved, I will commit the
   code as soon as possible!
  
  
  
  
  
   At 2014-07-02 10:12:34, Miguel Angel Ajo  majop...@redhat.com 
   wrote:
  
   Nice Shihanzhang,
  
   Do you mean the ipset implementation is ready, or just the
   spec?.
  
  
   For the SG group refactor, I don't worry about who does it, or
   who takes the credit, but I believe it's important we address
   this bottleneck during Juno trying to match nova's scalability.
  
   Best regards, Miguel Ángel.
  
  
   On 07/02/2014 02:50 PM, shihanzhang wrote:
   hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
   split the work in several specs, I have finished the work (
   ipset optimization), you can do 'sg rpc optimization (without
   fanout)'. as the third part(sg rpc optimization (with fanout)),
   I think we need talk about it, because just using ipset to
   optimize security group agent codes does not bring the best
   results!
  
   Best regards, shihanzhang.
  
  
  
  
  
  
  
  
   At 2014-07-02 04:43:24, Ihar Hrachyshka  ihrac...@redhat.com 
   wrote:
   On 02/07/14 10:12, Miguel Angel Ajo wrote:
  
   Shihazhang,
  
   I really believe we need the RPC refactor done for this cycle,
   and given the close deadlines we have (July 10 for spec
   submission 
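For context, the core of the ipset optimization discussed in this thread is
replacing one iptables rule per remote member IP with a single rule matching
a kernel set. A schematic illustration (rule text is simplified, not the
agent's exact output):

```python
# Schematic of the ipset optimization: without it, the security group chain
# needs one iptables rule per remote member IP; with it, a single rule
# matches a named kernel set and membership changes only touch the set.

def rules_without_ipset(member_ips):
    return [f"-A sg-chain -s {ip}/32 -j RETURN" for ip in member_ips]

def rules_with_ipset(set_name):
    return [f"-A sg-chain -m set --match-set {set_name} src -j RETURN"]

members = [f"10.0.0.{i}" for i in range(1, 251)]   # a 250-VM group
print(len(rules_without_ipset(members)))           # 250 rules to rebuild
print(len(rules_with_ipset("NETIPv4sg")))          # 1 rule
```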

Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Jay Pipes

On 08/20/2014 11:54 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

...snip

We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said Only MySQL will be supported, that would not
mean that the greater OpenStack community would be served better. It
would just unnecessarily take options away from deployers.


This is really where supported becomes the mutex binding us all. The
more supported options, the larger the matrix, the more complex a
user's decision process becomes.


I don't believe this is necessarily true.

A large chunk of users of OpenStack will deploy their cloud using one of 
the OpenStack distributions -- RDO, Ubuntu OpenStack, MOS, or one of the 
OpenStack appliances. For these users, they will select the options that 
their distribution offers (or makes for them).


Another chunk of users of OpenStack will deploy their cloud using things 
like the Chef cookbooks or Puppet modules on stackforge. For these 
users, they will select the options that the writers of those Puppet 
modules or Chef cookbooks have wired into the module or cookbook.


Another chunk of users of OpenStack will deploy their cloud by following 
the upstream installation documentation. This documentation currently 
focuses on the integrated projects, and so these users would only be 
deploying the projects that contributed excellent documentation and 
worked with distributors and packagers to make the installation and use 
of their project as easy as possible.


So, I think there is an argument to be made that packagers and deployers 
would have more decisions to make, but not necessarily end-users of 
OpenStack.



   If every component has several competing implementations and

none of them are official how many more interaction issues are going
to trip us up?


IMO, OpenStack should be about choice. Choice of hypervisor, choice of
DB and MQ infrastructure, choice of operating systems, choice of storage
vendors, choice of networking vendors.


Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.


I believe by picking winners in unsettled spaces, we add more to the 
confusion of users than having 1 option for doing something.



What we shouldn't do is harm that 1%'s ability to be successful. We should
foster it and help it grow, but we don't just pull it into the program and
say You're ALSO in OpenStack now!


I haven't been proposing that these competing projects would be in 
OpenStack now. I have been proposing that these projects live in the 
openstack/ code namespace, as these projects are 100% targeting 
OpenStack installations and users, and they are offering options to 
OpenStack deployers.


I hate the fact that the TC is deciding what is OpenStack.

IMO, we should be instead answering questions like does project X solve 
problem Y for OpenStack users? and can the design of project A be 
adapted to pull in good things from project B? and where can we advise 
project M to put resources that would most benefit OpenStack users?.


 and we also don't want to force those

users to make a hard choice because the better solution is not blessed.


But users are *already* forced to make these choices. They make these 
choices by picking an OpenStack distribution, or by necessity of a 
certain scale, or by their experience and knowledge base of a particular 
technology. Blessing one solution when there are multiple valid 
solutions does not suddenly remove the choice for users.



If there are multiple actively-developed projects that address the same
problem space, I think it serves our OpenStack users best to let the
projects work things out themselves and let the cream rise to the top.
If the cream ends up being one of those projects, so be it. If the cream
ends up being a mix of both projects, so be it. The production community
will end up determining what that cream should be based on what it
deploys into its clouds and what input it supplies to the teams working
on competing implementations.


I'm really not a fan of making it a competitive market. If a space has a
diverse set of problems, we can expect it will have a diverse set of
solutions that overlap. But that doesn't mean they both need to drive
toward making that overlap all-encompassing. Sometimes that happens and
it is good, and sometimes that happens and it causes horrible bloat.


Yes, I recognize the danger that choice brings. I just am more 
optimistic than you about our ability to handle choice. :)



And who knows... what works or is recommended by one deployer may not be
what is best for another type of deployer and I believe we (the

Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-21 Thread Dmitry Pyzhov
I see no other quick solutions for 5.1. We can find the difference in
packages between 5.0 and 5.0.2, put only the updated packages in the
tarball, and get the missing packages from existing repos on the master node.
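The delta-tarball idea above can be sketched as: ship only packages that
are new or upgraded between releases and rely on the repos already present
on the master node for the rest. Package names and versions here are made
up.

```python
# Sketch of the delta-tarball idea: compute the set of packages that are
# new or upgraded between two releases. Data is illustrative.

old = {"nova": "2014.1.0", "neutron": "2014.1.0", "glance": "2014.1.0"}
new = {"nova": "2014.1.2", "neutron": "2014.1.0", "glance": "2014.1.1",
       "ceilometer": "2014.1.1"}

delta = {name: ver for name, ver in new.items() if old.get(name) != ver}
print(sorted(delta))  # ['ceilometer', 'glance', 'nova']
```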


On Thu, Aug 21, 2014 at 5:55 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 What are other possible solutions to this issue?


 On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Fuelers,

 Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce it size by
 2Gb with lrzip tool (ticket
 https://bugs.launchpad.net/fuel/+bug/1356813, change in build system
 https://review.openstack.org/#/c/114201/, change in docs
 https://review.openstack.org/#/c/115331/), but it will dramatically
 increase unpacking time. I've run unpack on my virtualbox environment and
 got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

 My suggestion is to reject this change, release 5.1 with big tarball and
 find another solution in next release. Any objections?





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] [Fuel] Enable SSL between client and API exposed via public URL with HAProxy

2014-08-21 Thread Guillaume Thouvenin
On Thu, Aug 21, 2014 at 5:02 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:



 Guillaume, do I understand right that without implementation of
 https://blueprints.launchpad.net/fuel/+spec/ca-deployment, SSL support
 will not be fully automated? And, consequently, we can not call it as
 complete production ready feature for Fuel users?


Yes, you are right. Without the implementation of the CA deployment we
cannot consider it ready to use.
To test my deployment I manually copied a self-signed certificate onto all
controllers at a predefined location, according to what I have in the puppet
manifest, so it's really just for testing. I also wrote a small puppet
manifest that generates a self-signed certificate and deploys it
automatically, but it works only for one controller, so this solution is
also only for testing.

So to have the feature ready for production, we need to manage certificates,
maybe as a new option in the Fuel dashboard.

Best Regards,
Guillaume


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-21 Thread Kyle Mestery
On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:

 Hi all,

 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing control over those features too much.

 Though the way it's to be implemented leaves several concerns, as
 follows:

 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.

 I know that we're afraid of shipping the code for which some users
 may expect the usual level of support and stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used at the user's own risk. 1) The
 experimental code wouldn't probably be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as they call it in the Linux
 kernel world [2]).

 This would facilitate keeping commit history instead of loosing it
 during graduation.

 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.


 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.


 I wonder where discussion around the proposal is running. Is it public?

The discussion started out privately as the incubation proposal was
put together, but it's now on the mailing list, in person, and in IRC
meetings. Lets keep the discussion going on list now.


 2. If those 'extras' are really moved into a separate repository
 and tarballs, this will raise questions on whether packagers even
 want to cope with it before graduation. When it comes to supporting
 another build manifest for a piece of code of unknown quality, this
 is not the same as just cutting part of the code into a separate
 experimental/labs package. So unless I'm explicitly asked to
 package the incubator, I wouldn't probably touch it myself. This is
 just too much effort (btw the same applies to moving plugins out of
 the tree - once it's done, distros will probably need to reconsider
 which plugins they really want to package; at the moment, those
 plugins do not require lots of time to ship them, but having ~20
 separate build manifests for each of them is just too hard to
 handle without clear incentive).


 One reason instead for moving plugins out of the main tree is
 allowing their maintainers to have full control over them. If
 there was a way with gerrit or similars to give somebody rights
 to merge code only on a subtree I probably would not even
 consider the option of moving plugin and drivers away. From my
 perspective it's not that I don't want them in the main tree,
 it's that I don't think it's fair for core team reviewers to take
 responsibility of approving code that they can't fully tests (3rd
 partt CI helps, but is still far from having a decent level of
 coverage).


 I agree with that. I actually think that moving vendor plugins outside
 the main tree AND rearranging review permissions and obligations
 should be extremely beneficial to the community. I'm totally for that
 as quick as possible (Kilo please!) Reviewers waste their time
 reviewing plugins that are in most cases interesting for a tiny
 fraction of operators. Let the ones that are primarily interested in
 good quality of that code (vendors) to drive development. And if some
 plugins become garbage, it's bad news for specific vendors; if neutron
 screws because of lack of concentration on core features and open
 source plugins, everyone is doomed.

 Of course, splitting vendor plugins into separate repositories will
 make life of packagers a bit harder, but the expected benefits from
 such move are huge, so - screw packagers on this one. :)

 Though the way incubator is currently described in that proposal on
 the wiki doesn't clearly imply similar benefits for the project, hence
 concerns.

Lets not confuse the incubator with moving drivers out of tree. The
two proposals 

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-21 Thread Kyle Mestery
On Fri, Aug 15, 2014 at 2:26 PM, Kevin Benton blak...@gmail.com wrote:
 I definitely agree that reviewer time is wasted reviewing changes. However,
 I don't think moving them to a different repo with different cores is going
 to make them less brittle without some very strict guidelines about what a
 driver/plugin is allowed to do.

Agreed.

 For example, without neutron core reviewer oversight, what prevents a plugin
 author from monkey patching parts of the neutron API? If nothing, that will
 immediately break on any kind of API refactoring, module renaming, etc.

The fact that this will need to be merged in a repo under the
networking program means we would catch this. However, the person who
wants to monkey patch like this could easily move their plugin out of
tree and monkey patch to their heart's content.
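A toy illustration of the monkey-patching fragility Kevin describes: the
plugin's patch works until an innocent refactor in the patched module
renames the target. All names here are made up, not real Neutron code.

```python
# Toy illustration of why out-of-tree monkey patching is brittle: the
# patch works until the patched module renames the target function.
import types

def apply_plugin_patch(mod):
    orig = mod.build_port                  # breaks if core renames build_port
    mod.build_port = lambda name: {**orig(name), "vendor": "acme"}

core = types.ModuleType("core")            # stands in for a core module
core.build_port = lambda name: {"name": name}
apply_plugin_patch(core)
print(core.build_port("p1"))               # {'name': 'p1', 'vendor': 'acme'}

# A later core refactor renames the function; the unchanged out-of-tree
# plugin now fails at patch time.
fresh = types.ModuleType("core")
fresh.make_port = lambda name: {"name": name}
try:
    apply_plugin_patch(fresh)
except AttributeError as exc:
    print("plugin broke:", type(exc).__name__)
```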

 That scenario also brings up another concern. Will changes to neutron that
 break a vendor plugin even be blocked on the neutron side? If so, the
 neutron repo will be held hostage by third-party code that isn't in Neutron
 and lacks the quality control it would have in Neutron. If not, this will
 immediately break the gate on the driver repo, forcing maintainers to
 disable the gating job for that vendor plugin. Neither of these scenarios
 seem less brittle to me.

If we had cross-repo CI running (like was suggested for the
incubator), we would catch things like this. In other words, if the
driver repo ran for patches to the neutron repo and posted back, we
could catch this.

 What the PLUMgrid folks did works; however, IIUC it was at the sacrifice of
 unit tests verifying any calls into the plumlib. There is just a fake driver
 that accepts anything for the unit tests. [1] They could implement a lot of
 mock logic to simulate the real driver, but then we are back to square one
 and they might as well put the actual driver there.

 I'm all for moving drivers/plugins out of Neutron, but we need to be really
 careful here because we are going to lose a lot of quality control that
 Neutron could end up taking the blame for since these drivers/plugins are
 still in a public repo.

++, this is a critical area here. On the other hand, the current model
of adding 5-6 new plugins/drivers for proprietary backends each cycle
won't scale going forward, and the level of involvement of most of
these companies ends at their plugin. So something needs to change to
make this scalable going forward.

Kyle

 1.
 https://github.com/openstack/neutron/blob/08529376f16837083c28b009411cc52e0e2a8d33/neutron/plugins/plumgrid/drivers/fake_plumlib.py


 On Fri, Aug 15, 2014 at 8:50 AM, Kyle Mestery mest...@mestery.com wrote:

 I think the review time alone is a huge issue. Even worse, for the
 most part, core reviewers are reviewing code for which they themselves
 can't test because it requires proprietary hardware or software,
 making the situation brittle at best. Having a separate git repository
 which is gated by stringent third-party CI requirements, with separate
 (and possibly overlapping) core reviewers would help to alleviate this
 problem. Another alternative is to move most intelligence out of the
 plugins/drivers and into vendor owned packages which can live on pypi.
 This is similar to what the PLUMgrid folks did for their plugin. This
 allows vendor control over most of their bits, removes the constant
 churn for simple bug fixes in the neutron tree, and adds the benefit
 of being a part of the simultaneous release, which is the only thing
 most vendors care about.

 On Thu, Aug 14, 2014 at 10:34 PM, Kevin Benton blak...@gmail.com wrote:
 I also feel like the drivers/plugins are currently BEYOND a tipping
  point, and are in fact dragging down velocity of the core project in
  many ways.
 
  Can you elaborate on the ways that they are slowing down the velocity? I
  know they take up reviewer time, but are there other ways that you think
  they slow down the project?
 
 
  On Thu, Aug 14, 2014 at 6:07 AM, Kyle Mestery mest...@mestery.com
  wrote:
 
  I also feel like the drivers/plugins are currently BEYOND a tipping
  point, and are in fact dragging down velocity of the core project in
  many ways. I'm working on a proposal for Kilo where we move all
  drivers/plugins out of the main Neutron tree and into a separate git
  repository under the networking program. We have way too many drivers,
  requiring way too many review cycles, for this to be a sustainable
  model going forward. Since the main reason plugin/driver authors want
  their code upstream is to be a part of the simultaneous release, and
  thus be packaged by distributions, having a separate repository for
  these will satisfy this requirement. I'm still working through the
  details around reviews of this repository, etc.
 
  Also, I feel as if the level of passion on the mailing list has died
  down a bit, so I thought I'd send something out to try and liven
  things up a bit. It's been somewhat non-emotional here for a day or
  so. :)
 
  

Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-21 Thread Igor Kalnitsky
Hi,

Hmm... I think ~15 minutes isn't long enough to justify skipping this
approach in production.
What about using lrzip only for end users, but keeping the regular tarball
for CI and internal usage?

Thanks,
Igor

On Thu, Aug 21, 2014 at 5:22 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 I see no other quick solutions in 5.1. We can find the difference in
 packages between 5.0 and 5.0.2, put only updated packages in tarball and get
 missed packages from existing repos on master node.


 On Thu, Aug 21, 2014 at 5:55 PM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 What are other possible solutions to this issue?


 On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Fuelers,

 Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce it size by
 2Gb with lrzip tool (ticket, change in build system, change in docs), but it
 will dramatically increase unpacking time. I've run unpack on my virtualbox
 environment and got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

 My suggestion is to reject this change, release 5.1 with big tarball and
 find another solution in next release. Any objections?





 --
 Mike Scherbakov
 #mihgen









Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-21 Thread Petr Blaho
On Wed, Aug 20, 2014 at 05:30:25AM -0400, Dougal Matthews wrote:
 - Original Message -
  From: Derek Higgins der...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Wednesday, 20 August, 2014 10:15:51 AM
  Subject: Re: [openstack-dev] [TripleO] Change of meeting time
  
  On 24/05/14 01:21, James Polley wrote:
   Following a lengthy discussion under the subject Alternating meeting
   time for more TZ friendliness, the TripleO meeting now alternates
   between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
   better coverage across Australia, India, China, Japan, and the other
   parts of the world that found it impossible to get to our previous
   meeting time.
  
  Raising a point that came up on this morning's irc meeting
  
  A lot (most?) of the people at this morning's meeting were based in
  western Europe, getting up earlier than usual for the meeting (me
  included). When daylight saving kicks in it might push them past the
  threshold. Would an hour later (0800 UTC) work better for people, or is
  the current time what fits best?
  
  I'll try to make the meeting regardless of whether it's moved or not, but an
  hour later would certainly make it a little more palatable.
 
 +1, I don't have a strong preference, but an hour later would make it a
 bit easier, particularly when DST kicks in.
 
 Dougal
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I prefer later time for Wednesday too.
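For concreteness, here is what the DST concern in this thread looks like on a clock (Europe/Paris is used as a representative western-European zone; the dates are illustrative): the fixed 0700 UTC slot lands an hour earlier on local clocks once summer time ends.

```python
# Convert the 0700 UTC meeting slot to local western-European time on a
# summer date (CEST, UTC+2) and a winter date (CET, UTC+1).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")
summer = datetime(2014, 8, 20, 7, 0, tzinfo=timezone.utc).astimezone(paris)
winter = datetime(2014, 11, 19, 7, 0, tzinfo=timezone.utc).astimezone(paris)

print(summer.strftime("%H:%M %Z"))  # 09:00 CEST
print(winter.strftime("%H:%M %Z"))  # 08:00 CET
```

Moving the slot to 0800 UTC would keep the winter local time at 09:00 rather than 08:00.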

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Kyle Mestery
On Thu, Aug 21, 2014 at 4:09 AM, Thierry Carrez thie...@openstack.org wrote:
 Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 This all has created a world where you need to be*in*  OpenStack to
 matter, or to justify the investment. This has created a world where
 everything and everyone wants to be in the OpenStack integrated
 release. This has created more pressure to add new projects, and less
 pressure to fix and make the existing projects perfect. 4 years in, we
 might want to inflect that trajectory and take steps to fix this world.

 We should certainly consider this possibility, that we've set up
 perverse incentives leading to failure. But what if it's just because we
 haven't yet come even close to satisfying all of our users' needs? I
 mean, AWS has more than 30 services that could be considered equivalent
 in scope to an OpenStack project... if anything our scope is increasing
 more _slowly_ than the industry at large. I'm slightly shocked that
 nobody in this thread appears to have even entertained the idea that
 *this is what success looks like*.

 The world is not going to stop because we want to get off, take a
 breather, do a consolidation cycle.

 That's an excellent counterpoint, thank you for voicing it so eloquently.

 Our challenge is to improve our structures so that we can follow the
 rhythm the world imposes on us. It's a complex challenge, especially in
 an open collaboration experiment where you can't rely that much on past
 experiences or traditional methods. So it's always tempting to slow
 things down, to rate-limit our success to make that challenge easier.

++

Thanks for wording this perfectly. It's sometimes easy to look at
things through a single lens, as a community it's good when we look at
all the angles of a problem.

I think the main point is it's sometimes hard to judge the future of a
project like OpenStack from the past, because as we move forward we
add new variables to the equation. Thus, adjusting on the fly is
really the only way forward. The points in this thread make it clear
we're doing that as a project, but perhaps not at a quick enough pace.

Thanks,
Kyle

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Prioritizing review of potentially approvable patches

2014-08-21 Thread Dan Genin

Hear, hear!

Dan

On 08/21/2014 07:57 AM, Daniel P. Berrange wrote:

Tagged with '[nova]' but this might be relevant data / idea for other
teams too.

With my code contributor hat on, one of the things that I find the most
frustrating about the Nova code review process is that a patch can get a +2
vote from one core team member and then sit around for days, weeks, even
months without getting a second +2 vote, even if it has no negative
feedback at all and is a simple & important bug fix.

If a patch is good enough to have received one +2 vote, then compared to
the open patches as a whole, this patch is much more likely to be one
that is ready for approval & merge. It will likely be easier to review,
since it can be assumed other reviewers have already caught the majority
of the silly / tedious / time consuming bugs.

Letting these patches languish with a single +2 for too long makes it very
likely that, when a second core reviewer finally appears, there will be a
merge conflict or other bit-rot that will cause it to have to undergo yet
another rebase  re-review. This is wasting time of both our contributors
and our review team.

On this basis I suggest that core team members should consider patches
that already have a +2 to be high(er) priority items to review than open
patches as a whole.

Currently Nova has (on master branch)

   - 158 patches which have at least one +2 vote, and are not approved
   - 122 patches which have at least one +2 vote, are not approved and
 don't have any -1 code review votes.

So that's 122 patches that should be easy candidates for merging right
now. Another 30 can possibly be merged depending on whether the core
reviewer agrees with the -1 feedback given or not.

That is way more patches than we should have outstanding in that state.
It is not unreasonable to say that once a patch has a single +2 vote, we
should aim to get either a second +2 vote or further -1 review feedback
in a matter of days, and certainly no longer than a week.

If everyone on the core team looked at the list of potentially approvable
patches each day I think it would significantly improve our throughput.
It would also decrease the amount of review work overall by reducing
chance that patches bitrot & need rebase for merge conflicts. And most
importantly of all it will give our code contributors a better impression
that we care about them.

As an added carrot, working through this list will be an effective way
to improve your rankings [1] against other core reviewers, not that I
mean to suggest we should care about rankings over review quality ;-P

The next version of gerrymander[2] will contain a new command to allow
core reviewers to easily identify these patches

$ gerrymander todo-approvable -g nova --branch master

This will of course filter out patches which you yourself own since you
can't approve your own work. It will also filter out patches which you
have given feedback on already. What's left will be a list of patches
where you are able to apply the casting +2 vote to get to +A state.
If the '--strict' arg is added it will also filter out any patches which
have a -1 code review comment.
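The filtering rule described above is simple enough to sketch in a few lines. This is a toy version operating on an invented data model, not gerrymander's real implementation (which queries Gerrit): keep unapproved patches that have at least one +2, and with `--strict` semantics also drop anything carrying a -1.

```python
# Toy "todo-approvable" filter: each patch dict is a hypothetical stand-in
# for a Gerrit change with its Code-Review votes and approval state.
def todo_approvable(patches, strict=False):
    picked = []
    for p in patches:
        if p["approved"] or 2 not in p["votes"]:
            continue  # already merged-bound, or no +2 yet
        if strict and -1 in p["votes"]:
            continue  # --strict: skip anything with a -1
        picked.append(p["id"])
    return picked

patches = [
    {"id": "a", "votes": [2, 1],  "approved": False},
    {"id": "b", "votes": [2, -1], "approved": False},
    {"id": "c", "votes": [1, 1],  "approved": False},
    {"id": "d", "votes": [2, 2],  "approved": True},
]

print(todo_approvable(patches))               # ['a', 'b']
print(todo_approvable(patches, strict=True))  # ['a']
```

Against the numbers quoted above, the non-strict set corresponds to the 158 patches and the strict set to the 122.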

Regards,
Daniel

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] 
https://github.com/berrange/gerrymander/commit/790df913fc512580d92e808f28793e29783fecd7





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Solly Ross
FYI, the context of this is that I would like to be able to test some of the 
libvirt storage pool code against a live file system, as we currently test the 
storage pool code.  To do this, we need at least to be able to get a proper 
connection to a session daemon.  IMHO, since these calls aren't expensive, so 
to speak, it should be fine to have them run against a real libvirt.

 So If we require libvirt-python for tests and that requires
 libvirt-bin, what's stopping us from just removing fakelibvirt since
 it's kind of useless now anyway, right?

The thing about fakelibvirt is that it allows us to operate against a 
libvirt API without actually doing libvirt-y things like launching VMs.  Now, 
libvirt does have a test:///default URI that IIRC has similar functionality, 
so we could start to phase out fakelibvirt in favor of that.  However, there 
are probably still some spots where we'll want to use fakelibvirt.
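To make the role of a fake connection concrete, here is a deliberately tiny illustration of the idea (hypothetical and far smaller than nova's real fakelibvirt): an in-memory object that answers libvirt-shaped storage-pool calls, so driver logic can be exercised with no libvirtd daemon at all.

```python
# Minimal stand-in for a libvirt connection. The method names mirror the
# real libvirt API surface, but the bodies just track state in memory.
class FakeLibvirtConnection:
    def __init__(self):
        self._pools = {}

    def storagePoolDefineXML(self, xml, flags=0):
        # Real libvirt parses the XML; here we only remember the definition.
        self._pools[xml] = {"active": False}
        return xml

    def listAllStoragePools(self, flags=0):
        return list(self._pools)

conn = FakeLibvirtConnection()
conn.storagePoolDefineXML("<pool><name>default</name></pool>")
print(conn.listAllStoragePools())
```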

Best Regards,
Solly

- Original Message -
 From: Matt Riedemann mrie...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, August 20, 2014 8:37:39 PM
 Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
 libvirt in unit tests
 
 
 
 On 8/11/2014 4:42 AM, Daniel P. Berrange wrote:
  On Mon, Aug 04, 2014 at 06:46:13PM -0400, Solly Ross wrote:
  Hi,
  I was wondering if there was a way to get a non-readonly connection
  to libvirt when running the unit tests
  on the CI.  If I call `LibvirtDriver._connect(LibvirtDriver.uri())`,
  it works fine locally, but the upstream
  CI barfs with libvirtError: internal error Unable to locate libvirtd
  daemon in /usr/sbin (to override, set $LIBVIRTD_PATH to the name of the
  libvirtd binary).
  If I try to connect by calling libvirt.open(None), it also barfs, saying
  I don't have permission to connect.  I could just set it to always use
  fakelibvirt,
  but it would be nice to be able to run some of the tests against a real
  target.  The tests in question are part of
  https://review.openstack.org/#/c/111459/,
  and involve manipulating directory-based libvirt storage pools.
 
  Nothing in the unit tests should rely on being able to connect to the
  libvirt daemon, as the unit tests should still be able to pass when the
  daemon is not running at all. We should be either using fakelibvirt or
  mocking any libvirt APIs that need to be tested
 
  Regards,
  Daniel
 
 
 Also, doesn't this kind of break with the test requirement on
 libvirt-python now?  Before I was on trusty and trying to install that
 it was failing because I didn't have a new enough version of libvirt-bin
 installed.  So if we require libvirt-python for tests and that requires
 libvirt-bin, what's stopping us from just removing fakelibvirt since
 it's kind of useless now anyway, right?
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-21 Thread Giulio Fidente

On 08/20/2014 11:15 AM, Derek Higgins wrote:

I'll try to make the meeting regardless of whether it's moved or not, but an
hour later would certainly make it a little more palatable.


+1

--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-21 Thread Dane Leblanc (leblancd)
Edgar:

The status on the wiki page says "Results are not accurate. Needs
clarification from Cisco."
Can you please tell me what we are missing?

-Dane

-Original Message-
From: Dane Leblanc (leblancd) 
Sent: Tuesday, August 19, 2014 3:05 PM
To: 'Edgar Magana'; OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron] [third-party] What tests are required to 
be run

The APIC CI did run tests against that commit (after some queue latency):

http://128.107.233.28:8080/job/apic/1860/
http://cisco-neutron-ci.cisco.com/logs/apic/1860/

But the review comments never showed up on Gerrit. This seems to be an 
intermittent quirk of Jenkins/Gerrit: We have 3 CIs triggered from this 
Jenkins/Gerrit server. Whenever we disable another one of our other Jenkins 
jobs (in this case, we disabled DFA for some rework), the review comments 
sometimes stop showing up on Gerrit.

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Tuesday, August 19, 2014 1:33 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to 
be run

I was looking to one of the most recent Neutron commits:
https://review.openstack.org/#/c/115175/


I could not find the APIC report.

Edgar

On 8/19/14, 9:48 AM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

From which commit is it missing?
https://review.openstack.org/#/c/114629/
https://review.openstack.org/#/c/114393/

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Tuesday, August 19, 2014 12:28 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not 
for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are 
required to be run

Dane,

Are you sure about it?
I just went to this commit and I could not find the APIC tests.

Thanks,

Edgar

On 8/17/14, 8:47 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

Edgar:

The Cisco APIC should be reporting results for both APIC-related and 
non-APIC related changes now.
(See http://cisco-neutron-ci.cisco.com/logs/apic/1738/).

Will you be updating the wiki page?

-Dane

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Friday, August 15, 2014 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are 
required to be run

Also, you can add me as a contact person for the Cisco VPNaaS driver.

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Friday, August 15, 2014 8:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron] [third-party] What tests are 
required to be run

Edgar:

For the Notes for the Cisco APIC, can you change the comment "results
are fake" to something like "results are only valid for APIC-related
commits"? I think this more accurately represents our current results 
(for reasons we chatted about on another thread).

Thanks,
Dane

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Friday, August 15, 2014 6:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are 
required to be run
Importance: High

Team,

I did a quick audit on the Neutron CI. Very sad results. Only a few
plugins and drivers are running properly and testing all Neutron commits.
I created a report here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers


We will discuss the actions to take on the next Neutron IRC meeting. 
So please, reach me out to clarify what is the status of your CI.
I had two commits to quickly verify the CI reliability:

https://review.openstack.org/#/c/114393/

https://review.openstack.org/#/c/40296/


I would expect all plugins and drivers passing on the first one and 
failing for the second but I got so many surprises.

Neutron code quality and reliability are a top priority; if you ignore
this report, that plugin/driver will be a candidate to be removed from
the Neutron tree.

Cheers,

Edgar

P.S. I hate to be the inquisitor here… but someone has to do the dirty
job!


On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:

Folks, I'm not sure if all CI accounts are running sufficient tests.
Per the requirements wiki page here [1], everyone needs to be running 
more than just Tempest API tests, which I still see most neutron 
third-party CI setups doing. I'd like to ask everyone who operates a 
third-party CI account for Neutron to please look at the link below 
and make sure you are running appropriate tests. If you have 
questions, the weekly third-party meeting [2] is a great place to ask 
questions.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty


Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Jay Lau
I know that Congress is still under development, but it would be better if it
provided some info on how to use it, just as Docker does at
https://wiki.openstack.org/wiki/Docker ; this might attract more people to
contribute to it.
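The enforcement question asked below (how a database deletion becomes a real API call) can be sketched roughly like this. Everything here is a hypothetical illustration of the mechanism, not Congress's actual code: the policy engine binds an action name to a Python callable, and when it decides a row such as `nova:network(vm, network)` must be deleted, it invokes the bound action, which would call out to the Nova/Neutron API.

```python
# Registry mapping policy action names to Python implementations.
actions = {}

def action(name):
    # Decorator: register a callable as the implementation of an action.
    def register(fn):
        actions[name] = fn
        return fn
    return register

@action("disconnect_network")
def disconnect_network(vm, network):
    # A real engine would call the Nova/Neutron API here (the call name is
    # invented); we just record what would be invoked.
    return ("nova.interface_detach", vm, network)

def enforce(deletions):
    # Each deletion is (table, row, action_name): removing the row from
    # e.g. the nova:network table triggers the bound action on that row.
    return [actions[name](*row) for table, row, name in deletions]

print(enforce([("nova:network", ("vm1", "net0"), "disconnect_network")]))
# [('nova.interface_detach', 'vm1', 'net0')]
```

So Nova never watches the database directly; the engine translates the table change into an explicit API call on Nova's behalf.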


2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:

 Hi,

 I am quite new to the Congress and Openstack as well and this question may
 seem very trivial and basic.

 I am trying to figure out the policy enforcement logic.

 Can somebody help me understand how exactly a policy enforcement action
 is taken?

 From the example policy there is an action defined as:



 *action(disconnect_network) nova:network-(vm, network) :-
 disconnect_network(vm, network) *
 I assume that this statement, when applied, would translate to the deletion
 of an entry in the database.

 But how does this affect the actual setup? That is, how is this database
 update translated into an actual disconnection of the VM from the network?
 How does nova know that it has to disconnect the VM from the network?

 Thanks and Regards,
 Madhu Mohan




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New feature on Nova

2014-08-21 Thread thomas.pessione
Hello,



Sorry if I am not on the right mailing list. I would like to get some 
information.



I would like to know how a company that wants to add a feature to an
OpenStack module should proceed, and how such a new feature can be adopted
by the community.



The feature is maintenance mode: that is to say, disable a compute node and
live-migrate all the instances that are running on that host.

I know we can do an evacuate, but evacuate restarts the instances. I have
already written a shell script to do this using CLI commands.
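The workflow the poster describes could be sketched as below. This is a hedged sketch only: the client methods mirror what python-novaclient exposed at the time (`services.disable`, `servers.list`, `live_migrate`) but are assumptions here, exercised against a stand-in object rather than a real cloud.

```python
# Drain a compute host: disable its nova-compute service so the scheduler
# stops placing new instances there, then live-migrate everything off it.
def drain_host(nova, host):
    nova.services.disable(host, "nova-compute")  # assumed novaclient call
    moved = []
    for server in nova.servers.list(search_opts={"host": host}):
        server.live_migrate()  # let the scheduler pick the target host
        moved.append(server.id)
    return moved

# --- stand-ins so the sketch is runnable without a real novaclient ---
class FakeServer:
    def __init__(self, sid):
        self.id, self.migrated = sid, False
    def live_migrate(self, host=None):
        self.migrated = True

class FakeNova:
    class services:
        disabled = []
        @classmethod
        def disable(cls, host, binary):
            cls.disabled.append((host, binary))
    class servers:
        _all = [FakeServer("vm-1"), FakeServer("vm-2")]
        @classmethod
        def list(cls, search_opts=None):
            return cls._all

print(drain_host(FakeNova, "compute-1"))  # ['vm-1', 'vm-2']
```

Unlike evacuate, nothing here restarts the instances; they stay running through the live migration.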


Regards,

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
 FYI, the context of this is that I would like to be able to test some 
 of the libvirt storage pool code against a live file system, as we
 currently test the storage pool code.  To do this, we need at least to
 be able to get a proper connection to a session daemon.  IMHO, since
 these calls aren't expensive, so to speak, it should be fine to have
 them run against a real libvirt.

No it really isn't OK to run against the real libvirt host system when
in the unit tests. Unit tests must *not* rely on external system state
in this way because it will lead to greater instability and unreliability
of our unit tests. If you want to test stuff against the real libvirt
storage pools then that becomes a functional / integration test suite
which is pretty much what tempest is targetting.
 
  So If we require libvirt-python for tests and that requires
  libvirt-bin, what's stopping us from just removing fakelibvirt since
  it's kind of useless now anyway, right?
 
 The thing about fakelibvirt is that it allows us to operate against
 against a libvirt API without actually doing libvirt-y things like
 launching VMs.  Now, libvirt does have a test:///default URI that
 IIRC has similar functionality, so we could start to phase out fake
 libvirt in favor of that.  However, there are probably still some
 spots where we'll want to use fakelibvirt.

I'm actually increasingly of the opinion that we should not in fact
be trying to use the real libvirt library in the unit tests at all
as it is not really adding any value. We typically mock out all the
actual API calls we exercise so despite using libvirt-python we
are not in fact exercising its code or even validating that we're
passing the correct numbers of parameters to API calls. Pretty much
all we are really relying on is the existence of the various global
constants that are defined, and that has been nothing but trouble
because the constants may or may not be defined depending on the
version.

The downside of fakelibvirt is that it is a half-assed implementation
of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
of using pythons introspection abilities to query the libvirt-python
API and automatically generate a better 'fakelibvirt' that we can
guarantee to match the signatures of the real libvirt library. If we
had something like that which we had more confidence in, then we could
make the unit tests use that unconditionally. This would make our unit
tests more reliable since we would not be susceptible to different API
coverage in different libvirt module versions, which has tripped us up
so many times.
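The introspection idea can be demonstrated in miniature. This is a proof-of-concept sketch, not the proposed implementation: stub methods are generated from the real signatures, so a call with the wrong arity fails exactly as it would against the genuine library. `RealAPI` here is an invented stand-in for the libvirt-python module.

```python
import inspect

class RealAPI:  # stand-in for the real libvirt-python API surface
    def open(self, uri=None): ...
    def storagePoolDefineXML(self, xml, flags=0): ...

def make_fake(cls):
    # Build a Fake* class whose methods validate arguments against the
    # real signatures (via Signature.bind) but do no actual work.
    ns = {}
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        def stub(*args, _sig=inspect.signature(fn), **kwargs):
            _sig.bind(*args, **kwargs)  # raises TypeError on a bad call
            return None
        ns[name] = stub
    return type("Fake" + cls.__name__, (), ns)

FakeAPI = make_fake(RealAPI)
fake = FakeAPI()
fake.open("test:///default")     # fine: matches open(self, uri=None)
try:
    fake.storagePoolDefineXML()  # wrong arity: 'xml' is required
except TypeError as e:
    print("caught:", e)
```

A generated fake like this would keep unit tests independent of which libvirt version happens to be installed, while still catching calls that pass the wrong number of parameters.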

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New feature on Nova

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 05:00:59PM +0200, thomas.pessi...@orange.com wrote:
 Hello,
 
 Sorry if I am not on the right mailing list. I would like to get some
 information.
 
 I would like to know if I am a company who wants to add a feature on
 an openstack module. How do we have to proceed ? And so, what is the
 way this new feature be adopted by the community.

The Nova team follows what we call the Specs & Blueprints process
for the proposal, approval & implementation of new features. There
is a reasonable overview of it here:

  https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle

In essence we have a template text document which you fill in with
info about your desired feature. The Nova team reviews that and
after one or more iterations of feedback+update we'll either approve
or reject the proposed feature. Once approved you can write the code
and submit it for review in the appropriate release.

 The feature is, the maintenance mode.  That is to say, disable a compute
 node and do live migration on all the instances which are  running on
 the host.

I have a feeling that this proposal from another contributor might do
the kind of thing you are describing here:

  
https://blueprints.launchpad.net/python-novaclient/+spec/host-servers-live-migrate

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Solly Ross
(reply inline)

- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, August 21, 2014 11:05:18 AM
 Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
 libvirt in unit tests
 
 On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
  FYI, the context of this is that I would like to be able to test some
  of the libvirt storage pool code against a live file system, as we
  currently test the storage pool code.  To do this, we need at least to
  be able to get a proper connection to a session daemon.  IMHO, since
  these calls aren't expensive, so to speak, it should be fine to have
  them run against a real libvirt.
 
 No it really isn't OK to run against the real libvirt host system when
 in the unit tests. Unit tests must *not* rely on external system state
 in this way because it will lead to greater instability and unreliability
 of our unit tests. If you want to test stuff against the real libvirt
 storage pools then that becomes a functional / integration test suite
 which is pretty much what tempest is targetting.

That's all well and good, but we *currently* manipulate the actual file
system manually in tests.  Should we then say that we should never manipulate
the actual file system either?  In that case, there are some tests which
need to be refactored.

  
   So If we require libvirt-python for tests and that requires
   libvirt-bin, what's stopping us from just removing fakelibvirt since
   it's kind of useless now anyway, right?
  
  The thing about fakelibvirt is that it allows us to operate
  against a libvirt API without actually doing libvirt-y things like
  launching VMs.  Now, libvirt does have a test:///default URI that
  IIRC has similar functionality, so we could start to phase out fake
  libvirt in favor of that.  However, there are probably still some
  spots where we'll want to use fakelibvirt.
 
 I'm actually increasingly of the opinion that we should not in fact
 be trying to use the real libvirt library in the unit tests at all
 as it is not really adding any value. We typically nmock out all the
 actual API calls we exercise so despite using libvirt-python we
 are not in fact exercising its code or even validating that we're
 passing the correct numbers of parameters to API calls. Pretty much
 all we really relying on is the existance of the various global
 constants that are defined, and that has been nothing but trouble
 because the constants may or may not be defined depending on the
 version.

Isn't that what 'test:///default' is supposed to be?  A version of libvirt
that doesn't actually touch the rest of the system?

 
 The downside of fakelibvirt is that it is a half-assed implementation
 of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
 of using pythons introspection abilities to query the libvirt-python
 API and automatically generate a better 'fakelibvirt' that we can
 guarantee to match the signatures of the real libvirt library. If we
 had something like that which we had more confidence in, then we could
 make the unit tests use that unconditionally. This would make our unit
 tests more reliable since we would not be suspectible to different API
 coverage in different libvirt module versions which have tripped us up
 so many times
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New feature on Nova

2014-08-21 Thread Jay Lau
There is already a blueprint tracking KVM host maintenance:
https://blueprints.launchpad.net/nova/+spec/host-maintenance , but I think
that nova will not handle the case of automatic live migration for a host
under maintenance; this should be a use case for Congress:
https://wiki.openstack.org/wiki/Congress


2014-08-21 23:00 GMT+08:00 thomas.pessi...@orange.com:

 Hello,



 Sorry if I am not on the right mailing list. I would like to get some
 information.



 I would like to know if I am a company who wants to add a feature on an
 openstack module. How do we have to proceed ? And so, what is the way this
 new feature be adopted by the community.



 The feature is, the maintenance mode.  That is to say, disable a compute
 node and do live migration on all the instances which are  running on the
 host.

 I know we can do an evacuate, but evacuate restart the instances. I have
 already written a shell script to do this using command-cli.



 Regards,

 _

 Ce message et ses pieces jointes peuvent contenir des informations 
 confidentielles ou privilegiees et ne doivent donc
 pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu 
 ce message par erreur, veuillez le signaler
 a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
 electroniques etant susceptibles d'alteration,
 Orange decline toute responsabilite si ce message a ete altere, deforme ou 
 falsifie. Merci.

 This message and its attachments may contain confidential or privileged 
 information that may be protected by law;
 they should not be distributed, used or copied without authorisation.
 If you have received this email in error, please notify the sender and delete 
 this message and its attachments.
 As emails may be altered, Orange is not liable for messages that have been 
 modified, changed or falsified.
 Thank you.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:
 (reply inline)
 
 - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Thursday, August 21, 2014 11:05:18 AM
  Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
  libvirt in unit tests
  
  On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
   FYI, the context of this is that I would like to be able to test some
   of the libvirt storage pool code against a live file system, as we
   currently test the storage pool code.  To do this, we need at least to
   be able to get a proper connection to a session daemon.  IMHO, since
   these calls aren't expensive, so to speak, it should be fine to have
   them run against a real libvirt.
  
  No it really isn't OK to run against the real libvirt host system when
  in the unit tests. Unit tests must *not* rely on external system state
  in this way because it will lead to greater instability and unreliability
  of our unit tests. If you want to test stuff against the real libvirt
  storage pools then that becomes a functional / integration test suite
  which is pretty much what tempest is targetting.
 
  That's all well and good, but we *currently* manipulate the actual file
 system manually in tests.  Should we then say that we should never manipulate
 the actual file system either?  In that case, there are some tests which
 need to be refactored.

Places where the tests manipulate the filesystem though should be doing
so in an isolated playpen directory, not in the live location where
a deployed nova runs, so that's not the same thing.

So if we require libvirt-python for tests and that requires
libvirt-bin, what's stopping us from just removing fakelibvirt since
it's kind of useless now anyway, right?
   
   The thing about fakelibvirt is that it allows us to operate against
   against a libvirt API without actually doing libvirt-y things like
   launching VMs.  Now, libvirt does have a test:///default URI that
   IIRC has similar functionality, so we could start to phase out fake
   libvirt in favor of that.  However, there are probably still some
   spots where we'll want to use fakelibvirt.
  
  I'm actually increasingly of the opinion that we should not in fact
  be trying to use the real libvirt library in the unit tests at all
   as it is not really adding any value. We typically mock out all the
  actual API calls we exercise so despite using libvirt-python we
  are not in fact exercising its code or even validating that we're
  passing the correct numbers of parameters to API calls. Pretty much
   all we are really relying on is the existence of the various global
  constants that are defined, and that has been nothing but trouble
  because the constants may or may not be defined depending on the
  version.
 
 Isn't that what 'test:///default' is supposed to be?  A version of libvirt
 with libvirt not actually touching the rest of the system?

Yes, that is what it allows for, however, even if we used that URI we
still wouldn't be actually exercising any of the libvirt code in any
meaningful way because our unit tests mock out all the API calls that
get touched. So using libvirt-python + test:///default URI doesn't
really seem to buy us anything, but it does still mean that developers
need to have libvirt installed in order to run the unit tests. I'm
not convinced that is a beneficial tradeoff.

  The downside of fakelibvirt is that it is a half-assed implementation
  of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
  of using pythons introspection abilities to query the libvirt-python
  API and automatically generate a better 'fakelibvirt' that we can
  guarantee to match the signatures of the real libvirt library. If we
  had something like that which we had more confidence in, then we could
  make the unit tests use that unconditionally. This would make our unit
   tests more reliable since we would not be susceptible to different API
  coverage in different libvirt module versions which have tripped us up
  so many times

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-21 Thread Andreas Scheuring
Hi, 
last week I started discussing an extension to the existing neutron
openvswitch agent to support network adapters that are not in
promiscuous mode. Now I would like to widen the discussion and get feedback
from a broader audience via the mailing list.


The Problem
When driving vlan or flat networking, openvswitch requires a network
adapter in promiscuous mode. 


Why would an adapter not be in promiscuous mode?
- Admins like to have full control over their environment and which
network packets enter the system.
- The network adapter just does not have support for it.


What to do?
Linux net-dev drivers offer an interface to manually register additional
mac addresses (also called secondary unicast addresses). Using this
interface, one can register additional mac addresses on the network
adapter. This also works from user space via the well known iproute2
`bridge` tool:

`bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`


What to do in openstack?
As neutron is aware of all the mac addresses that are in use, it's the
perfect candidate for doing the mac registrations. The idea is to modify
the neutron openvswitch agent so that it does the registration on port
add and port remove via the bridge command.
There would be a new optional configuration parameter, something like
'non-promisc-mode' that is by default set to false. Only when set to
true, macs get manually registered. Otherwise the agent behaves like it
does today. So I guess only very little changes to the agent code are
required. From my current point of view we do not need any changes to
the ml2 plug-in.
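
To make the idea concrete, here is a rough sketch of the agent-side hook
(the option name, class, and helper below are hypothetical, and a real
implementation would use neutron's own execution utilities rather than
raw subprocess calls):

```python
import subprocess

def fdb_cmd(action, mac, dev):
    # Build the iproute2 command that registers ("add") or removes
    # ("del") a secondary unicast MAC address on a network device.
    return ["bridge", "fdb", action, mac, "dev", dev]

class NonPromiscPortHandler(object):
    # Hypothetical hook in the OVS agent: only active when the assumed
    # 'non-promisc-mode' option is enabled; otherwise the agent behaves
    # exactly as it does today.
    def __init__(self, non_promisc_mode=False, runner=subprocess.check_call):
        self.enabled = non_promisc_mode
        self.run = runner

    def port_added(self, mac, dev):
        if self.enabled:
            self.run(fdb_cmd("add", mac, dev))

    def port_removed(self, mac, dev):
        if self.enabled:
            self.run(fdb_cmd("del", mac, dev))
```

The `runner` argument is only there so the command construction can be
exercised without touching a real device.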


Blueprint or a bug?
I guess it's a blueprint.

What's the timeframe?
K would be great.



I would be thankful for any feedback on this! Feel free to contact me
anytime. Thanks in advance!

Regards, 
Andreas

(irc: scheuran)




[openstack-dev] [neutron] [third-party] Some third party results are not showing up in the CI summary on Gerrit

2014-08-21 Thread Dane Leblanc (leblancd)
If you look at the review page for any of our Neutron commits, e.g.:
   https://review.openstack.org/#/c/115025/
You'll see that there is a handy summary of CI results (with links to logs) in 
the upper right. However, there are some CIs which are missing from this 
summary, although we know that they had left a review comment or voted +1/-1 
because they're listed in the Reviewer list on the left.

Note that there is an indirect way to find the logs for these CIs, by clicking 
on Toggle CI at the very bottom and scrolling through the results.

Henry Gessau checked with the openstack-infra team for clarification, and the 
explanation he got was that the Gerrit review page will only be able to include 
summaries for the CIs which leave review comments in the same format that the 
community Jenkins CI uses.

Here's an example of the Jenkins format (for illustration, hyperlinks and tabs 
are lost here in text mode):

Jenkins Aug 20 6:52 PM

Patch Set 9: Verified+1

Build succeeded.

gate-neutron-pep8 SUCCESS in 5m 36s
gate-neutron-docs SUCCESS in 3m 46s
gate-neutron-python26 SUCCESS in 37m 37s
gate-neutron-python27 SUCCESS in 29m 26s
check-tempest-dsvm-neutron-heat-slow SUCCESS in 26m 40s
check-tempest-dsvm-neutron-pg SUCCESS in 1h 04m 55s
check-tempest-dsvm-neutron-full SUCCESS in 1h 03m 18s
check-tempest-dsvm-neutron-pg-full FAILURE in 1h 11m 01s (non-voting)
gate-tempest-dsvm-neutron-large-ops SUCCESS in 25m 11s
check-grenade-dsvm-neutron FAILURE in 32m 38s (non-voting)
check-neutron-dsvm-functional SUCCESS in 21m 20s
gate-rally-dsvm-neutron-neutron SUCCESS in 19m 31s (non-voting)
check-tempest-dsvm-neutron-pg-2 SUCCESS in 1h 00m 40s
check-tempest-dsvm-neutron-full-2 SUCCESS in 55m 28s
check-tempest-dsvm-neutron-pg-full-2 FAILURE in 1h 05m 17s (non-voting)

For CIs which use a Zuul front end, this format comes automatically. But for 
others, the format of review comments may need some changes in order to match 
the Jenkins review comment format.
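
To illustrate, a third-party reporter could build its comment body in the
same shape (the tuple layout below is just an assumption for the sketch,
not a documented schema):

```python
def format_ci_comment(succeeded, results):
    # Mimic the Jenkins-style comment body shown above: a summary line,
    # a blank line, then one "<job> <STATUS> in <duration>" line per job,
    # with "(non-voting)" appended where applicable.
    lines = ["Build succeeded." if succeeded else "Build failed.", ""]
    for job, status, duration, voting in results:
        line = "%s %s in %s" % (job, status, duration)
        if not voting:
            line += " (non-voting)"
        lines.append(line)
    return "\n".join(lines)
```

A CI that posts a comment rendered this way stands a much better chance of
showing up in the summary table.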

I don't think this is critical, but it would be nice to have everything 
summarized consistently.

Thanks,
Dane




Re: [openstack-dev] [Openstack-stable-maint] [all][oslo] official recommendations to handle oslo-incubator sync requests

2014-08-21 Thread Alan Pevec
 2. For stable branches, the process is a bit different. For those
 branches, we don't generally want to introduce changes that are not
 related to specific issues in a project. So in case of backports, we
 tend to do per-patch consideration when synchronizing from incubator.

I'd call this a cherry-sync: format-patch the commit from oslo stable,
update file and import paths, and apply it on the project's stable branch.
That could be an oslo-incubator RFE: a new command option for update.py:
--cherry-pick COMMIT
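
Sketched as the git/sed plumbing such an option would wrap (the paths and
sed rewrites are illustrative only; a real sync would go through update.py's
own path mapping):

```python
def cherry_sync_cmds(commit, project_pkg, patch="/tmp/oslo-sync.patch"):
    # The three steps described above: export the commit from the
    # oslo-incubator stable branch, rewrite the openstack.common file
    # and import paths for the target project, then apply the patch.
    return [
        "git format-patch -1 --stdout %s > %s" % (commit, patch),
        "sed -i -e 's|openstack/common|%s/openstack/common|g' "
        "-e 's|openstack\\.common|%s.openstack.common|g' %s"
        % (project_pkg, project_pkg, patch),
        "git am %s" % patch,
    ]
```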

Cheers,
Alan



Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Tim Hinrichs
Hi Madhu,

For the alpha release (due soon), we’re focusing on just monitoring policy 
violations—we’ve disabled all the enforcement code in master.  (Though we never 
actually hooked up the enforcement policy to the real world, so all Congress 
has ever done is compute what actions to take to enforce policy.)  There’s a 
ton of interest in enforcement, so we’re planning to add enforcement features 
to the beta release.

Tim


On Aug 21, 2014, at 7:07 AM, Madhu Mohan mmo...@mvista.com wrote:

Hi,

I am quite new to the Congress and Openstack as well and this question may seem 
very trivial and basic.

I am trying to figure out the policy enforcement logic,

Can some body help me understand how exactly, a policy enforcement action is 
taken.

From the example policy there is an action defined as:

action(disconnect_network)
nova:network-(vm, network) :- disconnect_network(vm, network)

I assume that this statement, when applied, would translate to the deletion of an entry 
in the database.

But, how does this affect the actual setup (i.e) How is this database update 
translated to actual disconnection of the VM from the network.
How does nova know that it has to disconnect the VM from the network ?

Thanks and Regards,
Madhu Mohan





Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Tim Hinrichs
Hi Jay,

We have a tutorial in review right now.  It should be merged in a couple of 
days.  Thanks for the suggestion!

Tim


On Aug 21, 2014, at 7:54 AM, Jay Lau jay.lau@gmail.com wrote:

I know that Congress is still under development, but it would be better if it 
provided some info on how to use it, just like Docker does at 
https://wiki.openstack.org/wiki/Docker ; this might attract more people to 
contribute to it.


2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:
Hi,

I am quite new to the Congress and Openstack as well and this question may seem 
very trivial and basic.

I am trying to figure out the policy enforcement logic,

Can some body help me understand how exactly, a policy enforcement action is 
taken.

From the example policy there is an action defined as:

action(disconnect_network)
nova:network-(vm, network) :- disconnect_network(vm, network)

I assume that this statement, when applied, would translate to the deletion of an entry 
in the database.

But, how does this affect the actual setup (i.e) How is this database update 
translated to actual disconnection of the VM from the network.
How does nova know that it has to disconnect the VM from the network ?

Thanks and Regards,
Madhu Mohan








--
Thanks,

Jay


[openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Armando M.
Hi folks,

According to [1], we have ways to introduce external references to commit
messages.

These are useful to mark certain patches and their relevance in the context
of documentation, upgrades, etc.

I was wondering if it would be worth considering the addition of another
tag:

GateFailureFix

The objective of this tag, mainly for consumption by the review team, would
be to make sure that some patches get more attention than others, as they
affect the velocity of how certain critical issues are addressed (and gate
failures affect everyone).

As for machine consumption, I know that some projects use the
'gate-failure' tag to categorize LP bugs that affect the gate. The use of a
GateFailureFix tag in the commit message could make the tagging automatic,
so that we can keep a log of what all the gate failures are over time.
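
For the machine-consumption side, spotting the proposed tag in a commit
message would be a one-liner (the tag does not exist yet; this is only what
consuming it might look like):

```python
import re

def has_gate_failure_fix(commit_message):
    # External references conventionally live on their own footer lines
    # (see the GitCommitMessages wiki page); match the proposed tag at
    # the start of a line so prose mentions don't trigger it.
    return bool(re.search(r"^GateFailureFix\b", commit_message, re.MULTILINE))
```

A gerrit hook could then apply the 'gate-failure' LP bug tag automatically
whenever this returns true.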

Not sure if this was proposed before, and I welcome any input on the matter.

Cheers,
Armando

[1] -
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references


Re: [openstack-dev] [Congress] Ramp-up strategy

2014-08-21 Thread Tim Hinrichs
Hi Madhu,

We have an end-user tutorial in review right now.  That should help you get 
started understanding the end-to-end flow a bit better.  Look for it to be 
merged today or tomorrow.

Tim



On Aug 21, 2014, at 2:44 AM, Madhu Mohan mmo...@mvista.com wrote:

Hi,

For a few weeks I have been trying to get a hold of the Congress code base and 
understand the flow.

Here is a brief summary what I am trying out:

Prepared a dummy client to send the policy strings to congress_server listening 
at the path /policies (now changed to v1/policies). I am using a 
POST request to send the policy string to the server.

The call to the server somehow seems to get converted to an action with the name 
create_policies
Added a new API create_policies in the api model policy_model.py which gets 
the policy string in params.

I am able to call compile.parse() and runtime.initialize() functions from this 
API.
The compilation produces a result in the format below:

Rule(head=[Literal(table=u'error', arguments=[Variable(name=u'vm')], 
negated=False)], body=[Literal(table=u'nova:virtual_machine', 
arguments=[Variable(name=u'vm')],.

I am not really sure how to go on from here to see the policies 
actually getting applied and monitored.

Any resource or instructions on getting through the code flow will be of great 
help to proceed further.

Thanks in Advance,
Madhu Mohan





[openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Matthew Booth
I would prefer that you didn't merge this.

i.e. The project is better off without it.

This seems to mean different things to different people. There's a list
here which contains some criteria for new commits:

https://wiki.openstack.org/wiki/ReviewChecklist.

There's also a treatise on git commit messages and the structure of a
commit here:

https://wiki.openstack.org/wiki/GitCommitMessages

However, these don't really cover the general case of what a -1 means.
Here's my brain dump:

* It contains bugs
* It is likely to confuse future developers/maintainers
* It is likely to lead to bugs
* It is inconsistent with other solutions to similar problems
* It adds complexity which is not matched by its benefits
* It isn't flexible enough for future work landing RSN
* It combines multiple changes in a single commit

Any more? I'd be happy to update the above wiki page with any consensus.
It would be useful if any generally accepted criteria were readily
referenceable.

I also think it's worth explicitly documenting a few things we
might/should mention in a review, but which aren't a reason that the
project would be better off without it:

* Stylistic issues which are not covered by HACKING

By stylistic, I mean changes which have no functional impact on the code
whatsoever. If a purely stylistic issue is sufficiently important to
reject code which doesn't adhere to it, it is important enough to add to
HACKING.

* I can think of a better way of doing this

There may be a better solution, but there is already an existing
solution. We should only be rejecting work that has already been done if
it would detract from the project for one of the reasons above. We can
always improve it further later if we find the developer time.

* It isn't flexible enough for any conceivable future feature

Let's avoid premature generalisation. We can always generalise as part of
landing the future feature.

Any more of these?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.
 
 i.e. The project is better off without it.

A bit off topic, but I've never liked this message that gets added,
as I think it sounds overly negative. It would be better written
as

  This patch needs further work before it can be merged

as that gives a positive expectation that the work is still
wanted by the project in general


 This seems to mean different things to different people. There's a list
 here which contains some criteria for new commits:
 
 https://wiki.openstack.org/wiki/ReviewChecklist.
 
 There's also a treatise on git commit messages and the structure of a
 commit here:
 
 https://wiki.openstack.org/wiki/GitCommitMessages
 
 However, these don't really cover the general case of what a -1 means.
 Here's my brain dump:
 
 * It contains bugs
 * It is likely to confuse future developers/maintainers
 * It is likely to lead to bugs
 * It is inconsistent with other solutions to similar problems
 * It adds complexity which is not matched by its benefits
 * It isn't flexible enough for future work landing RSN

s/RSN//

There are times where the design is not flexible enough and we
do not want to accept regardless of when future work might land.
This is specifically the case with things that are adding APIs
or impacting the RPC protocol. For example proposals for new
virt driver methods that can't possibly work with other virt
drivers in the future and would involve incompatible RPC changes
to fix it.

 * It combines multiple changes in a single commit
 
 Any more? I'd be happy to update the above wiki page with any consensus.
 It would be useful if any generally accepted criteria were readily
 referenceable.

It is always worth improving our docs to give more guidance like
this.

 I also think it's worth explicitly documenting a few things we
 might/should mention in a review, but which aren't a reason that the
 project would be better off without it:
 
 * Stylistic issues which are not covered by HACKING
 
 By stylistic, I mean changes which have no functional impact on the code
 whatsoever. If a purely stylistic issue is sufficiently important to
 reject code which doesn't adhere to it, it is important enough to add to
 HACKING.

Broadly speaking I agree with the idea that style cleanups should
have corresponding hacking rules validated automatically. If some
one proposes a style cleanup which isn't validated I'll typically
request that they write a hacking check to do so. There might be
some cases where it isn't practical to validate the rule automatically
but which are nonetheless worth -1'ing for; these should be the
exception though. In general we shouldn't do style cleanups that we
cannot automatically validate in some way.
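
For reference, such a check is typically just a generator function in the
project's hacking module; a minimal sketch of the usual flake8/hacking
shape (the rule and its N999 code are invented for illustration):

```python
import re

def check_no_print(logical_line):
    # hacking/flake8 local checks receive the logical line and yield
    # (offset, message) pairs for each violation they find.
    if re.match(r"\s*print\(", logical_line):
        yield (0, "N999: print() found; use logging instead")
```

The check then gets wired up through the project's hacking configuration so
it runs as part of the normal pep8 job.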

 * I can think of a better way of doing this
 
 There may be a better solution, but there is already an existing
 solution. We should only be rejecting work that has already been done if
 it would detract from the project for one of the reasons above. We can
 always improve it further later if we find the developer time.
 
 * It isn't flexible enough for any conceivable future feature
 
  Let's avoid premature generalisation. We can always generalise as part of
 landing the future feature.

See my note above: it isn't always just about premature generalization
per se, but rather about seeing things that are clearly written from too
narrow a POV and will cause us pain down the line which could be easily
mitigated right away.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Matt Riedemann



On 8/21/2014 10:23 AM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:

(reply inline)

- Original Message -

From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, August 21, 2014 11:05:18 AM
Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt 
in unit tests

On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:

FYI, the context of this is that I would like to be able to test some
of the libvirt storage pool code against a live file system, as we
currently test the storage pool code.  To do this, we need at least to
be able to get a proper connection to a session daemon.  IMHO, since
these calls aren't expensive, so to speak, it should be fine to have
them run against a real libvirt.


No it really isn't OK to run against the real libvirt host system when
in the unit tests. Unit tests must *not* rely on external system state
in this way because it will lead to greater instability and unreliability
of our unit tests. If you want to test stuff against the real libvirt
storage pools then that becomes a functional / integration test suite
which is pretty much what tempest is targeting.


That's all well and good, but we *currently* manipulate the actual file
system manually in tests.  Should we then say that we should never manipulate
the actual file system either?  In that case, there are some tests which
need to be refactored.


Places where the tests manipulate the filesystem though should be doing
so in an isolated playpen directory, not in the live location where
a deployed nova runs, so that's not the same thing.


So If we require libvirt-python for tests and that requires
libvirt-bin, what's stopping us from just removing fakelibvirt since
it's kind of useless now anyway, right?


The thing about fakelibvirt is that it allows us to operate against
against a libvirt API without actually doing libvirt-y things like
launching VMs.  Now, libvirt does have a test:///default URI that
IIRC has similar functionality, so we could start to phase out fake
libvirt in favor of that.  However, there are probably still some
spots where we'll want to use fakelibvirt.


I'm actually increasingly of the opinion that we should not in fact
be trying to use the real libvirt library in the unit tests at all
as it is not really adding any value. We typically mock out all the
actual API calls we exercise so despite using libvirt-python we
are not in fact exercising its code or even validating that we're
passing the correct numbers of parameters to API calls. Pretty much
all we are really relying on is the existence of the various global
constants that are defined, and that has been nothing but trouble
because the constants may or may not be defined depending on the
version.


Isn't that what 'test:///default' is supposed to be?  A version of libvirt
with libvirt not actually touching the rest of the system?


Yes, that is what it allows for, however, even if we used that URI we
still wouldn't be actually exercising any of the libvirt code in any
meaningful way because our unit tests mock out all the API calls that
get touched. So using libvirt-python + test:///default URI doesn't
really seem to buy us anything, but it does still mean that developers
need to have libvirt installed in order to run the unit tests. I'm
not convinced that is a beneficial tradeoff.


The downside of fakelibvirt is that it is a half-assed implementation
of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
of using pythons introspection abilities to query the libvirt-python
API and automatically generate a better 'fakelibvirt' that we can
guarantee to match the signatures of the real libvirt library. If we
had something like that which we had more confidence in, then we could
make the unit tests use that unconditionally. This would make our unit
tests more reliable since we would not be susceptible to different API
coverage in different libvirt module versions which have tripped us up
so many times


Regards,
Daniel



+1000 to removing the need to have libvirt installed to run unit tests, 
but that's what I'm seeing today unless I'm mistaken since we require 
libvirt-python which requires libvirt as already pointed out.


If you revert the change to require libvirt-python and try to run the 
unit tests, it fails, see bug 1357437 [1].


[1] https://bugs.launchpad.net/nova/+bug/1357437

--

Thanks,

Matt Riedemann




Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Dolph Mathews
On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
  I would prefer that you didn't merge this.
 
  i.e. The project is better off without it.

 A bit off topic, but I've never liked this message that gets added
 as I think it sounds overly negative. It would be better written
 as

   This patch needs further work before it can be merged


++ This patch needs further work before it can be merged, and as a
reviewer, I am either too lazy or just unwilling to check out your patch and
fix those issues myself.

http://dolphm.com/reviewing-code


 as that gives a positive expectation that the work is still
 wanted by the project in general


  This seems to mean different things to different people. There's a list
  here which contains some criteria for new commits:
 
  https://wiki.openstack.org/wiki/ReviewChecklist.
 
  There's also a treatise on git commit messages and the structure of a
  commit here:
 
  https://wiki.openstack.org/wiki/GitCommitMessages
 
  However, these don't really cover the general case of what a -1 means.
  Here's my brain dump:
 
  * It contains bugs
  * It is likely to confuse future developers/maintainers
  * It is likely to lead to bugs
  * It is inconsistent with other solutions to similar problems
  * It adds complexity which is not matched by its benefits
  * It isn't flexible enough for future work landing RSN

 s/RSN//

 There are times where the design is not flexible enough and we
 do not want to accept it regardless of when future work might land.
 This is specifically the case with things that are adding APIs
 or impacting the RPC protocol. For example proposals for new
 virt driver methods that can't possibly work with other virt
 drivers in the future and would involve incompatible RPC changes
 to fix it.

  * It combines multiple changes in a single commit
 
  Any more? I'd be happy to update the above wiki page with any consensus.
  It would be useful if any generally accepted criteria were readily
  referenceable.

 It is always worth improving our docs to give more guidance like
 this.

  I also think it's worth explicitly documenting a few things we
  might/should mention in a review, but which aren't a reason that the
  project would be better off without it:
 
  * Stylistic issues which are not covered by HACKING
 
  By stylistic, I mean changes which have no functional impact on the code
  whatsoever. If a purely stylistic issue is sufficiently important to
  reject code which doesn't adhere to it, it is important enough to add to
  HACKING.

 Broadly speaking I agree with the idea that style cleanups should
 have corresponding hacking rules validated automatically. If some
 one proposes a style cleanup which isn't validated I'll typically
 request that they write a hacking check to do so. There might be
 some cases where it isn't practical to validate the rule automatically
 but which are nonetheless worth -1'ing for; these should be the
 exception though. In general we shouldn't do style cleanups that we
 cannot automatically validate in some way.

  * I can think of a better way of doing this
 
  There may be a better solution, but there is already an existing
  solution. We should only be rejecting work that has already been done if
  it would detract from the project for one of the reasons above. We can
  always improve it further later if we find the developer time.
 
  * It isn't flexible enough for any conceivable future feature
 
  Let's avoid premature generalisation. We can always generalise as part of
  landing the future feature.

 See my note above: it isn't always just about premature generalization
 per se, but rather about seeing things that are clearly written from too
 narrow a POV and will cause us pain down the line which could be easily
 mitigated right away.

 Regards,
 Daniel
 --
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Clark Boylan


On Thu, Aug 21, 2014, at 09:25 AM, Matt Riedemann wrote:
 
 
 On 8/21/2014 10:23 AM, Daniel P. Berrange wrote:
  On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:
  (reply inline)
 
  - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Thursday, August 21, 2014 11:05:18 AM
  Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
  libvirt in unit tests
 
  On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
  FYI, the context of this is that I would like to be able to test some
  of the libvirt storage pool code against a live file system, as we
  currently test the storage pool code.  To do this, we need at least to
  be able to get a proper connection to a session daemon.  IMHO, since
  these calls aren't expensive, so to speak, it should be fine to have
  them run against a real libvirt.
 
  No it really isn't OK to run against the real libvirt host system when
  in the unit tests. Unit tests must *not* rely on external system state
  in this way because it will lead to greater instability and unreliability
  of our unit tests. If you want to test stuff against the real libvirt
  storage pools then that becomes a functional / integration test suite
  which is pretty much what tempest is targetting.
 
   That's all well and good, but we *currently* manipulate the actual file
  system manually in tests.  Should we then say that we should never 
  manipulate
  the actual file system either?  In that case, there are some tests which
  need to be refactored.
 
  Places where the tests manipulate the filesystem though should be doing
  so in an isolated playpen directory, not in the live location where
  a deployed nova runs, so that's not the same thing.
 
   So if we require libvirt-python for tests and that requires
  libvirt-bin, what's stopping us from just removing fakelibvirt since
  it's kind of useless now anyway, right?
 
   The thing about fakelibvirt is that it allows us to operate
   against a libvirt API without actually doing libvirt-y things like
  launching VMs.  Now, libvirt does have a test:///default URI that
  IIRC has similar functionality, so we could start to phase out fake
  libvirt in favor of that.  However, there are probably still some
  spots where we'll want to use fakelibvirt.
 
  I'm actually increasingly of the opinion that we should not in fact
  be trying to use the real libvirt library in the unit tests at all
   as it is not really adding any value. We typically mock out all the
  actual API calls we exercise so despite using libvirt-python we
  are not in fact exercising its code or even validating that we're
  passing the correct numbers of parameters to API calls. Pretty much
   all we are really relying on is the existence of the various global
  constants that are defined, and that has been nothing but trouble
  because the constants may or may not be defined depending on the
  version.
 
   Isn't that what 'test:///default' is supposed to be?  A version of libvirt
   that doesn't actually touch the rest of the system?
 
  Yes, that is what it allows for, however, even if we used that URI we
  still wouldn't be actually exercising any of the libvirt code in any
  meaningful way because our unit tests mock out all the API calls that
  get touched. So using libvirt-python + test:///default URI doesn't
  really seem to buy us anything, but it does still mean that developers
   need to have libvirt installed in order to run the unit tests. I'm
  not convinced that is a beneficial tradeoff.
 
  The downside of fakelibvirt is that it is a half-assed implementation
  of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
  of using pythons introspection abilities to query the libvirt-python
  API and automatically generate a better 'fakelibvirt' that we can
  guarantee to match the signatures of the real libvirt library. If we
  had something like that which we had more confidence in, then we could
  make the unit tests use that unconditionally. This would make our unit
   tests more reliable since we would not be susceptible to different API
  coverage in different libvirt module versions which have tripped us up
   so many times.
 
  Regards,
  Daniel
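
Daniel's auto-generated fakelibvirt idea is straightforward to sketch with the stdlib `inspect` module: walk the real module's public names, copy constants verbatim, and emit stubs that enforce the real call signatures. The sketch below runs against a stand-in module (libvirt-python may not be installed where unit tests run); `make_fake_module` and the demo names are illustrative, not nova code.

```python
import inspect
import types

def make_fake_module(real_mod):
    """Build a fake module whose callables mirror real_mod's signatures.

    Each stub validates arity against the real signature and then raises,
    so a test that forgets to mock a call fails loudly instead of
    silently touching the real library.
    """
    fake = types.ModuleType('fake_' + real_mod.__name__)
    for name, obj in vars(real_mod).items():
        if name.startswith('_'):
            continue
        if callable(obj) and not inspect.isclass(obj):
            try:
                sig = inspect.signature(obj)
            except (TypeError, ValueError):
                sig = None  # C-level functions may not expose a signature
            def stub(*args, __name=name, __sig=sig, **kwargs):
                if __sig is not None:
                    __sig.bind(*args, **kwargs)  # enforce the real arity
                raise NotImplementedError('%s is not mocked' % __name)
            setattr(fake, name, stub)
        elif isinstance(obj, (int, str)):
            # Constants (e.g. VIR_* flags) are copied verbatim, so the fake
            # never lacks a constant just because of the installed version.
            setattr(fake, name, obj)
    return fake

# Demonstration against a stand-in "library" module.
demo = types.ModuleType('demo')
demo.VIR_DOMAIN_RUNNING = 1
def _open_auth(uri, auth, flags):
    pass
demo.openAuth = _open_auth

fake = make_fake_module(demo)
print(fake.VIR_DOMAIN_RUNNING)       # constant copied through: 1
try:
    fake.openAuth('qemu:///system')  # wrong arity for the real signature
except TypeError:
    print('signature enforced')
```

Classes would need similar recursive treatment; the point is only that the signatures come from introspection rather than being maintained by hand.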
 
 
 +1000 to removing the need to have libvirt installed to run unit tests, 
 but that's what I'm seeing today unless I'm mistaken since we require 
 libvirt-python which requires libvirt as already pointed out.
 
 If you revert the change to require libvirt-python and try to run the 
 unit tests, it fails, see bug 1357437 [1].
 
 [1] https://bugs.launchpad.net/nova/+bug/1357437
 
Reverting the change to require libvirt-python is insufficient. That
revert will flip back to using system packages and include the libvirt
python lib from your operating system. Libvirt will still be required
just via a different avenue (nova does try to fall 

Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Adam Young

On 08/21/2014 12:21 PM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:

I would prefer that you didn't merge this.

i.e. The project is better off without it.

A bit off topic, but I've never liked this message that gets added
as I think it sounds overly negative. It would be better written
as

   This patch needs further work before it can be merged

Excellent.

It also bothers me that a -1 is dropped upon a new submission of the 
patch.  It seems to me that the review should instead indicate on a 
given line level comment whether it is grounds for -1.  If it is then 
either that same reviewer or another can decide whether a given fix 
addresses the reviewer's request.



As a core reviewer, I have the power to -2 something.  That is 
considered a do not follow this approach message today.  I rarely 
exercise it, even where I consider it essential.  One reason is 
that a review with a -2 on it won't get additional reviews, and that is 
not my intention.







as that gives a positive expectation that the work is still
wanted by the project in general



This seems to mean different things to different people. There's a list
here which contains some criteria for new commits:

https://wiki.openstack.org/wiki/ReviewChecklist.

There's also a treatise on git commit messages and the structure of a
commit here:

https://wiki.openstack.org/wiki/GitCommitMessages

However, these don't really cover the general case of what a -1 means.
Here's my brain dump:

* It contains bugs
* It is likely to confuse future developers/maintainers
* It is likely to lead to bugs
* It is inconsistent with other solutions to similar problems
* It adds complexity which is not matched by its benefits
* It isn't flexible enough for future work landing RSN

s/RSN//

There are times where the design is not flexible enough and we
do not want to accept regardless of when future work might land.
This is specifically the case with things that are adding APIs
or impacting the RPC protocol. For example proposals for new
virt driver methods that can't possibly work with other virt
drivers in the future and would involve incompatible RPC changes
to fix it.


* It combines multiple changes in a single commit

Any more? I'd be happy to update the above wiki page with any consensus.
It would be useful if any generally accepted criteria were readily
referenceable.

It is always worth improving our docs to give more guidance like
this.


I also think it's worth explicitly documenting a few things we
might/should mention in a review, but which aren't a reason that the
project would be better off without it:

* Stylistic issues which are not covered by HACKING

By stylistic, I mean changes which have no functional impact on the code
whatsoever. If a purely stylistic issue is sufficiently important to
reject code which doesn't adhere to it, it is important enough to add to
HACKING.

Broadly speaking I agree with the idea that style cleanups should
have corresponding hacking rules validated automatically. If some
one proposes a style cleanup which isn't validated I'll typically
request that they write a hacking check to do so. There might be
some cases where it isn't practical to validate the rule automatically
which are nonetheless worthwhile -1'ing for - these should be the
exception though. In general we shouldn't do style cleanups that we
can not automatically validate in some way.
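
For reference, a hacking check of the kind described here is typically just a generator keyed on the logical line. The sketch below follows that shape, but the check code N999 and the rule itself are made up for illustration (real checks are registered with the hacking/flake8 machinery rather than called directly):

```python
import re

# Hypothetical rule: flag "== None" / "!= None" comparisons.
EQ_NONE_RE = re.compile(r'(==|!=)\s*None\b')

def check_none_comparison(logical_line):
    """N999: use 'is None' / 'is not None' instead of ==/!= None."""
    match = EQ_NONE_RE.search(logical_line)
    if match:
        yield (match.start(),
               "N999: comparison to None should use 'is' or 'is not'")

# A flake8-style driver calls the check once per logical line:
for offset, message in check_none_comparison('if foo == None:'):
    print(offset, message)
```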


* I can think of a better way of doing this

There may be a better solution, but there is already an existing
solution. We should only be rejecting work that has already been done if
it would detract from the project for one of the reasons above. We can
always improve it further later if we find the developer time.

* It isn't flexible enough for any conceivable future feature

Lets avoid premature generalisation. We can always generalise as part of
landing the future feature.

See my note about - it isn't always just about premature generalization
per se, but rather seeing things that are just clearly written from too
narrow a POV and will cause us pain down the line which could be easily
mitigated right away.

Regards,
Daniel





Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Adam Young

On 08/21/2014 12:34 PM, Dolph Mathews wrote:


On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange 
berra...@redhat.com mailto:berra...@redhat.com wrote:


On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.

 i.e. The project is better off without it.

A bit off topic, but I've never liked this message that gets added
as I think it sounds overly negative. It would be better written
as

  This patch needs further work before it can be merged


++ This patch needs further work before it can be merged, and as a 
reviewer, I am either too lazy or just unwilling to checkout your 
patch and fix those issues myself.


Heh...well, there are a couple other aspects:

1.  I am unsure if my understanding is correct.  I'd like to have some 
validation, and, if I am wrong, I'll withdraw the objections.


2.  If I make the change, I can no longer +2/+A the review.  If you make 
the change, I can approve it.





http://dolphm.com/reviewing-code


as that gives a positive expectation that the work is still
wanted by the project in general


 This seems to mean different things to different people. There's
a list
 here which contains some criteria for new commits:

 https://wiki.openstack.org/wiki/ReviewChecklist.

 There's also a treatise on git commit messages and the structure
of a
 commit here:

 https://wiki.openstack.org/wiki/GitCommitMessages

 However, these don't really cover the general case of what a -1
means.
 Here's my brain dump:

 * It contains bugs
 * It is likely to confuse future developers/maintainers
 * It is likely to lead to bugs
 * It is inconsistent with other solutions to similar problems
 * It adds complexity which is not matched by its benefits
 * It isn't flexible enough for future work landing RSN

s/RSN//

There are times where the design is not flexible enough and we
do not want to accept regardless of when future work might land.
This is specifically the case with things that are adding APIs
or impacting the RPC protocol. For example proposals for new
virt driver methods that can't possibly work with other virt
drivers in the future and would involve incompatible RPC changes
to fix it.

 * It combines multiple changes in a single commit

 Any more? I'd be happy to update the above wiki page with any
consensus.
 It would be useful if any generally accepted criteria were readily
 referenceable.

It is always worth improving our docs to give more guidance like
this.

 I also think it's worth explicitly documenting a few things we
 might/should mention in a review, but which aren't a reason that the
 project would be better off without it:

 * Stylistic issues which are not covered by HACKING

 By stylistic, I mean changes which have no functional impact on
the code
 whatsoever. If a purely stylistic issue is sufficiently important to
 reject code which doesn't adhere to it, it is important enough
to add to
 HACKING.

Broadly speaking I agree with the idea that style cleanups should
have corresponding hacking rules validated automatically. If some
one proposes a style cleanup which isn't validated I'll typically
request that they write a hacking check to do so. There might be
some cases where it isn't practical to validate the rule automatically
which are nonetheless worthwhile -1'ing for - these should be the
exception though. In general we shouldn't do style cleanups that we
can not automatically validate in some way.

 * I can think of a better way of doing this

 There may be a better solution, but there is already an existing
 solution. We should only be rejecting work that has already been
done if
 it would detract from the project for one of the reasons above.
We can
 always improve it further later if we find the developer time.

 * It isn't flexible enough for any conceivable future feature

 Lets avoid premature generalisation. We can always generalise as
part of
 landing the future feature.

See my note about - it isn't always just about premature
generalization
per se, but rather seeing things that are just clearly written from too
narrow a POV and will cause us pain down the line which could be
easily
mitigated right away.

Regards,
Daniel
--
|: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org   -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-21 Thread Sergii Golovatiuk
Hi,

I think 15 minutes is not too bad. Additionally, it will reduce download
time and the price of bandwidth. It's worth leaving lrzip in place for
customers, as an upgrade is a one-time operation, so users can wait a while.
For development it would be nice to have the fastest solution, to boost
development speed.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser


On Thu, Aug 21, 2014 at 5:40 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

 Hi,

 Hmm.. I think ~15 minutes isn't long enough to justify skipping this
 approach in production.
 What about using lrzip only for end-users, but keep regular tarball
 for CI and internal usage?

 Thanks,
 Igor

 On Thu, Aug 21, 2014 at 5:22 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  I see no other quick solutions in 5.1. We can find the difference in
   packages between 5.0 and 5.0.2, put only the updated packages in the
   tarball, and get the missing packages from existing repos on the master node.
 
 
  On Thu, Aug 21, 2014 at 5:55 PM, Mike Scherbakov 
 mscherba...@mirantis.com
  wrote:
 
  What are other possible solutions to this issue?
 
 
  On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov dpyz...@mirantis.com
  wrote:
 
  Fuelers,
 
   Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size
 by
  2Gb with lrzip tool (ticket, change in build system, change in docs),
 but it
  will dramatically increase unpacking time. I've run unpack on my
 virtualbox
  environment and got this result:
  [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
  Decompressing...
  100%7637.48 /   7637.48 MB
  Average DeCompression Speed:  8.014MB/s
  [OK] - 8008478720 bytes
  Total time: 00:15:52.93
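
As a sanity check, the reported total is consistent with the quoted size and speed (a rough calculation ignoring I/O overhead):

```python
# 7637.48 MB at 8.014 MB/s is just under 16 minutes of pure decompression,
# matching the 00:15:52.93 total reported above.
size_mb = 7637.48
speed_mb_s = 8.014
seconds = size_mb / speed_mb_s
print('%d min %.0f s' % (seconds // 60, seconds % 60))
```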
 
  My suggestion is to reject this change, release 5.1 with big tarball
 and
  find another solution in next release. Any objections?
 
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
 
 
 
 




Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 21/08/14 18:34, Dolph Mathews wrote:
 
 On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange 
 berra...@redhat.com mailto:berra...@redhat.com wrote:
 
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.
 
 i.e. The project is better off without it.
 
 A bit off topic, but I've never liked this message that gets added 
 as I think it sounds overly negative. It would be better written as
 
 This patch needs further work before it can be merged
 
 
 ++ This patch needs further work before it can be merged, and as
 a reviewer, I am either too lazy or just unwilling to checkout your
 patch and fix those issues myself.

Remember: in lots of cases, modifying patches from other people can be
considered offensive (implying they can't fix it in a reasonable time) or
inconvenient (the author may have local changes not yet sent for review,
and now he will need to check out your version and meld them; and the
author may even forget it was modified by someone, so your changes may
be dropped during his next upload).

 
 http://dolphm.com/reviewing-code
 
 
 as that gives a positive expectation that the work is still wanted
 by the project in general
 
 
 This seems to mean different things to different people. There's
 a
 list
 here which contains some criteria for new commits:
 
 https://wiki.openstack.org/wiki/ReviewChecklist.
 
 There's also a treatise on git commit messages and the structure
 of a commit here:
 
 https://wiki.openstack.org/wiki/GitCommitMessages
 
 However, these don't really cover the general case of what a -1
 means. Here's my brain dump:
 
 * It contains bugs
 * It is likely to confuse future developers/maintainers
 * It is likely to lead to bugs
 * It is inconsistent with other solutions to similar problems
 * It adds complexity which is not matched by its benefits
 * It isn't flexible enough for future work landing RSN
 
 s/RSN//
 
 There are times where the design is not flexible enough and we do
 not want to accept regardless of when future work might land. This
 is specifically the case with things that are adding APIs or
 impacting the RPC protocol. For example proposals for new virt
 driver methods that can't possibly work with other virt drivers in
 the future and would involve incompatible RPC changes to fix it.
 
 * It combines multiple changes in a single commit
 
 Any more? I'd be happy to update the above wiki page with any
 consensus.
 It would be useful if any generally accepted criteria were
 readily referenceable.
 
 It is always worth improving our docs to give more guidance like 
 this.
 
 I also think it's worth explicitly documenting a few things we 
 might/should mention in a review, but which aren't a reason that
 the project would be better off without it:
 
 * Stylistic issues which are not covered by HACKING
 
 By stylistic, I mean changes which have no functional impact on
 the code
 whatsoever. If a purely stylistic issue is sufficiently important
 to reject code which doesn't adhere to it, it is important enough
 to
 add to
 HACKING.
 
 Broadly speaking I agree with the idea that style cleanups should 
 have corresponding hacking rules validated automatically. If some 
 one proposes a style cleanup which isn't validated I'll typically 
 request that they write a hacking check to do so. There might be 
 some cases where it isn't practical to validate the rule
 automatically which are nonetheless worthwhile -1'ing for -
 these should be the exception though. In general we shouldn't do
 style cleanups that we can not automatically validate in some way.
 
 * I can think of a better way of doing this
 
 There may be a better solution, but there is already an existing 
 solution. We should only be rejecting work that has already been
 done if
 it would detract from the project for one of the reasons above.
 We can always improve it further later if we find the developer
 time.
 
 * It isn't flexible enough for any conceivable future feature
 
 Lets avoid premature generalisation. We can always generalise as
 part of
 landing the future feature.
 
 See my note about - it isn't always just about premature
 generalization per se, but rather seeing things we are just clearly
 written from too narrow a POV and will cause us pain down the line
 which could be easily mitigated right away.
 
 Regards, Daniel
 --
 |: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org   -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
 
 
 
 
 

Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 11:34:48AM -0500, Dolph Mathews wrote:
 On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
 
  On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
   I would prefer that you didn't merge this.
  
   i.e. The project is better off without it.
 
  A bit off topic, but I've never liked this message that gets added
   as I think it sounds overly negative. It would be better written
  as
 
This patch needs further work before it can be merged
 
 
 ++ This patch needs further work before it can be merged, and as a
 reviewer, I am either too lazy or just unwilling to checkout your patch and
 fix those issues myself.

I find the suggestion that reviewers are either too lazy or unwilling
to fix it themselves rather distasteful to be honest.

It is certainly valid for a code reviewer to fix an issue themselves and
re-post the patch, but that is not something to encourage as a general day
to day practice, for a number of reasons:

 - When there are multiple people reviewing it would quickly become a
   mess of conflicts if each and every reviewer took it upon themselves
   to rework and repost the patch.

 - The original submitter should generally always have the chance to
   rebut any feedback from reviewers, since reviewers are not infallible,
   nor always aware of the bigger picture or as familiar with the code
   being changed. 

 - When a patch is a small part of a larger series, it can be very
   disruptive if someone else takes it, changes it and resubmits it,
   as that invalidates all following patches in a series in gerrit.

 - It does not scale to have reviewers take on much of the burden of
   actually writing the fixes, running the tests and resubmitting.

 - Having the original author deal with the review feedback actually
   helps that contributor learn new things, so that they will be able
   to do a better job for future patches they contribute

I'd only recommend fixing and resubmitting someone else's patch if it is
a really trivial thing that needed tweaking before approval for merge,
or if they are known to be away for a prolonged time and the patch was
blocking other important pending work.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Sean Dague
On 08/21/2014 11:02 AM, Armando M. wrote:
 Hi folks,
 
 According to [1], we have ways to introduce external references to
 commit messages.
 
 These are useful to mark certain patches and their relevance in the
 context of documentation, upgrades, etc.
 
 I was wondering if it would be useful considering the addition of
 another tag:
 
 GateFailureFix
 
 The objective of this tag, mainly for consumption by the review team,
 would be to make sure that some patches get more attention than others,
 as they affect the velocity of how certain critical issues are addressed
 (and gate failures affect everyone).
 
 As for machine consumption, I know that some projects use the
 'gate-failure' tag to categorize LP bugs that affect the gate. The use
 of a GateFailureFix tag in the commit message could make the tagging
 automatic, so that we can keep a log of what all the gate failures are
 over time.
 
 Not sure if this was proposed before, and I welcome any input on the matter.

A concern with this approach is it's pretty arbitrary, and not always
clear which bugs are being addressed and how severe they are.

An idea that came up in the Infra/QA meetup was to build a custom review
dashboard based on the bug list in elastic recheck. That would also
encourage people to categorize these bugs through that system, and I
think provide a virtuous circle around identifying the issues at hand.

I think Joe Gordon had a first pass at this, but I'd be more interested
in doing it this way because it means the patch author fixing a bug just
needs to know they are fixing the bug. Whether or not it's currently a
gate issue would be decided not by the commit message (static) but by
our system that understands what are the gate issues *right now* (dynamic).

-Sean

-- 
Sean Dague
http://dague.net
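
For what it's worth, the GateFailureFix tag proposed above would be trivial for tooling to consume. A sketch of a footer-tag parser follows; the bare-tag format and the example message are assumptions, loosely modeled on the existing Closes-Bug convention:

```python
import re

# Matches footer lines such as "Closes-Bug: #1234567" or a bare
# "GateFailureFix" on a line of its own (format assumed, not standardized).
TAG_RE = re.compile(r'^(?P<tag>[A-Za-z-]+)(?::\s*(?P<value>.*))?$', re.MULTILINE)

def commit_footer_tags(message):
    """Return a {tag: value} dict for footer-style lines in a commit message."""
    # Footers conventionally live in the last paragraph of the message.
    last_para = message.strip().split('\n\n')[-1]
    return {m.group('tag'): m.group('value') or ''
            for m in TAG_RE.finditer(last_para)}

msg = """Fix races in security group RPC handling

Serialize member updates so the agent never sees a stale set.

Closes-Bug: #1234567
GateFailureFix
"""
print(commit_footer_tags(msg))  # {'Closes-Bug': '#1234567', 'GateFailureFix': ''}
```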





Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Vishvananda Ishaya

On Aug 21, 2014, at 9:42 AM, Adam Young ayo...@redhat.com wrote:

 On 08/21/2014 12:34 PM, Dolph Mathews wrote:
 
 On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange berra...@redhat.com 
 wrote:
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
  I would prefer that you didn't merge this.
 
  i.e. The project is better off without it.
 
 A bit off topic, but I've never liked this message that gets added
  as I think it sounds overly negative. It would be better written
 as
 
   This patch needs further work before it can be merged
 
 ++ This patch needs further work before it can be merged, and as a 
 reviewer, I am either too lazy or just unwilling to checkout your patch and 
 fix those issues myself.
 
 Heh...well, there are a couple other aspects:
 
 1.  I am unsure if my understanding is correct.  I'd like to have some 
 validation, and, if I am wrong, I'll withdraw the objections.
 
 2.  If I make the change, I can no longer +2/+A the review.  If you make the 
 change, I can approve it.

I don’t think this is correct. I’m totally ok with a core reviewer making a 
minor change to a patch AND +2ing it. This is especially true of minor things 
like spelling issues or code cleanliness. The only real functional difference 
between:

1) commenting “please change if foo==None: to if foo is None:”
2) waiting for the author to do exactly what you suggested
3) +2 the result

and:

1) you change if foo==None: to if foo is None: for the author
2) +2 the result

is that the second set is WAY faster. Of course this only applies to minor changes. 
If you are refactoring more significantly it is nice to get the original 
poster’s feedback so the first option might be better.

Vish
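
Incidentally, the `foo == None` versus `foo is None` example above is not purely cosmetic: `==` dispatches to a class's `__eq__`, which can claim equality with anything, while `is` compares identity. A minimal illustration:

```python
class AlwaysEqual:
    """Pathological __eq__ that claims equality with everything."""
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
print(obj == None)  # True  -- __eq__ lies (and flake8 flags '== None' anyway)
print(obj is None)  # False -- the identity check cannot be fooled
```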







Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Jay Pipes

On 08/19/2014 11:28 PM, Robert Collins wrote:

On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
...


I'd like to see more unification of implementations in TripleO - but I
still believe our basic principle of using OpenStack technologies that
already exist in preference to third party ones is still sound, and
offers substantial dogfood and virtuous circle benefits.



No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
However, I would move that the Deployment Program should welcome the many
projects currently in the stackforge/ code namespace that do deployment of
OpenStack using traditional configuration management tools like Chef,
Puppet, and Ansible. It cannot be argued that these configuration management
systems are the de-facto way that OpenStack is deployed outside of HP, and
they belong in the Deployment Program, IMO.


I think you mean it 'can be argued'... ;).


No, I definitely mean cannot be argued :) HP is the only company I 
know of that is deploying OpenStack using Triple-O. The vast majority of 
deployers I know of are deploying OpenStack using configuration 
management platforms and various systems or glue code for baremetal 
provisioning.


Note that I am not saying that Triple-O is bad in any way! I'm only 
saying that it does not represent the way that the majority of 
real-world deployments are done.


 And I'd be happy if folk in

those communities want to join in the deployment program and have code
repositories in openstack/. To date, none have asked.


My point in this thread has been and continues to be that by having the 
TC bless a certain project as The OpenStack Way of X, that we 
implicitly are saying to other valid alternatives Sorry, no need to 
apply here.



As a TC member, I would welcome someone from the Chef community proposing
the Chef cookbooks for inclusion in the Deployment program, to live under
the openstack/ code namespace. Same for the Puppet modules.


While you may personally welcome the Chef community to propose joining 
the deployment Program and living under the openstack/ code namespace, 
I'm just saying that the impression our governance model and policies 
create is one of exclusion, not inclusion. Hope that clarifies better 
what I've been getting at.


All the best,
-jay



Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 12:42:43PM -0400, Adam Young wrote:
 On 08/21/2014 12:34 PM, Dolph Mathews wrote:
 
 On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange berra...@redhat.com
 mailto:berra...@redhat.com wrote:
 
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
  I would prefer that you didn't merge this.
 
  i.e. The project is better off without it.
 
 A bit off topic, but I've never liked this message that gets added
  as I think it sounds overly negative. It would be better written
 as
 
   This patch needs further work before it can be merged
 
 
 ++ This patch needs further work before it can be merged, and as a
 reviewer, I am either too lazy or just unwilling to checkout your patch
 and fix those issues myself.
 
 Heh...well, there are a couple other aspects:
 
 1.  I am unsure if my understanding is correct.  I'd like to have some
 validation, and, if I am wrong, I'll withdraw the objections.
 
 2.  If I make the change, I can no longer +2/+A the review.  If you make the
 change, I can approve it.

If it is something totally minor like a typo fix, or docs grammar
fix or whitespace cleanup it is reasonable to +2/+A something that
you took over from the original author, but that would be a pretty
rare scenario in general. Certainly a change which had any kind of
a functional impact, I'd not be happy with a person +2/+A'ing their
re-post.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Lance Bragstad
Comments inline below.


Best Regards,
Lance


On Thu, Aug 21, 2014 at 11:40 AM, Adam Young ayo...@redhat.com wrote:

 On 08/21/2014 12:21 PM, Daniel P. Berrange wrote:

 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:

 I would prefer that you didn't merge this.

 i.e. The project is better off without it.

 A bit off topic, but I've never liked this message that gets added,
 as I think it sounds overly negative. It would be better written
 as

This patch needs further work before it can be merged

 Excellent.

 It also bothers me that a -1 is dropped upon a new submission of the
 patch.  It seems to me that the review should instead indicate on a given
 line level comment whether it is grounds for -1.  If it is then either that
 same reviewer or another can decide whether a given fix addresses the
 reviewer's request.


 As a core reviewer, I have the power to -2 something.  That is considered
 a "do not follow this approach" message today.  I rarely exercise it, even
 for changes that I consider essential.  One reason is that a review with a
 -2 on it won't get additional reviews, and that is not my intention.



In this case, one way we can try to avoid the 'negativity' of a -2 is to
suggest that the committer mark the patch as Workflow -1 (WIP), and encourage
them to air out their explanation in project meeting open discussion and
IRC. To me, this changes the idea from "this patch is going in a separate
direction than the project" to "this patch/idea hasn't been shot down, but
the committer needs a little help fleshing it out."





 as that gives a positive expectation that the work is still
 wanted by the project in general


  This seems to mean different things to different people. There's a list
 here which contains some criteria for new commits:

 https://wiki.openstack.org/wiki/ReviewChecklist.

 There's also a treatise on git commit messages and the structure of a
 commit here:

 https://wiki.openstack.org/wiki/GitCommitMessages

 However, these don't really cover the general case of what a -1 means.
 Here's my brain dump:

 * It contains bugs
 * It is likely to confuse future developers/maintainers
 * It is likely to lead to bugs
 * It is inconsistent with other solutions to similar problems
 * It adds complexity which is not matched by its benefits
 * It isn't flexible enough for future work landing RSN

 s/RSN//

 There are times where the design is not flexible enough and we
 do not want to accept regardless of when future work might land.
 This is specifically the case with things that are adding APIs
 or impacting the RPC protocol. For example proposals for new
 virt driver methods that can't possibly work with other virt
 drivers in the future and would involve incompatible RPC changes
 to fix it.

  * It combines multiple changes in a single commit

 Any more? I'd be happy to update the above wiki page with any consensus.
 It would be useful if any generally accepted criteria were readily
 referenceable.

 It is always worth improving our docs to give more guidance like
 this.

  I also think it's worth explicitly documenting a few things we
 might/should mention in a review, but which aren't a reason that the
 project would be better off without it:

 * Stylistic issues which are not covered by HACKING

 By stylistic, I mean changes which have no functional impact on the code
 whatsoever. If a purely stylistic issue is sufficiently important to
 reject code which doesn't adhere to it, it is important enough to add to
 HACKING.

 Broadly speaking I agree with the idea that style cleanups should
 have corresponding hacking rules validated automatically. If some
 one proposes a style cleanup which isn't validated I'll typically
 request that they write a hacking check to do so. There might be
 some cases where it isn't practical to validate the rule automatically
 which are nonetheless worthwhile -1'ing for - these should be the
 exception though. In general we shouldn't do style cleanups that we
 can not automatically validate in some way.
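
The hacking rules referred to above are small flake8-style plugins: a
function that receives each logical line and yields (offset, message)
tuples for violations. A minimal sketch - the rule and the "N999" error
code here are illustrative, not one of nova's actual checks:

```python
import re

# Illustrative rule: flag mutable default arguments in function
# definitions. The regex and the "N999" code are assumptions for
# this sketch, not nova's real hacking checks.
_mutable_default_re = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])")

def check_no_mutable_default_args(logical_line):
    """N999: method default arguments should not be mutable."""
    if _mutable_default_re.match(logical_line):
        yield (0, "N999: method default arguments should not be mutable")

# Exercise the check the way the flake8 plugin machinery would:
bad = list(check_no_mutable_default_args("def foo(bar=[]):"))
good = list(check_no_mutable_default_args("def foo(bar=None):"))
print(len(bad), len(good))  # 1 0
```

Projects register such functions as local flake8 checks so the rule runs
in the pep8 tox job alongside the built-in ones.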

  * I can think of a better way of doing this

 There may be a better solution, but there is already an existing
 solution. We should only be rejecting work that has already been done if
 it would detract from the project for one of the reasons above. We can
 always improve it further later if we find the developer time.

 * It isn't flexible enough for any conceivable future feature

 Lets avoid premature generalisation. We can always generalise as part of
 landing the future feature.

 See my note above - it isn't always just about premature generalization
 per se, but rather seeing things which are just clearly written from too
 narrow a POV and will cause us pain down the line which could be easily
 mitigated right away.

 Regards,
 Daniel



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 12:40:59PM -0400, Adam Young wrote:
 On 08/21/2014 12:21 PM, Daniel P. Berrange wrote:
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.
 
 i.e. The project is better off without it.
 A bit off topic, but I've never liked this message that gets added,
 as I think it sounds overly negative. It would be better written
 as
 
This patch needs further work before it can be merged
 Excellent.
 
 It also bothers me that a -1 is dropped upon a new submission of the patch.
 It seems to me that the review should instead indicate on a given line level
 comment whether it is grounds for -1.  If it is then either that same
 reviewer or another can decide whether a given fix addresses the reviewer's
 request.

I guess the idea of dropping the -1 is based on the understanding that
most contributors are working in good faith. ie that in the common case
they will actually address the review feedback before re-submitting a
new version. Sure some people violate this expectation, but in general
our contributor base does the right thing in this respect, which is good.

As a core reviewer I'd aim to look at the previous version to see if
there was a serious -1 there that was not addressed before approving
something anyway.

The problem with always preserving the -1 across re-posts, is that it
would discourage people from looking at new postings of the patch.
A gerrit query will show the -1's sitting there against the patch
with no indication that those -1s are probably stale and now
fixed by the new posting. So I really wouldn't want to see the -1's
preserved.

Regards,
Daniel



Re: [openstack-dev] [Octavia] IRC meetings

2014-08-21 Thread Doug Wiegley
Hi all,

We made the voice/IRC decision in the very format that favors voice.  So
in the interest of putting the discussion to bed, voice your opinions here
in a non-voice way:

https://review.openstack.org/#/c/116042/


Thanks,
Doug




On 8/18/14, 3:06 PM, Salvatore Orlando sorla...@nicira.com wrote:

This is one of the reasons for which I don't attend load balancing
meetings.
I find IRC much simpler and more effective - and it is also fairer to people
whom English is not their first language.
Also, perusing IRC logs is much easier than watching/listening to webex
recordings.
Moreover, you'd get minutes for free - and you can control the density
you want them to have during the meeting!




Re: [openstack-dev] oslo.db 0.4.0 released

2014-08-21 Thread Andreas Jaeger
On 08/20/2014 02:38 PM, Victor Sergeyev wrote:
 Hello Folks!
 
 Oslo team is pleased to announce the new Oslo database handling library
 release - oslo.db 0.4.0
 Thanks all for contributions to this release.


Unfortunately this breaks manila. If I downgrade to oslo.db 0.3.0, it works.

https://bugs.launchpad.net/oslo/+bug/1359888

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Zane Bitter

On 21/08/14 12:21, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:

I would prefer that you didn't merge this.

i.e. The project is better off without it.

A bit off topic, but I've never liked this message that gets added,
as I think it sounds overly negative. It would be better written
as

   This patch needs further work before it can be merged

as that gives a positive expectation that the work is still
wanted by the project in general


Well, there are two audiences for that message: the developer and the 
reviewer. I can't help thinking that if instead of trying to be positive 
it said what it really means - "Today, I have chosen to obstruct your 
work for the greater good of the project" - we might have a few less -1s 
for trivial issues.


Maybe, while we're at it, we could stop publishing taxonomies of reasons 
to -1 a patch as if code reviews were a competition to see who can find 
the most.


cheers,
Zane.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-08-21 09:21:06 -0700:
 On 21 August 2014 14:27, Jay Pipes jaypi...@gmail.com wrote:
 
  Specifically for Triple-O, by making the Deployment program == Triple-O, the
  TC has picked the disk-image-based deployment of an undercloud design as The
  OpenStack Way of Deployment. And as I've said previously in this thread, I
  believe that the deployment space is similarly unsettled, and that it would
  be more appropriate to let the Chef cookbooks and Puppet modules currently
  sitting in the stackforge/ code namespace live in the openstack/ code
  namespace.
 
 Totally agree with Jay here, I know people who gave up on trying to
 get any official project around deployment because they were told they
 had to do it under the TripleO umbrella
 

This was why the _program_ versus _project_ distinction was made. But
I think we ended up being 1:1 anyway.

Perhaps the deployment program's mission statement is too narrow, and
we should iterate on that. That others took their ball and went home,
instead of asking for a review of that ruling, is a bit disconcerting.

That probably strikes to the heart of the current crisis. If we were
being reasonable, alternatives to an official OpenStack program's mission
statement would be debated and considered thoughtfully. I know I made the
mistake early on of pushing the narrow _TripleO_ vision into what should
have been a much broader Deployment program. I'm not entirely sure why
that seemed o-k to me at the time, or why it was allowed to continue, but
I think it may be a good exercise to review those events and try to come
up with a few theories or even conclusions as to what we could do better.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Zane Bitter

On 20/08/14 15:37, Jay Pipes wrote:

For example, everyone agrees that Ceilometer has
room for improvement, but any implication that the Ceilometer is not
interested in or driving towards those improvements (because of NIH or
whatever) is, as has been pointed out, grossly unfair to the Ceilometer
team.


I certainly have not made such an implication about Ceilometer.


Sorry, yes, I didn't intend to imply any such... implication on your 
part. I was actually trying (evidently unsuccessfully) to avoid getting 
into finger-pointing at all, and simply make a general statement that if 
anyone were, hypothetically, to imply that the team are not committed to 
improvements, then that would be unfair. Hypothetically.


Is it Friday yet?

- ZB



Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Matt Riedemann



On 8/21/2014 11:37 AM, Clark Boylan wrote:



On Thu, Aug 21, 2014, at 09:25 AM, Matt Riedemann wrote:



On 8/21/2014 10:23 AM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:

(reply inline)

- Original Message -

From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, August 21, 2014 11:05:18 AM
Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt 
in unit tests

On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:

FYI, the context of this is that I would like to be able to test some
of the libvirt storage pool code against a live file system, as we
currently test the storage pool code.  To do this, we need at least to
be able to get a proper connection to a session daemon.  IMHO, since
these calls aren't expensive, so to speak, it should be fine to have
them run against a real libvirt.


No it really isn't OK to run against the real libvirt host system when
in the unit tests. Unit tests must *not* rely on external system state
in this way because it will lead to greater instability and unreliability
of our unit tests. If you want to test stuff against the real libvirt
storage pools then that becomes a functional / integration test suite
which is pretty much what tempest is targetting.


That's all well and good, but we *currently* manipulate the actual file
system manually in tests.  Should we then say that we should never manipulate
the actual file system either?  In that case, there are some tests which
need to be refactored.


Places where the tests manipulate the filesystem though should be doing
so in an isolated playpen directory, not in the live location where
a deployed nova runs, so that's not the same thing.


So if we require libvirt-python for tests and that requires
libvirt-bin, what's stopping us from just removing fakelibvirt since
it's kind of useless now anyway, right?


The thing about fakelibvirt is that it allows us to operate against
against a libvirt API without actually doing libvirt-y things like
launching VMs.  Now, libvirt does have a test:///default URI that
IIRC has similar functionality, so we could start to phase out fake
libvirt in favor of that.  However, there are probably still some
spots where we'll want to use fakelibvirt.


I'm actually increasingly of the opinion that we should not in fact
be trying to use the real libvirt library in the unit tests at all
as it is not really adding any value. We typically mock out all the
actual API calls we exercise so despite using libvirt-python we
are not in fact exercising its code or even validating that we're
passing the correct numbers of parameters to API calls. Pretty much
all we are really relying on is the existence of the various global
constants that are defined, and that has been nothing but trouble
because the constants may or may not be defined depending on the
version.


Isn't that what 'test:///default' is supposed to be?  A version of libvirt
with libvirt not actually touching the rest of the system?


Yes, that is what it allows for, however, even if we used that URI we
still wouldn't be actually exercising any of the libvirt code in any
meaningful way because our unit tests mock out all the API calls that
get touched. So using libvirt-python + test:///default URI doesn't
really seem to buy us anything, but it does still mean that developers
need to have libvirt installed in order to run the unit tests. I'm
not convinced that is a beneficial tradeoff.


The downside of fakelibvirt is that it is a half-assed implementation
of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
of using Python's introspection abilities to query the libvirt-python
API and automatically generate a better 'fakelibvirt' that we can
guarantee to match the signatures of the real libvirt library. If we
had something like that which we had more confidence in, then we could
make the unit tests use that unconditionally. This would make our unit
tests more reliable since we would not be susceptible to different API
coverage in different libvirt module versions which have tripped us up
so many times
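
The introspection idea can be sketched with the standard library alone.
The class below is an illustrative stand-in for a libvirt connection
object - the names are assumptions, not the real libvirt-python API -
and unittest.mock generates a fake whose methods enforce the stand-in's
call signatures, which is exactly the property a generated fakelibvirt
would want:

```python
# Hedged sketch: StandInConnection stands in for the real libvirt
# module; its method names and signatures are illustrative only.
from unittest import mock

class StandInConnection:
    """Illustrative stand-in for a libvirt connection object."""
    def listAllDomains(self, flags=0):
        raise NotImplementedError

    def getLibVersion(self):
        raise NotImplementedError

# create_autospec walks the class and builds a mock that rejects
# calls whose arguments don't match the spec'd signatures.
fake_conn = mock.create_autospec(StandInConnection, instance=True)
fake_conn.getLibVersion.return_value = 1002001

print(fake_conn.getLibVersion())  # 1002001

try:
    fake_conn.listAllDomains(1, 2, 3)  # too many arguments
except TypeError as exc:
    print("signature enforced:", type(exc).__name__)
```

Running the same generation against the real libvirt-python module would
give a fake that stays in sync with whatever library version is installed.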


Regards,
Daniel



+1000 to removing the need to have libvirt installed to run unit tests,
but that's what I'm seeing today unless I'm mistaken since we require
libvirt-python which requires libvirt as already pointed out.

If you revert the change to require libvirt-python and try to run the
unit tests, it fails, see bug 1357437 [1].

[1] https://bugs.launchpad.net/nova/+bug/1357437


Reverting the change to require libvirt-python is insufficient. That
revert will flip back to using system packages and include libvirt
python lib from your operating system. Libvirt will still be required
just via a different avenue (nova does try to fall back on its fake
libvirt but iirc that doesn't always work so well).

If you want to stop depending on 

Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Adam Young

On 08/21/2014 12:53 PM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 11:34:48AM -0500, Dolph Mathews wrote:

On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange berra...@redhat.com
wrote:


On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:

I would prefer that you didn't merge this.

i.e. The project is better off without it.

A bit off topic, but I've never liked this message that gets added,
as I think it sounds overly negative. It would be better written
as

   This patch needs further work before it can be merged


++ This patch needs further work before it can be merged, and as a
reviewer, I am either too lazy or just unwilling to check out your patch and
fix those issues myself.

I find the suggestion that reviewers are either too lazy or unwilling
to fix it themselves rather distasteful, to be honest.
That was from the Keystone PTL.  I think he was going for vaguely
self-deprecating, as opposed to dissing the reviewers.





It is certainly valid for a code reviewer to fix an issue themselves &
re-post the patch, but that is not something to encourage as a general
day-to-day practice for a number of reasons.

  - When there are multiple people reviewing it would quickly become a
mess of conflicts if each & every reviewer took it upon themselves
to rework & repost the patch.

  - The original submitter should generally always have the chance to
rebut any feedback from reviewers, since reviewers are not infallible,
nor always aware of the bigger picture or as familiar with the code
being changed.

  - When a patch is a small part of a larger series, it can be a very
disruptive if someone else takes it, changes it & resubmits it,
as that invalidates all following patches in a series in gerrit.

  - It does not scale to have reviewers take on much of the burden of
actually writing the fixes, running the tests & resubmitting.

  - Having the original author deal with the review feedback actually
helps that contributor learn new things, so that they will be able
to do a better job for future patches they contribute

I'd only recommend fixing & resubmitting someone else's patch if it is
a really trivial thing that needed tweaking before approval for merge,
or if they are known to be away for a prolonged time and the patch was
blocking other important pending work.

Regards,
Daniel





Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-21 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 01:12:16PM -0400, Zane Bitter wrote:
 On 21/08/14 12:21, Daniel P. Berrange wrote:
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.
 
 i.e. The project is better off without it.
 A bit off topic, but I've never liked this message that gets added,
 as I think it sounds overly negative. It would be better written
 as
 
This patch needs further work before it can be merged
 
 as that gives a positive expectation that the work is still
 wanted by the project in general
 
 Well, there are two audiences for that message: the developer and the
 reviewer. I can't help thinking that if instead of trying to be positive it
 said what it really means - "Today, I have chosen to obstruct your work for
 the greater good of the project" - we might have a few less -1s for trivial
 issues.
 
 Maybe, while we're at it, we could stop publishing taxonomies of reasons to
 -1 a patch as if code reviews were a competition to see who can find the
 most.

The reason for putting together examples of reasons to -1 a patch is not
to encourage people to -1 as much as possible. Rather the aim is to get
more consistency in how reviewers treat different types of problems. If
anything the intent of the mail is to actually help reviewers to allow
more trivial problems to get past review. Currently we give contributors
an inconsistent message with one reviewer saying some trivial thing should
be fixed while others will come along and say it is not worth fixing, or
can be fixed later on with a followup.

Regards,
Daniel



Re: [openstack-dev] [Octavia] IRC meetings

2014-08-21 Thread Stefano Maffulli
On 08/21/2014 10:14 AM, Doug Wiegley wrote:
 We made the voice/IRC decision in the very format that favors voice.  So
 in the interest of putting the discussion to bed, voice your opinions here
 in a non-voice way:

I was about to voice (ha!) my opinion there but I stopped because I
don't think we should have that conversation to start with. Audio calls
are ok for a lot of things but if we're talking about enabling an open
collaboration then there is one rule to follow:

 - provide a URL with indexed, searchable text or it didn't happen

My vote goes for the Octavia team to provide just that. If you have a
way to do that with webex, hangout, anything fancy, use them. If not,
consider reverting to the lowest common denominator.

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-21 Thread Matt Riedemann



On 8/21/2014 12:26 PM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 12:23:12PM -0500, Matt Riedemann wrote:



On 8/21/2014 11:37 AM, Clark Boylan wrote:



On Thu, Aug 21, 2014, at 09:25 AM, Matt Riedemann wrote:



On 8/21/2014 10:23 AM, Daniel P. Berrange wrote:

On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:

(reply inline)

- Original Message -

From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, August 21, 2014 11:05:18 AM
Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt 
in unit tests

On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:

FYI, the context of this is that I would like to be able to test some
of the libvirt storage pool code against a live file system, as we
currently test the storage pool code.  To do this, we need at least to
be able to get a proper connection to a session daemon.  IMHO, since
these calls aren't expensive, so to speak, it should be fine to have
them run against a real libvirt.


No it really isn't OK to run against the real libvirt host system when
in the unit tests. Unit tests must *not* rely on external system state
in this way because it will lead to greater instability and unreliability
of our unit tests. If you want to test stuff against the real libvirt
storage pools then that becomes a functional / integration test suite
which is pretty much what tempest is targetting.


That's all well and good, but we *currently* manipulate the actual file
system manually in tests.  Should we then say that we should never manipulate
the actual file system either?  In that case, there are some tests which
need to be refactored.


Places where the tests manipulate the filesystem though should be doing
so in an isolated playpen directory, not in the live location where
a deployed nova runs, so that's not the same thing.


So if we require libvirt-python for tests and that requires
libvirt-bin, what's stopping us from just removing fakelibvirt since
it's kind of useless now anyway, right?


The thing about fakelibvirt is that it allows us to operate against
against a libvirt API without actually doing libvirt-y things like
launching VMs.  Now, libvirt does have a test:///default URI that
IIRC has similar functionality, so we could start to phase out fake
libvirt in favor of that.  However, there are probably still some
spots where we'll want to use fakelibvirt.


I'm actually increasingly of the opinion that we should not in fact
be trying to use the real libvirt library in the unit tests at all
as it is not really adding any value. We typically mock out all the
actual API calls we exercise so despite using libvirt-python we
are not in fact exercising its code or even validating that we're
passing the correct numbers of parameters to API calls. Pretty much
all we are really relying on is the existence of the various global
constants that are defined, and that has been nothing but trouble
because the constants may or may not be defined depending on the
version.


Isn't that what 'test:///default' is supposed to be?  A version of libvirt
with libvirt not actually touching the rest of the system?


Yes, that is what it allows for, however, even if we used that URI we
still wouldn't be actually exercising any of the libvirt code in any
meaningful way because our unit tests mock out all the API calls that
get touched. So using libvirt-python + test:///default URI doesn't
really seem to buy us anything, but it does still mean that developers
need to have libvirt installed in order to run the unit tests. I'm
not convinced that is a beneficial tradeoff.


The downside of fakelibvirt is that it is a half-assed implementation
of libvirt that we evolve in an adhoc fashion. I'm exploring the idea
of using Python's introspection abilities to query the libvirt-python
API and automatically generate a better 'fakelibvirt' that we can
guarantee to match the signatures of the real libvirt library. If we
had something like that which we had more confidence in, then we could
make the unit tests use that unconditionally. This would make our unit
tests more reliable since we would not be susceptible to different API
coverage in different libvirt module versions which have tripped us up
so many times


Regards,
Daniel



+1000 to removing the need to have libvirt installed to run unit tests,
but that's what I'm seeing today unless I'm mistaken since we require
libvirt-python which requires libvirt as already pointed out.

If you revert the change to require libvirt-python and try to run the
unit tests, it fails, see bug 1357437 [1].

[1] https://bugs.launchpad.net/nova/+bug/1357437


Reverting the change to require libvirt-python is insufficient. That
revert will flip back to using system packages and include libvirt
python lib from your operating system. Libvirt will still be required
just via a different avenue (nova does try 
