Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-17 Thread Patrick Hoolboom
 My main concern with the {} delimiters in YAQL is that the curly brace
already has a defined use within YAML.  We most definitely will eventually
run into parsing errors with whatever delimiter we choose, but I don't feel
that it should conflict with the markup language it is directly embedded
in.  It gets quite difficult to identify YAQL expressions at a glance.
 <% %> may appear ugly to some, but I feel that it works as a clear
delimiter of both the beginning AND the end of the YAQL query. The options
that only escape the beginning look fine in small examples like this, but
the workflows that we have written or seen in the wild tend to have some
fairly large expressions.  If the opening and closing delimiters don't
match, it gets quite difficult to read.
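
To make the conflict concrete, here is a minimal sketch (assuming PyYAML,
with illustrative key names rather than real Mistral workflow schema) of
what the YAML parser does to each form:

    import yaml

    doc_braces = "publish:\n  res: {1 + $.var}\n"
    doc_erb = "publish:\n  res: <% 1 + $.var %>\n"

    # The curly-brace form is parsed as a YAML flow mapping, not a string:
    print(yaml.safe_load(doc_braces)["publish"]["res"])  # {'1 + $.var': None}

    # The <% %> form stays a plain scalar, so the expression reaches the
    # engine intact:
    print(yaml.safe_load(doc_erb)["publish"]["res"])     # '<% 1 + $.var %>'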


 *From: *Anastasia Kuznetsova akuznets...@mirantis.com
 *Subject: **Re: [openstack-dev] [Mistral] Changing expression
 delimiters in Mistral DSL*
 *Date: *February 17, 2015 at 8:28:27 AM PST
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Reply-To: *OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org

 As for me, I think that <% ... %> is not an elegant solution and looks
 massive because of the '%' sign. Also I agree with Renat that <% ... %>
 is reminiscent of HTML/Jinja2 syntax.

 I am not sure that similarity with something should be one of the main
 criteria, because we don't know who will use Mistral.

 I like:
 - {1 + $.var} Renat's example
 - a variant using some functions (item 2 in Dmitry's list):  { yaql:
 “1+1+$.my.var < 100” } or yaql: 'Hello' + $.name
 - my two cents, maybe we can use something like: result: - Hello +
 $.name -


 Regards,
 Anastasia Kuznetsova

 On Tue, Feb 17, 2015 at 1:17 PM, Nikolay Makhotkin 
 nmakhot...@mirantis.com wrote:

 Some suggestions from me:

 1. y 1 + $.var  # (short from yaql).
 2. { 1 + $.var }  # as for me, looks more elegant than <% %>. And
 visually it is stronger

 I also like p7 and p8 suggested by Renat.

 On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

 One more:

 p9: \{1 + $.var} # That’s pretty much what
 https://review.openstack.org/#/c/155348/ addresses but it’s not exactly
 that. Note that we don’t have to put it in quotes in this case to deal with
 YAML {} semantics, it’s just a string



 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:

 Along with the <% %> syntax, here are some other alternatives that I checked
 for YAML friendliness, with my short comments:

 p1: ${1 + $.var} # Here it’s bad that $ sign is used for two
 different things
 p2: ~{1 + $.var} # ~ is easy to miss in a text
 p3: ^{1 + $.var} # For some, it may be associated with regular
 expressions
 p4: ?{1 + $.var}
 p5: {1 + $.var} # This is kinda crazy
 p6: e{1 + $.var} # That looks like a pretty interesting option to me, “e”
 could mean “expression” here.
 p7: yaql{1 + $.var} # This is interesting because it would give a clear
 and easy mechanism to plug in other expression languages; “yaql” here names
 the dialect used for the following expression
 p8: y{1 + $.var} # “y” here is just shortened “yaql”


 Any ideas and thoughts would be really appreciated!

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:

 Dmitri,

 I agree with all your reasonings and fully support the idea of changing
 the syntax now as well as changing system’s API a little bit due to
 recently found issues in the current engine design that don’t allow us, for
 example, to fully implement ‘with-items’ (although that’s a little bit
 different story).

 Just a general note about all changes happening now: *Once we cut the
 Kilo stable release, our API and DSL of version 2 must be 100% stable*. I
 was hoping to stabilize it much earlier but the start of production use
 revealed a number of things (I think this is normal) which we need to
 address, but not later than the end of Kilo.

 As far as the <% %> syntax goes, I see that it would solve a number of problems
 (YAML friendliness, type ambiguity), but my only (not very strong) argument is
 that it doesn’t look as elegant in YAML as it does, for example, in ERB
 templates. It really reminds me of XML/HTML and looks like a bear in a grocery
 store (tried to make it close to an old Russian saying :) ). So for this
 reason alone I’d suggest we think about other alternatives, maybe not
 so familiar to Ruby/Chef/Puppet users but looking better in YAML and at
 the same time being YAML friendly.

 It would be good if we could hear more feedback on this, especially from
 people who have started using Mistral.

 Thanks

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com wrote:

 SUMMARY:
 

 We are changing the syntax for inlining YAQL expressions in Mistral YAML
 from {1+$.my.var} (or “{1+$.my.var}”) to <% 1+$.my.var %>

 Below I explain the 

Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-17 Thread Christopher Yeoh
On Wed, Feb 18, 2015 at 6:18 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 2/16/2015 9:57 PM, Jay Pipes wrote:

 Hi Mikal, sorry for top-posting. What was the final decision regarding
 the instance tagging work?

 Thanks,
 -jay

 On 02/16/2015 09:44 PM, Michael Still wrote:

 Hi,

 we had a meeting this morning to try and work through all the FFE
 requests for Nova. The meeting was pretty long -- two hours or so --
 and we held it in the nova IRC channel in an attempt to be as open as
 possible. The agenda for the meeting was the list of FFE requests at
 https://etherpad.openstack.org/p/kilo-nova-ffe-requests

 I recognise that this process is difficult for all, and that it is
 frustrating when your FFE request is denied. However, we have tried
 very hard to balance distractions from completing priority tasks and
 getting as many features into Kilo as possible. I ask for your
 patience as we work to finalize the Kilo release.

 That said, here's where we ended up:

 Approved:

  vmware: ephemeral disk support
  API: Keypair support for X509 public key certificates

 We were also presented with a fair few changes which are relatively
 trivial (single patch, not very long) and isolated to a small part of
 the code base. For those, we've selected the ones with the greatest
 benefit. These ones are approved so long as we can get the code merged
 before midnight on 20 February 2015 (UTC). The deadline has been
 introduced because we really are trying to focus on priority work and
 bug fixes for the remainder of the release, so I want to time box the
 amount of distraction these patches cause.

 Those approved in this way are:

  ironic: Pass the capabilities to ironic node instance_info
  libvirt: Nova vif driver plugin for opencontrail
  libvirt: Quiescing filesystems with QEMU guest agent during image
 snapshotting
  libvirt: Support vhost user in libvirt vif driver
  libvirt: Support KVM/libvirt on System z (S/390) as a hypervisor
 platform

 It should be noted that there was one request which we decided didn't
 need a FFE as it isn't feature work. That may proceed:

  hyperv: unit tests refactoring

 Finally, there were a couple of changes we were uncomfortable merging
 this late in the release as we think they need time to bed down
 before a release we consider stable for a long time. We'd like to see
 these merge very early in Liberty:

  libvirt: use libvirt storage pools
  libvirt: Generic Framework for Securing VNC and SPICE
 Proxy-To-Compute-Node Connections

 Thanks again to everyone for their patience with our process, and
 helping to make Kilo an excellent Nova release.

 Michael


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 There are notes in the etherpad,

 https://etherpad.openstack.org/p/kilo-nova-ffe-requests

 but I think we wanted to get cyeoh and Ken'ichi's thoughts on the v2
 and/or v2.1 question about the change, i.e. should it be v2.1 only with
 microversions or if that is going to block it, is it fair to keep out the
 v2 change that's already in the patch?


So if it can be fully merged by end of week I'm ok with it going into v2
and v2.1. Otherwise I think it needs to wait for microversions. I'd like to
see v2.1 enabled next Monday (I don't want it to go in just before a weekend),
and the first microversion change (which is ready to go) a couple of days
after that. And we want a bit of an API freeze while that is happening.

Chris




 --

 Thanks,

 Matt Riedemann



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CINDER] Exception request : Making Introducing micro_states for create workflow part of K-3

2015-02-17 Thread Vilobh Meshram
Hi,
As discussed in the Cinder weekly meeting on 02/12, the deadline for K-3 (kilo-3
for Cinder) is Feb 28 (please correct me if I am wrong). I have a working
prototype for the micro-states feature, https://review.openstack.org/#/c/124205, and
it has already been out for review for quite some time now; if it gets the needed
attention it should definitely be able to make it into K-3. I see a lot of
features planned for K-3 that are still in "Started" or "Needs code review" status,
so I thought it would be wise to request the same consideration for this one.
Please let me know your thoughts.
Thanks,
Vilobh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-17 Thread vishal yadav
So I believe vote counts for a given entry are not visible to everyone
and are only intended for the OpenStack Summit Track Chairs...

Vishal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread YAMAMOTO Takashi
hi,

 On Wednesday, 18 February 2015 at 07:00, yamam...@valinux.co.jp wrote:
 hi,
  
 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing deployments without OVS-agent, such a requirement
 should be per-agent/driver/plugin/etc. however, there currently
 seems no standard mechanism for such a requirement.
  
  
 
 
 Awesome, I was thinking of the same a few days ago, we make lots
 and lots of calls to ovs-ofctl, and we will do more if we change to
 security groups/routers in OF, if that proves to be efficient, and we
 get CT.

CT?

 
 After this change, what would be the differences between ofagent and ovs-agent?
 
 I guess the OVS agent sets rules in advance, while ofagent works as a normal
 OF controller?

the basic architecture will be the same.

actually it was suggested to merge the two agents during spec review.
i think it's a good idea for the longer term.  (but unlikely for kilo)

   
   
  
 some ideas:
  
 a. don't bother to make it per-agent.
 add it to neutron's requirements. (and global-requirement)
 simple, but this would make non-ovs plugin users unhappy.
  
 I would simply go with a, what’s the ryu’s internal requirement list? is
 it big?

no additional requirements as long as we use only the OpenFlow part of ryu.

   
  
 b. make devstack look at per-agent extra requirements file in neutron tree.
 eg. neutron/plugins/$Q_AGENT/requirements.txt
  
 IMHO that would make distribution work a bit harder because we
 may need to process new requirement files, but my answer could depend
 on what I asked for a.  

probably.
i guess distributors can speak up.

  
 c. move OVS agent to a separate repository, just like other
 after-decomposition vendor plugins. and use requirements.txt there.
 for longer term, this might be a way to go. but i don't want to
 block my work until it happens.
  
  
 
 We’re not ready for that yet, as co-gating has proven as a bad strategy
 and we need to keep the reference implementation working for tests.  

i agree that it will not likely be ready in the near future.

YAMAMOTO Takashi

  
 d. follow the way how openvswitch is installed by devstack.
 a downside: we can't give a jenkins run for a patch which introduces
 an extra requirement. (like my patch for the mentioned blueprint [2])
  
 i think b. is the most reasonable choice, at least for short/mid term.
  
 any comments/thoughts?
  
 YAMAMOTO Takashi
  
 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Jay Lau
-1. Thanks Dmitry for the contribution and welcome back in near future!

2015-02-17 23:36 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 -1

 On Mon, Feb 16, 2015 at 10:20 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

  The initial magnum core team was founded at a meeting where several
 people committed to being active in reviews and writing code for Magnum.
 Nearly all of the folks that made that initial commitment have been active
 in IRC, on the mailing lists, or participating in code reviews or code
 development.

  Out of our core team of 9 members [1], everyone has been active in some
 way except for Dmitry.  I propose removing him from the core team.  Dmitry
 is welcome to participate in the future if he chooses, and will be held to the
 same high standards as our last 4 new core members, who didn’t
 get an initial opt-in but were voted in by their peers.

  Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1
 from any core acts as a veto meaning Dmitry will remain in the core team.

  [1] https://review.openstack.org/#/admin/groups/473,members

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread Miguel Ángel Ajo


Miguel Ángel Ajo


On Wednesday, 18 February 2015 at 08:14, yamam...@valinux.co.jp wrote:

 hi,
  
  On Wednesday, 18 de February de 2015 at 07:00, yamam...@valinux.co.jp 
  (mailto:yamam...@valinux.co.jp) wrote:
   hi,

   i want to add an extra requirement specific to OVS-agent.
   (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
   but the question is not specific to the blueprint.)
   to avoid messing deployments without OVS-agent, such a requirement
   should be per-agent/driver/plugin/etc. however, there currently
   seems no standard mechanism for such a requirement.



   
   
   
  Awesome, I was thinking of the same a few days ago, we make lots
  and lots of calls to ovs-ofctl, and we will do more if we change to
  security groups/routers in OF, if that proves to be efficient, and we
  get CT.
   
  
  
 CT?

Connection tracking in OVS. At that point we could do NAT/stateful
firewalling, etc  
  
   
  After this change, what would be the differences between ofagent and ovs-agent?
   
  I guess the OVS agent sets rules in advance, while ofagent works as a normal
  OF controller?
   
  
  
 the basic architecture will be same.
  
 actually it was suggested to merge two agents during spec review.
 i think it's a good idea for longer term. (but unlikely for kilo)
  
  


If that’s the case, I would love to see both evaluated side by side
and make a community decision on that.
  
   some ideas:

   a. don't bother to make it per-agent.
   add it to neutron's requirements. (and global-requirement)
   simple, but this would make non-ovs plugin users unhappy.


   
  I would simply go with a, what’s the ryu’s internal requirement list? is
  it big?
   
  
  
 no additional requirements as far as we use only openflow part of ryu.

Then IMHO I don’t believe this is a bigger deal than for any other dependency.
  
   b. make devstack look at per-agent extra requirements file in neutron 
   tree.
   eg. neutron/plugins/$Q_AGENT/requirements.txt


   
  IMHO that would make distribution work a bit harder because we
  may need to process new requirement files, but my answer could depend
  on what I asked for a.  
   
  
  
 probably.
 i guess distributors can speak up.
  
  

I'll speak up: I prefer a. But I'm looping in Ihar as he's doing the majority of
the work related to neutron distribution in RH/RDO.
  
  
   c. move OVS agent to a separate repository, just like other
   after-decomposition vendor plugins. and use requirements.txt there.
   for longer term, this might be a way to go. but i don't want to
   block my work until it happens.



   
   
  We’re not ready for that yet, as co-gating has proven as a bad strategy
  and we need to keep the reference implementation working for tests.  
   
  
  
 i agree that it will not likely be ready in near future.
  
 YAMAMOTO Takashi
  
   d. follow the way how openvswitch is installed by devstack.
   a downside: we can't give a jenkins run for a patch which introduces
   an extra requirement. (like my patch for the mentioned blueprint [2])

   i think b. is the most reasonable choice, at least for short/mid term.

   any comments/thoughts?

   YAMAMOTO Takashi

   [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
   [2] https://review.openstack.org/#/c/153946/

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
  
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Telco][NFV][infra] Review process of TelcoWG use cases

2015-02-17 Thread Marc Koderer
Hello everyone,

We already got good feedback on my sandbox test review. So I would like
to move forward.

With review [1] we will get a stackforge repo called “telcowg-usecases”.
Submitting a use case will then follow the standard OpenStack development process
(see [2]).

There is one thing currently open: Anita suggested renaming our IRC channel from
#openstack-nfv to #openstack-telcowg, which seems logical to me. If we agree
to this I will register the channel and we can move forward.

I won’t be able to participate in our meeting today, but feel free to discuss
this topic there and let me know.

Regards
Marc

[1]: https://review.openstack.org/#/c/155248/
[2]: https://wiki.openstack.org/wiki/How_To_Contribute


On 06.02.2015 at 12:11, Marc Koderer m...@koderer.com wrote:

 Hello everyone,
 
 we are currently facing the issue that we don’t know how to proceed with
 our telco WG use cases. There are many of them already defined, but the
 reviews via Etherpad don’t seem to work.
 
 I suggest doing a review of them with the usual OpenStack tooling.
 Therefore I uploaded one of them (Session Border Controller) to the
 Gerrit system into the sandbox repo:
 
https://review.openstack.org/#/c/152940/1
 
 I would really like to see how many reviews we can get on it.
 If this works out my idea is the following:
 
 - we create a project under Stackforge called telcowg-usecases
 - we link blueprints related to this use case
 - we build a core team and approve/prioritize them
 
 Regards
 Marc
 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-17 Thread Akihiro Motoki
Hi,

The general discussion is going in https://review.openstack.org/#/c/148318/.
My opinions on the following specific topics inline.

2015-02-18 0:04 GMT+09:00 Ihar Hrachyshka ihrac...@redhat.com:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 response was huge so far :) so to add more traction, I have a question
 for everyone. Let's assume we want to move entry points for all
 services and agents into neutron/cmd/... If so,

 - - Do we want all existing tools stored in the path to be monkey
 patched too?

I am starting to think neutron/cmd/eventlet/* can be an option.
A single place to apply the monkey patch is nice, but I am not sure
it is good that all commands in neutron/cmd are monkey-patched.

At the moment we only have small tools in cmd and they are not affected even
if monkey-patched.
If we have non-eventlet versions of commands/agents, where should we place them?
I believe that a good naming convention helps new (and most) developers
understand the code base.

 I would say 'yes', to make sure we run our unit tests in
 the same environment as in real life;

Regarding our unit tests, I am not sure which way is good.
At the moment most code runs with the monkey-patched version of the stdlib,
so applying the monkey patch in neutron/tests/__init__.py sounds good.
However, what can we do if some non-eventlet modules are available?
These modules should not be monkey-patched.

 - - Which parts of services we want to see there? Should they include
 any real main() or register_options() code, or should they be just a
 wrappers to call actual main() located somewhere in other parts of the
 tree? I lean toward leaving just a one liner main() under
 neutron/cmd/... that calls to 'real' main() located in a different
 place in the tree.

My vote is for a one-liner main(). More precisely, only code that is directly
related to the eventlet monkey patch should be placed there.
Config options do not seem directly related to the eventlet monkey patch.
If we have non-eventlet versions of commands/agents, the real 'main' can stay
in the same place and we can just remove the corresponding part from
neutron/cmd/.
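
A rough sketch of the layout being discussed (the module paths here are
hypothetical, just to show the shape, not the final tree):

    # hypothetical neutron/cmd/eventlet/__init__.py -- the only place the
    # patch is applied; everything imported under this package is patched
    import eventlet
    eventlet.monkey_patch()

    # hypothetical neutron/cmd/eventlet/ovs_agent.py -- one-liner entry point;
    # the 'real' main() (config parsing, agent loop) stays elsewhere in the
    # tree and knows nothing about eventlet
    from neutron.plugins.openvswitch.agent import ovs_neutron_agent

    def main():
        ovs_neutron_agent.main()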

Thanks,
Akihiro


 Comments?

 /Ihar


 On 02/13/2015 04:37 PM, Ihar Hrachyshka wrote:
 On 02/13/2015 02:33 AM, Kevin Benton wrote:
 Why did the services fail with the stdlib patched? Are they
 incompatible with eventlet?

 It's not like *service entry points* are not ready for neutron.* to
 be monkey patched, but tools around it (flake8 that imports
 neutron.hacking.checks, setuptools that import hooks from
 neutron.hooks etc). It's also my belief that base library should
 not be monkey patched not to put additional assumptions onto
 consumers.

 (Though I believe that all the code in the tree should be monkey
 patched, including those agents that currently run without the
 library patched - for consistency and to reflect the same test
 environment for unit tests that will be patched from
 neutron/tests/__init__.py).

 /Ihar

 __


 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU41iWAAoJEC5aWaUY1u57zBYIAIuobIYMZ1NJmm+7sV+NW6LS
 ZS4PNKlwcYRrdfArGliUq7GLVi/ZRNPNgilF9RIJXQAiOXEc6PmKqpKw1JnwkQ7v
 l3/NeciYmkMhSNRv1vIrOBHegAYx9Js6o2lOBCF7BFKIpu88OsC95oobcLGtcrYU
 BxoBUM7DYvHssDhRp3NujNbyMrRkg4roer7+4qGE3a449tv4xViTcoUWg5MoNalY
 vD1ld/Gg8LfKPt7v7FbF2YnHkMG+UJSk47rRd0yv9KGABS69TkNuvJXeJ14sgw0O
 YqIY3oMO0nza+T8tdQGTrYv9N4rWOMFsJMyrOLIvoUyq526QQZ/K7Hrijj1IQjE=
 =ZtVP
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Akihiro Motoki amot...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] FFE driver-private-data + pure-iscsi-chap-support

2015-02-17 Thread Mike Perez
On 14:50 Sun 15 Feb , Patrick East wrote:
 Hi All,
 
 I would like to request a FFE for the following blueprints:
 
 https://blueprints.launchpad.net/cinder/+spec/driver-private-data
 https://blueprints.launchpad.net/cinder/+spec/pure-iscsi-chap-support
 
 The first being a dependency for the second.
 
 The new database table for driver data feature was discussed at the Cinder
 mid-cycle meetup and seemed to be generally approved by the team in person
 at the meeting as something we can get into Kilo.
 
 There is currently a spec up for review for it here:
 https://review.openstack.org/#/c/15/ but it doesn't look like it will be
 approved by the end of the day for the deadline. I have code pretty much
 ready to go for review as soon as the spec is approved; it is a relatively
 small patch set.

I already told Patrick I would help see this change in Kilo. If I can get
another Cinder core to sponsor this, that would be great.

This change makes it possible for some drivers to have CHAP auth
support in their unique setups, and I'd rather not leave people out in Cinder.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread Miguel Ángel Ajo
On Wednesday, 18 February 2015 at 07:00, yamam...@valinux.co.jp wrote:
 hi,
  
 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing deployments without OVS-agent, such a requirement
 should be per-agent/driver/plugin/etc. however, there currently
 seems no standard mechanism for such a requirement.
  
  


Awesome, I was thinking of the same thing a few days ago. We make lots
and lots of calls to ovs-ofctl, and we will make more if we change to
security groups/routers in OpenFlow, if that proves to be efficient and we
get CT.

After this change, what would be the differences between ofagent and ovs-agent?

I guess the OVS agent sets rules in advance, while ofagent works as a normal
OF controller?
  
  
  
 some ideas:
  
 a. don't bother to make it per-agent.
 add it to neutron's requirements. (and global-requirement)
 simple, but this would make non-ovs plugin users unhappy.
  
I would simply go with a. What's ryu's internal requirement list? Is
it big?
  
  
 b. make devstack look at per-agent extra requirements file in neutron tree.
 eg. neutron/plugins/$Q_AGENT/requirements.txt
  
IMHO that would make distribution work a bit harder because we
may need to process new requirement files, but my answer could depend
on what I asked about for a.
  
 c. move OVS agent to a separate repository, just like other
 after-decomposition vendor plugins. and use requirements.txt there.
 for longer term, this might be a way to go. but i don't want to
 block my work until it happens.
  
  

We’re not ready for that yet, as co-gating has proven to be a bad strategy
and we need to keep the reference implementation working for tests.
  
 d. follow the way how openvswitch is installed by devstack.
 a downside: we can't give a jenkins run for a patch which introduces
 an extra requirement. (like my patch for the mentioned blueprint [2])
  
 i think b. is the most reasonable choice, at least for short/mid term.
  
 any comments/thoughts?
  
 YAMAMOTO Takashi
  
 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-17 Thread vishal yadav
Can anyone tell me why the voting counts for all the entries are the same?
(attached screenshot). Even though I voted for some entries, the voting counts
did not change.

Vishal

On Mon, Feb 16, 2015 at 8:00 PM, Stefano Maffulli stef...@openstack.org
wrote:

 On Sat, 2015-02-14 at 21:11 -0500, Nick Chase wrote:
  Does anybody know if a) ATC emails have started to go out yet, and b)
  when proposal voting will start?


 Voting started:

 http://www.openstack.org/vote-vancouver


 Hurry, voting closes at 5pm CT on Monday, February 23.


 Continue to visit openstack.org/summit for all Summit-related
 information, including registration, visa letters, hotels and FAQ.

 /stef



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread YAMAMOTO Takashi
hi,

i want to add an extra requirement specific to OVS-agent.
(namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
but the question is not specific to the blueprint.)
to avoid messing up deployments without OVS-agent, such a requirement
should be per-agent/driver/plugin/etc.  however, there currently
seems no standard mechanism for such a requirement.

some ideas:

a. don't bother to make it per-agent.
   add it to neutron's requirements. (and global-requirement)
   simple, but this would make non-ovs plugin users unhappy.

b. make devstack look at per-agent extra requirements file in neutron tree.
   eg. neutron/plugins/$Q_AGENT/requirements.txt

c. move OVS agent to a separate repository, just like other
   after-decomposition vendor plugins.  and use requirements.txt there.
   for longer term, this might be a way to go.  but i don't want to
   block my work until it happens.

d. follow the way how openvswitch is installed by devstack.
   a downside: we can't give a jenkins run for a patch which introduces
   an extra requirement.  (like my patch for the mentioned blueprint [2])

i think b. is the most reasonable choice, at least for short/mid term.

any comments/thoughts?

YAMAMOTO Takashi

[1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
[2] https://review.openstack.org/#/c/153946/
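
for option b., a minimal sketch of what the devstack side would amount to
(devstack itself is bash; this python version and the paths are just
illustrative, not a final implementation):

    import os
    import subprocess
    import sys

    def install_agent_requirements(neutron_dir, agent):
        # e.g. agent = "openvswitch" -> neutron/plugins/openvswitch/requirements.txt
        req = os.path.join(neutron_dir, "neutron", "plugins", agent,
                           "requirements.txt")
        if os.path.isfile(req):
            # install the agent-specific requirements on top of the global ones
            subprocess.check_call([sys.executable, "-m", "pip",
                                   "install", "-r", req])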

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-17 Thread Joe Cropper
+1 to using a filter property to indicate whether the filter needs to be run on 
force_hosts.  As others have said, there are certain cases that need to be 
checked even if the admin is trying to intentionally place a VM somewhere such 
that we can fail early vs. letting the hypervisor blow up on the request in the 
future (i.e., to help prevent the user from stepping on their own toes).  :-)

Along these lines—dare I bring up the topic of providing an enhanced mechanism 
to determine which filter(s) contributed to NoValidHost exceptions?  Do others 
ever hear about operators getting this, and then having no idea why a VM deploy 
failed?  This is likely another thread, but thought I’d pose it here to see if 
we think this might be a potential blueprint as well.

- Joe

 On Feb 17, 2015, at 10:20 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 
 On 02/17/2015 04:59 PM, Chris Friesen wrote:
 On 02/16/2015 01:17 AM, Nikola Đipanov wrote:
 On 02/14/2015 08:25 AM, Alex Xu wrote:
 
 Agree with Nikola, the claim already checking that. And instance booting
 must be failed if there isn't pci device. But I still think it should go
 through the filters, because in the future we may move the claim into
 the scheduler. And we needn't any new options, I didn't see there is any
 behavior changed.
 
 
 I think that it's not as simple as just re-running all the filters. When
 we want to force a host - there are certain things we may want to
 disregard (like aggregates? affinity?) that the admin de-facto overrides
 by saying they want a specific host, and there are things we definitely
 need to re-run to set the limits and for the request to even make sense
 (like NUMA, PCI, maybe some others).
 
 So what I am thinking is that we need a subset of filters that we flag
 as - we need to re-run this even for force-host, and then run them on
 every request.
 
 Yeah, that makes sense.  Also, I think that flag should be an attribute
 of the filter itself, so that people adding new filters don't need to
 also add the filter to a list somewhere.
 
 
 This is basically what I had in mind - definitely a filter property!
 
 N.
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org 
 mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-17 Thread Fawad Khaliq
These are not the counts. These seem to be vote weights.

Fawad Khaliq


On Wed, Feb 18, 2015 at 11:53 AM, vishal yadav vishalcda...@gmail.com
wrote:

 Can anyone tell why the voting counts for all the entries are same!
 (attached screenshot). Even though I voted for some entries, voting counts
 did not change.

 Vishal

 On Mon, Feb 16, 2015 at 8:00 PM, Stefano Maffulli stef...@openstack.org
 wrote:

 On Sat, 2015-02-14 at 21:11 -0500, Nick Chase wrote:
  Does anybody know if a) ATC emails have started to go out yet, and b)
  when proposal voting will start?


 Voting started:

 http://www.openstack.org/vote-vancouver


 Hurry, voting closes at 5pm CT on Monday, February 23.


 Continue to visit openstack.org/summit for all Summit-related
 information, including registration, visa letters, hotels and FAQ.

 /stef



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] following up on releasing kilo milestone 2

2015-02-17 Thread sean roberts
I will take a close look and see what mods I need to make. Or we could just
move the congress repo :)
Thanks for reaching out on this.

On Thursday, February 12, 2015, Thierry Carrez thie...@openstack.org
wrote:

 Thierry Carrez wrote:
  You could also try to use the milestone.sh release script I use:
 
  http://git.openstack.org/cgit/openstack-infra/release-tools/tree/

 Hrm... I now realize the script is forcing openstack/* namespace and
 won't work as-is for stackforge projects.

 That repo is accepting patches, though :)

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
~sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extensions for standalone EC2 API

2015-02-17 Thread Alexandre Levine
I started this thread to get a couple of pointers from Dan Smith and 
Sean Dague but it turned out to be a bigger discussion than I expected.


So the history is: we're trying to add a few properties to be reported
for instances in order to cut the workaround access to the nova DB from the
standalone EC2 API project implementation. At the previous nova meeting it was
discussed that there is still potentially a chance to get this done for
Kilo, provided the changes are not risky or complex. The changes
really are not complex or risky, as you can see in this prototype
review:


https://review.openstack.org/#/c/155853/

As you can see we just need to expose some more info which is already 
available.

Two problems have arisen:

1. I should correctly pack it into the new microversions mechanism,
and Christopher Yeoh and Alex Xu are being very helpful in this area.


2. The os-extended-server-attributes extension is actually admin-only 
accessible.


And this second problem produced several options some of which are based 
on Alex Xu's suggestions.


1. Stay with the admin-only access. (this is the easiest one)
Problems:
- Standalone EC2 API will have to use admin context to get this info (it 
already has creds configured for its metadata service anyways, so no big 
deal).
- Some of the data could potentially be usable for regular users (this can
be addressed later by a per-property policy configuration mechanism, as
suggested by Alex Xu).


2. Allow new properties to be user-available, the existing ones will 
stay admin-only (extension for the previous one)

Problems:
- The obvious way is to check for context.is_admin for existing options 
while allowing the extension to be user-available in policy.json. It 
leads to hardcode of this behavior and is not recommended. (see previous 
thread for details on that)


3. Put new properties in some non-admin extensions, like 
os-extended-status. (almost as easy as the first one)

Problems:
- They just don't fit in there. Status is about statuses, not about some 
static or dynamic properties of the object.


4. Create new extension for this. (more complicated)
Problems:
- To start with, I couldn't come up with a name for it, because the
existing os-extended-server-attributes is such an obvious choice for
this. Having os-extended-attributes, os-extended-instance-attributes,
or os-server-attributes alongside it would be very confusing for both users
and future developers.


5. Put it into different extensions - reservation_id and launch_index 
into os-multiple-create, root_device_name into os_extended_volumes,  
(most complicated)

Problems:
- Not all of the ready extensions exist. There is no ready place to put 
hostname, ramdisk_id, kernel_id. We'd still have to create a new extension.


I personally tend to go for 1. It's easiest and fastest at the moment to
put everything behind admin-only access, and since the nova API folks are
considering allowing fine-tuning of policies for individual properties, it'll be
possible later to make some of it available to users. Or, if necessary,
it'll be possible to just switch off the admin restriction altogether for
this extension. I don't think hypervisor_name, host and instance_name
are such secret info that they should be hidden from users.


Please let me know what you think.

Best regards,
  Alex Levine

On 2/16/15 12:45 PM, Alex Xu wrote:



2015-02-16 9:47 GMT+08:00 Christopher Yeoh cbky...@gmail.com 
mailto:cbky...@gmail.com:


Hi,

Am happy for this to be continued on openstack-dev as long as no
one disagrees (maybe the next person can just CC it there?

So as was pointed out on the review there is some early
documentation (we're still working on v2.1 specific doco as well
as expanding on the microversion docs) on microversions in-tree:

http://docs.openstack.org/developer/nova/devref/api_microversions.html

The spec should give some context to what we are trying to do with
microversions:


http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html

So in your specific case just some comments:

- No change necessary to the v2 code
(nova/api/openstack/compute/contrib)

- Will need to modify the v2.1
(nova/api/openstack/compute/plugins/v3) code
  - Need to ensure that when users request v2.1
it still acts like v2
  - Given what is in the queue at the moment I'm guessing yours
will probably end up being v2.4 at least

- unittests for v2 and v2.1/v2.1 microversions have mostly been
merged to reduce code overhead.
  - You can see that the tests are able to handle talking to
either v2 (old code) or v2.1 (new code) or v2.1 with
microversions.

This https://review.openstack.org/#/c/140313/ is a good example of
that and is likely to be the first microversioned code to merge

- We need to keep api samples for each interface where the version
changes. The above review shows that and as 
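
A rough sketch of the pattern described above (the version number, attribute
name and helper used here are placeholders, not the final code) for gating one
of the new properties behind a microversion in the v2.1 plugin:

    from nova.api.openstack import wsgi

    class ExtendedServerAttributesController(wsgi.Controller):

        @wsgi.Controller.api_version("2.1", "2.3")
        def show(self, req, id):
            # pre-microversion behaviour: act exactly like v2
            server = self._load_server(req, id)  # helper assumed for the sketch
            return {"server": {"id": server["id"]}}

        @wsgi.Controller.api_version("2.4")  # noqa: F811 -- intentionally redefined
        def show(self, req, id):
            # clients that request >= 2.4 also get the EC2-needed attribute
            server = self._load_server(req, id)
            return {"server": {"id": server["id"],
                               "OS-EXT-SRV-ATTR:reservation_id":
                                   server["reservation_id"]}}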

Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-17 Thread Lingxian Kong
Good idea, it really makes sense. Just like the option
'run_filter_once_per_request' does.

2015-02-16 15:17 GMT+08:00 Nikola Đipanov ndipa...@redhat.com:
 On 02/14/2015 08:25 AM, Alex Xu wrote:


 2015-02-14 1:41 GMT+08:00 Nikola Đipanov ndipa...@redhat.com
 mailto:ndipa...@redhat.com:

 On 02/12/2015 04:10 PM, Chris Friesen wrote:
  On 02/12/2015 03:44 AM, Sylvain Bauza wrote:
 
  Any action done by the operator is always more important than what the
  Scheduler
  could decide. So, in an emergency situation, the operator wants to
  force a
  migration to an host, we need to accept it and do it, even if it
  doesn't match
  what the Scheduler could decide (and could violate any policy)
 
  That's a *force* action, so please leave the operator decide.
 
  Are we suggesting that the operator would/should only ever specify a
  specific host if the situation is an emergency?
 
  If not, then perhaps it would make sense to have it go through the
  scheduler filters even if a host is specified.  We could then have a
  --force flag that would proceed anyways even if the filters don't 
 match.
 
  There are some cases (provider networks or PCI passthrough for example)
  where it really makes no sense to try and run an instance on a compute
  node that wouldn't pass the scheduler filters.  Maybe it would make the
  most sense to specify a list of which filters to override while still
  using the others.
 

 Actually this kind of already happens on the compute node when doing
 claims. Even if we do force the host, the claim will fail on the compute
 node and we will end up with a consistent scheduling.



 Agree with Nikola, the claim already checking that. And instance booting
 must be failed if there isn't pci device. But I still think it should go
 through the filters, because in the future we may move the claim into
 the scheduler. And we needn't any new options, I didn't see there is any
 behavior changed.


 I think that it's not as simple as just re-running all the filters. When
 we want to force a host - there are certain things we may want to
 disregard (like aggregates? affinity?) that the admin de-facto overrides
 by saying they want a specific host, and there are things we definitely
 need to re-run to set the limits and for the request to even make sense
 (like NUMA, PCI, maybe some others).

 So what I am thinking is that we need a subset of filters that we flag
 as - we need to re-run this even for force-host, and then run them on
 every request.

 thoughts?

 N.



 This sadly breaks down for stuff that needs to use limits, as limits
 won't be set by the filters.

 Jay had a BP before to move limits onto compute nodes, which would solve
 this issue, as you would not need to run the filters at all - all the
 stuff would be known to the compute host that could then easily say
 nice of you to want this here, but it ain't happening.

 It will also likely need a check in the retry logic to make sure we
 don't hit the host 'retry' number of times.

 N.


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Flavio Percoco

On 17/02/15 10:44 -0500, Doug Hellmann wrote:



On Tue, Feb 17, 2015, at 05:37 AM, Daniel P. Berrange wrote:

On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
  ## Cores are *NOT* special
 
  At some point, for some reason that is unknown to me, this message
  changed and the feeling of core's being some kind of superheros became
  a thing. It's gotten far enough to the point that I've came to know
  that some projects even have private (flagged with +s), password
  protected, irc channels for core reviewers.

 This is seriously disturbing.

 If you're one of those core reviewers hanging out on a private channel,
 please contact me privately: I'd love to hear from you why we failed as
 a community at convincing you that an open channel is the place to be.

 No public shaming, please: education first.

I've been thinking about these last few lines a bit, and I'm not entirely
comfortable with the dynamic this sets up.

What primarily concerns me is the issue of community accountability. A core
feature of OpenStack's project & individual team governance is the idea
of democratic elections, where the individual contributors can vote in
people who they think will lead OpenStack in a positive way, or
conversely
hold leadership to account by voting them out next time. The ability of
individuals contributors to exercise this freedom though, relies on the
voters being well informed about what is happening in the community.

If cases of bad community behaviour, such as use of passwd protected IRC
channels, are always primarily dealt with via further private
communications,
then we are denying the voters the information they need to hold people
to
account. I can understand the desire to avoid publically shaming people
right away, because the accusations may be false, or may be arising from
a
simple mis-understanding, but at some point genuine issues like this need
to be public. Without this we make it difficult for contributors to make
an informed decision at future elections.

Right now, this thread has left me wondering whether there are still any
projects which are using password protected IRC channels, or whether they
have all been deleted, and whether I will be unwittingly voting for
people
who supported their use in future openstack elections.


I trust Stef, as one of our Community Managers, to investigate and
report back. Let's give that a little time, and allow for the fact that
with travel and other things going on it may take a while. I've added it
to the TC agenda [1] for next week so we can check in to see where
things stand.

Doug

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda



Thanks!

FWIW, I share Dan's concern that generating community awareness of
what's considered a violation of openness is not enough. The issues
discussed in this thread have a broader impact than just openness.

Also, the channel still exists, despite dropping it being so simple:

   /msg chanserv drop #your-super-secret-channel

But even if that drop happens in the next couple of minutes, I'd
really love for us to find a better way to generate more awareness on
these topics. The whole problem goes even beyond that channel existing
now but the fact that it's been around for 1 year.

This thread also mentioned other things that violate our openness.
For instance:

 - Closed phone calls considered the place for making *final*
   decisions
 - Closed planning tools with restricted access. Nothing bad about
   using external tools as long as they remain OPEN.
 - Assuming one medium is the right tool for everything without
   taking into consideration other aspects of our community (TZ,
   language, etc).

Again, thanks for making this point a priority for the TC as well,
looking forward to the next TC meeting, I'll try to be there.

Cheers,
Flavio



Regards,
Daniel
--
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-
http://virt-manager.org :|
|: http://autobuild.org   -o-
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-
http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-17 Thread Nikola Đipanov
On 02/17/2015 04:59 PM, Chris Friesen wrote:
 On 02/16/2015 01:17 AM, Nikola Đipanov wrote:
 On 02/14/2015 08:25 AM, Alex Xu wrote:
 
 Agree with Nikola, the claim already checking that. And instance booting
 must be failed if there isn't pci device. But I still think it should go
 through the filters, because in the future we may move the claim into
 the scheduler. And we needn't any new options, I didn't see there is any
 behavior changed.


 I think that it's not as simple as just re-running all the filters. When
 we want to force a host - there are certain things we may want to
 disregard (like aggregates? affinity?) that the admin de-facto overrides
 by saying they want a specific host, and there are things we definitely
 need to re-run to set the limits and for the request to even make sense
 (like NUMA, PCI, maybe some others).

 So what I am thinking is that we need a subset of filters that we flag
 as - we need to re-run this even for force-host, and then run them on
 every request.
 
 Yeah, that makes sense.  Also, I think that flag should be an attribute
 of the filter itself, so that people adding new filters don't need to
 also add the filter to a list somewhere.
 

This is basically what I had in mind - definitely a filter property!

N.
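
A minimal sketch of that filter property (the attribute name here is made up;
the existing run_filter_once_per_request attribute on BaseHostFilter is the
precedent for hanging such a flag on the filter class):

    from nova.scheduler import filters

    class ExamplePciFilter(filters.BaseHostFilter):
        """Sketch only: a filter that must still run when the host is forced."""

        # Hypothetical flag: when an operator forces a host, the scheduler
        # would still run filters that set this, and skip the rest
        # (aggregates, affinity, ...).
        run_filter_for_forced_host = True

        def host_passes(self, host_state, filter_properties):
            # Real PCI/NUMA logic elided for the sketch; return True only if
            # host_state can satisfy the requested devices.
            return True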


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Mike Bayer


Mike Bayer mba...@redhat.com wrote:

 
 I haven’t confirmed this yet today but based on some greenlet research as
 well as things I observed with PDB yesterday, my suspicion is that Cinder’s
 startup code runs in a traditional thread, at the same time the service is
 allowing connections to come in via green-threads, which are running in a
 separate greenlet event loop

OK, it seems that is not what’s happening. Which is actually very bad news
because it’s starting to look like SQLAlchemy’s connection pool, even if I
directly patch it with eventlet’s threading and Queue implementations, is
failing. Which would just be all the more amazing that we don’t see this
happening everywhere, all the time? The story is still not told yet.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Separate package repo for master node

2015-02-17 Thread Vladimir Kuklin
Folks

We had a long discussion on making Fuel master node use additional repo
which contains specific package diversions that are not required for slave
nodes. We decided to do it. So starting from 6.1 release there will be
master-node specific repositories that are not used during cluster
deployment.

If you have any objections, please provide them now or forever hold your
peace.

Thanks

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Mike Bayer


Doug Hellmann d...@doughellmann.com wrote:

 
 So I’m not really sure what’s going on here.  Cinder seems to have some
 openstack greenlet code of its own in
 cinder/openstack/common/threadgroup.py, I don’t know the purpose of this
 code.   SQLAlchemy’s connection pool has been tested a lot with eventlet
 / gevent and this has never been reported before. This is a very
 severe race and I’d think that this would be happening all over the
 place.
 
 The threadgroup module is from the Oslo incubator, so if you need to
 review the git history you'll want to look at that copy.


I haven’t confirmed this yet today but based on some greenlet research as
well as things I observed with PDB yesterday, my suspicion is that Cinder’s
startup code runs in a traditional thread, at the same time the service is
allowing connections to come in via green-threads, which are running in a
separate greenlet event loop (how else would my PDB sessions have had continued
echo output stepping on my typing?). greenlet performs stack-slicing, where
it memoizes the state of the interpreter to some extent, but importantly
it does not provide this in conjunction with traditional threads. So Python
code can’t even tell that it’s being shared, because all of the state is
completely swapped out (but of course that doesn’t help when what’s being shared
is a file descriptor). I’ve been observing this by watching identical objects
(same ID) magically have different state as a stale greenlet suddenly wakes
up in the middle of the presumably thread-bound initialization code.

My question is then how is it that such an architecture would be possible,
that Cinder’s service starts up without greenlets yet allows greenlet-based
requests to come in before this critical task is complete? Shouldn’t the
various oslo systems be providing patterns to prevent this disastrous
combination?   
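For illustration only, one pattern would be a guard that fails fast at service
startup if the stdlib was not monkey patched before anything was spawned. This
is a sketch, not oslo or Cinder code; only eventlet.monkey_patch() and
eventlet.patcher.is_monkey_patched() are real APIs here.

    # Sketch of a startup guard: refuse to start the green service unless the
    # relevant stdlib modules were patched first, so traditional threads and
    # greenlets cannot silently end up sharing connections.
    import eventlet
    from eventlet import patcher

    eventlet.monkey_patch()

    def assert_green_environment():
        for module in ('thread', 'socket', 'time'):
            if not patcher.is_monkey_patched(module):
                raise RuntimeError(
                    '%s is not monkey patched; refusing to start' % module)

    assert_green_environment()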


 Doug
 
 Current status is that I’m continuing to try to determine why this is
 happening here, and seemingly nowhere else.
 
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-17 Thread Anastasia Kuznetsova
As for me, I think that % ... % is not an elegant solution and looks
massive because of '%' sign. Also I agree with Renat, that % ... %
reminds HTML/Jinja2 syntax.

I am not sure that similarity with something should be one of the main
criteria, because we don't know who will use Mistral.

I like:
- {1 + $.var} Renat's example
- variant with using some functions (item 2 in Dmitry's list):  { yaql:
“1+1+$.my.var  100” } or yaql: 'Hello' + $.name 
- my two cents, maybe we can use something like: result: - Hello +
$.name -


Regards,
Anastasia Kuznetsova

On Tue, Feb 17, 2015 at 1:17 PM, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:

 Some suggestions from me:

 1. y 1 + $.var  # (short from yaql).
 2. { 1 + $.var }  # as for me, looks more elegant than % %. And
 visually it is more strong

 A also like p7 and p8 suggested by Renat.

 On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

 One more:

 p9: \{1 + $.var} # That’s pretty much what
 https://review.openstack.org/#/c/155348/ addresses but it’s not exactly
 that. Note that we don’t have to put it in quotes in this case to deal with
 YAML {} semantics, it’s just a string



 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:

 Along with % % syntax here are some other alternatives that I checked
 for YAML friendliness with my short comments:

 p1: ${1 + $.var} # Here it’s bad that $ sign is used for two
 different things
 p2: ~{1 + $.var} # ~ is easy to miss in a text
 p3: ^{1 + $.var} # For someone may be associated with regular
 expressions
 p4: ?{1 + $.var}
 p5: {1 + $.var} # This is kinda crazy
 p6: e{1 + $.var} # That looks a pretty interesting option to me, “e”
 could mean “expression” here.
 p7: yaql{1 + $.var} # This is interesting because it would give a clear
 and easy mechanism to plug in other expression languages, “yaql” here is a
 used dialect for the following expression
 p8: y{1 + $.var} # “y” here is just shortened “yaql


 Any ideas and thoughts would be really appreciated!

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:

 Dmitri,

 I agree with all your reasonings and fully support the idea of changing
 the syntax now, as well as changing the system’s API a little bit, due to
 recently found issues in the current engine design that don’t allow us, for
 example, to fully implement ‘with-items’ (although that’s a little bit of a
 different story).

 Just a general note about all the changes happening now: *once we release the
 Kilo stable release, our API and DSL of version 2 must be 100% stable*. I
 was hoping to stabilize them much earlier, but the start of production use
 revealed a number of things (I think this is normal) which we need to
 address, no later than the end of Kilo.

 As far as the % % syntax goes, I see that it would solve a number of problems
 (YAML friendliness, type ambiguity), but my only (not strong) argument is that
 it doesn’t look as elegant in YAML as it does, for example, in ERB
 templates. It really reminds me of XML/HTML and looks like a bear in a grocery
 store (tried to make it close to one old Russian saying :) ). So just for
 this reason I’d suggest we think about other alternatives, maybe not
 so familiar to Ruby/Chef/Puppet users but looking better with YAML and at
 the same time being YAML friendly.

 It would be good if we could hear more feedback on this, especially from
 people who started using Mistral.

 Thanks

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com wrote:

 SUMMARY:
 

 We are changing the syntax for inlining YAQL expressions in Mistral YAML
 from {1+$.my.var} (or “{1+$.my.var}”) to % 1+$.my.var %

 Below I explain the rationale and the criteria for the choice. Comments
 and suggestions welcome.

 DETAILS:
 -

 We faced a number of problems with using YAQL expressions in Mistral DSL:
 [1] it must handle any YAQL expression, not only the ones starting with $; [2] it
 must preserve types; and [3] it must comply with YAML. We fixed these problems by
 applying Ansible-style syntax, requiring quotes around delimiters (e.g.
 “{1+$.my.yaql.var}”). However, it led to unbearable confusion in DSL
 readability with regard to types:

 publish:
    intvalue1: “{1+1}” # Confusing: you expect quotes to mean a string.
    intvalue2: “{int(1+1)}” # Even this doesn’t clear up the confusion.
    whatisthis: “{$.x + $.y}” # What type would this return?

 We got a very strong push back from users in the field on this syntax.

 The crux of the problem is using { } as delimiters in YAML. It is plain
 wrong to use a reserved character. The clean solution is to find a
 delimiter that won’t conflict with YAML.
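 To see the conflict concretely, a short PyYAML check (illustrative only, not
 part of the original summary) shows both failure modes:

    import yaml

    # Unquoted curly braces are parsed as a YAML flow mapping, not a string:
    print(yaml.safe_load("intvalue1: {1+1}"))    # {'intvalue1': {'1+1': None}}

    # Quoting keeps the expression intact, but then the value is a string,
    # which is exactly the type-ambiguity problem described above:
    print(yaml.safe_load('intvalue1: "{1+1}"'))  # {'intvalue1': '{1+1}'}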

 Criteria for selecting the best alternative are:
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML
 3) Familiar to the target user audience - OpenStack and devops

 We prefer using 

Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-17 Thread Chris Friesen

On 02/16/2015 01:17 AM, Nikola Đipanov wrote:

On 02/14/2015 08:25 AM, Alex Xu wrote:



 Agree with Nikola, the claim is already checking that. And instance booting
 must fail if there isn't a PCI device. But I still think it should go
 through the filters, because in the future we may move the claim into
 the scheduler. And we don't need any new options; I didn't see any
 behavior change.



I think that it's not as simple as just re-running all the filters. When
we want to force a host - there are certain things we may want to
disregard (like aggregates? affinity?) that the admin de-facto overrides
by saying they want a specific host, and there are things we definitely
need to re-run to set the limits and for the request to even make sense
(like NUMA, PCI, maybe some others).

So what I am thinking is that we need a subset of filters that we flag
as - we need to re-run this even for force-host, and then run them on
every request.


Yeah, that makes sense.  Also, I think that flag should be an attribute of the 
filter itself, so that people adding new filters don't need to also add the 
filter to a list somewhere.


Have the default meaning be "can be skipped", then the critical ones can set it
to "can't be skipped".
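As a rough sketch of that idea (class and attribute names here are hypothetical,
not actual Nova code), the flag could simply be a class attribute that the
scheduler consults whenever a host is forced:

    # Hypothetical per-filter "must run even for force-host" flag.
    class BaseHostFilter(object):
        # Default: safe to skip when an admin forces a specific host.
        run_on_forced_host = False

        def host_passes(self, host_state, filter_properties):
            raise NotImplementedError()

    class PciPassthroughFilter(BaseHostFilter):
        # Needed to set limits and for the request to make sense, so never skip.
        run_on_forced_host = True

        def host_passes(self, host_state, filter_properties):
            # ... a real PCI availability check would go here ...
            return True

    def filters_to_run(all_filters, forced_host):
        """Drop skippable filters when a host is forced."""
        if not forced_host:
            return all_filters
        return [f for f in all_filters if f.run_on_forced_host]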


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Team meeting minutes 02/16/2015

2015-02-17 Thread Lingxian Kong
hi, Nikolay,

Thanks for sending them out. It would be appreciated if there were a
reminder before the meeting starts.

Regards!

2015-02-17 0:52 GMT+08:00 Nikolay Makhotkin nmakhot...@mirantis.com:
 Thanks for joining our team meeting today!

  * Meeting minutes:
 http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.html
  * Meeting log:
 http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.log.html

 The next meeting is scheduled for Feb 23 at 16.00 UTC.
 --
 Best Regards,
 Nikolay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Clint Byrum
Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
 On 02/17/2015 09:21 AM, Clint Byrum wrote:
  There has been a recent monumental shift in my focus around OpenStack,
  and it has required me to take most of my attention off TripleO. Given
  that, I don't think it is in the best interest of the project that I
  continue as PTL for the Kilo cycle.
  
  I'd like to suggest that we hold an immediate election for a replacement
  who can be 100% focused on the project.
  
  Thanks everyone for your hard work up to this point. I hope that one day
  soon TripleO can deliver on the promise of a self-deploying OpenStack
  that is stable and automated enough to sit in the gate for many if not
  all OpenStack projects.
  
  
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 So in the middle of a release, changing PTLs can take 3 avenues:
 
 1) The new PTL is appointed. Usually there is a leadership candidate in
 waiting which the rest of the project feels it can rally around until
 the next election. The stepping down PTL takes the pulse of the
 developers on the project and informs us on the mailing list who the
 appointed PTL is. Barring any huge disagreement, we continue on with
 work and the appointed PTL has the option of standing for election in
 the next election round. The appointment lasts until the next round of
 elections.
 

Thanks for letting me know about this Anita.

I'd like to appoint somebody, but I need to have some discussions with a
few people first. As luck would have it, some of those people will be in
Seattle with us for the mid-cycle starting tomorrow.

 2) We have an election, in which case we need candidates and some dates.
 Let me know if we want to exercise this option so that Tristan and I can
 organize some dates.
 

Let's wait a bit until I figure out if there's a clear and willing
appointee. That should be clear by Thursday.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Doug Hellmann


On Tue, Feb 17, 2015, at 11:17 AM, Mike Bayer wrote:
 
 
 Doug Hellmann d...@doughellmann.com wrote:
 
  
  So I’m not really sure what’s going on here.  Cinder seems to have some
  openstack greenlet code of its own in
  cinder/openstack/common/threadgroup.py, I don’t know the purpose of this
  code.   SQLAlchemy’s connection pool has been tested a lot with eventlet
  / gevent and this has never been reported before. This is a very
  severe race and I’d think that this would be happening all over the
  place.
  
  The threadgroup module is from the Oslo incubator, so if you need to
  review the git history you'll want to look at that copy.
 
 
 I haven’t confirmed this yet today but based on some greenlet research as
 well as things I observed with PDB yesterday, my suspicion is that
 Cinder’s
 startup code runs in a traditional thread, at the same time the service
 is
 allowing connections to come in via green-threads, which are running in a
 separate greenlet event loop (how else did my PDB sessions have continued
 echo output stepping on my typing?). greenlet performs stack-slicing
 where
 it is memoizing the state of the interpreter to some extent, but
 importantly
 it does not provide this in conjunction with traditional threads. So
 Python
 code can’t even tell that it’s being shared, because all of the state is
 completely swapped out (but of course this doesn’t count when you’re a
 file
 descriptor). I’ve been observing this by watching the identical objects
 (same ID) magically have different state as a stale greenlet suddenly
 wakes
 up in the middle of the presumably thread-bound initialization code.
 
 My question is then how is it that such an architecture would be
 possible,
 that Cinder’s service starts up without greenlets yet allows
 greenlet-based
 requests to come in before this critical task is complete? Shouldn’t the
 various oslo systems be providing patterns to prevent this disastrous
 combination?   

I would have thought so, but they are (mostly) libraries not frameworks
so they are often combined in unexpected ways. Let's see where the issue
is before deciding on where the fix should be.

Doug

 
 
  Doug
  
  Current status is that I’m continuing to try to determine why this is
  happening here, and seemingly nowhere else.
  
  
  
  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Flavio Percoco

On 17/02/15 10:38 -0500, Anita Kuno wrote:

On 02/17/2015 09:21 AM, Clint Byrum wrote:

There has been a recent monumental shift in my focus around OpenStack,
and it has required me to take most of my attention off TripleO. Given
that, I don't think it is in the best interest of the project that I
continue as PTL for the Kilo cycle.

I'd like to suggest that we hold an immediate election for a replacement
who can be 100% focused on the project.

Thanks everyone for your hard work up to this point. I hope that one day
soon TripleO can deliver on the promise of a self-deploying OpenStack
that is stable and automated enough to sit in the gate for many if not
all OpenStack projects.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


So in the middle of a release, changing PTLs can take 3 avenues:

1) The new PTL is appointed. Usually there is a leadership candidate in
waiting which the rest of the project feels it can rally around until
the next election. The stepping down PTL takes the pulse of the
developers on the project and informs us on the mailing list who the
appointed PTL is. Barring any huge disagreement, we continue on with
work and the appointed PTL has the option of standing for election in
the next election round. The appointment lasts until the next round of
elections.


Just as general advice: we did the above for Zaqar during Juno and
in my opinion it was the right call. It allowed the team to keep their
focus on what they were working on without the election
pressure/distraction.

I'm aware it may be quite different for TripleO since the community
is bigger than Zaqar's.

Just my $0.02
Fla.



2) We have an election, in which case we need candidates and some dates.
Let me know if we want to exercise this option so that Tristan and I can
organize some dates.

3) We exercise the new governance resolution for the first time:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst

Now this resolution can only be invoked after an election is called and
if there is not a minimum of one self-nominating candidate. So the
question posed in 2) still stands, do you want the election officials to
come up with some dates?

Let us know how TripleO would like to proceed.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-17 Thread Ihar Hrachyshka

On 02/17/2015 04:19 PM, Salvatore Orlando wrote:
 My opinions inline.
 
 On 17 February 2015 at 16:04, Ihar Hrachyshka ihrac...@redhat.com 
 mailto:ihrac...@redhat.com wrote:
 
 Hi,
 
 response was huge so far :) so to add more traction, I have a
 question for everyone. Let's assume we want to move entry points
 for all services and agents into neutron/cmd/... If so,
 
 
  I don't have anything against this assumption. Also it seems other
  projects are already doing it this way, so there is no
  divergence issue here.
 
 
 - Do we want all existing tools stored in the path to be monkey 
 patched too? I would say 'yes', to make sure we run our unit tests
 in the same environment as in real life;
 
 
 I say yes but mildly here. If you're referring to the tools used
 for running flake8 or unit tests in theory it should not really
 matter whether they're patched or not. However, I'm aware of unit
 tests which spawn eventlet threadpools, so it's definitely better
 to ensure all these tools are patched.
 

No, I mean ovs_cleanup, sanity_check, usage_audit that are located in
the neutron/cmd path but not patched.

 
 - Which parts of services we want to see there? Should they
 include any real main() or register_options() code, or should they
 be just a wrappers to call actual main() located somewhere in other
 parts of the tree? I lean toward leaving just a one liner main()
 under neutron/cmd/... that calls to 'real' main() located in a
 different place in the tree.
 
 
 My vote is for the one-liner.
 
 
 
 Comments?
 
 /Ihar
 
 
 On 02/13/2015 04:37 PM, Ihar Hrachyshka wrote:
 On 02/13/2015 02:33 AM, Kevin Benton wrote:
 Why did the services fail with the stdlib patched? Are they 
 incompatible with eventlet?
 
 It's not like *service entry points* are not ready for neutron.*
 to be monkey patched, but tools around it (flake8 that imports 
 neutron.hacking.checks, setuptools that import hooks from 
 neutron.hooks etc). It's also my belief that base library should 
 not be monkey patched not to put additional assumptions onto 
 consumers.
 
 (Though I believe that all the code in the tree should be monkey 
 patched, including those agents that currently run without the 
 library patched - for consistency and to reflect the same test 
 environment for unit tests that will be patched from 
 neutron/tests/__init__.py).
 
 /Ihar
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-17 Thread Tim Hinrichs
Hi Ruby,

I was envisioning the VM-placement engine choosing the hard-coded 
implementation or converting to LP.

There are 2 places that logic could go: in the VM-placement engine wrapper that 
runs on the DSE message bus, or within the VM-placement engine itself (assuming 
the two are different).  I think we’re still trying to figure out which one 
(and the initial PoC and the long-term solution may be different).

Right now I’m thinking that the VM-placement engine wrapper running on the DSE 
bus should subscribe to whatever data it needs.

Tim

On Feb 16, 2015, at 9:05 AM, ruby.krishnasw...@orange.com wrote:

Hi Tim

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.


- How would the hard-coded implementation be called? Through the message bus?

- Is it Congress that will send out the data, or should each implementation (of
a policy) read it in directly?

Ruby

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Thursday, February 12, 2015 19:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

Hi Debo and Yathiraj,

I took a third look at the solver-scheduler docs and code with your comments in 
mind.  A few things jumped out.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

2) User control over VM-placement.

To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 
perspective, the difference is that if the option they want isn’t on the 
solver-scheduler's list, they’re out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.

3) API and architecture.

Today the solver-scheduler's VM-placement policy is defined at config-time 
(i.e. not run-time).  Am I correct that this limitation is only because there’s 
no API call to set the solver-scheduler’s policy?  Or is there some other 
reason the policy is set at config-time?

Congress policies change at runtime, so we’ll definitely need a VM-placement 
engine whose policy can be changed at run-time as well.

If we focus on just migration (and not provisioning), we can build a 
VM-placement engine that sits outside of Nova that has an API call that allows 
us to set policy at runtime.  We can also set up that engine to get data 
updates that influence the policy.  We were planning on creating this kind of 
VM-placement engine within Congress as a node on the DSE (our message bus).  
This is convenient because all nodes on the DSE run in their own thread, any 
node on the DSE can subscribe to any data from any other node (e.g. 
ceilometer’s data), and the algorithms for translating Datalog to LP look to be 
quite similar to the algorithms we’re using in our domain-agnostic policy 
engine.

Tim


On Feb 11, 2015, at 4:50 PM, Debojyoti Dutta ddu...@gmail.com wrote:


Hi Tim: moving our thread to the mailer. Excited to collaborate!



From: Debo~ Dutta dedu...@cisco.com
Date: Wednesday, February 11, 2015 at 4:48 PM
To: Tim Hinrichs thinri...@vmware.com
Cc: Yathiraj Udupi (yudupi) yud...@cisco.com, Gokul B Kandiraju go...@us.ibm.com,
Prabhakar Kudva ku...@us.ibm.com, ruby.krishnasw...@orange.com, 

[openstack-dev] [Ironic] Weekly subteam status report

2015-02-17 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)

(As of Mon, 16 Feb 20:00 UTC)
Open: 133 (0).
4 new (-5), 39 in progress (+5), 0 critical, 19 high (+2) and 7 incomplete


Drivers
==

IPA (jroll/JayF/JoshNang)
--
image builder being moved to separate (more generic) repo:
https://review.openstack.org/#/c/155868/

lucas is working on moving iscsi ramdisk code into IPA, and making IPA
image the default image for all the things

iLO (wanyen)
--
Attended the Nova IRC meeting last week to discuss the FFE for passing capabilities
info to Ironic. The Nova project will have a core team meeting to review all FFE
requests this week. The decision will be announced before this week's Nova
weekly IRC meeting.

iRMC (naohirot)
-
The iRMC management driver code and the iRMC deploy driver spec need the core
team's review and approval, which was supposed to be done at the mid-cycle code
sprint in S.F.

Toward kilo-3, the iRMC deployment code is soliciting the core team's review,
and testing with real hardware is on schedule.



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-17 Thread Alexander Tivelkov
Hi Rob,

That is slightly different: from a logical point of view those are
different schemas indeed; however, they all map to the same DB schema,
so we do not have any issues with upgrades. There are some limitations
on the schema modifications as well, and being able to work with
multiple versions of the same artifact type is supported from the
beginning and works quite well.
--
Regards,
Alexander Tivelkov


On Mon, Feb 16, 2015 at 9:50 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 17 February 2015 at 03:31, Alexander Tivelkov ativel...@mirantis.com 
 wrote:
 Hi Clint,

 Thanks for your input.

 We actually support the scenarios you speak about, yet in a slightly
 different way.  The authors of the Artifact Type (the plugin
 developers) may define their own custom field (or set of fields) to
 store their sequence information or any other type-specific
 version-related metadata. So, they may use generic version field
 (which is defined in the base artifact type) to store their numeric
 version - and use their type-specific field for local client-side
 processing.

 That sounds scarily like what Neutron did, leading to a different
 schema for every configuration. The reason Clint brought up Debian
 version numbers is that to sort them in a database you need a custom
 field type - e.g.
 http://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/view/head:/database/schema/launchpad-2209-00-0.sql#L25
 . And that's quite a burden :)
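(As a purely illustrative aside on the sorting point: for SemVer-style versions,
one common trick that avoids a custom column type is to store a zero-padded sort
key alongside the display string. The helper below is hypothetical, not Glance
or Launchpad code, and ignores pre-release/build metadata for brevity.)

    # Make SemVer strings sortable in an ordinary string column by
    # zero-padding the numeric components.
    def semver_sort_key(version):
        numeric = version.split('-', 1)[0].split('+', 1)[0]
        major, minor, patch = (int(p) for p in numeric.split('.'))
        return '%010d.%010d.%010d' % (major, minor, patch)

    assert semver_sort_key('1.10.0') > semver_sort_key('1.2.0')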

 We've had fairly poor results with the Neutron variation in schemas,
 as it tightly couples things, making upgrades that change plugins
 super tricky, as well as making it very hard to concurrently support
 multiple plugins. I hope you don't mean you're doing the same thing :)

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] End-of-cycle Code Sprint

2015-02-17 Thread Matthew Treinish
Hi Everyone,

With the end of the cycle quickly approaching we've still got a few higher
priority work items for this cycle which are still in an unfinished state. This
includes things like:

 - Tempest test accounts
 - Tempest's cli interface
 - Devstack Neutron as default switch

To ensure we finish up most of these before the end of the cycle, I feel like
having high-bandwidth, face-to-face time to work through some of these items
would be valuable. Therefore I'd like to announce the QA end-of-cycle code
sprint. The sprint will take place on March 25-27th; HP has offered to
sponsor the event, and it'll be held in HP's office in NYC.

If you're planning on attending, please sign up using the following wiki
page:

https://wiki.openstack.org/wiki/QA/CodeSprintKiloNYC#Registration

However, please note that given the nature of the code sprint and space limitations
at the location, attendance is limited to 10 seats. So if you're not going to be
directly involved in any of the work items for this sprint, I'd ask that you
please be mindful of that before registering to attend.

Thanks,

Matthew Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][keystone] Domain information through ceilometer and authentication against keystone v3

2015-02-17 Thread gordon chung

 1) Getting domain information: I haven't come across one, but is there any
ceilometer API which would provide domain information along with the usage
data?
2) Ceilometer auth against keystone v3: As the domain feature is provided in
the keystone v3 API, I am using that. Is there a way to configure ceilometer
so that it would use the keystone v3 API? I tried doing that but it didn't
work for me. Also, I came across a question forum
(https://ask.openstack.org/en/question/55353/ceilometer-v3-auth-against-keystone/)
which says that ceilometer can't use v3 for getting service tokens since the
middleware doesn't support it.
i've never actually tried this but if you are referring to ceilometer's api, it
uses keystonemiddleware to authenticate, so you'd probably need to add
auth_version to the keystone_authtoken section in ceilometer.conf...

regarding ceilometer speaking to other services, the service_credentials 
options are available here: 
http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-telemetry.html.
  are any additional options required to be passed in?

adding the keystone tag in case they feel like pointing out something obvious.

cheers,
gord

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Chris Jones
Hi Clint

Thanks very much for your awesome work as our PTL :)

Cheers,

Chris

On 17 February 2015 at 14:21, Clint Byrum cl...@fewbar.com wrote:

 There has been a recent monumental shift in my focus around OpenStack,
 and it has required me to take most of my attention off TripleO. Given
 that, I don't think it is in the best interest of the project that I
 continue as PTL for the Kilo cycle.

 I'd like to suggest that we hold an immediate election for a replacement
 who can be 100% focused on the project.

 Thanks everyone for your hard work up to this point. I hope that one day
 soon TripleO can deliver on the promise of a self-deploying OpenStack
 that is stable and automated enough to sit in the gate for many if not
 all OpenStack projects.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Cheers,

Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Hongbin Lu
-1

On Mon, Feb 16, 2015 at 10:20 PM, Steven Dake (stdake) std...@cisco.com
wrote:

  The initial magnum core team was founded at a meeting where several
 people committed to being active in reviews and writing code for Magnum.
 Nearly all of the folks that made that initial commitment have been active
 in IRC, on the mailing lists, or participating in code reviews or code
 development.

  Out of our core team of 9 members [1], everyone has been active in some
 way except for Dmitry.  I propose removing him from the core team.  Dmitry
 is welcome to participate in the future if he chooses and be held to the
 same high standards we have held our last 4 new core members to that didn’t
 get an initial opt-in but were voted in by their peers.

  Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1
 from any core acts as a veto meaning Dmitry will remain in the core team.

  [1] https://review.openstack.org/#/admin/groups/473,members



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-17 Thread Salvatore Orlando
My opinions inline.

On 17 February 2015 at 16:04, Ihar Hrachyshka ihrac...@redhat.com wrote:


 Hi,

 response was huge so far :) so to add more traction, I have a question
 for everyone. Let's assume we want to move entry points for all
 services and agents into neutron/cmd/... If so,


I don't have anything against this assumption. Also it seems other projects
are already doing it this way, so there is no divergence issue here.


 - - Do we want all existing tools stored in the path to be monkey
 patched too? I would say 'yes', to make sure we run our unit tests in
 the same environment as in real life;


I say yes but mildly here. If you're referring to the tools used for
running flake8 or unit tests in theory it should not really matter whether
they're patched or not. However, I'm aware of unit tests which spawn
eventlet threadpools, so it's definitely better to ensure all these tools
are patched.


 - - Which parts of services we want to see there? Should they include
 any real main() or register_options() code, or should they be just a
 wrappers to call actual main() located somewhere in other parts of the
 tree? I lean toward leaving just a one liner main() under
 neutron/cmd/... that calls to 'real' main() located in a different
 place in the tree.


My vote is for the one-liner.
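For illustration, a minimal sketch of what such a one-liner wrapper could look
like (module paths and names are hypothetical, not actual neutron code):

    # Hypothetical layout:
    #   neutron/agent/example_agent.py        -- the 'real' main() lives here
    #   neutron/cmd/eventlet/example_agent.py -- thin, monkey-patched wrapper:

    import eventlet

    eventlet.monkey_patch()  # patch the stdlib before anything else is imported

    def main():
        # one-liner: delegate to the real main() located elsewhere in the tree
        from neutron.agent import example_agent  # hypothetical module
        return example_agent.main()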



 Comments?

 /Ihar


 On 02/13/2015 04:37 PM, Ihar Hrachyshka wrote:
  On 02/13/2015 02:33 AM, Kevin Benton wrote:
  Why did the services fail with the stdlib patched? Are they
  incompatible with eventlet?
 
  It's not like *service entry points* are not ready for neutron.* to
  be monkey patched, but tools around it (flake8 that imports
  neutron.hacking.checks, setuptools that import hooks from
  neutron.hooks etc). It's also my belief that base library should
  not be monkey patched not to put additional assumptions onto
  consumers.
 
  (Though I believe that all the code in the tree should be monkey
  patched, including those agents that currently run without the
  library patched - for consistency and to reflect the same test
  environment for unit tests that will be patched from
  neutron/tests/__init__.py).
 
  /Ihar
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2015-02-17 Thread Peter Pouliot
Hi All,

Due to our current CI outage and weather situation, we're forgoing the meeting
in favor of getting everything back up and running.

We'll attempt to resume meetings next week.

p
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Anita Kuno
On 02/17/2015 09:21 AM, Clint Byrum wrote:
 There has been a recent monumental shift in my focus around OpenStack,
 and it has required me to take most of my attention off TripleO. Given
 that, I don't think it is in the best interest of the project that I
 continue as PTL for the Kilo cycle.
 
 I'd like to suggest that we hold an immediate election for a replacement
 who can be 100% focused on the project.
 
 Thanks everyone for your hard work up to this point. I hope that one day
 soon TripleO can deliver on the promise of a self-deploying OpenStack
 that is stable and automated enough to sit in the gate for many if not
 all OpenStack projects.
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
So in the middle of a release, changing PTLs can take 3 avenues:

1) The new PTL is appointed. Usually there is a leadership candidate in
waiting which the rest of the project feels it can rally around until
the next election. The stepping down PTL takes the pulse of the
developers on the project and informs us on the mailing list who the
appointed PTL is. Barring any huge disagreement, we continue on with
work and the appointed PTL has the option of standing for election in
the next election round. The appointment lasts until the next round of
elections.

2) We have an election, in which case we need candidates and some dates.
Let me know if we want to exercise this option so that Tristan and I can
organize some dates.

3) We exercise the new governance resolution for the first time:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst

Now this resolution can only be invoked after an election is called and
if there is not a minimum of one self-nominating candidate. So the
question posed in 2) still stands, do you want the election officials to
come up with some dates?

Let us know how TripleO would like to proceed.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Repurposing HP CI regions

2015-02-17 Thread Clint Byrum
FYI: Recently HP's focus for deployment has changed, and as such, some of
the resources we had dedicated to TripleO are being redistributed. As a
result, the HP CI region won't be returning to the pool (it is currently
removed due to some stability issues), nor will we be adding region #2,
which never quite made it into the pool.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extensions for standalone EC2 API

2015-02-17 Thread Sean Dague
On 02/17/2015 09:38 AM, Alexandre Levine wrote:
 I started this thread to get a couple of pointers from Dan Smith and
 Sean Dague but it turned out to be a bigger discussion than I expected.
 
 So the history is, we're trying to add a few properties to be reported
 for instances in order to cut workaround access to the nova DB from the
 standalone EC2 API project implementation. At the previous nova meeting it was
 discussed that there is still potentially a chance to get this done for
 Kilo, provided the changes are not risky or complex. The changes
 really are not complex or risky, as you can see in this prototype
 review:
 
 https://review.openstack.org/#/c/155853/
 
 As you can see we just need to expose some more info which is already
 available.
 Two problems have arisen:
 
 1. I should correctly pack it into this new mechanism of microversions
 and Cristopher Yeoh and Alex Xu are very helpful in this area.
 
 2. The os-extended-server-attributes extension is actually admin-only
 accessible.
 
 And this second problem produced several options some of which are based
 on Alex Xu's suggestions.
 
 1. Stay with the admin-only access. (this is the easiest one)
 Problems:
 - Standalone EC2 API will have to use admin context to get this info (it
 already has creds configured for its metadata service anyways, so no big
 deal).
 - Some of the data potentially can be usable for regular users (this can
 be addressed later by specific policies configurations mechanism as
 suggested by Alex Xu).
 
 2. Allow new properties to be user-available, the existing ones will
 stay admin-only (extension for the previous one)
 Problems:
  - The obvious way is to check context.is_admin for the existing options
  while allowing the extension to be user-available in policy.json. This
  leads to hardcoding the behavior and is not recommended (see the previous
  thread for details on that).
 
 3. Put new properties in some non-admin extensions, like
 os-extended-status. (almost as easy as the first one)
 Problems:
 - They just don't fit in there. Status is about statuses, not about some
 static or dynamic properties of the object.
 
 4. Create new extension for this. (more complicated)
 Problems:
  - To start with, I couldn't come up with a name for it, because the
  existing os-extended-server-attributes is such an obvious choice for
  this. Having os-extended-attributes, os-extended-instance-attributes,
  or os-server-attributes besides it would be very confusing for both users
  and future developers.
 
 5. Put it into different extensions - reservation_id and launch_index
 into os-multiple-create, root_device_name into os_extended_volumes, 
 (most complicated)
 Problems:
 - Not all of the ready extensions exist. There is no ready place to put
 hostname, ramdisk_id, kernel_id. We'd still have to create a new extension.
 
  I personally tend to go for 1. It's the easiest and fastest at the moment to
  put everything under admin-only access, and since the nova API guys are
  considering allowing fine-tuning of policies for individual properties, it'll
  be possible later to make some of it available to users. Or, if necessary,
  it'll be possible to just switch off the admin restriction altogether for
  this extension. I don't think hypervisor_name, host and instance_name
  are such secret info that they should be hidden from users.
 
 Please let me know what you think.

Option 1 seems fine for now, I feel like we can decide on different
approaches in Liberty, but getting a microversion adding this as admin
only seems fine.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Clint Byrum
There has been a recent monumental shift in my focus around OpenStack,
and it has required me to take most of my attention off TripleO. Given
that, I don't think it is in the best interest of the project that I
continue as PTL for the Kilo cycle.

I'd like to suggest that we hold an immediate election for a replacement
who can be 100% focused on the project.

Thanks everyone for your hard work up to this point. I hope that one day
soon TripleO can deliver on the promise of a self-deploying OpenStack
that is stable and automated enough to sit in the gate for many if not
all OpenStack projects.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removal of copyright statements above the Apache 2.0 license header

2015-02-17 Thread Monty Taylor
On 02/17/2015 07:58 AM, Daniel P. Berrange wrote:
 On Tue, Feb 17, 2015 at 01:16:46PM +0100, Christian Berendt wrote:
 On 02/17/2015 12:05 PM, Daniel P. Berrange wrote:
 In section 4.(c) the LICENSE text says

   (c) You must retain, in the Source form of any Derivative Works
   that You distribute, all copyright, patent, trademark, and
   attribution notices from the Source form of the Work,
   excluding those notices that do not pertain to any part of
   the Derivative Works; and

 So based on that, I think it would be a violation to remove any of the
 Copyright acmeco lines in the file header.

 Section 4 is about the redistribution of the code. In my understanding
 this means that I am not allowed to remove the license header if I
 redistribute a source file (e.g. in a package or in my own software).
 
 The OpenStack project and/or many of our participating contributors
 and users, are all considered to be distributing the source code,
 so this section applies IMHO.
 
 If I add code to OpenStack I have to sign the CLA. The CLA includes:

2. Grant of Copyright License. Subject to the terms and conditions of
   this License, each Contributor hereby grants to You a perpetual,
   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
   copyright license to reproduce, prepare Derivative Works of,
   publicly display, publicly perform, sublicense, and distribute the
   Work and such Derivative Works in Source or Object form.

 Does this not mean that it is not necessary to explicitly add a
 copyright statement above the license headers?
 
 Whether the copyright statements are required or not in the first place,
 is tangential to whether you are legally permitted to remove any which
 already exist.
 
 According to
 http://www.apache.org/dev/apply-license.html#contributor-copyright and
 http://www.apache.org/legal/src-headers.html copyright statements should
 not be added to the headers in source files.
 
 That is outlining the Apache project's chosen policy. It is reasonable
 for them to define a policy that copyright statements not be added to
 source file headers. Note, however, that it says the copyright holder
 (or someone who has been granted permission to act on their behalf) is
 the party who is responsible for removing them. They are not saying
 that you can just remove copyright notices that were added by someone
 else.

This is a very important point. That is what the Apache project has
chosen to do. It is not what we've chosen to do.

I recommend reading this:

https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers

But also, what Daniel says is right - while it may or may not be
necessary to put the headers in the files (and reasonable people
disagree on this point) removing ones that are there is an action almost
guaranteed to provoke a bunch of anger.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-17 Thread Ihar Hrachyshka

Hi,

response was huge so far :) so to add more traction, I have a question
for everyone. Let's assume we want to move entry points for all
services and agents into neutron/cmd/... If so,

- - Do we want all existing tools stored in the path to be monkey
patched too? I would say 'yes', to make sure we run our unit tests in
the same environment as in real life;

- - Which parts of services we want to see there? Should they include
any real main() or register_options() code, or should they be just a
wrappers to call actual main() located somewhere in other parts of the
tree? I lean toward leaving just a one liner main() under
neutron/cmd/... that calls to 'real' main() located in a different
place in the tree.

Comments?

/Ihar


On 02/13/2015 04:37 PM, Ihar Hrachyshka wrote:
 On 02/13/2015 02:33 AM, Kevin Benton wrote:
 Why did the services fail with the stdlib patched? Are they 
 incompatible with eventlet?
 
 It's not like *service entry points* are not ready for neutron.* to
 be monkey patched, but tools around it (flake8 that imports 
 neutron.hacking.checks, setuptools that import hooks from 
 neutron.hooks etc). It's also my belief that the base library should
 not be monkey patched, so as not to put additional assumptions onto
 consumers.
 
 (Though I believe that all the code in the tree should be monkey 
 patched, including those agents that currently run without the
 library patched - for consistency and to reflect the same test
 environment for unit tests that will be patched from
 neutron/tests/__init__.py).
 
 /Ihar
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception Request - bp/libvirt-kvm-systemz

2015-02-17 Thread Marco Pavone
regarding the following question about this remaining nova FFE patch...

 https://review.openstack.org/#/c/149242/

 The question is what is the impact to s390x users without this?  Does
 this make using cinder impossible for zKVM users?
The impact of this patch not being merged would be that s390x users could
use cinder only for iSCSI-attached volumes. FC-attached volumes could not
be used.
For developers and CI that would be an acceptable restriction, but for real
s390x users/customers this would be a heavy restriction, since Fibre Channel
is the default and most important attachment in the enterprise market.

Hope, this helps.

Kind regards,
Marco Pavone


Matt Riedemann mrie...@linux.vnet.ibm.com wrote on 16.02.2015 22:10:35:

 From: Matt Riedemann mrie...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Date: 16.02.2015 22:16
 Subject: Re: [openstack-dev] [nova] Feature Freeze Exception Request
 - bp/libvirt-kvm-systemz



 On 2/10/2015 4:27 AM, Daniel P. Berrange wrote:
  On Mon, Feb 09, 2015 at 05:15:26PM +0100, Andreas Maier wrote:
 
  Hello,
  I would like to ask for the following feature freeze exceptions in
Nova.
 
  The patch sets below are all part of this blueprint:
  https://review.openstack.org/#/q/status:open+project:openstack/nova
  +branch:master+topic:bp/libvirt-kvm-systemz,n,z
  and affect only the kvm/libvirt driver of Nova.
 
  The decision for merging these patch sets by exception can be made one
by
  one; they are independent of each other.
 
  1. https://review.openstack.org/149242 - FCP support
 
  Title: libvirt: Adjust Nova to support FCP on System z systems
 
  What it does: This patch set enables FCP support for KVM on System
z.
 
  Impact if we don't get this: FCP attached storage does not work
for KVM
  on System z.
 
  Why we need it: We really depend on this particular patch set,
because
  FCP is our most important storage attachment.
 
  Additional notes: The code in the libvirt driver that is
 updated by this
  patch set is consistent with corresponding code in the Cinder
driver,
  and it has seen review by the Cinder team.
 
  2. https://review.openstack.org/150505 - Console support
 
  Title: libvirt: Enable serial_console feature for system z
 
  What it does: This patch set enables the backing support in
 Nova for the
  interactive console in Horizon.
 
  Impact if we don't get this: Console in Horizon does not work. The
  mitigation for a user would be to use the Log in Horizon (i.e.
with
  serial_console disabled), or the virsh console command in an ssh
  session to the host Linux.
 
  Why we need it: We'd like to have console support. Also, because
the
  Nova support for the Log in Horizon has been merged in an earlier
patch
  set as part of this blueprint, this remaining patch set makes the
  console/log support consistent for KVM on System z Linux.
 
  3. https://review.openstack.org/150497 - ISO/CDROM support
 
  Title: libvirt: Set SCSI as the default cdrom bus on System z
 
  What it does: This patch set enables that cdrom drives can be
attached
  to an instance on KVM on System z. This is needed for example for
  cloud-init config files, but also for simply attaching ISO images
to
  instances. The technical reason for this change is that the IDE
  attachment is not available on System z, and we need SCSI (just
like
  Power Linux).
 
  Impact if we don't get this:
 - Cloud-init config files cannot be on a cdrom drive. A
mitigation
for a user would be to have such config files on a
cloud-init
server.
 - ISO images cannot be attached to instances. There is no
 mitigation.
 
  Why we need it: We would like to avoid having to restrict
cloud-init
  configuration to just using cloud-init servers. We would like to be
able
  to support ISO images.
 
  Additional notes: This patch is a one line change (it simply
extends
  what is already done in a platform specific case for the
 Power platform,
  to be also used for System z).
 
  I will happily sponsor an exception on patches 2 & 3, since they are
pretty
  trivial & easily understood.
 
 
  I will tentatively sponsor patch 1, if other reviewers feel able to do a
  strong review of the SCSI stuff, since this SCSI host setup is not
  something I'm particularly familiar with.
 
  Regards,
  Daniel
 

 2 of the 3 changes have been merged outside of the FFE process, the only
 remaining one is the FCP support:

 https://review.openstack.org/#/c/149242/

 The question is what is the impact to s390x users without this?  Does
 this make using cinder impossible for zKVM users?

 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] [Ceilometer] Need help on https://bugs.launchpad.net/ceilometer/+bug/1310580

2015-02-17 Thread gordon chung
 I am a newbie and I want to start with my first contribution to 
OpenStack. I have chosen the bug 
https://bugs.launchpad.net/ceilometer/+bug/1310580 to start with. I 
have added comments on the bug and need some developer to validate 
those. Additionally I need some help on how to go about editing the 
wiki. Please help and advise.

welcome Ashish!  feel free to jump on to irc #openstack-ceilometer if you have 
any questions (you already did but just mentioning for others' reference).

looking at the bug, it's a bit dated and might not be that relevant anymore. we 
are in the process of a major overhaul of our dev docs as they were littered 
with years of patch work and were debatably unusable. i would suggest looking 
at some of the existing doc patches here: 
https://review.openstack.org/#/q/status:open+project:openstack/ceilometer,n,z

you can also look at any telemetry related items at http://docs.openstack.org/ 
to see if there are any gaps (there are)

cheers,
gord


Re: [openstack-dev] [Fuel] Separating granular tasks validator

2015-02-17 Thread Dmitriy Shulyak
+1 for separate tasks/graph validation library

In my opinion we may even migrate the graph visualizer to this library, because
it is most useful during development, and requiring an installed Fuel with
Nailgun feels a bit suboptimal.


On Tue, Feb 17, 2015 at 12:58 PM, Kamil Sambor ksam...@mirantis.com wrote:

 Hi all,

 I want to discuss moving validation out of our repositories into a single one.
 At the moment in Fuel we have validation for granular deployment tasks in 3
 separate repositories, so we need to maintain very similar code in all of
 them. The new idea that we discussed is to keep this code in one place. Below
 are more details.

 The schema validator should live in a separate repo, and we would install the
 validator in fuel-plugin, fuel-library and fuel-nailgun. The validator should
 support versions (return schemas and validate against them for a selected
 version).
 Reasons why we need validation in all three repositories:
 nailgun: we need validation in the API because we are able to send our own
 tasks to nailgun and execute them (today we validate the type of tasks in the
 deployment graph and during installation of a plugin)
 fuel-library: we need to check that the task schema is correctly defined in
 task.yaml files and that tasks do not create cycles (we already do both)
 fuel-plugin: we need to check that the defined tasks are supported by the
 selected version of nailgun (today we check whether the task type matches the
 hardcoded types in fuel-plugin; we have not updated this part in a while and
 there are only 2 types of tasks: shell and puppet)
 With versioning we shouldn't have conflicts between nailgun serialization
 and fuel-plugin, because a plugin will be able to use the schemas for a
 specified version of nailgun.

 As core reviewers of the repository we should keep the same reviewers as
 we have in fuel-core.

 What the validator should look like:
 a separate repo, installable using pip
 returns the correct schema for a selected version of Fuel
 is able to validate a schema for a selected version and ignore selected
 fields
 validates the graph built from the selected tasks (a rough sketch of such a
 cycle check follows right after this list)
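For illustration only, a standalone sketch (not the actual Fuel code; the task
names below are made up) of the kind of cycle check meant in the last item:

def find_cycle(tasks):
    """tasks maps a task name to the list of tasks it requires."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(node, path):
        color[node] = GRAY
        for dep in tasks.get(node, ()):
            if color.get(dep, WHITE) == GRAY:
                return path + [node, dep]      # back edge: cycle found
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep, path + [node])
                if cycle:
                    return cycle
        color[node] = BLACK
        return None

    for task in tasks:
        if color.get(task, WHITE) == WHITE:
            cycle = visit(task, [])
            if cycle:
                return cycle
    return None

# no cycle: deploy -> netconfig -> hiera
print(find_cycle({'hiera': [], 'netconfig': ['hiera'], 'deploy': ['netconfig']}))
# cycle between a and b
print(find_cycle({'a': ['b'], 'b': ['a']}))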

 Pros and cons of this solution:
 pros:
 one place to keep validation
 less error prone - we will eliminate errors caused by not updating one of
 the repos, and it will be easy to test whether changes are correct and
 compatible with all repos
 easier to develop (fewer changes when we add a new type of task or
 change the schemas of tasks - we edit just one place)
 easy distribution of code between repositories and easy to use by external
 developers
 cons:
 new repository that needs to be managed (and included into CI/QA/release
 cycle)
 a new dependency for fuel-library, fuel-web and fuel-plugins (fuel plugin
 builder) that developers need to be aware of

 Please comment and give your opinions.

 Best regards,
 Kamil Sambor



Re: [openstack-dev] [TripleO] stepping down as core reviewer

2015-02-17 Thread Chris Jones
Hi Rob

Thanks for your excellent run of insightful reviewing :)

Cheers,

Chris

On 15 February 2015 at 21:40, Robert Collins robe...@robertcollins.net
wrote:

 Hi, I've really not been pulling my weight as a core reviewer in
 TripleO since late last year when personal issues really threw me for
 a while. While those are behind me now, and I had a good break over
 the christmas and new year period, I'm sufficiently out of touch with
 the current (fantastic) progress being made that I don't feel
 comfortable +2'ing anything except the most trivial things.

 Now the answer to that is to get stuck back in, page in the current
 blueprints and charge ahead - but...

 One of the things I found myself reflecting on during my break was the
 extreme fragility of the things we were deploying in TripleO - most of
 our time is spent fixing fallout from unintended, unexpected
  consequences in the system. I think it's time to put some effort
 directly in on that in a proactive fashion rather than just reacting
 to whichever failure du jour is breaking deployments / scale /
 performance.

 So for the last couple of weeks I've been digging into the Nova
 (initially) bugtracker and code with an eye to 'how did we get this
 bug in the first place', and refreshing my paranoid
 distributed-systems-ops mindset: I'll be writing more about that
  separately, but it's clear to me that there's enough meat there - both
 analysis, discussion, and hopefully execution - that it would be
 self-deceptive for me to think I'll be able to meaningfully contribute
 to TripleO in the short term.

 I'm super excited by Kolla - I think that containers really address
 the big set of hurdles we had with image based deployments, and if we
 can one-way-or-another get cinder and Ironic running out of
 containers, we should have a pretty lovely deployment story. But I
 still think helping on the upstream stuff more is more important for
 now. We'll see where we're at in a cycle or two :)

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud





-- 
Cheers,

Chris


[openstack-dev] [nova] Ubuntu, qemu, NUMA support

2015-02-17 Thread Chris Friesen


Hi all,

Just thought I'd highlight here that Ubuntu 14.10 is using qemu 2.1, but they're 
not currently enabling NUMA support.


I've reported it as a bug and it's been fixed for 15.04, but there is some 
pushback about fixing it in 14.10 on the grounds that it is a feature 
enhancement and not a bugfix:

https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1417937


Also, we currently assume that qemu can pin to NUMA nodes.  This is an invalid 
assumption since this was only added as of qemu 2.1, and there only if it's 
compiled with NUMA support.  At the very least we should have a version check, 
but if Ubuntu doesn't fix things then maybe we should actually verify the 
functionality first before trying to use it.
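To make the shape of that check concrete, here is a rough standalone sketch
(illustrative names only, not the actual nova code) of the kind of guard I mean:

MIN_QEMU_NUMA_VERSION = (2, 1, 0)

def can_pin_to_numa_nodes(qemu_version, built_with_numa):
    """qemu_version is a (major, minor, micro) tuple as reported via libvirt;
    built_with_numa says whether this qemu was compiled with NUMA support."""
    # Pinning was only added in qemu 2.1, and even then only when the
    # package was built with NUMA support enabled.
    return qemu_version >= MIN_QEMU_NUMA_VERSION and built_with_numa

print(can_pin_to_numa_nodes((2, 1, 2), True))   # True
print(can_pin_to_numa_nodes((2, 1, 2), False))  # False: built without NUMA (the 14.10 case)
print(can_pin_to_numa_nodes((2, 0, 0), True))   # False: too old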


I've opened a bug to track this issue:
https://bugs.launchpad.net/nova/+bug/1422775

Chris



Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Doug Hellmann


On Mon, Feb 16, 2015, at 07:50 PM, Mike Bayer wrote:
 hi all -
 
 I’ve been researching this cinder issue
 https://bugs.launchpad.net/cinder/+bug/1417018 and I’ve found something
 concerning.
 
 Basically there’s a race condition here which is occurring because a
 single MySQL connection is shared between two green threads.  This occurs
 while Cinder is doing its startup work in cinder/volume/manager.py -
 init_host(), and at the same time a request comes in from a separate
 service call that seems to be part of the startup.
 
 The log output at http://paste.openstack.org/show/175571/ shows this
 happening.  I can break it down:
 
 
 1. A big query for volumes occurs as part of
 “self.db.volume_get_all_by_host(ctxt, self.host)”.   To reproduce the
 error more regularly I’ve placed it into a loop of 100 calls.  We can see
 that the thread id is 68089648 and the MySQL connection is 3a9c5a0.
 
 2015-02-16 19:32:47.236 INFO sqlalchemy.engine.base.Engine
 [req-ed3c0248-6ee5-4063-80b5-77c5c9a23c81 None None] tid: 68089648,
 connection: _mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt
 SELECT volumes.created_at AS 
 2015-02-16 19:32:47.237 INFO sqlalchemy.engine.base.Engine
 [req-ed3c0248-6ee5-4063-80b5-77c5c9a23c81 None None]
 ('localhost.localdomain@ceph', 'localhost.localdomain@ceph#%’)
 
 2. A “ping” query comes in related to a different API call - different
 thread ID, *same* connection
 
 2015-02-16 19:32:47.276 INFO sqlalchemy.engine.base.Engine
 [req-600ef638-cb45-4a34-a3ab-6d22d83cfd00 None None] tid: 68081456,
 connection: _mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt
 SELECT 1
 2015-02-16 19:32:47.279 INFO sqlalchemy.engine.base.Engine
 [req-600ef638-cb45-4a34-a3ab-6d22d83cfd00 None None] ()
 
 3. The first statement is still in the middle of invocation, so we get a
 failure, either a mismatch of the statement to the cursor, or a MySQL
 lost connection (stack trace begins)
 
 Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 457,
   in fire_timers
  … more stack trace
 
 4. another thread id, *same* connection.
 
 2015-02-16 19:32:47.290 INFO sqlalchemy.engine.base.Engine
 [req-f980de7c-151d-4fed-b45e-d12b133859a6 None None] tid: 61238160,
 connection: _mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt
 SELECT 1
 
 rows = [process[0](row, None) for row in fetch]
   File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py,
   line 363, in _instance
 tuple([row[column] for column in pk_cols])
   File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py,
   line 331, in _key_fallback
 expression._string_or_unprintable(key))
 NoSuchColumnError: Could not locate column in row for column
 'volumes.id'
 2015-02-16 19:32:47.293 ERROR cinder.openstack.common.threadgroup [-]
 Could not locate column in row for column 'volumes.id’
 
 
 So I’m not really sure what’s going on here.  Cinder seems to have some
 openstack greenlet code of its own in
 cinder/openstack/common/threadgroup.py, I don’t know the purpose of this
 code.   SQLAlchemy’s connection pool has been tested a lot with eventlet
 / gevent and this has never been reported before. This is a very
 severe race and I’d think that this would be happening all over the
 place.

The threadgroup module is from the Oslo incubator, so if you need to
review the git history you'll want to look at that copy.

Doug

 
 Current status is that I’m continuing to try to determine why this is
 happening here, and seemingly nowhere else.
 
 
 
 


[openstack-dev] [neutron-vnaas][neutron][qa] Functional/scenario testing for VPNaaS repo

2015-02-17 Thread Paul Michali
Hi, I need some guidance and feedback on our needs for testing in the VPNaaS
repo.

*Background...*

The VPNaaS reference implementation is currently using the open source
OpenSwan application that provides the IPSec site-to-site connection
functionality. The OpenStack code essentially creates the configuration
files for this app, and updates firewall rules for a connection.

A developer on the VPNaaS sub-team has been implementing a new driver that
uses the open source StrongSwan application (
https://review.openstack.org/#/c/144391/). This uses a different
configuration setup, requires installation on Ubuntu 14.04 (for example),
and disabling of AppArmor (so it does not enforce for the charon and stroke processes).

The intent here is to replace OpenSwan with StrongSwan as the reference
implementation in the future, as it is a newer implementation, has more
features, and is supported on multiple operating systems now; also, on Ubuntu
14.10, OpenSwan is being deprecated (it can no longer be installed).

Currently, there are only some API test cases in the Tempest repo for VPN.
There are no functional tests for VPNaaS, and in particular, no scenario
test that ensures that the OpenSwan (and now StrongSwan) apps are properly
configured and can create and negotiate an end-to-end connection.


*Goals...*

The goal is to provide functional tests for the device drivers that are
used to control OpenSwan (and now StrongSwan). My guess here is that we
can verify that the right configuration files/directories are created, and
can check the status of the OpenSwan/StrongSwan process for different
operations.
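As a rough illustration of that first kind of check (all names here are
hypothetical stand-ins, not the real device-driver classes), such a functional
test could render the config and assert on the files produced:

import os
import shutil
import tempfile
import unittest


class FakeIPsecProcess(object):
    # Stand-in for a device-driver process wrapper that writes config files.
    def __init__(self, config_dir):
        self.config_dir = config_dir

    def ensure_configs(self):
        os.makedirs(self.config_dir)
        with open(os.path.join(self.config_dir, 'ipsec.conf'), 'w') as f:
            f.write('conn site-to-site\n')


class TestConfigGeneration(unittest.TestCase):
    def setUp(self):
        self.base_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, self.base_dir)

    def test_config_file_is_written(self):
        config_dir = os.path.join(self.base_dir, 'ipsec')
        FakeIPsecProcess(config_dir).ensure_configs()
        path = os.path.join(config_dir, 'ipsec.conf')
        self.assertTrue(os.path.exists(path))
        with open(path) as f:
            self.assertIn('conn', f.read())


if __name__ == '__main__':
    unittest.main()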

In addition a scenario test is strongly desired (at least by me :) to
ensure that the feature indeed works (negotiating a connection and able to
pass traffic between the two nodes).

Personally, I'd like to see us get something in place for K-3, even if it
is limited in nature, as we've been 2+ releases without any
functional/scenario tests.

*Where we are today...*

As part of the StrongSwan driver implementation (
https://review.openstack.org/#/c/144391/), a functional test is being
developed. It currently checks the configuration files generated.

In addition, there are currently two implementations of a scenario test for
VPNaaS out for review (https://review.openstack.org/#/c/140072, and
https://review.openstack.org/#/c/153292/5) that developers have been
working on. Both of these are targeted for the Tempest repo. One does a
ping check and the other does an SSH.

I'm thinking of, but have not started, implementing functional tests for
the OpenSwan driver (if the community thinks this makes sense, given it
will be deprecated).

My understanding is that the Neutron tests in the Tempest repo are being
migrated into the Neutron repo, and a tempest library developed.


*Questions/guidance needed...*

With the scenario tests, there are several questions...

1) Is there a preference (from a Tempest standpoint) of one of the scenario
tests over the other (both do the same thing, just differently)?

2) Should an exception be made to the decision to not allow *aaS tests to
be added to Tempest? The two scenario test implementations mentioned above
are created for the Tempest repo (because they are based on two different
abandoned designs from 1/2014 and 7/2014). The test could be migrated to
Neutron (and later VPNaaS repos) as part of that migration process.

3) If not, when is it expected that the migration will be done (wondering
if this could make K-3)?

4) When will the tempest library be available, if we're waiting for the
migration to complete and then use the test in the VPNaaS repo?

5) Instead of being based on Tempest, and waiting for migration, could the
scenario test be adapted to run in the existing functional test area of the
VPNaaS repo as a dsvm-functional test (and would it make sense to go that
route)?

6) Since the StrongSwan test has different setup requirements than
OpenSwan, will we need separate tempest jobs?


For functional tests (of the device drivers), there are several questions...

7) Because of the setup differences, the thought was to create two
functional jobs, one for StrongSwan and one for OpenSwan. Does that sound
right?

8) Should there be two root directories (tests/functional/openswan and
tests/functional/strongswan) or should there be one root (tests/functional)
using sub-directories and filters to select modules for the two jobs?

9) Would the latter scheme be better, in case there are tests that are
common to both implementations (and could be placed in tests/functional)?

10) The checking of the config files (I think) could be done w/o a devstack
environment. Should those be done in unit tests, or is it better to keep
all tests related to the specific driver in the functional test area
(testing the config file, and querying the status of the process)?

General...

11) Are there other testing approaches that we are missing or should
consider (and that we should be doing first, to meet our goals)?



Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Doug Hellmann


On Tue, Feb 17, 2015, at 05:37 AM, Daniel P. Berrange wrote:
 On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
   ## Cores are *NOT* special
   
   At some point, for some reason that is unknown to me, this message
   changed and the feeling of cores being some kind of superheroes became
   a thing. It's gotten far enough to the point that I've come to know
   that some projects even have private (flagged with +s), password
   protected, irc channels for core reviewers.
  
  This is seriously disturbing.
  
  If you're one of those core reviewers hanging out on a private channel,
  please contact me privately: I'd love to hear from you why we failed as
  a community at convincing you that an open channel is the place to be.
  
  No public shaming, please: education first.
 
 I've been thinking about these last few lines a bit, I'm not entirely
 comfortable with the dynamic this sets up.
 
 What primarily concerns me is the issue of community accountability. A
 core
 feature of OpenStack's project & individual team governance is the idea
 of democratic elections, where the individual contributors can vote in
 people who they think will lead OpenStack in a positive way, or
 conversely
 hold leadership to account by voting them out next time. The ability of
 individual contributors to exercise this freedom, though, relies on the
 voters being well informed about what is happening in the community.
 
 If cases of bad community behaviour, such as use of passwd protected IRC
 channels, are always primarily dealt with via further private
 communications,
 then we are denying the voters the information they need to hold people
 to
 account. I can understand the desire to avoid publicly shaming people
 right away, because the accusations may be false, or may be arising from
 a
 simple mis-understanding, but at some point genuine issues like this need
 to be public. Without this we make it difficult for contributors to make
 an informed decision at future elections.
 
 Right now, this thread has left me wondering whether there are still any
 projects which are using password protected IRC channels, or whether they
 have all been deleted, and whether I will be unwittingly voting for
 people
 who supported their use in future openstack elections.

I trust Stef, as one of our Community Managers, to investigate and
report back. Let's give that a little time, and allow for the fact that
with travel and other things going on it may take a while. I've added it
to the TC agenda [1] for next week so we can check in to see where
things stand.

Doug

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda

 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-   
 http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-
 http://virt-manager.org :|
 |: http://autobuild.org   -o-
 http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-  
 http://live.gnome.org/gtk-vnc :|
 


Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-17 Thread Matt Riedemann



On 2/16/2015 9:57 PM, Jay Pipes wrote:

Hi Mikal, sorry for top-posting. What was the final decision regarding
the instance tagging work?

Thanks,
-jay

On 02/16/2015 09:44 PM, Michael Still wrote:

Hi,

we had a meeting this morning to try and work through all the FFE
requests for Nova. The meeting was pretty long -- two hours or so --
and we did it in the nova IRC channel in an attempt to be as open as
possible. The agenda for the meeting was the list of FFE requests at
https://etherpad.openstack.org/p/kilo-nova-ffe-requests

I recognise that this process is difficult for all, and that it is
frustrating when your FFE request is denied. However, we have tried
very hard to balance distractions from completing priority tasks and
getting as many features into Kilo as possible. I ask for your
patience as we work to finalize the Kilo release.

That said, here's where we ended up:

Approved:

 vmware: ephemeral disk support
 API: Keypair support for X509 public key certificates

We were also presented with a fair few changes which are relatively
trivial (single patch, not very long) and isolated to a small part of
the code base. For those, we've selected the ones with the greatest
benefit. These ones are approved so long as we can get the code merged
before midnight on 20 February 2015 (UTC). The deadline has been
introduced because we really are trying to focus on priority work and
bug fixes for the remainder of the release, so I want to time box the
amount of distraction these patches cause.

Those approved in this way are:

 ironic: Pass the capabilities to ironic node instance_info
 libvirt: Nova vif driver plugin for opencontrail
 libvirt: Quiescing filesystems with QEMU guest agent during image
snapshotting
 libvirt: Support vhost user in libvirt vif driver
 libvirt: Support KVM/libvirt on System z (S/390) as a hypervisor
platform

It should be noted that there was one request which we decided didn't
need a FFE as it isn't feature work. That may proceed:

 hyperv: unit tests refactoring

Finally, there were a couple of changes we were uncomfortable merging
this late in the release as we think they need time to bed down
before a release we consider stable for a long time. We'd like to see
these merge very early in Liberty:

 libvirt: use libvirt storage pools
 libvirt: Generic Framework for Securing VNC and SPICE
Proxy-To-Compute-Node Connections

Thanks again to everyone with their patience with our process, and
helping to make Kilo an excellent Nova release.

Michael






There are notes in the etherpad,

https://etherpad.openstack.org/p/kilo-nova-ffe-requests

but I think we wanted to get cyeoh's and Ken'ichi's thoughts on the v2 
and/or v2.1 question about the change, i.e. should it be v2.1-only with 
microversions, or, if that is going to block it, is it fair to keep out 
the v2 change that's already in the patch?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [neutron] neutron-drivers meeting

2015-02-17 Thread Kyle Mestery
On Tue, Feb 17, 2015 at 2:05 PM, Armando M. arma...@gmail.com wrote:

 Hi folks,

 I was wondering if we should have a special neutron-drivers meeting on
 Wednesday Feb 18th (9:30AM CST / 7:30AM PST) to discuss recent patches
 on which a few cores have not reached consensus, namely:

 - https://review.openstack.org/#/c/155373/
 - https://review.openstack.org/#/c/148318/

 The Kilo cycle end is fast approaching and a speedy resolution of these
 matters would be better. I fear that leaving these items to the Open
 Discussion slot in the weekly IRC meeting will not give us enough time.

 Is there any other item we need to get consensus on?

 Anyone is welcome to join.

 ++

Let's plan on having the drivers meeting [1] tomorrow at 1530 UTC in
#openstack-meeting-3. Thanks for proposing this Armando!

Kyle

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers

 Thanks,
 Armando



Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-17 Thread Jamie Lennox


- Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 9:07:14 PM
 Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role
 
 I proposed a fix for a bug in devstack
 https://review.openstack.org/#/c/156527/ caused by the fact that the role
 _member_ is no longer created due to a recent change.
 
 But why is the existence of the _member_ role necessary, even if it is not
 necessary to use it? Is this a known/wanted feature or a bug in itself?

So the way to be a 'member' of a project so that you can get a token scoped to 
that project is to have a role defined on that project. 
The way we would handle that from keystone for default_projects is to create a 
default role _member_ which had no permissions attached to it, but by assigning 
it to the user on the project we granted membership of that project.
If the user has any other roles on the project then the _member_ role is 
essentially ignored. 
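For illustration, that flow looks roughly like the following sketch (assuming
python-keystoneclient's v3 API; the token, endpoint and IDs are placeholders):

from keystoneclient.v3 import client

# placeholder admin-token bootstrap; any authenticated v3 client works
keystone = client.Client(token='ADMIN_TOKEN',
                         endpoint='http://keystone.example.com:35357/v3')

# a role with no policy rules behind it...
member = keystone.roles.create(name='_member_')

# ...still counts as a role assignment on the project, which is all that is
# needed for the user to get a project-scoped token
keystone.roles.grant(member, user='USER_ID', project='PROJECT_ID')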

In that devstack patch I removed the default project because we want our users 
to explicitly ask for the project they want to be scoped to.
This patch shouldn't have caused any issues though because in each of those 
cases the user is immediately granted a different role on the project - 
therefore having 'membership'. 

Creating the _member_ role manually won't cause any problems, but what issue 
are you seeing where you need it?


Jamie


 --
 Pasquale Porreca
 
 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)
 
 Mobile +39 3394823805
 Skype paskporr
 
 


Re: [openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-17 Thread Doug Wiegley
There's no need for additional neutron packaging.  The real trick is that the 
neutron-*aas packages need to have a package dependency on neutron, and 
whichever release of openstack they're all cut with, they just have to match. 
Put another way, use your existing neutron package from the same release, just 
make sure it's there if an *aas is installed.

Tox does those egg games because putting neutron in requirements.txt causes 
other problems.

When kilo gets a stable branch, that tox.ini will need to point at stable/kilo 
instead of master. But that shouldn't affect packaging.

Ultimately, I'm hoping we can split out the library code, to make things saner: 
https://review.openstack.org/#/c/154736/

Thanks,
doug


 On Feb 16, 2015, at 8:13 AM, James Page james.p...@ubuntu.com wrote:
 
 Hi Folks
 
 The split-out drivers for vpn/fw/lb as-a-service all make use of a
 generated egg of the neutron git repository as part of their unit test
 suite dependencies.
 
 This presents a bit of a challenge for us downstream in distributions,
 as we can't really pull in a full source egg of neutron from
 git.openstack.org; we have the code base for neutron core available
 (python-neutron), but that does not appear to be enough (see [0]).
 
 I would appreciate it if devs working in this area could a) review the
 bug and the problems we are seeing and b) think about how this can work
 for distributions - I'm happy to create a new 'neutron-testing' type
 package from the neutron codebase to support this stuff, but right now
 I'm a bit unclear on exactly what it needs to contain!
 
 Cheers
 
 James
 
 
 [0] https://bugs.launchpad.net/neutron/+bug/1422376
 
 - -- 
 James Page
 Ubuntu and Debian Developer
 james.p...@ubuntu.com
 jamesp...@debian.org


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Joe Gordon
On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:

 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
  On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
  On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
  On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
  Hey everyone,
 
  The os-ansible-deployment team was working on updates to add support
  for
  the latest version of juno and noticed some interesting version
  specifiers
  introduced into global-requirements.txt in January. It introduced some
  version specifiers that seem a bit impossible like the one for
 requests
  [1]. There are others that equate presently to pinning the versions of
  the
  packages [2, 3, 4].
 
  I understand fully and support the commit because of how it improves
  pretty much everyone’s quality of life (no fires to put out in the
  middle
  of the night on the weekend). I’m also aware that a lot of the
  downstream
  redistributors tend to work from global-requirements.txt when
  determining
  what to package/support.
 
  It seems to me like there’s room to clean up some of these
 requirements
  to
  make them far more explicit and less misleading to the human eye (even
  though tooling like pip can easily parse/understand these).
 
  I think that's the idea. These requirements were generated
  automatically, and fixed issues that were holding back several
 projects.
  Now we can apply updates to them by hand, to either move the lower
  bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
  the range definitions. We should not raise the limits of any Oslo
  libraries, and we should consider raising the limits of third-party
  libraries very carefully.
 
  We should make those changes on one library at a time, so we can see
  what effect each change has on the other requirements.
 
 
  I also understand that stable-maint may want to occasionally bump the
  caps
  to see if newer versions will not break everything, so what is the
  right
  way forward? What is the best way to both maintain a stable branch
 with
  known working dependencies while helping out those who do so much work
  for
  us (downstream and stable-maint) and not permanently pinning to
 certain
  working versions?
 
  Managing the upper bounds is still under discussion. Sean pointed out
  that we might want hard caps so that updates to stable branch were
  explicit. I can see either side of that argument and am still on the
  fence about the best approach.
 
  History has shown that it's too much work keeping testing functioning
  for stable branches if we leave dependencies uncapped. If particular
  people are interested in bumping versions when releases happen, it's
  easy enough to do with a requirements proposed update. It will even run
  tests that in most cases will prove that it works.
 
  It might even be possible for someone to build some automation that did
  that as stuff from pypi released so we could have the best of both
  worlds. But I think capping is definitely something we want as a
  project, and it reflects the way that most deployments will consume this
  code.
 
   -Sean
 
  --
  Sean Dague
  http://dague.net
 
  Right. No one is arguing the very clear benefits of all of this.
 
  I’m just wondering if for the example version identifiers that I gave in
  my original message (and others that are very similar) if we want to make
  the strings much simpler for people who tend to work from them (i.e.,
  downstream re-distributors whose jobs are already difficult enough). I’ve
  offered to help at least one of them in the past who maintains all of
  their distro’s packages themselves, but they refused so I’d like to help
  them anyway possible. Especially if any of them chime in as this being
  something that would be helpful.

 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.

 From my point of view, normalization patches would be fine.

  requests>=1.2.1,!=2.4.0,<=2.2.1

 Is actually an odd one, because that's still there because we're using
 Trusty level requests in the tests, and my ability to have devstack not
 install that has thus far failed.

 Things like:

  osprofiler>=0.3.0,<=0.3.0 # Apache-2.0

 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
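For what it's worth, it's easy to sanity-check what ranges like these actually
admit; a small sketch, assuming the 'packaging' library that pip vendors is
installed:

from packaging.specifiers import SpecifierSet

requests_spec = SpecifierSet(">=1.2.1,!=2.4.0,<=2.2.1")
osprofiler_spec = SpecifierSet(">=0.3.0,<=0.3.0")

for candidate in ("1.2.1", "2.2.1", "2.4.0", "2.5.1"):
    # the !=2.4.0 exclusion is redundant here, since <=2.2.1 already rules it out
    print(candidate, requests_spec.contains(candidate))

# >=0.3.0,<=0.3.0 admits exactly one version, i.e. it behaves like ==0.3.0
print(osprofiler_spec.contains("0.3.0"), osprofiler_spec.contains("0.3.1"))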


global-requirements for stable branches serves two uses:

1. Specify the set of dependencies that we would like to test against
2.  A tool for downstream packagers to use when determining what to
package/support.

For #1, ideally we would like a set of all dependencies, including
transitive, with explicit versions (very similar to the output of
pip-freeze). But for #2 the standard requirement file with a range is
preferred. Putting an upper bound on each dependency, instead of using a
'==' was a compromise between the two use cases.

Going forward I 

Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Clint Byrum
Excerpts from Daniel P. Berrange's message of 2015-02-17 02:37:50 -0800:
 On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
   ## Cores are *NOT* special
   
   At some point, for some reason that is unknown to me, this message
   changed and the feeling of cores being some kind of superheroes became
   a thing. It's gotten far enough to the point that I've come to know
   that some projects even have private (flagged with +s), password
   protected, irc channels for core reviewers.
  
  This is seriously disturbing.
  
  If you're one of those core reviewers hanging out on a private channel,
  please contact me privately: I'd love to hear from you why we failed as
  a community at convincing you that an open channel is the place to be.
  
  No public shaming, please: education first.
 
 I've been thinking about these last few lines a bit, I'm not entirely
 comfortable with the dynamic this sets up.
 
 What primarily concerns me is the issue of community accountability. A core
 feature of OpenStack's project & individual team governance is the idea
 of democratic elections, where the individual contributors can vote in
 people who they think will lead OpenStack in a positive way, or conversely
 hold leadership to account by voting them out next time. The ability of
 individual contributors to exercise this freedom, though, relies on the
 voters being well informed about what is happening in the community.
 
 If cases of bad community behaviour, such as use of passwd protected IRC
 channels, are always primarily dealt with via further private communications,
 then we are denying the voters the information they need to hold people to
 account. I can understand the desire to avoid publicly shaming people
 right away, because the accusations may be false, or may be arising from a
 simple mis-understanding, but at some point genuine issues like this need
 to be public. Without this we make it difficult for contributors to make
 an informed decision at future elections.
 
 Right now, this thread has left me wondering whether there are still any
 projects which are using password protected IRC channels, or whether they
 have all been deleted, and whether I will be unwittingly voting for people
 who supported their use in future openstack elections.
 

Shaming a person is a last resort, when that person may not listen to
reason. It's sometimes necessary to bring shame to a practice, but even
then, those who are participating are now draped in shame as well and
will have a hard time saving face.

However, if we show respect to peoples' ideas, and take the time not
only to educate them on our values, but also to educate ourselves about
what motivates that practice, then I think we will have a much easier
time changing, or even accepting, these behaviors.



Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Mike Bayer


Doug Hellmann d...@doughellmann.com wrote:

 My question is then how is it that such an architecture would be
 possible,
 that Cinder’s service starts up without greenlets yet allows
 greenlet-based
 requests to come in before this critical task is complete? Shouldn’t the
 various oslo systems be providing patterns to prevent this disastrous
 combination?   
 
 I would have thought so, but they are (mostly) libraries not frameworks
 so they are often combined in unexpected ways. Let's see where the issue
 is before deciding on where the fix should be.


my next suspicion is that this is actually a spawned subprocess, through
oslo.concurrency, and it is failing to create a new connection pool and thus
is sharing the same file descriptor between processes. That is a much more
ordinary issue and would explain everything I’m seeing in a more familiar
way. Let me see if I can confirm *that*.



 Doug
 
 Doug
 
 Current status is that I’m continuing to try to determine why this is
 happening here, and seemingly nowhere else.
 
 
 
 


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Jeff Peeler

On Tue, Feb 17, 2015 at 03:07:31AM +, Steven Dake (stdake) wrote:

Hi folks,

I’d am proposing Andre Martin to join the kolla-core team.  Andre has been 
providing mostly code implementation, but as he contributes heavily, has 
indicated he will get more involved in our peer reviewing process.

He has contributed 30% of the commits for the Kilo development cycle, acting as 
our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=allmodule=kollametric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any –1 
vote is a veto.


+1



Re: [openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-17 Thread Mike Bayer


Mike Bayer mba...@redhat.com wrote:

 
 
 Doug Hellmann d...@doughellmann.com wrote:
 
 My question is then how is it that such an architecture would be
 possible,
 that Cinder’s service starts up without greenlets yet allows
 greenlet-based
 requests to come in before this critical task is complete? Shouldn’t the
 various oslo systems be providing patterns to prevent this disastrous
 combination?   
 
 I would have thought so, but they are (mostly) libraries not frameworks
 so they are often combined in unexpected ways. Let's see where the issue
 is before deciding on where the fix should be.
 
 
  my next suspicion is that this is actually a spawned subprocess, through
  oslo.concurrency, and it is failing to create a new connection pool and thus
  is sharing the same file descriptor between processes. That is a much more
  ordinary issue and would explain everything I’m seeing in a more familiar
  way. Let me see if I can confirm *that*.


So, that was it.

Here’s some debugging I added to oslo.db inside of its own create_engine() that 
catches this easily:

    engine = sqlalchemy.create_engine(url, **engine_args)

    import os
    from sqlalchemy import event

    @event.listens_for(engine, "connect")
    def connect(dbapi_connection, connection_record):
        connection_record.info['pid'] = os.getpid()

    @event.listens_for(engine, "checkout")
    def checkout(dbapi_connection, connection_record, connection_proxy):
        pid = os.getpid()
        if connection_record.info['pid'] != pid:
            raise Exception(
                "Connection record belongs to pid %s, "
                "attempting to check out in pid %s" %
                (connection_record.info['pid'], pid))


sure enough, the database errors go away and Cinder's logs when I do a plain
startup are filled with:

 ERROR cinder.openstack.common.threadgroup [-] Connection record belongs to pid 
21200, attempting to check out in pid 21408
 ERROR cinder.openstack.common.threadgroup [-] Connection record belongs to pid 
21200, attempting to check out in pid 21409

etc.
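(For completeness, the usual remedy once this is confirmed is to make sure a
forked child never reuses the parent's pooled connections. A minimal standalone
sketch of that pattern, assuming a reachable MySQL server and the MySQLdb
driver; this is not the actual oslo.db/cinder fix:)

import os
import sqlalchemy

# placeholder URL; any real server/driver will do for the pattern
engine = sqlalchemy.create_engine("mysql://user:secret@127.0.0.1/cinder")

pid = os.fork()
if pid == 0:
    # child process: drop the pooled connections/file descriptors inherited
    # from the parent; the next checkout opens a fresh connection owned by
    # this process only
    engine.dispose()
    with engine.connect() as conn:
        conn.execute("SELECT 1")
    os._exit(0)
else:
    os.waitpid(pid, 0)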

The subprocesses here are logged as:

2015-02-17 13:05:12.583 DEBUG oslo_concurrency.processutils 
[req-a06464d1-4785-4a29-8e58-239d5a674451 None None] CMD sudo cinder-rootwrap 
/etc/cinder/rootwrap.conf env LC_ALL=C LVM_SYSTEM_DIR=/etc/cinder vgs 
--noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix 
stack-volumes-lvm-1 returned: 0 in 0.828s from (pid=21408) execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:221

a trace is as follows:

Traceback (most recent call last):
 File /opt/stack/cinder/cinder/openstack/common/threadgroup.py, line 143, in 
wait
   x.wait()
 File /opt/stack/cinder/cinder/openstack/common/threadgroup.py, line 47, in 
wait
   return self.thread.wait()
 File /usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 175, in 
wait
   return self._exit_event.wait()
 File /usr/lib/python2.7/site-packages/eventlet/event.py, line 121, in wait
   return hubs.get_hub().switch()
 File /usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 294, in 
switch
   return self.greenlet.switch()
 File /usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 214, in 
main
   result = function(*args, **kwargs)
 File /opt/stack/cinder/cinder/openstack/common/service.py, line 492, in 
run_service
   service.start()
 File /opt/stack/cinder/cinder/service.py, line 142, in start
   self.manager.init_host()
 File /usr/lib/python2.7/site-packages/osprofiler/profiler.py, line 105, in 
wrapper
   return f(*args, **kwargs)
 File /usr/lib/python2.7/site-packages/osprofiler/profiler.py, line 105, in 
wrapper
   return f(*args, **kwargs)
 File /usr/lib/python2.7/site-packages/osprofiler/profiler.py, line 105, in 
wrapper
   return f(*args, **kwargs)
 File /opt/stack/cinder/cinder/volume/manager.py, line 294, in init_host
   volumes = self.db.volume_get_all_by_host(ctxt, self.host)
 File /opt/stack/cinder/cinder/db/api.py, line 191, in volume_get_all_by_host
   return IMPL.volume_get_all_by_host(context, host)
 File /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 160, in wrapper
   return f(*args, **kwargs)
 File /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 1229, in 
volume_get_all_by_host
   result = _volume_get_query(context).filter(or_(*conditions)).all()
 File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2320, 
in all
   return list(self)
 File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2438, 
in __iter__
   return self._execute_and_instances(context)
 File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2451, 
in _execute_and_instances
   close_with_result=True)
 File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2442, 
in _connection_from_session
   **kw)
 File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 854, 
in connection
   close_with_result=close_with_result)
 File 

Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Daneyon Hansen (danehans)
+1

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 7:20 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

+1 \o/ yay

From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 8:07 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

Hi folks,

I am proposing Andre Martin to join the kolla-core team.  Andre has been 
mostly providing code implementation, but as he contributes heavily, he has 
indicated he will get more involved in our peer review process.

He has contributed 30% of the commits for the Kilo development cycle, acting as 
our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=allmodule=kollametric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any –1 vote is a 
veto.

Regards
-steve




Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-17 Thread Clark Boylan
On Tue, Feb 17, 2015, at 09:32 AM, Stefano Maffulli wrote:
 Changing the subject since Flavio's call for openness was broader than
 just private IRC channels.
 
 On Tue, 2015-02-17 at 10:37 +, Daniel P. Berrange wrote:
  If cases of bad community behaviour, such as use of passwd protected
  IRC channels, are always primarily dealt with via further private
  communications, then we are denying the voters the information they
  need to hold people to account. I can understand the desire to avoid
   publicly shaming people right away, because the accusations may be
  false, or may be arising from a simple mis-understanding, but at some
  point genuine issues like this need to be public. Without this we make
  it difficult for contributors to make an informed decision at future
  elections.
 
 You got my intention right: I wanted to understand better what led some
 people to create a private channel, what their needs were. For that
 objective, having an accusatory tone won't go anywhere and instead I
 needed to provide them a safe place to discuss and then I would report
 back in the open.
 
 So far, I've received comments in private from only one person,
 concerned about public logging of channels without notification. I
 wish the people hanging out on at least one of such private channels
 would provide more insight on their choice, but so far they have not.
 
 Regarding the why: at least one person told me they prefer not to use
 official openstack IRC channels because there is no notification if a
 channel is being publicly logged. Together with freenode not obfuscating
 host names, and eavesdrop logs available to any spammer, one person at
 least is concerned that private information may leak. There may also be
 legal implications in Europe, under the Data Protection Directive, since
 IP addresses and hostnames can be considered sensitive data. Not to
 mention the casual dropping of emails or phone numbers in public+logged
 channels.
 
 I think these points are worth discussing. One easy fix this person
 suggests is to make it the default that all channels are logged and write a
 warning on the wiki/IRC page. Another is to make the channel bot announce
 whether the channel is logged. A third is to clean up the hostname details on
 joins/parts in the eavesdrop logs and put the logs behind a login (to hide
 them from spam harvesters).
 
 Thoughts?
 
It is worth noting that just about everything else is logged too. Git
repos track changes individuals have made, this mailing list post will
be publicly available, and so on. At the very least I think the
assumption should be that any openstack IRC channel is logged and, since
assumptions are bad, we should be explicit about this. I don't think this
means we require all channels actually be logged, just advertise that
many are and any can be (because really any individual with freenode
access can set up public logging).

I don't think we should need to explicitly cleanup our logs. Mostly
because any individual can set up public logs that are not sanitized.
Instead IRC users should use tools like cloaks or Tor to get the level
of obfuscation and security that they desire. Freenode has docs for
both, see https://freenode.net/faq.shtml#cloaks and
https://freenode.net/irc_servers.shtml#tor

Hope this helps,
Clark



Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL

2015-02-17 Thread Anita Kuno
On 02/17/2015 11:52 AM, Clint Byrum wrote:
 Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
 On 02/17/2015 09:21 AM, Clint Byrum wrote:
 There has been a recent monumental shift in my focus around OpenStack,
 and it has required me to take most of my attention off TripleO. Given
 that, I don't think it is in the best interest of the project that I
 continue as PTL for the Kilo cycle.

 I'd like to suggest that we hold an immediate election for a replacement
 who can be 100% focused on the project.

 Thanks everyone for your hard work up to this point. I hope that one day
 soon TripleO can deliver on the promise of a self-deploying OpenStack
 that is stable and automated enough to sit in the gate for many if not
 all OpenStack projects.




 So in the middle of a release, changing PTLs can take 3 avenues:

 1) The new PTL is appointed. Usually there is a leadership candidate in
 waiting which the rest of the project feels it can rally around until
 the next election. The stepping down PTL takes the pulse of the
 developers on the project and informs us on the mailing list who the
 appointed PTL is. Barring any huge disagreement, we continue on with
 work and the appointed PTL has the option of standing for election in
 the next election round. The appointment lasts until the next round of
 elections.

 
 Thanks for letting me know about this Anita.
 
 I'd like to appoint somebody, but I need to have some discussions with a
 few people first. As luck would have it, some of those people will be in
 Seattle with us for the mid-cycle starting tomorrow.
Wonderful.
 
 2) We have an election, in which case we need candidates and some dates.
 Let me know if we want to exercise this option so that Tristan and I can
 organize some dates.

 
 Let's wait a bit until I figure out if there's a clear and willing
 appointee. That should be clear by Thursday.
That sounds fair to me. We tend to kick off election activities on
Fridays anyway.

I talked with Tristan and we have agreed to the following dates, if
there is no appointee and we need to have an election:
Feb 20 to Feb 26: nomination period
Feb 27 to March 5: election period

Keep us posted.

Thanks,
Anita.
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle the 247 migration / model issue for DB2?

2015-02-17 Thread Matt Riedemann



On 1/26/2015 11:35 AM, Matt Riedemann wrote:

The change to add DB2 support to nova has hit a snag in the 247
migration [1] where it's altering the pci_devices.deleted column to be
nullable=True to match the model (it was defined nullable=False in the
216 migration which didn't match the model).

The problem is DB2 won't allow unique constraints over nullable columns
and the unique constraint is required for the foreign key constraint
between pci_devices and compute_nodes via the compute_node_id column in
pci_devices (DB2 requires that a FKey be on a column that's in a unique
or primary key constraint, and those must be non-nullable with DB2).

So I'm kind of stuck with what to do here. As discussed in the change
with Johannes (and Jay Pipes in IRC a week ago), it's kind of silly that
the deleted column is nullable (am I missing something?), but that's
defined in the SoftDeleteMixin in oslo.db [2] so it's pervasive.

I see our options as (1) diverge the pci_devices.deleted model for DB2
and/or (2) add a HardDeleteMixin to oslo.db where deleted is
nullable=False, then work that into nova, but it's going to require a big
migration (I'm thinking FKey drops, UC drops, table alterations, then
re-add the UCs and FKeys over the deleted columns that were altered,
plus possibly a CLI to scan the database looking for rows where deleted
is NULL, like we did for instances.uuid when making that non-nullable).

Option (1) is obviously a short-term fix and in my opinion relatively
painless since we aren't generating the DB schema from the models (yet,
until Johannes has his vision realized :) ).  Option (2) is going to
take some work, but I think it makes sense as something to do regardless
of DB2's restrictions here.

Thoughts?

[1]
https://review.openstack.org/#/c/69047/38/nova/db/sqlalchemy/migrate_repo/versions/247_nullable_mismatch.py

[2]
http://git.openstack.org/cgit/openstack/oslo.db/tree/oslo_db/sqlalchemy/models.py#n122
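
For illustration, a minimal sketch (not the actual nova/oslo.db code; the mixin,
table and constraint names here are made up for the example) of what option (2)'s
non-nullable deleted column plus a DB2-friendly unique constraint could look like
with SQLAlchemy:

from sqlalchemy import Column, DateTime, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class HardDeleteMixin(object):
    # Non-nullable soft-delete marker: 0 means "live", otherwise the row id.
    deleted = Column(Integer, nullable=False, default=0)
    deleted_at = Column(DateTime)


class PciDeviceExample(HardDeleteMixin, Base):
    __tablename__ = 'pci_devices_example'
    __table_args__ = (
        # DB2 only accepts this if every column in the constraint is
        # non-nullable, which the mixin above guarantees for 'deleted'.
        UniqueConstraint('compute_node_id', 'address', 'deleted',
                         name='uniq_pci_devices0cn_id0address0deleted'),
    )

    id = Column(Integer, primary_key=True)
    compute_node_id = Column(Integer, nullable=False)
    address = Column(String(12), nullable=False)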




This is coming up again here now:

https://review.openstack.org/#/c/153123/6/nova/db/sqlalchemy/migrate_repo/versions/277_fix_unique_constraint_for_compute_node.py

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslotest 1.4.0 released

2015-02-17 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslotest 1.4.0: OpenStack test framework

For more details, please see the git log history below and:

http://launchpad.net/oslotest/+milestone/1.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

Changes in /home/dhellmann/repos/openstack/oslotest 1.3.0..1.4.0


c69ddb9 Move the script for running pre-releases into oslotest
b31057e Update docs for new script name
e50b8b3 Publish cross-test runner as part of oslotest
baf76c5 Fix for mktemp failure on osx
28f55cb Activate pep8 check that _ is imported
6081f73 Workflow documentation is now in infra-manual
6d06c93 Fix the URL for reporting bugs in the README

Diffstat (except docs and test files)
-

CONTRIBUTING.rst |   7 +-
README.rst   |   2 +-
setup.cfg|   2 +
tools/oslo_debug_helper  |   2 +-
tox.ini  |   1 -
9 files changed, 319 insertions(+), 78 deletions(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Steven Dake (stdake)
That is 3 votes.  Welcome to kolla-core Andre!

Regards
-steve


On 2/17/15, 10:59 AM, Jeff Peeler jpee...@redhat.com wrote:

On Tue, Feb 17, 2015 at 03:07:31AM +, Steven Dake (stdake) wrote:
Hi folks,

I am proposing Andre Martin to join the kolla-core team.  Andre has
been providing mostly code implementation, but as he contributes
heavily, he has indicated he will get more involved in our peer reviewing
process.

He has contributed 30% of the commits for the Kilo development cycle,
acting as our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=allmodule=kollametric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any -1
vote is a veto.

+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Ed Leafe
On Feb 17, 2015, at 11:29 AM, Clint Byrum cl...@fewbar.com wrote:

 Shaming a person is a last resort, when that person may not listen to
 reason. It's sometimes necessary to bring shame to a practice, but even
 then, those who are participating are now draped in shame as well and
 will have a hard time saving face.

Why must pointing out that someone is doing something incorrectly necessarily 
be shaming? Those of us who review code do that all the time; telling someone 
that there is a better way to code something is certainly not shaming, since we 
all benefit from those suggestions.

Sure, you can also be a jerk about how you tell someone they can improve, but 
that's certainly not the norm in this community.


-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Daneyon Hansen (danehans)
Congrats Andre and thanks for all your contributions. 

Regards,
Daneyon Hansen, CCIE 9950
Software Engineer
Office of the Cloud CTO
Mobile: 303-718-0400
Office: 720-875-2936
Email: daneh...@cisco.com

 On Feb 17, 2015, at 10:13 AM, Steven Dake (stdake) std...@cisco.com wrote:
 
 That is 3 votes.  Welcome to kolla-core Andre!
 
 Regards
 -steve
 
 
 On 2/17/15, 10:59 AM, Jeff Peeler jpee...@redhat.com wrote:
 
 On Tue, Feb 17, 2015 at 03:07:31AM +, Steven Dake (stdake) wrote:
 Hi folks,
 
 I am proposing Andre Martin to join the kolla-core team.  Andre has
 been providing mostly code implementation, but as he contributes
 heavily, he has indicated he will get more involved in our peer reviewing
 process.
 
 He has contributed 30% of the commits for the Kilo development cycle,
 acting as our #1 commit contributor during Kilo.
 
 http://stackalytics.com/?project_type=allmodule=kollametric=commits
 
 Kolla-core members please vote +1/abstain/-1.  Remember that any -1
 vote is a veto.
 
 +1
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TaskFlow 0.7.1 released

2015-02-17 Thread Joshua Harlow

The Oslo team is pleased to announce the release of:

taskflow 0.7.1: Taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.7.1

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Notable changes


Deprecations


* Conductor(s)

  * e7df6c6 Modify stop and add wait on conductor to prevent lockups

* This change now requires the usage of stop() on a conductor
  to tell the conductor to stop processing further work; to wait
  for the conductor to eventually stop, a new method wait()
  has been added.
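
A rough sketch of the resulting shutdown pattern (illustrative only; how the
conductor itself is constructed is omitted, and the helper names below are
made up):

import threading


def start_conductor(conductor):
    # run() blocks while jobs are dispatched, so run it on a helper thread.
    runner = threading.Thread(target=conductor.run)
    runner.start()
    return runner


def shutdown_conductor(conductor, runner):
    # Per the 0.7.1 change: stop() only asks the conductor to stop claiming
    # further work; wait() is what blocks until it has actually stopped.
    conductor.stop()
    conductor.wait()
    runner.join()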

Bugs


* Zookeeper version checking flakiness

  * 35f07aa Allow turning off the version check

* This change allows turning off the zookeeper version check,
  as that command was determined to be inherently flaky; a
  pull request and issue have been
  filed @ https://github.com/python-zk/kazoo/issues/274.
  To avoid this version checking, these pluggable backends now
  take a 'check_compatible' configuration option (which defaults
  to true to retain prior behavior) to allow power-users to
  turn off this apparently flaky check when they know they
  are running the right zookeeper versions.

Changes in taskflow 0.7.0..0.7.1


NOTE: Skipping requirement commits...

9f60336 Revert Add retries to fetching the zookeeper server version
35f07aa Allow turning off the version check
5511011 adding check for str/unicode type in requires
a14adc3 Add retries to fetching the zookeeper server version
1ca123b Remove duplicate 'the' and link to worker engine section
f836433 Remove delayed decorator and replace with nicer method
9ea190d Fix log statement
101a47f Make the atom class an abstract class
0bcd1eb Mark conductor 'stop' method deprecation kwarg with versions
c7e472a Move to hacking 0.10
073eb32 catch NotFound errors when consuming or abandoning
5514fcd Use the new table length constants
4da581c Improve upon/adjust/move around new optional example
45c7b5c Clarify documentation related to inputs
b7d59ec Docstrings should document parameters return values
59771dd Let the multi-lock convert the provided value to a tuple
7f0c457 Map optional arguments as well as required arguments
19e0789 Add a BFS tree iterator
3bf3249 DFS in right order when not starting at the provided node
687ec91 Rework the sqlalchemy backend
e7df6c6 Modify stop and add wait on conductor to prevent lockups
08a1846 Default to using a thread-safe storage unit
1fc1a7e Add warning to sqlalchemy backend size limit docs
2f4bd68 Use a thread-identifier that can't easily be recycled
f2a6aca Use a notifier instead of a direct property assignment
2cd9074 Tweak the WBE diagram (and present it as an svg)
387e360 Remove duplicate code
a856d0b Improved diagram for Taskflow
6a6b50f Bump up the env_builder.sh to 2.7.9
20d85fe Add a capturing listener (for test or other usage)
d0d112d Add + use a staticmethod to fetch the immediate callables
bb43048 Just directly access the callback attributes
5242892 Use class constants during pformatting a tree node

Diffstat (except docs and test files)
-

taskflow/atom.py   |  93 +++--
taskflow/conductors/single_threaded.py |  38 +-
taskflow/engines/action_engine/actions/retry.py|   9 +-
taskflow/engines/action_engine/actions/task.py |  18 +-
taskflow/engines/action_engine/completer.py|   1 -
taskflow/engines/action_engine/engine.py   |   4 -
taskflow/engines/base.py   |   7 +-
taskflow/engines/worker_based/engine.py|   3 -
taskflow/engines/worker_based/executor.py  |   6 +-
taskflow/engines/worker_based/server.py|  58 ++-
taskflow/engines/worker_based/types.py |  10 +-
taskflow/examples/distance_calculator.py   | 109 ++
taskflow/exceptions.py |   2 +-
taskflow/jobs/backends/impl_zookeeper.py   |   3 +-
taskflow/listeners/capturing.py| 105 +
taskflow/persistence/backends/impl_sqlalchemy.py   | 430 +
taskflow/persistence/backends/impl_zookeeper.py|   8 +-
.../versions/1cea328f0f65_initial_logbook_deta.py  |  38 +-
taskflow/persistence/backends/sqlalchemy/models.py |  97 -
taskflow/persistence/backends/sqlalchemy/tables.py |  99 +
taskflow/storage.py|  41 +-
taskflow/types/failure.py  |   6 +-
taskflow/types/fsm.py  |  84 ++--
taskflow/types/futures.py  |  46 ++-
taskflow/types/latch.py|   9 +-
taskflow/types/notifier.py |  65 +++-
taskflow/types/periodic.py |  32 +-
taskflow/types/table.py|  13 +-

[openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-17 Thread Stefano Maffulli
Changing the subject since Flavio's call for openness was broader than
just private IRC channels.

On Tue, 2015-02-17 at 10:37 +, Daniel P. Berrange wrote:
 If cases of bad community behaviour, such as use of passwd protected
 IRC channels, are always primarily dealt with via further private
 communications, then we are denying the voters the information they
 need to hold people to account. I can understand the desire to avoid
 publically shaming people right away, because the accusations may be
 false, or may be arising from a simple mis-understanding, but at some
 point genuine issues like this need to be public. Without this we make
 it difficult for contributors to make an informed decision at future
 elections.

You got my intention right: I wanted to understand better what led some
people to create a private channel and what their needs were. For that
objective, an accusatory tone won't get us anywhere; instead I
needed to provide them a safe place to discuss, and then I would report
back in the open.

So far, I've received comments in private from only one person,
concerned about public logging of channels without notification. I
wish the people hanging out on at least one of these private channels
would provide more insight into their choice, but so far they have not.

Regarding the why: at least one person told me they prefer not to use
official OpenStack IRC channels because there is no notification when a
channel is being publicly logged. Together with freenode not obfuscating
host names, and eavesdrop logs being available to any spammer, at least
one person is concerned that private information may leak. There may also be
legal implications in Europe, under the Data Protection Directive, since
IP addresses and hostnames can be considered sensitive data. Not to
mention the casual dropping of emails or phone numbers in public+logged
channels.

I think these points are worth discussing. One easy fix this person
suggests is to make it the default that all channels are logged and write a
warning on the wiki/IRC page. Another is to make the channel bot announce
whether the channel is logged. A third is cleaning up the hostname details on
joins/parts from eavesdrop and putting the logs behind a login (to hide them
from spam harvesters). 

Thoughts?

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Team meeting minutes 02/16/2015

2015-02-17 Thread Anastasia Kuznetsova
Hi Lingxian,

Feel free to join us in IRC meeting every Monday at 16:00 UTC at
#openstack-meeting channel.


Regards,
Anastasia Kuznetsova

On Tue, Feb 17, 2015 at 7:05 PM, Lingxian Kong anlin.k...@gmail.com wrote:

 hi, Nikolay,

 Thanks for sending them out. It would be appreciated if there were a
 reminder before the meeting starts.

 Regards!

 2015-02-17 0:52 GMT+08:00 Nikolay Makhotkin nmakhot...@mirantis.com:
  Thanks for joining our team meeting today!
 
   * Meeting minutes:
 
 http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.html
   * Meeting log:
 
 http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.log.html
 
  The next meeting is scheduled for Feb 23 at 16.00 UTC.
  --
  Best Regards,
  Nikolay
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Regards!
 ---
 Lingxian Kong

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Removal of copyright statements above the Apache 2.0 license header

2015-02-17 Thread Christian Berendt
Is it safe to remove the copyright statements that appear above the Apache 2.0
license headers in a lot of files?

We recently removed all @author tags and added a hacking check so that no
new @author tags are added in the future. Can we do the same for the
copyright statements above the Apache 2.0 license headers?
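
If we do, a hacking-style check could flag them. For illustration only, here is
a standalone sketch of the pattern such a check would look for; this is not the
real hacking plugin interface, and the regex and output format are assumptions:

import re

# Flags lines like "# Copyright 2015 Some Company" that sit above the
# Apache 2.0 license header.  Standalone sketch, not a hacking plugin.
COPYRIGHT_RE = re.compile(r'^#\s*copyright\b', re.IGNORECASE)


def find_copyright_lines(source_text):
    """Return (line_number, line) pairs for copyright statements."""
    return [(num, line.rstrip())
            for num, line in enumerate(source_text.splitlines(), start=1)
            if COPYRIGHT_RE.match(line)]


if __name__ == '__main__':
    sample = ("# Copyright 2015 Example Corp.\n"
              "#\n"
              "#    Licensed under the Apache License, Version 2.0 ...\n")
    for num, line in find_copyright_lines(sample):
        print("line %d: %s" % (num, line))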

Christian.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Davanum Srinivas
-1

On Tue, Feb 17, 2015 at 12:19 AM, Steven Dake (stdake) std...@cisco.com wrote:
 -1

 From: Steven Dake std...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 16, 2015 at 8:20 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from
 magnum-core

 The initial magnum core team was founded at a meeting where several people
 committed to being active in reviews and writing code for Magnum.  Nearly
 all of the folks that made that initial commitment have been active in IRC,
 on the mailing lists, or participating in code reviews or code development.

 Out of our core team of 9 members [1], everyone has been active in some way
 except for Dmitry.  I propose removing him from the core team.  Dmitry is
 welcome to participate in the future if he chooses and be held to the same
 high standards we have held our last 4 new core members to that didn’t get
 an initial opt-in but were voted in by their peers.

 Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 from
 any core acts as a veto meaning Dmitry will remain in the core team.

 [1] https://review.openstack.org/#/admin/groups/473,members


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-17 Thread Przemyslaw Kaminski
+1 for that, it should be done with pagination too. IMHO pagination and
simple filtering by an object's status can be done generically on the API
side for all GET methods that derive from CollectionHandler, as sketched below.
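
A rough sketch of what that generic, API-side filtering and pagination could
look like (purely illustrative, not actual Fuel/Nailgun code):

def filter_and_paginate(objects, status=None, page=1, per_page=50):
    """Filter dict-like objects by status, then return one page of them."""
    if status is not None:
        objects = [obj for obj in objects if obj.get('status') == status]
    start = (page - 1) * per_page
    return objects[start:start + per_page]


nodes = [{'name': 'node-1', 'status': 'ready'},
         {'name': 'node-2', 'status': 'error'},
         {'name': 'node-3', 'status': 'ready'}]
print(filter_and_paginate(nodes, status='ready', page=1, per_page=2))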

P.


On 02/17/2015 10:18 AM, Lukasz Oles wrote:
 Hello Julia,
 
 I think node filtering and sorting is a great feature and it will
 improve UX, but we need to remember that as the number of nodes
 increases, the number of automation tasks increases too. If a Fuel user
 wants to automate something, they will use the Fuel client, not the Fuel
 GUI. This is why I think sorting and filtering should be done on the
 backend side. We should stop thinking that Fuel UI is the only way to
 interact with Fuel.
 
 Regards,
 
 On Sat, Feb 14, 2015 at 9:27 AM, Julia Aranovich 
 jkirnos...@mirantis.com wrote:
 Hi All,
 
 Currently we [Fuel UI team] are planning the features of sorting
 and filtering of node list to introduce it in 6.1 release.
 
 Now a user can filter nodes just by name or MAC address, and no
 sorters are available. It's rather poor UI for managing a 200+
 node environment. So, the current suggestion is to filter and
 sort nodes by the following parameters:
 
 1. name
 2. manufacturer
 3. IP address
 4. MAC address
 5. CPU
 6. memory
 7. disks total size (we need to think about less than/more than representation)
 8. interfaces speed
 9. status (Ready, Pending Addition, Error, etc.)
 10. roles
 
 
 It will be a form-based filter. Items [1-4] should go to a single
 text input and the others go to separate controls. There is also
 an idea to translate the user's filter selection into a query and add
 it to the location string, like it's done for the logs search: 
 #cluster/x/logs/type:local;source:api;level:info.
 
 Please also note that the changes we are thinking about should
 not affect backend code.
 
 
 I will be very grateful if you share your ideas about this or
 describe some of the cases that would be useful to you in your work with
 real deployments. We would like to introduce really useful tools
 based on your feedback.
 
 
 Best regards, Julia
 
 -- Kind Regards, Julia Aranovich, Software Engineer, Mirantis,
 Inc +7 (905) 388-82-61 (cell) Skype: juliakirnosova 
 www.mirantis.ru jaranov...@mirantis.com
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-17 Thread Flavio Percoco

On 13/02/15 17:01 +0100, Jordan Pittier wrote:

Humm this doesn't have to be complicated, for a start.



Sorry for my late reply


- Figuring out the http method the server expects (POST/PUT)

Yeah, I agree. There's no definitive answer to this but I think PUT makes sense
here. I googled 'post vs put' and found that idempotency, and the question of who is in
charge of choosing the actual resource location (the client vs the server),
favor PUT. 


Right, but that's not what the remote server may be expecting. One of
the problems with the HTTP store is that there's no real API
besides what the HTTP protocol allows you to do. That is to say that a
remote server may accept POST or PUT, and in order to keep the
implementation non-opinionated, you'd need to have a way for these
things to be specified.




- Adding support for at least few HTTP auth methods

Why should the write path be more secure/more flexible than the read path?
If I take a look at the current HTTP store, only basic auth is supported (i.e.
http://user:pass@server1/myLinuxImage). I suggest the write path (i.e. the add()
method) should support the same auth mechanism. The cloud admin could also add
some firewall rules to make sure the HTTP backend server can only be accessed
by the Glance-api servers.


I didn't say the read path was correct :P

That said, I agree that we should keep both paths consistent.


- Having a suffixed URL where we're sure glance will have proper
permissions to upload data.

That's up to the cloud admin/operator to make it work. The HTTP glance_store
could have 2 config flags:
a) http_server, a string with the scheme (http vs https) and the hostname of
the HTTP server, i.e. 'http://server1' 
b) path_prefix, a string that will prefix the path part of the image URL.
This config flag could be left empty/is optional. 


Yes, it was probably not clear from my previous email that these were
not ands but things that would need to be brought up.




Handling HTTP responses from the server

That's of course to be discussed. But, IMO, this should be as simple as if
response.code is 200 or 202 then OKAY else raise GlanceStoreException. I am
not sure any other glance store is more granular than this. 


Again, this assumes too much from the server. So, either we impose
some kind of rules as to how Glance expects the HTTP server to behave
or we try to be bullet proof API-wise.


How can we handle quota?

I am new to glance_store; is there a notion of quotas in glance stores? I
thought Glance (API) was handling this. What kind of quotas are we talking about
here?


Glance handles quotas. The problem is that when the data is sent to
the remote store, glance loses some control over it. A user may upload
some data, the HTTP push could fail, and we may try to delete the data
without any proof that it will be correctly deleted.

Also, without auth, we will have to force the user to send all image
data through glance. The reason is that we don't know whether the HTTP
store has support for HEAD to report the image size when using
`--location`.

Sorry if all the above sounds confusing. The problem with the HTTP
store is that we have basically no control over it and that is
worrisome from a security and implementation perspective.

Flavio


Frankly, it shouldn't add that much code. I feel we can make it clean if we
leverage the different Python modules (httplib etc.) 

Regards,
Jordan


On Fri, Feb 13, 2015 at 4:20 PM, Flavio Percoco fla...@redhat.com wrote:

   On 13/02/15 16:01 +0100, Jordan Pittier wrote:

   What is the difference between just calling the Glance API to
   upload an image,

   versus adding add() functionality to the HTTP image store?
   You mean using glance image-create --location http://server1/
   myLinuxImage [..]
? If so, I guess adding the add() functionality will save the user
   from
   having to find the right POST curl/wget command to properly upload his
   image.


   I believe it's more complex than this. Having an `add` method for the
   HTTP store implies:

   - Figuring out the http method the server expects (POST/PUT)
   - Adding support for at least few HTTP auth methods
    - Having a suffixed URL where we're sure glance will have proper
    permissions to upload data.
   - Handling HTTP responses from the server w.r.t the status of the data
    upload. For example: What happens if the remote http server runs out
    of space? What's the response status going to be like? How can we
    make glance agnostic to these discrepancies across HTTP servers so
    that it's consistent in its responses to glance users?
   - How can we handle quota?

   I'm not fully opposed, although it sounds like not worth it code-wise,
   maintenance-wise and performance-wise. The user will have to run just
   1 command but at the cost of all of the above.

   Do the points listed above make sense to you?

   Cheers,
   Flavio




   On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com 

Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-17 Thread Jordan Pittier
Jay, Flavio, thanks for this interesting discussion. I get your points and
they really make sense to me.

I'll go for a specific driver that will inherit from the HTTP store for
the read path and implement the write path.
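
Something along these lines, perhaps (a rough sketch only: it assumes a
requests-style PUT with optional basic auth against the http_server/path_prefix
flags discussed above, and the class/method names deliberately do not claim to
match the real glance_store interface):

import requests


class WritableHTTPStoreSketch(object):
    """Illustrative write path for an HTTP-backed store (not real glance_store)."""

    def __init__(self, http_server, path_prefix='', auth=None):
        self.http_server = http_server.rstrip('/')
        self.path_prefix = path_prefix.strip('/')
        self.auth = auth  # e.g. ('user', 'pass') for basic auth

    def _url(self, image_id):
        parts = [self.http_server, self.path_prefix, image_id]
        return '/'.join(part for part in parts if part)

    def add(self, image_id, image_file):
        """PUT the image data and return the location it was stored at."""
        resp = requests.put(self._url(image_id), data=image_file,
                            auth=self.auth)
        if resp.status_code not in (200, 201, 202, 204):
            raise RuntimeError('upload failed: HTTP %d' % resp.status_code)
        return self._url(image_id)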

Jordan

On Tue, Feb 17, 2015 at 12:52 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 17:01 +0100, Jordan Pittier wrote:

 Humm this doesn't have to be complicated, for a start.


 Sorry for my late reply

  - Figuring out the http method the server expects (POST/PUT)

 Yeah, I agree. There's no definitive answer to this but I think PUT makes
 sense
 here. I googled 'post vs put' and I found that the idempotent and who
 is in
 charge of the actual resource location choice (the client vs the server),
 favors PUT.


 Right but that's not what the remote server may be expecting. One of
 the problems with the HTTP store is that there's no real API
 besides what the HTTP protocol allows you to do. That is to say that a
 remote server may accept POST/PUT and in order to keep the
 implementation non-opinionated, you'd need to have a way for these
 things to be specified.


  - Adding support for at least few HTTP auth methods

 Why should the write path be more secured/more flexible than the read
 path ?
 If I take a look at the current HTTP store, only basic auth is supported
 (ie
 http://user:pass@server1/myLinuxImage). I suggest the write path (ie the
 add()
 method) should support the same auth mechanism. The cloud admin could also
 add
 some firewall rules to make sure the HTTP backend server can only be
 accessed
 by the Glance-api servers.


 I didn't say the read path was correct :P

 That said, I agree that we should keep both paths consistent.

  - Having a suffixed URL where we're sure glance will have proper
 permissions to upload data.

 That's up to the cloud admin/operator to make it work. The HTTP
 glance_store
 could have 2 config flags :
 a) http_server, a string with the scheme (http vs https) and the
 hostname of
 the HTTP server, ie 'http://server1'
 b) path_prefix. A string that will prefix the path part of the image
 URL.
 This config flag could be left empty/is optional.


 Yes, it was probably not clear from my previous email that these were
 not ands but things that would need to be brought up.


  Handling HTTP responses from the server

 That's of course to be discussed. But, IMO, this should be as simple as
 if
 response.code is 200 or 202 then OKAY else raise GlanceStoreException. I
 am
 not sure any other glance store is more granular than this.


 Again, this assumes too much from the server. So, either we impose
 some kind of rules as to how Glance expects the HTTP server to behave
 or we try to be bullet proof API-wise.

  How can we handle quota?

 I am new to glance_store, is there a notion of quotas in glance stores ? I
 thought Glance (API) was handling this. What kind of quotas are we talking
 about
 here ?


 Glance handles quotas. The problem is that when the data is sent to
 the remote store, glance loses some control over it. A user may upload
 some data, the HTTP push could fail and we may try to delete the data
 without any proof that it will be correctly deleted.

 Also, without auth, we will have to force the user to send all image
 data through glance. The reason is that we don't know whether the HTTP
 store has support for HEAD to report the image size when using
 `--location`.

 Sorry if all the above sounds confusing. The problem with the HTTP
 store is that we have basically no control over it and that is
 worrisome from a security and implementation perspective.

 Flavio


  Frankly, it shouldn't add that much code. I feel we can make it clean if
 we
 leverage the different Python modules (httplib etc.)

 Regards,
 Jordan


 On Fri, Feb 13, 2015 at 4:20 PM, Flavio Percoco fla...@redhat.com
 wrote:

On 13/02/15 16:01 +0100, Jordan Pittier wrote:

What is the difference between just calling the Glance API to
upload an image,

versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location http://server1/
myLinuxImage [..]
 ? If so, I guess adding the add() functionality will save the
 user
from
having to find the right POST curl/wget command to properly upload
 his
image.


I believe it's more complex than this. Having an `add` method for the
HTTP store implies:

- Figuring out the http method the server expects (POST/PUT)
- Adding support for at least few HTTP auth methods
- Having a suffixed URL where we're sure glance will have proper
 permissions to upload data.
- Handling HTTP responses from the server w.r.t the status of the data
 upload. For example: What happens if the remote http server runs out
 of space? What's the response status going to be like? How can we
 make glance agnostic to these discrepancies across HTTP servers so
 that it's consistent in its 

Re: [openstack-dev] [horizon]

2015-02-17 Thread David Lyle
Feedback on security is of the utmost importance. However, due to the short
time left in Kilo, we are going to hold the sprint beginning Wednesday, 20:00
UTC--23:30 UTC. All code will still go through the standard review process.

On Wednesday, we will judge the demand/usefulness of a repeat on Thursday.

The sprint will be held in #openstack-sprint

David

On Mon, Feb 16, 2015 at 3:46 PM, Gabriel Hurley gabriel.hur...@nebula.com
wrote:

  FWIW, this week conflicts with the OpenStack Security Group midcycle
 meetup. I’ll be attending that, so I thought I’d point it out in case it
 affects anyone else.



 Having some cross-pollination between Security and Horizon on this
 significant shift in the codebase and architecture would probably be
 advisable.



 -  Gabriel



 *From:* David Lyle [mailto:dkly...@gmail.com]
 *Sent:* Monday, February 16, 2015 10:19 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [horizon]



 A couple of high priority items for Horizon's Kilo release could use some
 targeted attention to drive progress forward. These items are related to
 angularJS based UX improvements, especially Launch Instance and the
 conversion of the Identity Views.



 These efforts are suffering from a few issues: education, consensus on
 direction, working through blockers, and drawn-out dev and review cycles. In
 order to help ensure these high priority issues have the best possible
 chance to land in Kilo, I have proposed a virtual sprint to happen this
 week. I created an etherpad [1] with proposed dates and times. Anyone who
 is interested is welcome to participate; please register your intent and
 availability in the etherpad.



 David



 [1] https://etherpad.openstack.org/p/horizon-kilo-virtual-sprint

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Clint Byrum
Excerpts from Ed Leafe's message of 2015-02-17 10:11:01 -0800:
 On Feb 17, 2015, at 11:29 AM, Clint Byrum cl...@fewbar.com wrote:
 
  Shaming a person is a last resort, when that person may not listen to
  reason. It's sometimes necessary to bring shame to a practice, but even
  then, those who are participating are now draped in shame as well and
  will have a hard time saving face.
 
 Why must pointing out that someone is doing something incorrectly necessarily 
 be shaming? Those of us who review code do that all the time; telling someone 
 that there is a better way to code something is certainly not shaming, since 
 we all benefit from those suggestions.
 

Funny you should bring that up; that may be an entirely new branch of this
thread: how harmful some of our review practices are to overall
community harmony. I definitely think there's a small amount of unhealthy
shaming in reviews, and a not-small amount of non-constructive criticism.

Saying This code is not covered by tests. or You could make this less
complex by using a generator. is constructive criticism that has as
little shaming effect as possible without beating around the bush. This
is the very definition of _educating_.

However, being entirely subjective and attacking stylistic issues
(please know that I'm not claiming innocence at all here) does damage to
the relationship between coder and review team. Of course, a discussion
of style has a place, but I believe that place is in a private
conversation, not out in the open where it will almost certainly bring
shame to the submitter.

 Sure, you can also be a jerk about how you tell someone they can improve, but 
 that's certainly not the norm in this community.
 

I agree that the subjective stylistic nit picking comes in a polite way.
I think that only softens the blow to someone's ego and still conveys a
level of disrespect that will eventually erode the level of trust
between the submitter and the project as a whole.

So, somewhat ironically, I think the right place to make subjective
observations about someone's work is in a private message.

Unfortunately, I think humans are quite subjective themselves, and so
what might be too harsh and shameful to one ego, might be just the right
thing to educate the next. Calibration of one's criticism practices is
one of those things I'm sure most of us geeks would like to think we
don't have to worry about. However, I think it is worthwhile to consider
it before making any critique, especially when one doesn't know the
recipient of the critique extremely well.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-drivers meeting

2015-02-17 Thread Armando M.
Hi folks,

I was wondering if we should have a special neutron-drivers meeting on
Wednesday Feb 18th (9:30AM CST / 7:30AM PST) to discuss recent patches
that a few cores have not reached consensus on, namely:

- https://review.openstack.org/#/c/155373/
- https://review.openstack.org/#/c/148318/

The end of the Kilo cycle is fast approaching and a speedy resolution of these
matters would be best. I fear that leaving these items to the Open
Discussion slot in the weekly IRC meeting will not give us enough time.

Is there any other item we need to get consensus on?

Anyone is welcome to join.

Thanks,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] test fixtures not returning memory

2015-02-17 Thread Robert Collins
On 16 Feb 2015 03:56, Sean Dague s...@dague.net wrote:
 Fixtures seems to be doing the
 right thing with its copy.

I'm glad you sorted the issue out. If it had been a fixtures issue I would
have considered it a critical bug and dived on it... The testing cabal set
of libraries are meant to be bulletproof things you can rely on, and resource
leaks would rather get in the way of that!

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-17 Thread Armando M.
I also failed to understand the issue, and I commented on the bug report,
where it's probably best to continue this conversation.

Thanks,
Armando

On 16 February 2015 at 07:54, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 02/16/2015 04:13 PM, James Page wrote:
  Hi Folks
 
  The split-out drivers for vpn/fw/lb as-a-service all make use of a
  generated egg of the neutron git repository as part of their unit
  test suite dependencies.
 
  This presents a bit of a challenge for us downstream in
  distributions,

 I am packaging neutron for RDO, but I fail to understand your issue.

  as we can't really pull in a full source egg of neutron from
  git.openstack.org; we have the code base for neutron core
  available (python-neutron), but that does not appear to be enough
  (see [0]).

 Don't you ship egg files with python-neutron package? I think this
 should be enough to get access to entry points.
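
A quick way to check that (a sketch using setuptools' pkg_resources; it assumes
python-neutron is installed and ships its egg-info):

import pkg_resources

# Confirm the installed neutron distribution and dump the entry point
# groups/names advertised in its egg-info.
dist = pkg_resources.get_distribution('neutron')
print('neutron %s at %s' % (dist.version, dist.location))
for group, entries in sorted(dist.get_entry_map().items()):
    print('%s: %s' % (group, sorted(entries)))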

 
  I would appreciate it if devs working in this area could a) review
  the bug and the problems we are seeing and b) think about how this
  can work for distributions - I'm happy to create a new
  'neutron-testing' type package from the neutron codebase to support
  this stuff, but right now I'm a bit unclear on exactly what it
  needs to contain!
 
  Cheers
 
  James
 
 
  [0] https://bugs.launchpad.net/neutron/+bug/1422376
 
 
 
 __
 
 
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU4hLIAAoJEC5aWaUY1u57+QEH/1ZaBmuEpIHiW1/67Lh452PU
 o2dXy3fy23fns/9GUHbXn6ASRPi5usEqe4Qa+Z0jaVnipIQcdjvGZg8RET2KQsyo
 RsmLJlOJHA2USJP62PvbkgZ5bmIlFSIi0vgNs75904tGp+UqGkpW4VZ/KTYyzVL2
 kpBaMfJxHdjmEnPAdfk14u5kHkblavGqQO7plmjCRncFkUy63m/qWQ2zjQbpUxCZ
 wZJ1FTNqA16mo4ThFzdn/br5Mqeopfkcwht7EQV/cCYz6b9Y0oU4qXmL5qy/k8Xz
 yyU9hLagPrffLf0hJWdf3Zt0K3FqYDND1GJRvjgGvKSri4ylRt1zG07RG1ZdiWg=
 =QffD
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] On VM placement

2015-02-17 Thread Halterman, Jonathan
I'm working on some services that require the ability to place VMs onto the
same or separate racks, and I wanted to start a discussion to see what the
community thinks the best way of achieving this with Nova might be.

Quick overview:

Various clustered datastores require related data to be placed in close
proximity (such as on the same rack) for optimum read latency across
contiguous/partitioned datasets. Additionally, clustered datastores may
require that replicas be placed in particular locations, such as on the same
rack to minimize network saturation or on separate racks to enhance fault
tolerance. An example of this is Hadoop's common policy of placing two
replicas onto one rack and another onto a separate rack. For datastores that
use ephemeral storage, the ability to control the rack locality of Nova VMs
is crucial for meeting these needs. Breaking this down we come up with the
following potential requirements:

1. Nova should allow a VM to be booted onto the same rack as existing VMs
(rack affinity).
2. Nova should allow a VM to be booted onto a different rack from existing
VMs (rack anti-affinity).
3. Nova should allow authorized services to learn which rack a VM resides
on.

Currently, host aggregates are the best way to approximate a solution for
requirements 1 and 2. One could create host aggregates to represent the
physical racks in a datacenter and boot VMs into those racks as necessary,
but there are some challenges with this approach including the management of
different flavors to correspond to host aggregates, the need to determine
the placement of existing VMs, and the general problem of maintaining the
host aggregate information as hosts come and go. Simply booting VMs with
server-group style rack affinity and anti-affinity is not a direct process.
Requirement 3 is a move towards allowing authorized in-cloud services to
learn about their location relative to other cloud resources such as Swift,
so that they might place compute and data in close proximity.
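
To make that workaround concrete, here is a rough sketch of the aggregate-based
approach (assumptions: `nova` is an already-authenticated python-novaclient v2
client, the scheduler has AggregateInstanceExtraSpecsFilter enabled, and the
'rack' metadata key, host names and image UUID below are placeholders):

# Illustrative sketch only; see the assumptions noted above.

# 1. Model a physical rack as a host aggregate and tag it.
rack1 = nova.aggregates.create('rack-1', None)
nova.aggregates.set_metadata(rack1, {'rack': 'rack-1'})
nova.aggregates.add_host(rack1, 'compute-01')
nova.aggregates.add_host(rack1, 'compute-02')

# 2. Create a flavor whose extra spec pins instances to that aggregate.
flavor = nova.flavors.create('m1.small.rack-1', ram=2048, vcpus=1, disk=20)
flavor.set_keys({'aggregate_instance_extra_specs:rack': 'rack-1'})

# 3. Booting with this flavor gives rack affinity; anti-affinity needs yet
#    another flavor per rack, which is the flavor-management burden noted
#    above.
nova.servers.create('db-node-1', image='<image-uuid>', flavor=flavor.id)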

I'm interested to gather input on how we might approach this problem and
what the best path forward for implementing a solution might be. Please
share your ideas and input. It's also worth noting that a similar/related
need exists for Swift which I'm addressing in a separate message.

Cheers,
Jonathan




smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] On Object placement

2015-02-17 Thread Halterman, Jonathan
I've been working on some services that require the ability to exploit the
co-location of compute and data storage (via Swift) onto the same racks, and
I wanted to start a discussion to see what the best way of controlling the
physical placement of Swift replicas might be.

Quick overview:

Various services desire the ability to control the location of data placed
in Swift in order to minimize network saturation when moving data to
compute, or in the case of services like Hadoop, to ensure that compute can
be moved to wherever the data resides. Read/write latency can also be
minimized by allowing authorized services to place one or more replicas onto
the same rack (with other replicas being placed on separate racks). Fault
tolerance can also be enhanced by ensuring that some replica(s) are placed
onto separate racks. Breaking this down we come up with the following
potential requirements:

1. Swift should allow authorized services to place a given number of object
replicas onto a particular rack, and onto separate racks.
2. Swift should allow authorized services and administrators to learn which
racks an object resides on, along with endpoints.

While requirement 1 addresses the rack-local writing of objects, requirement
2 facilitates the rack-local reading of objects. Swift's middleware
currently offers a list endpoints capability which could allow services to
select an endpoint on the same rack to read an object from, but there
doesn't appear to be a comparable solution for authorized in-cloud services.
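
For the read side, a rough sketch of how a service could use that middleware
(the /endpoints/... path, the account/container/object names and the response
shape are assumptions based on the middleware's documented defaults and will
vary per deployment):

import requests

PROXY = 'http://proxy.example.com:8080'              # hypothetical proxy
PATH = '/endpoints/AUTH_demo/mycontainer/myobject'   # hypothetical names

# Each returned entry is a URL pointing at a storage node holding one
# replica; a rack-aware service would map those hosts back to racks and
# pick a rack-local one to read from.
for url in requests.get(PROXY + PATH).json():
    print(url)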

Currently I'm not sure of the best way to approach this problem. While
storage policies might offer some solution, I'm interested to gather input
on how we might move forward on a solution that addresses these requirements
in as direct a way as possible. Please share your ideas and input. It's also
worth noting that a similar need exists for Nova which I'm addressing in a
separate message.

Cheers,
Jonathan




smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-17 Thread Renat Akhmerov
One more:

p9: \{1 + $.var}  # That’s pretty much what 
https://review.openstack.org/#/c/155348/ addresses but it’s not exactly that. 
Note that we don’t have to put it in quotes in this case to deal with YAML {} 
semantics, it’s just a string



Renat Akhmerov
@ Mirantis Inc.



 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Along with % % syntax here are some other alternatives that I checked for 
 YAML friendliness with my short comments:
 
 p1: ${1 + $.var}  # Here it’s bad that $ sign is used for two different 
 things
 p2: ~{1 + $.var}  # ~ is easy to miss in a text
 p3: ^{1 + $.var}  # For someone may be associated with regular expressions
 p4: ?{1 + $.var}  
 p5: {1 + $.var} # This is kinda crazy
 p6: e{1 + $.var}  # That looks a pretty interesting option to me, “e” 
 could mean “expression” here.
 p7: yaql{1 + $.var}   # This is interesting because it would give a clear and 
 easy mechanism to plug in other expression languages, “yaql” here is a used 
 dialect for the following expression
 p8: y{1 + $.var}  # “y” here is just shortened “yaql
 
 
 Any ideas and thoughts would be really appreciated!
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Dmitri,
 
 I agree with all your reasonings and fully support the idea of changing the 
 syntax now as well as changing system’s API a little bit due to recently 
 found issues in the current engine design that don’t allow us, for example, 
 to fully implement ‘with-items’ (although that’s a little bit different 
 story).
 
 Just a general note about all the changes happening now: once we release the Kilo 
 stable release, our API and DSL of version 2 must be 100% stable. I was hoping 
 to stabilize them much earlier, but the start of production use revealed a 
 number of things (I think this is normal) which we need to address, no 
 later than the end of Kilo.
 
 As far as the % % syntax goes, I see that it would solve a number of problems (YAML 
 friendliness, type ambiguity), but my only (not strong) argument is that it 
 doesn’t look as elegant in YAML as it does, for example, in ERB 
 templates. It really reminds me of XML/HTML and looks like a bear in a grocery 
 store (tried to make it close to an old Russian saying :) ). So for 
 this reason alone I’d suggest we think about other alternatives, maybe not so 
 familiar to Ruby/Chef/Puppet users but looking better with YAML and at the 
 same time being YAML friendly.
 
 It would be good if we could hear more feedback on this, especially from 
 people who have started using Mistral.
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com wrote:
 
 SUMMARY: 
 
 
 We are changing the syntax for inlining YAQL expressions in Mistral YAML 
 from {1+$.my.var} (or “{1+$.my.var}”) to % 1+$.my.var %
 
 Below I explain the rationale and the criteria for the choice. Comments and 
 suggestions welcome.
 
 DETAILS: 
 -
 
 We faced a number of problems with using YAQL expressions in the Mistral DSL: 
 [1] must handle any YAQL, not only the ones starting with $; [2] must 
 preserve types; and [3] must comply with YAML. We fixed these problems by 
 applying Ansible-style syntax, requiring quotes around delimiters (e.g. 
 “{1+$.my.yaql.var}”). However, it led to unbearable confusion in DSL 
 readability with regard to types:
 
 publish:
    intvalue1: “{1+1}”       # Confusing: you expect quotes to mean a string.
    intvalue2: “{int(1+1)}”  # Even this doesn’t clear up the confusion
    whatisthis: “{$.x + $.y}”  # What type would this return? 
 
 We got a very strong push back from users in the field on this syntax. 
 
 The crux of the problem is using { } as delimiters in YAML. It is plain wrong 
 to use reserved characters. The clean solution is to find a delimiter 
 that won’t conflict with YAML.
 
 The criteria for selecting the best alternative are: 
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML 
 3) Familiar to the target user audience - OpenStack and devops
 
 We prefer using two-char delimiters to avoid requiring extra escaping 
 within the expressions.
 
 The current winner is % %. It fits YAML well. It is familiar to 
 openstack/devops as this is used for embedding Ruby expressions in Puppet 
 and Chef (for instance, [4]). It plays relatively well across all cases of 
 using expressions in Mistral (see examples in [5]):
 
 ALTERNATIVES considered:
 --
 
 1) Use Ansible-like syntax: http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.
 
 2) Use functions, like Heat HOT or TOSCA:
 
 HOT templates and TOSCA don’t seem to have a concept of typed variables 
 

Re: [openstack-dev] [all] Replace eventlet with asyncio

2015-02-17 Thread Joshua Harlow

I also just put up another proposal to consider:

https://review.openstack.org/#/c/156711/

Sew over eventlet + patching with threads

It goes along with the thread usage case, and seems useful to consider/think 
about (and if it's thrown away, that's ok IMHO; after all, that's the 
point of these discussions) when making this kind of analysis and decision.


-Josh

Mike Bayer wrote:

I’ve spent most of the past week deeply reviewing the asyncio system,
including that I’ve constructed a comprehensive test suite designed to
discover exactly what kinds of latencies and/or throughput advantages or
disadvantages we may see each from: threaded database code, gevent-based
code using Psycopg2’s asynchronous API, and asyncio using aiopg. I’ve
written a long blog post describing a bit of background about non-blocking
IO and its use in Python, and listed out detailed and specific reasons why I
don’t think asyncio is an appropriate fit for those parts of Openstack that
are associated with relational databases. We in fact don’t get much benefit
from eventlet either in this regard, and with the current situation of
non-eventlet compatible DBAPIs, our continued use of eventlet for
database-oriented code is hurting Openstack deeply.

My recommendations are that whether or not we use eventlet or asyncio in
order to receive HTTP connections, the parts of our application that focus
on querying and updating databases should at least be behind a thread pool.
I’ve also responded to the notions that asyncio-style programming will lead
to fewer bugs and faster production of code, and in that area I think there
are also some misconceptions regarding code that’s designed to deal with
relational databases.
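
For reference, a minimal sketch of that "database work behind a thread pool"
shape using asyncio's executor support (Python 3.4-era spelling; with trollius
the coroutine would use its yield From(...) form instead, and on newer Python
this would be async/await; the query function is a stand-in, not real DBAPI
code):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

DB_POOL = ThreadPoolExecutor(max_workers=10)


def blocking_query(seconds):
    # Stand-in for a real, fully synchronous DBAPI/SQLAlchemy call.
    time.sleep(seconds)
    return 'row-from-db'


@asyncio.coroutine
def handler(loop):
    # The event loop stays responsive while the query runs on a worker
    # thread; the coroutine simply awaits the result.
    result = yield from loop.run_in_executor(DB_POOL, blocking_query, 0.1)
    return result


loop = asyncio.get_event_loop()
print(loop.run_until_complete(handler(loop)))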

The blog post is at
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/ and
you’ll find links to the test suite, which is fully runnable, within that
post.

Victor Stinnervstin...@redhat.com  wrote:


Hi,

I wrote a second version of my cross-project specification "Replace eventlet with 
asyncio". It's now open for review:

https://review.openstack.org/#/c/153298/

I copied it below if you prefer to read it and/or comment on it by email. Sorry, 
I'm not sure that the spec will be correctly formatted in this email. Use the 
URL if that's not the case.

Victor

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.

http://creativecommons.org/licenses/by/3.0/legalcode

=
Replace eventlet with asyncio
=

This specification proposes to replace eventlet, implicit async programming,
with asyncio, explicit async programming. It should fix eventlet issues,
prepare OpenStack for the future (asyncio is now part of the Python language)
and may improve overall OpenStack performances. It also makes usage of native
threads simpler and more natural.

Even if the title contains asyncio, the spec proposes to use trollius. The
name asyncio is used in the spec because it is better known than trollius,
and because trollius is almost the same thing as asyncio.

The spec doesn't change OpenStack components running WSGI servers like
nova-api.  Compatibility issues between WSGI and asyncio should be solved first.

The spec is focused on Oslo Messaging and Ceilometer projects. More OpenStack
components may be modified later if the Ceilometer port to asyncio is
successful. Ceilometer will be used to find and solve technical issues with
asyncio, so the same solutions can be used on other OpenStack components.

Blueprint: 
https://blueprints.launchpad.net/oslo.messaging/+spec/greenio-executor

Note: Since Trollius will be used, this spec is unrelated to Python 3. See the
`OpenStack Python 3 wiki page <https://wiki.openstack.org/wiki/Python3>`_ to
get the status of the port.


Problem description
===

OpenStack components are designed to scale. There are different options
to support a lot of concurrent requests: implicit asynchronous programming,
explicit asynchronous programming, threads, processes, and combinations of these options.

In the past, the Nova project used Tornado, then Twisted, and it is now using
eventlet, which also became the de facto standard in OpenStack. The rationale to
switch from Twisted to eventlet in Nova can be found in the old `eventlet vs
Twisted
<https://wiki.openstack.org/wiki/UnifiedServiceArchitecture#eventlet_vs_Twisted>`_
article.

Eventlet issues
---

This section only gives some examples of eventlet issues. There are more
eventlet issues, but tricky issues are not widely discussed and so not well
known. The most interesting issues are those caused by the design of eventlet,
especially the monkey-patching of the Python standard library.

Eventlet itself is not really evil. Most issues come from the monkey-patching.
The problem is that eventlet is almost always used with monkey-patching in
OpenStack.

The implementation of the monkey-patching is fragile. It's easy to forget to
patch a function or have issues when the standard 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Joshua Harlow

Joe Gordon wrote:



On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:

On 02/16/2015 08:50 PM, Ian Cordasco wrote:
  On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
  On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
  On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
  Hey everyone,
 
  The os-ansible-deployment team was working on updates to add
support
  for
  the latest version of juno and noticed some interesting version
  specifiers
  introduced into global-requirements.txt in January. It
introduced some
  version specifiers that seem a bit impossible like the one for
requests
  [1]. There are others that equate presently to pinning the
versions of
  the
  packages [2, 3, 4].
 
  I understand fully and support the commit because of how it
improves
  pretty much everyone’s quality of life (no fires to put out in the
  middle
  of the night on the weekend). I’m also aware that a lot of the
  downstream
  redistributors tend to work from global-requirements.txt when
  determining
  what to package/support.
 
  It seems to me like there’s room to clean up some of these
requirements
  to
  make them far more explicit and less misleading to the human
eye (even
  though tooling like pip can easily parse/understand these).
 
  I think that's the idea. These requirements were generated
  automatically, and fixed issues that were holding back several
projects.
  Now we can apply updates to them by hand, to either move the lower
  bounds down (as in the case Ihar pointed out with stevedore) or
clean up
  the range definitions. We should not raise the limits of any Oslo
  libraries, and we should consider raising the limits of third-party
  libraries very carefully.
 
  We should make those changes on one library at a time, so we
can see
  what effect each change has on the other requirements.
 
 
  I also understand that stable-maint may want to occasionally
bump the
  caps
  to see if newer versions will not break everything, so what is the
  right
  way forward? What is the best way to both maintain a stable
branch with
  known working dependencies while helping out those who do so
much work
  for
  us (downstream and stable-maint) and not permanently pinning
to certain
  working versions?
 
  Managing the upper bounds is still under discussion. Sean
pointed out
  that we might want hard caps so that updates to stable branch were
  explicit. I can see either side of that argument and am still
on the
  fence about the best approach.
 
  History has shown that it's too much work keeping testing
functioning
  for stable branches if we leave dependencies uncapped. If particular
  people are interested in bumping versions when releases happen, it's
  easy enough to do with a requirements proposed update. It will
even run
  tests that in most cases will prove that it works.
 
  It might even be possible for someone to build some automation
that did
  that as stuff from pypi released so we could have the best of both
  worlds. But I think capping is definitely something we want as a
  project, and it reflects the way that most deployments will
consume this
  code.
 
   -Sean
 
  --
  Sean Dague
  http://dague.net
 
  Right. No one is arguing the very clear benefits of all of this.
 
  I’m just wondering if for the example version identifiers that I
gave in
  my original message (and others that are very similar) if we want
to make
  the strings much simpler for people who tend to work from them (i.e.,
  downstream re-distributors whose jobs are already difficult
enough). I’ve
  offered to help at least one of them in the past who maintains all of
  their distro’s packages themselves, but they refused so I’d like
to help
  them anyway possible. Especially if any of them chime in as this
being
  something that would be helpful.

Ok, your links got kind of scrambled. Can you next time please inline
the key relevant content in the email, because I think we all missed the
original message intent as the key content was only in footnotes.

 From my point of view, normalization patches would be fine.

requests>=1.2.1,!=2.4.0,<=2.2.1

Is actually an odd one, because that's still there because we're using
Trusty level requests in the tests, and my ability to have devstack not
install that has thus far failed.

Things like:

osprofiler>=0.3.0,<=0.3.0 # Apache-2.0

Can clearly be normalized to osprofiler==0.3.0 if you want to propose
the patch manually.
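
A quick way to check that the two spellings accept exactly the same versions
(a sketch assuming setuptools' pkg_resources is available):

    import pkg_resources

    old = pkg_resources.Requirement.parse('osprofiler>=0.3.0,<=0.3.0')
    new = pkg_resources.Requirement.parse('osprofiler==0.3.0')

    for version in ('0.2.9', '0.3.0', '0.3.1'):
        # Requirement.__contains__ checks a version string against the specifiers
        print('%s  old: %s  new: %s' % (version, version in old, version in new))
    # 0.2.9  old: False  new: False
    # 0.3.0  old: True  new: True
    # 0.3.1  old: False  new: False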



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Ian Cordasco


On 2/17/15, 16:27, Joshua Harlow harlo...@outlook.com wrote:

Joe Gordon wrote:


 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:

 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting
version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
 requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out
in the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
 requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
 projects.
   Now we can apply updates to them by hand, to either move the
lower
   bounds down (as in the case Ihar pointed out with stevedore) or
 clean up
   the range definitions. We should not raise the limits of any
Oslo
   libraries, and we should consider raising the limits of
third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we
 can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what
is the
   right
   way forward? What is the best way to both maintain a stable
 branch with
   known working dependencies while helping out those who do so
 much work
   for
   us (downstream and stable-maint) and not permanently pinning
 to certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean
 pointed out
   that we might want hard caps so that updates to stable branch
were
   explicit. I can see either side of that argument and am still
 on the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing
 functioning
   for stable branches if we leave dependencies uncapped. If
particular
   people are interested in bumping versions when releases happen,
it's
   easy enough to do with a requirements proposed update. It will
 even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation
 that did
   that as stuff from pypi released so we could have the best of
both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will
 consume this
   code.
  
-Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I
 gave in
   my original message (and others that are very similar) if we want
 to make
   the strings much simpler for people who tend to work from them
(i.e.,
   downstream re-distributors whose jobs are already difficult
 enough). I’ve
   offered to help at least one of them in the past who maintains
all of
   their distro’s packages themselves, but they refused so I’d like
 to help
   them anyway possible. Especially if any of them chime in as this
 being
   something that would be helpful.

 Ok, your links got kind of scrambled. Can you next time please
inline
 the key relevant content in the email, because I think we all
missed the
 original message intent as the key content was only in footnotes.

  From my point of view, normalization patches would be fine.

  requests>=1.2.1,!=2.4.0,<=2.2.1

 Is actually an odd one, because that's still there because we're
using
 Trusty level requests in the tests, and my ability to have devstack
not
 install that 

Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-17 Thread Daniel P. Berrange
On Wed, Feb 11, 2015 at 03:14:39PM +0100, Stefano Maffulli wrote:
  ## Cores are *NOT* special
  
  At some point, for some reason that is unknown to me, this message
  changed and the feeling of cores being some kind of superheroes became
  a thing. It's gotten to the point that I've come to know
  that some projects even have private (flagged with +s), password-protected
  IRC channels for core reviewers.
 
 This is seriously disturbing.
 
 If you're one of those core reviewers hanging out on a private channel,
 please contact me privately: I'd love to hear from you why we failed as
 a community at convincing you that an open channel is the place to be.
 
 No public shaming, please: education first.

I've been thinking about these last few lines a bit, and I'm not entirely
comfortable with the dynamic this sets up.

What primarily concerns me is the issue of community accountability. A core
feature of OpenStack's project & individual team governance is the idea
of democratic elections, where individual contributors can vote in
people who they think will lead OpenStack in a positive way, or conversely
hold leadership to account by voting them out next time. The ability of
individual contributors to exercise this freedom, though, relies on the
voters being well informed about what is happening in the community.

If cases of bad community behaviour, such as use of password-protected IRC
channels, are always primarily dealt with via further private communications,
then we are denying the voters the information they need to hold people to
account. I can understand the desire to avoid publicly shaming people
right away, because the accusations may be false, or may arise from a
simple misunderstanding, but at some point genuine issues like this need
to be public. Without this we make it difficult for contributors to make
an informed decision at future elections.

Right now, this thread has left me wondering whether there are still any
projects which are using password protected IRC channels, or whether they
have all been deleted, and whether I will be unwittingly voting for people
who supported their use in future openstack elections.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder] some questions about bp filtering-weighing-with-driver-supplied-functions

2015-02-17 Thread Duncan Thomas
Hi

Thanks for looking at / thinking about this. So the idea was (I started
this whole thing rolling long ago after a rather heated discussion with a
very bright chap from Red Hat):

1) Driver authors tend, in my experience, to know more than admins, so
drivers should be able (where useful) to set a default value for either the
filter expression or the weighting expression

2) Admins definitely need to be able to override this if desired via
cinder.conf

I think it is fairly easy (and beneficial) to go through the in-tree
drivers and add the conf value to the stats report, once the base driver
change has merged.

I think that puts me in reasonable accord with your opinions? I think most
admins won't bother setting this up, but might benefit from good defaults,
but I think any admin who wants to customise it absolutely should be able
to.

In regards to a setting per volume type, that isn't really implemented in
any easy-to-use way, and I think a clean implementation of that would be
difficult to design and arguably not worth the effort - from the feedback
on the operators list, it looks like most operators aren't using / don't
understand the facilities we already have. If you really want to experiment
with this sort of setup, you can achieve it with ternary operators in the
current code with a bit of thought, e.g.

type gold key=isgold, value=True
type silver key=issilver value=True

expression = (type.isgold? (expression 1):(type.issilver?(expression
2):(default expression)))

It isn't neat and tidy, but it should work - the tool was designed with far
more power and flexibility than most people need in part to allow weird
experiments to be tried without having to change code.
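
To spell out the cascade, here is a purely illustrative Python sketch of the
same selection logic (the extra-spec keys and capability names are made up;
cinder evaluates the single ternary expression string above, not Python code):

    def pick_filter_expression(extra_specs):
        # mirrors: (type.isgold ? expr1 : (type.issilver ? expr2 : default))
        if extra_specs.get('isgold') == 'True':
            return 'capabilities.free_capacity_gb > 100'   # "expression 1"
        if extra_specs.get('issilver') == 'True':
            return 'capabilities.free_capacity_gb > 20'    # "expression 2"
        return 'capabilities.free_capacity_gb > 5'         # default expression

    print(pick_filter_expression({'isgold': 'True'}))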

On 17 February 2015 at 08:00, Zhangli (ISSP) zhangl...@huawei.com wrote:

  Hi,

 I noticed the following BP has been merged recently:


 https://blueprints.launchpad.net/cinder/+spec/filtering-weighing-with-driver-supplied-functions

 I have read the related spec (
 http://git.openstack.org/cgit/openstack/cinder-specs/tree/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst
 ) and have some questions.



 As I understand it, this BP brings two benefits:

 1) different admins can configure filtering/weighing in various ways (by
 editing equations in cinder.conf) to meet their requirements; the equation
 approach itself is much more flexible than single-capability scheduling.

 2) different backend drivers can do vendor-specific evaluation;



 The BP seems to focus more on the second target: letting drivers do
 evaluation by themselves. In the spec, “editing equation in cinder.conf” is
 just an example of a driver implementation: “it is up to the driver to
 determine how to generate the equations…Some choices a driver has are to
 use values defined in cinder.conf, hard-code the values in the driver or
 not implement the properties at all”.

 But I think it is also a fact that a lot of devices have common
 capabilities/attributes (even thin provisioning can be taken as a common
 attribute today), so can we make the “editing equation in cinder.conf” a
 base/common implementation for this new scheduler?

 Which means:

 1) this new scheduler has a built-in implementation of the filter/goodness
 functions;

 2) drivers can supply their own functions as they do now; if a driver does
 not supply one, the built-in function will be used;



 Another question:

 Can we associate different volume types with different evaluation rules
 (i.e., different filter/goodness function pairs)? I think this would also be
 very useful.



 Thanks.





-- 
Duncan Thomas


[openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-17 Thread Pasquale Porreca
I proposed a fix for a bug in devstack
https://review.openstack.org/#/c/156527/ caused by the fact that the role
_member_ was no longer created due to a recent change.

But why is the existence of the _member_ role necessary, even if it is not
required to be used? Is this a known/wanted feature or a bug in itself?

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr




Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-17 Thread Lukasz Oles
Hello Julia,

I think node filtering and sorting is a great feature and it will
improve UX, but we need to remember that as the number of nodes grows, the
number of automation tasks grows with it. If a Fuel user wants to automate
something, they will use the Fuel client, not the Fuel UI. This is why I
think sorting and filtering should be done on the backend side.
We should stop thinking that the Fuel UI is the only way to interact with Fuel.

Regards,

On Sat, Feb 14, 2015 at 9:27 AM, Julia Aranovich
jkirnos...@mirantis.com wrote:
 Hi All,

 Currently we [Fuel UI team] are planning sorting and filtering features for
 the node list, to introduce them in the 6.1 release.

 Now a user can filter nodes only by name or MAC address, and no sorting is
 available. That is rather poor UI for managing a 200+ node environment. So
 the current suggestion is to filter and sort nodes by the following
 parameters:

 name
 manufacturer
 IP address
 MAC address
 CPU
 memory
 disks total size (we need to think about less than/more than
 representation)
 interfaces speed
 status (Ready, Pending Addition, Error, etc.)
 roles


 It will be a form-based filter. Items [1-4] should go into a single text
 input and the others into separate controls.
 There is also an idea to translate the user's filter selection into a query
 and add it to the location string, like it's done for the logs search:
 #cluster/x/logs/type:local;source:api;level:info.
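
 As a rough illustration of that encoding (a hypothetical Python sketch, not
 Fuel UI code, and the URL layout is an assumption), a filter selection could
 round-trip to such a location fragment like this:

    def encode_filters(filters):
        # {'status': 'ready', 'roles': 'controller'} -> 'roles:controller;status:ready'
        return ';'.join('%s:%s' % (k, v) for k, v in sorted(filters.items()))

    def decode_filters(fragment):
        return dict(pair.split(':', 1) for pair in fragment.split(';') if pair)

    fragment = encode_filters({'status': 'ready', 'roles': 'controller'})
    print('#cluster/x/nodes/' + fragment)   # hypothetical location string
    print(decode_filters(fragment))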

 Please also note that the changes we are thinking about should not affect
 backend code.


 I will be very grateful if you share your ideas about this or describe some
 of the cases that would be useful to you when working with real deployments.
 We would like to introduce really useful tools based on your feedback.


 Best regards,
 Julia

 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com





-- 
Łukasz Oleś



Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-17 Thread Nikolay Makhotkin
Some suggestions from me:

1. y 1 + $.var  # (short from yaql).
2. { 1 + $.var }  # as for me, looks more elegant than % %. And
visually it is more strong

A also like p7 and p8 suggested by Renat.

On Tue, Feb 17, 2015 at 11:43 AM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 One more:

 p9: \{1 + $.var} # That’s pretty much what
 https://review.openstack.org/#/c/155348/ addresses but it’s not exactly
 that. Note that we don’t have to put it in quotes in this case to deal with
 YAML {} semantics, it’s just a string



 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 13:37, Renat Akhmerov rakhme...@mirantis.com wrote:

 Along with % % syntax here are some other alternatives that I checked
 for YAML friendliness with my short comments:

 p1: ${1 + $.var} # Here it’s bad that $ sign is used for two
 different things
 p2: ~{1 + $.var} # ~ is easy to miss in a text
 p3: ^{1 + $.var} # For someone may be associated with regular
 expressions
 p4: ?{1 + $.var}
 p5: {1 + $.var} # This is kinda crazy
 p6: e{1 + $.var} # That looks a pretty interesting option to me, “e”
 could mean “expression” here.
 p7: yaql{1 + $.var} # This is interesting because it would give a clear
 and easy mechanism to plug in other expression languages, “yaql” here is a
 used dialect for the following expression
 p8: y{1 + $.var} # “y” here is just shortened “yaql


 Any ideas and thoughts would be really appreciated!

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:

 Dmitri,

 I agree with all your reasonings and fully support the idea of changing
 the syntax now as well as changing system’s API a little bit due to
 recently found issues in the current engine design that don’t allow us, for
 example, to fully implement ‘with-items’ (although that’s a little bit
 different story).

 Just a general note about all changes happening now: *Once we release the
 Kilo stable release, our API and DSL version 2 must be 100% stable*. I was
 hoping to stabilize it much earlier, but the start of production use
 revealed a number of things (I think this is normal) which we need to
 address, no later than the end of Kilo.

 As far as the <% %> syntax goes, I see that it would solve a number of
 problems (YAML friendliness, type ambiguity), but my only (not strong)
 argument is that it doesn't look as elegant in YAML as it does, for example,
 in ERB templates. It really reminds me of XML/HTML and looks like a bear in
 a grocery store (tried to make it close to an old Russian saying :) ). So
 for this reason alone I'd suggest we think about other alternatives, maybe
 not so familiar to Ruby/Chef/Puppet users but looking better with YAML and
 at the same time being YAML friendly.

 It would be good if we could hear more feedback on this, especially from
 people who have started using Mistral.

 Thanks

 Renat Akhmerov
 @ Mirantis Inc.



 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com wrote:

 SUMMARY:
 --------

 We are changing the syntax for inlining YAQL expressions in Mistral YAML
 from {1+$.my.var} (or “{1+$.my.var}”) to <% 1+$.my.var %>

 Below I explain the rationale and the criteria for the choice. Comments
 and suggestions welcome.

 DETAILS:
 --------

 We faced a number of problems with using YAQL expressions in Mistral DSL:
 [1] must handle any YAQL, not only expressions starting with $; [2] must
 preserve types; and [3] must comply with YAML. We fixed these problems by
 applying Ansible-style syntax, requiring quotes around delimiters (e.g.
 “{1+$.my.yaql.var}”). However, it led to unbearable confusion in DSL
 readability with regard to types:

 publish:
    intvalue1: “{1+1}” # Confusing: you expect quotes to mean a string.
    intvalue2: “{int(1+1)}” # Even this doesn’t clear up the confusion
    whatisthis: “{$.x + $.y}” # What type would this return?

 We got a very strong push back from users in the field on this syntax.

 The crux of the problem is using { } as delimiters in YAML. It is plain wrong
 to use reserved characters. The clean solution is to find a delimiter
 that won’t conflict with YAML.
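
 A quick demonstration of the conflict (a sketch assuming PyYAML, used here
 only for illustration):

    import yaml

    print(yaml.safe_load('value: {1+$.var}'))
    # -> {'value': {'1+$.var': None}}   YAML eats the braces: a dict, not a string

    print(yaml.safe_load('value: "{1+$.var}"'))
    # -> {'value': '{1+$.var}'}         quoting works, but then everything reads as a string

    print(yaml.safe_load('value: <% 1+$.var %>'))
    # -> {'value': '<% 1+$.var %>'}     a plain scalar: no quotes needed, no brace conflict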

 Criteria for selecting the best alternative are:
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML
 3) Familiar to the target user audience - openstack and devops

 We prefer using two-char delimiters to avoid requiring extra escaping
 within the expressions.

 The current winner is <% %>. It fits YAML well. It is familiar to
 openstack/devops as this is used for embedding Ruby expressions in Puppet
 and Chef (for instance, [4]). It plays relatively well across all cases of
 using expressions in Mistral (see examples in [5]):

 ALTERNATIVES considered:
 ------------------------

 1) Use Ansible-like syntax:
 http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.

 2) Use functions, like Heat HOT or TOSCA:

 HOT templates and TOSCA doesn’t seem to have a 

Re: [openstack-dev] [neutron][neutron-*aas] Is lockutils-wrapper needed for tox.ini commands?

2015-02-17 Thread Paul Michali
Thanks for the insight into lockutils-wrapper usage, Ihar. I had worked on
getting the coverage target for the VPNaaS repo working, and have since been
working on a coverage target for functional testing. As part of that, I
pulled the use of lockutils-wrapper.

If you can review it, I'd appreciate it very much:
https://review.openstack.org/#/c/155889/2

I was thinking of doing this same coverage testing for FWaaS and LBaaS; if
you want me to remove lockutils-wrapper there too, let me know, as I was
planning on working on those today...

Regards,
PCM


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Mon, Feb 16, 2015 at 8:16 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 02/13/2015 11:13 PM, Paul Michali wrote:
  I see that in tox.ini, several commands have lockutils-wrapper
  prefix on them in the neutron-vpnaas repo. Seems like this was
  added as part of commit 88e2d801 for Migration to
  oslo.concurrency.

 Those would be interesting for unit tests only. And now that neutron
 BaseTestCase uses proper fixture [1], we can just remove the wrapper
 call from all *aas repos.

 That was actually one of the things I was going to do as a oslo
 liaison during Kilo [2] (see the 1st entry in the todo list.) But if
 you want to go with this before I reach this cleanup, I will be glad
 to review the changes. :)

 [1]:

 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/base.py#n89
 [2]:
 http://lists.openstack.org/pipermail/openstack-dev/2015-January/054753.html

 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU4e21AAoJEC5aWaUY1u57zKcH/iZ3SFi3BmyYaEwch5jipbzw
 Byxn2QxyjRNcQ6m/dr6ihpvXS2bIo75mrNajc+mdnKTqAdXebceSQRPAw4EX3c9r
 qtlaGzSrBqmSiyl/YnbqUiUf2zcXpFIpiTJswbdhv10P5Gi/Q64m6d+ipQsIUaMP
 4sY/0sjAV5Gn9cpkBZn9LY1/CrWnP7eqFMBYvFTsyEpGHdgJ4heAx2dLCqY2DE9H
 bVFexZK1yMqLzEIwmHtzSyifcFZkC39fa6bsxCVlLkbfU7+KC56FOOHARsjf+grd
 ReQuGH4QIS0aTMkrd7qmJRkaK7BudkX1yfOY68jsYSwrpKoia7pMZ+tbPosfUbk=
 =+HKf
 -END PGP SIGNATURE-




Re: [openstack-dev] Removal of copyright statements above the Apache 2.0 license header

2015-02-17 Thread Daniel P. Berrange
On Tue, Feb 17, 2015 at 11:28:58AM +0100, Christian Berendt wrote:
 Is it safe to remove copyright statements above the Apache 2.0 license
 headers stated in a lot of files?
 
 We recently removed all @author tags and added a hacking check to no
 longer add @author tags in the future. Can we do the same for the
 copyright statements above the Apache 2.0 license headers?

In section 4.(c) the LICENSE text says

  (c) You must retain, in the Source form of any Derivative Works
  that You distribute, all copyright, patent, trademark, and
  attribution notices from the Source form of the Work,
  excluding those notices that do not pertain to any part of
  the Derivative Works; and

So based on that, I think it would be a violation to remove any of the
Copyright acmeco lines in the file header.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Removal of copyright statements above the Apache 2.0 license header

2015-02-17 Thread Christian Berendt
On 02/17/2015 12:05 PM, Daniel P. Berrange wrote:
 In section 4.(c) the LICENSE text says
 
   (c) You must retain, in the Source form of any Derivative Works
   that You distribute, all copyright, patent, trademark, and
   attribution notices from the Source form of the Work,
   excluding those notices that do not pertain to any part of
   the Derivative Works; and
 
 So based on that, I think it would be a violation to remove any of the
 Copyright acmeco lines in the file header.

Section 4 is about the redistribution of the code. In my understanding
this means that I am not allowed to remove the license header if I
redistribute a source file (e.g. in a package or in my own software).

If I add code to OpenStack I have to sign the CLA. The CLA includes:

   2. Grant of Copyright License. Subject to the terms and conditions of
  this License, each Contributor hereby grants to You a perpetual,
  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
  copyright license to reproduce, prepare Derivative Works of,
  publicly display, publicly perform, sublicense, and distribute the
  Work and such Derivative Works in Source or Object form.

Does this not mean that it is not necessary to explicitly add a
copyright statement above the license headers?

According to
http://www.apache.org/dev/apply-license.html#contributor-copyright and
http://www.apache.org/legal/src-headers.html copyright statements should
not be added to the headers in source files.

Christian.



Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-17 Thread Sergey Vasilenko
+1, sorting should be there...

Pagination may be too, but not activated by default.


/sv


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Sean Dague
On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:


 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,

 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].

 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.

 It seems to me like there’s room to clean up some of these requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).

 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.

 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.


 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
 right
 way forward? What is the best way to both maintain a stable branch with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to certain
 working versions?

 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.

 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.

 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.

  -Sean

 -- 
 Sean Dague
 http://dague.net
 
 Right. No one is arguing the very clear benefits of all of this.
 
 I’m just wondering if for the example version identifiers that I gave in
 my original message (and others that are very similar) if we want to make
 the strings much simpler for people who tend to work from them (i.e.,
 downstream re-distributors whose jobs are already difficult enough). I’ve
 offered to help at least one of them in the past who maintains all of
 their distro’s packages themselves, but they refused so I’d like to help
 them anyway possible. Especially if any of them chime in as this being
 something that would be helpful.

Ok, your links got kind of scrambled. Can you next time please inline
the key relevant content in the email, because I think we all missed the
original message intent as the key content was only in footnotes.

From my point of view, normalization patches would be fine.

requests>=1.2.1,!=2.4.0,<=2.2.1

Is actually an odd one, because that's still there because we're using
Trusty level requests in the tests, and my ability to have devstack not
install that has thus far failed.

Things like:

osprofiler>=0.3.0,<=0.3.0 # Apache-2.0

Can clearly be normalized to osprofiler==0.3.0 if you want to propose
the patch manually.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Dmitry Guryanov

On 02/17/2015 06:20 AM, Steven Dake (stdake) wrote:
The initial magnum core team was founded at a meeting where several 
people committed to being active in reviews and writing code for 
Magnum.  Nearly all of the folks that made that initial commitment 
have been active in IRC, on the mailing lists, or participating in 
code reviews or code development.


Out of our core team of 9 members [1], everyone has been active in 
some way except for Dmitry.  I propose removing him from the core 
team.  Dmitry is welcome to participate in the future if he chooses 
and be held to the same high standards we have held our last 4 new 
core members to that didn’t get an initial opt-in but were voted in by 
their peers.


Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 
from any core acts as a veto meaning Dmitry will remain in the core team.


Hello, Steven,

Sorry for being inactive for so long. I have no real objections to 
removing me from magnum-core. I hope I'll return to the project in the 
near future.




[1] https://review.openstack.org/#/admin/groups/473,members





--
Dmitry Guryanov




  1   2   >