Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-08 Thread Robert Collins
On 9 June 2015 at 04:19, Doug Wiegley doug...@parksidesoftware.com wrote:

 On Jun 8, 2015, at 9:58 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-06-08 13:29:50 +1200 (+1200), Robert Collins wrote:
 [...]
 However, we are going to move from test-requirements.txt to setup.cfg
 eventually, but that's a separate transition - and one could still use
 test-requirements.txt there to provide git references.

 Except please don't. If you put a git requirement entry in that file
 for neutron, every CI job which uses that test-requirements.txt will
 reclone all of neutron over the network from scratch because pip
 isn't smart enough to do otherwise unless you take additional
 measures to preinstall that requirement in the environment where the
 test is run. That's why we use tools like devstack-gate or
 zuul-cloner which know how to check for cached repos and update them to
 the ref that you've (or that zuul has) requested.

 The neutron-*aas repos were among the worst offenders in the ‘ninja cloning’ 
 racket, and in addition to working against the CI infrastructure, it bit us 
 by testing master instead of the review patches in some cases. Note that we 
 were grabbing neutron at master and tempest at a pinned commit. And since we 
 use the same tox env’s for devs and CI, there are slightly different 
 requirements depending on which is in use.

tempest-lib or tempest? tempest-lib cuts releases and should AIUI be
consumed via those: if you need something that's not in the latest
release, ask that it be released.

 What I am attempting to do, and I’m open to feedback on ‘best practices’:

 - pep8/unit tests - override ‘install_program’ in tox, to be an in-repo 
 script that detects if you’re inside jenkins or not, and then uses either 
 zuul-cloner or pulls in neutron via a git url, respectively, then calls ‘pip 
 install’ as usual for the rest.

So, we're going to be adding in this thing for constraints over this
cycle (https://review.openstack.org/#/c/186635/) and it will also be
gluing into tox in that sort of way. Since you can specify a git url
and reference there, I wonder if editing the constraints file
just-in-time to add e.g. a neutron reference, will be useful. Or yeah
zuul-cloner. We'll need to make sure these things play nice together.
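
To make the installer-script idea above concrete, here is a minimal sketch of
the shape it could take, wired in via tox's install_command. The ZUUL_URL
check, the cache directory, the git URL and the zuul-cloner arguments are all
illustrative assumptions, not settled practice:

    #!/usr/bin/env python
    # tools/tox_install.py -- illustrative sketch only. The environment
    # variable, paths and zuul-cloner arguments are assumptions; adapt
    # them to your CI before relying on this.
    import os
    import subprocess
    import sys

    NEUTRON_GIT = 'git://git.openstack.org/openstack/neutron'  # assumed URL

    def main(pip_args):
        if os.environ.get('ZUUL_URL'):
            # In CI: let zuul-cloner reuse the cached repo and honour the
            # ref zuul has requested (flags shown are illustrative).
            subprocess.check_call(['zuul-cloner', '--cache-dir', '/opt/git',
                                   'git://git.openstack.org',
                                   'openstack/neutron'])
            subprocess.check_call([sys.executable, '-m', 'pip', 'install',
                                   '-e', 'openstack/neutron'])
        else:
            # Developer machine: fall back to a plain git URL.
            subprocess.check_call([sys.executable, '-m', 'pip', 'install',
                                   '-e', 'git+%s#egg=neutron' % NEUTRON_GIT])
        # Then install whatever tox asked for, as usual.
        subprocess.check_call([sys.executable, '-m', 'pip', 'install'] + pip_args)

    if __name__ == '__main__':
        main(sys.argv[1:])

In tox.ini that would be hooked up as something like
install_command = python {toxinidir}/tools/tox_install.py {opts} {packages} -
again, an illustration of the shape rather than a drop-in config.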

 - tempest api tests - pulled in-tree the subset of tempest that we’re using 
 and which is not yet in tempest-lib, with an eye to migrating it away. This 
 part could’ve also been done via zuul-cloner at a specific commit.

I'd get the stuff into tempest-lib ASAP rather than re-arranging the
way this is done IMO.

 - We do *not* put ‘neutron’ into any requirements file, and I wasn’t planning 
 to, until such time as neutron, or neutron-lib, is stable and published to 
 pypi.

Without neutron there, there is no signal to packagers, deployers, or
pip that neutron is needed. I think that's just bugs waiting to happen.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-08 Thread Jeremy Stanley
On 2015-06-09 05:49:35 +1200 (+1200), Robert Collins wrote:
 I already said that the git entry should be to a local zuul-cloner
 cloned repo. Kevin's *current* 3rd-party CI solution is doing
 full-clones each time.

Aah, yep, I missed in his reply that it would be a local repo on the
filesystem referenced as a URL in test-requirements.txt. Something
still needs to create that local repo, so for devs running tox
locally a little additional automation is needed. Sounds like that's
what his installer script callout would take care of.

 BTW pip can trivially re-use git repositories if you specify the
 --source path in e.g. pip.conf, but that's a separate discussion IMO
 :).

Oh! That's useful. I wonder if we could leverage that in our CI to
tell tox where to find git repos we've pre-cloned for the job
without having to precreate an env and manually pip install into it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][third-party] Common-CI Virtual Sprint

2015-06-08 Thread Asselin, Ramy
I didn’t hear any objections, so I’ll plan on scheduling this sprint as 
proposed on July 8 & 9.

Thanks!
Ramy Asselin

From: Ricardo Carrillo Cruz [mailto:ricardo.carrillo.c...@gmail.com]
Sent: Thursday, June 04, 2015 12:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra][third-party] Common-CI Virtual Sprint

Whenever is, I'll participate.

Regards

2015-06-04 21:09 GMT+02:00 Spencer Krum 
n...@spencerkrum.com:
I'm in!

--
  Spencer Krum
  n...@spencerkrum.com

On Thu, Jun 4, 2015, at 09:22 AM, Asselin, Ramy wrote:
 Hi,

 It was nice to meet many of you at the Vancouver Infra Working Session.
 Quite a bit of progress was made finding owners for some of the common-ci
 refactoring work [1] and puppet testing work. A few patches were
 proposed, reviewed, and some merged!

 As we continue the effort over the coming weeks, I thought it would be
 helpful to schedule a  virtual sprint to complete the remaining tasks.

 GOAL: By the end of the sprint, we should be able to set up a 3rd party
 CI system using the same puppet components that the OpenStack
 infrastructure team is using in its production CI system.

 I proposed this in Tuesday's Infra meeting [2] and there was general
 consensus that this would be valuable (if clear goals are documented) and
 that July 8 & 9 are good dates. (After the US & Canada July 1st and 4th
 holidays, not on a Tuesday, and not near a Liberty milestone.)

 I would like to get comments from a broader audience on the goals and
 dates.

 You can show interest by adding your name to the etherpad [3].

 Thank you,
 Ramy
 irc: asselin



 [1]
 http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
 [2]
 http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.html
 [3] https://etherpad.openstack.org/p/common-ci-sprint



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-08 Thread Smigiel, Dariusz
 Folks,
 
 Several people have messaged me from EMEA timezones that 1600UTC fits right 
 into the middle of their family life (ferrying kids from school and what-not) 
 and 1700UTC while not perfect, would be a better fit time-wise.
 
 For all people that intend to attend the 1600 UTC, could I get your feedback 
 on this thread if a change of the 1600UTC timeslot to 1700UTC would be 
 acceptable?  If it wouldn't be acceptable, please chime in as well.
 
 Thanks
 -steve

+1
It suits me.

-- 
 Dariusz Smigiel
 Intel Technology Poland

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack][Neutron] PHYSICAL_NETWORK vs. PUBLIC_PHYSICAL_NETWORK - rant

2015-06-08 Thread Sean M. Collins
On Mon, Jun 08, 2015 at 01:38:28PM EDT, Jay Pipes wrote:
 On 06/08/2015 12:16 PM, Sean M. Collins wrote:
 I'd expect the above, where we pull the setting from PHYSICAL_NETWORK.
 Then perhaps in the lib/neutron-legacy define
 
  PHYSICAL_NETWORK=${PHYSICAL_NETWORK:-public}
 
 Based on your original post, wouldn't you want the above to be this instead?
 
  PHYSICAL_NETWORK=${PHYSICAL_NETWORK:-default}

The actual value doesn't matter, it's just the fact that I had defined a
value in PHYSICAL_NETWORK that wasn't being carried over to
PUBLIC_PHYSICAL_NETWORK since it was defining its own default value.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-06-08 Thread Ian Wells
Hey Gary,

Sorry for being a little late with the followup...

Concerns with binding type negotiation, or with the scripting?  And could
you summarise the concerns, for those of us that didn't hear them?
-- 
Ian,

On 2 June 2015 at 07:08, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 At the summit this was discussed in the nova sessions and there were a
 number of concerns regarding security etc.
 Thanks
 Gary

   From: Irena Berezovsky irenab@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Tuesday, June 2, 2015 at 1:44 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Progressing/tracking work on libvirt
 / vif drivers

   Hi Ian,
 I like your proposal. It sounds very reasonable and makes separation of
 concerns between neutron and nova very clear. I think with vif plug script
 support [1], it will help to decouple neutron from the nova dependency.
 Thank you for sharing this,
 Irena
 [1] https://review.openstack.org/#/c/162468/

 On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

  VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this
 to a hopefully interested audience.

 At the summit, we wrote up a spec we were thinking of doing at [1].  It
 actually proposes two things, which is a little naughty really, but hey.

 Firstly we propose that we turn binding into a negotiation, so that Nova
 can offer binding options it supports to Neutron and Neutron can pick the
 one it likes most.  This is necessary if you happen to use vhostuser with
 qemu, as it doesn't work for some circumstances, and desirable all around,
 since it means you no longer have to configure Neutron to choose a binding
 type that Nova likes and Neutron can choose different binding types
 depending on circumstances.  As a bonus, it should make inter-version
 compatibility work better.

  Secondly we suggest that some of the information that Nova and Neutron
 currently calculate independently should instead be passed from Neutron to
 Nova, simplifying the Nova code since it no longer has to take an educated
 guess at things like TAP device names.  That one is more contentious, since
 in theory Neutron could pass an evil value, but if we can find some pattern
 that works (and 'pattern' might be literally true, in that you could get
 Nova to confirm that the TAP name begins with a magic string and is not
 going to be a physical device or other interface on the box) I think that
 would simplify the code there.

  Read, digest, see what you think.  I haven't put it forward yet
 (actually I've lost track of which projects take specs at this point) but I
 would very much like to get it implemented and it's not a drastic change
 (in fact, it's a no-op until we change Neutron to respect what Nova passes).

 [1] https://etherpad.openstack.org/p/YVR-nova-neutron-binding-spec

 On 1 June 2015 at 10:37, Neil Jerram neil.jer...@metaswitch.com wrote:

 On 01/06/15 17:45, Neil Jerram wrote:

 Many thanks, John & Dan.  I'll start by drafting a summary of the work
 that I'm aware of in this area, at
 https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work.


 OK, my first draft of this is now there at [1].  Please could folk with
 VIF-related work pending check that I haven't missed or misrepresented
 them?  Especially, please could owners of the 'Infiniband SR-IOV' and
 'mlnx_direct removal' changes confirm that those are really ready for core
 review?  It would be bad to ask for core review that wasn't in fact wanted.

 Thanks,
 Neil


 [1] https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] removal of deprecated items from Kilo

2015-06-08 Thread Ruby Loo
Hi,

As noted in the Upgrade Notes in Kilo[1]:

1. The driver_info parameters of pxe_deploy_kernel and
pxe_deploy_ramdisk were deprecated in favour of deploy_kernel and
deploy_ramdisk.

2. Implementing a driver's own version of the vendor_passthru() method
has been deprecated in favour of the new @passthru decorator.

Both have been removed[2] from master branch and will no longer work.

--ruby

[1]
https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Bare_Metal_service_.28Ironic.29
[2] Change-Ids: I07eb7ad28929b651cf04ef3955903e4f4ecf9900 and
I5e134e42449f2bbeefee1f9cb1712f59fa7a2b9f, respectively
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Automaton] Proposal for new core reviewer (min pae)

2015-06-08 Thread Davanum Srinivas
+1 from me, Josh. Welcome, Min Pae.

-- dims

On Mon, Jun 8, 2015 at 2:31 PM, Joshua Harlow harlo...@outlook.com wrote:
 Greetings all stackers,

 I propose that we add Min Pae (sputnik13) to the automaton-core team.

 Min has been actively contributing to taskflow for a while now and as
 automaton (the library came out of taskflow) is a new library it would be
 great to have his participation there as well. He is willing (and able!) to
 help with the review load when he can. He has provided quality reviews in
 other projects and would be a welcome addition in helping make automaton the
 best library it can be!

 Overall I think he would make a great addition to the core review team.

 Please respond with +1/-1.

 Thanks much!

 --

 Joshua Harlow

 It's openstack, relax... | harlo...@yahoo-inc.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Andrew Woodward
Daniel,

This sounds familiar; see if it matches [1]. IIRC, there was another
issue like this that may already be addressed by the updates in the
Fuel 5.1.2 packages repo [2]. You can either update the neutron packages
from [2] or try one of the community builds for 5.1.2 [3]. If this doesn't
resolve the issue, open a bug against MOS dev [4].

[1] https://bugs.launchpad.net/bugs/1295715
[2] http://fuel-repository.mirantis.com/fwm/5.1.2/ubuntu/pool/main/
[3] https://ci.fuel-infra.org/
[4] https://bugs.launchpad.net/mos/+filebug

On Mon, Jun 8, 2015 at 10:15 AM Neil Jerram neil.jer...@metaswitch.com
wrote:

 Two further thoughts on this:

 1. Another DHCP agent problem that my team noticed is that its
 call_driver('reload_allocations') processing takes a bit of time (to regenerate the
 Dnsmasq config files, and to spawn a shell that sends a HUP signal) -
 enough so that if there is a fast steady rate of port-create and
 port-delete notifications coming from the Neutron server, these can
 build up in DHCPAgent's RPC queue, and then they still only get
 dispatched one at a time.  So the queue and the time delay become longer
 and longer.

 I have a fix pending for this, which uses an extra thread to read those
 notifications off the RPC queue onto an internal queue, and then batches
 the call_driver('reload_allocations') processing when there is a
 contiguous sequence of such notifications - i.e. only does the config
 regeneration and HUP once, instead of lots of times.
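
(For anyone wanting to picture the batching: a deliberately simplified,
agent-agnostic sketch using only the Python standard library - the real fix
sits in the DHCP agent's oslo.messaging/eventlet machinery, so every name
below is illustrative rather than actual agent code.)

    # Simplified sketch of coalescing a burst of reload_allocations work.
    import queue
    import threading
    import time

    class FakeDnsmasqDriver(object):
        """Stand-in for the dnsmasq driver; just counts reloads."""
        def __init__(self):
            self.reloads = 0

        def reload_allocations(self):
            self.reloads += 1
            time.sleep(0.1)      # pretend regenerating config + HUP is slow

    notifications = queue.Queue()

    def worker(driver):
        while True:
            batch = [notifications.get()]        # block for the first event
            while True:                          # then drain the burst
                try:
                    batch.append(notifications.get_nowait())
                except queue.Empty:
                    break
            driver.reload_allocations()          # one reload per burst

    driver = FakeDnsmasqDriver()
    threading.Thread(target=worker, args=(driver,), daemon=True).start()

    for i in range(50):                          # fast burst of port events
        notifications.put(('port_create_end', i))
    time.sleep(1)
    print('50 notifications handled with %d reloads' % driver.reloads)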

 I don't think this is directly related to what you are seeing - but
 perhaps there actually is some link that I am missing.

 2. There is an interesting and vaguely similar thread currently being
 discussed about the L3 agent (subject L3 agent rescheduling issue) -
 about possible RPC/threading issues between the agent and the Neutron
 server.  You might like to review that thread and see if it describes
 any problems analogous to your DHCP one.

 Regards,
 Neil


 On 08/06/15 17:53, Neil Jerram wrote:
  My team has seen a problem that could be related: in a churn test where
  VMs are created and terminated at a constant rate - but so that the
  number of active VMs should remain roughly constant - the size of the
  host and addn_hosts files keeps increasing.
 
  In other words, it appears that the config for VMs that have actually
  been terminated is not being removed from the config file.  Clearly, if
  you have a limited pool of IP addresses, this can eventually lead to the
  problem that you have described.
 
  For your case - i.e. with Icehouse - the problem might be
  https://bugs.launchpad.net/neutron/+bug/1192381.  I'm not sure if the
  fix for that problem - i.e. sending port-create and port-delete
  notifications to DHCP agents even when the server thinks they are down -
  was merged before the Icehouse release, or not.
 
  But there must be at least one other cause as well, because my team was
  seeing this with Juno-level code.
 
  Therefore I, too, would be interested in any other insights about this
  problem.
 
  Regards,
   Neil
 
 
 
  On 08/06/15 16:26, Daniel Comnea wrote:
  Any help, ideas please?
 
  Thx,
  Dani
 
  On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  + Operators
 
  Much thanks in advance,
  Dani
 
 
 
 
  On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm running IceHouse (built using Fuel 5.1.1) on Ubuntu, where the
  dnsmasq version is 2.59-4.
  I have a very basic network layout where i have a private net
  which has 2 subnets
 
  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24 |
                                       |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |
 
  and I'm creating VMs via HEAT.
  What is happening is that sometimes I get duplicated entries in
  [1] and because of that the VM which was spun up doesn't get
  an IP.
  The dnsmasq processes are running okay [2] and I can't see
  anything special/wrong in them.
 
  Any idea why this is happening? Or are you aware of any bugs
  around this area? Do you see a problem with having 2 subnets
  mapped to 1 private-net?
 
 
 
  Thanks,
  Dani
 
  [1]
 
  /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 
  [2]
 
  nobody5664 1  0 Jun02 ?00:00:08 dnsmasq
  --no-hosts --no-resolv --strict-order --bind-interfaces
  --interface=tapc9164734-0c --except-interface=lo
 
 
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 
 
 

Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-08 Thread Robert Collins
On 9 June 2015 at 03:58, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-06-08 13:29:50 +1200 (+1200), Robert Collins wrote:
 [...]
 However, we are going to move from test-requirements.txt to setup.cfg
 eventually, but that's a separate transition - and one could still use
 test-requirements.txt there to provide git references.

 Except please don't. If you put a git requirement entry in that file
 for neutron, every CI job which uses that test-requirements.txt will
 reclone all of neutron over the network from scratch because pip
 isn't smart enough to do otherwise unless you take additional
 measures to preinstall that requirement in the environment where the
 test is run. That's why we use tools like devstack-gate or
 zuul-cloner which know how to check for cached repos and update them to
 the ref that you've (or that zuul has) requested.

I already said that the git entry should be to a local zuul-cloner
cloned repo. Kevin's *current* 3rd-party CI solution is doing
full-clones each time.

BTW pip can trivially re-use git repositories if you specify the
--source path in e.g. pip.conf, but that's a separate discussion IMO
:).

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-08 Thread Steve Lewis
Monday, June 8, 2015 07:10, Adam Young wrote:
 2.  Delegations are long-lived affairs.  If anything is going to take
 longer than the duration of the token, it should be in the context of a
 delegation, and the user should re-authenticate to prove identity.

Requiring re-authentication to perform the many tasks that involve delegation (a
distinction that users don't understand, or care to) is a sure way to convince
users to use short and weak passwords. Please, no.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-08 Thread Armando M.
Interestingly, [1] was filed a few moments ago:

[1] https://bugs.launchpad.net/neutron/+bug/1463129

On 2 June 2015 at 22:48, Salvatore Orlando sorla...@nicira.com wrote:

 I'm not sure if you can test this behaviour on your own because it
 requires the VMware plugin and the eventlet handling of backend response.

 But the issue was manifesting and had to be fixed with this mega-hack [1].
 The issue was not about several workers executing the same code - the
 loopingcall was always started on a single thread. The issue I witnessed
 was that the other API workers just hang.

 There's probably something we need to understand about how eventlet can
 work safely with an os.fork (I just think they're not really made to work
 together!).
 Regardless, I did not spend too much time on it, because I thought that
 the multiple workers code might have been rewritten anyway by the pecan
 switch activities you're doing.

 Salvatore


 [1] https://review.openstack.org/#/c/180145/

 On 3 June 2015 at 02:20, Kevin Benton blak...@gmail.com wrote:

 Sorry about the long delay.

 Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(),
 r.text)) line?  Surely the server would have forked before that line was
 executed - so what could prevent it from executing once in each forked
 process, and hence generating multiple logs?

 Yes, just once. I wasn't able to reproduce the behavior you ran into.
 Maybe eventlet has some protection for this? Can you provide small sample
 code for the logging driver that does reproduce the issue?

 On Wed, May 13, 2015 at 5:19 AM, Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Hi Kevin,

 Thanks for your response...

 On 08/05/15 08:43, Kevin Benton wrote:

 I'm not sure I understand the behavior you are seeing. When your
 mechanism driver gets initialized and kicks off processing, all of that
 should be happening in the parent PID. I don't know why your child
 processes start executing code that wasn't invoked. Can you provide a
 pointer to the code or give a sample that reproduces the issue?


 https://github.com/Metaswitch/calico/tree/master/calico/openstack

 Basically, our driver's initialize method immediately kicks off a green
 thread to audit what is now in the Neutron DB, and to ensure that the other
 Calico components are consistent with that.

  I modified the linuxbridge mech driver to try to reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init code output I added more
 than once, including the function spawned using eventlet.


 Interesting.  Even the LOG.error("KEVIN PID=%s network response: %s" %
 (os.getpid(), r.text)) line?  Surely the server would have forked before
 that line was executed - so what could prevent it from executing once in
 each forked process, and hence generating multiple logs?

 Thanks,
 Neil

  The only time I ever saw anything executed by a child process was actual
 API requests (e.g. the create_port method).




  On Thu, May 7, 2015 at 6:08 AM, Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Is there a design for how ML2 mechanism drivers are supposed to cope
 with the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and
 immediately kicks off some processing that involves communicating
 over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue,
 and interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off
 that processing, and do that.

 But how can a mechanism driver know when the Neutron server forking
 has happened?

 Thanks,
  Neil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 

[openstack-dev] [Infra] Meeting Tuesday June 9th at 19:00 UTC

2015-06-08 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday June 9th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and log from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][Automaton] Proposal for new core reviewer (min pae)

2015-06-08 Thread Joshua Harlow

Greetings all stackers,

I propose that we add Min Pae (sputnik13) to the automaton-core team.

Min has been actively contributing to taskflow for a while now and as
automaton (the library came out of taskflow) is a new library it would 
be great to have his participation there as well. He is willing (and 
able!) to help with the review load when he can. He has provided quality 
reviews in other projects and would be a welcome addition in helping 
make automaton the best library it can be!


Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] Updating existing artifacts on storage.apps.openstack.org

2015-06-08 Thread Christopher Aedo
On Mon, Jun 8, 2015 at 4:47 AM, Serg Melikyan smelik...@mirantis.com wrote:
 We found a few issues with the images used by the murano application and we
 would like to update them. What is the procedure for updating existing
 images/apps hosted on storage.apps.openstack.org?

This question hits on some really important points:

-The CDN lives on Rackspace at the moment.  Update/Add/Delete actions
are manual, and managed by me at the moment.  This is not desirable,
but was a concession we had to make in order to get something up and
running in time for the summit.

-The app catalog has no concept of versions.  To your specific point,
there are binary assets that need to be updated, but there's no
reasonable way to update them (manual process is unreasonable IMO).
Ultimately I don't think any asset (blob or text in yaml) should go in
to the catalog without being versioned.

-Hosting binary assets could potentially be of concern due to
licensing and liability issues.  Right now it's not a significant
concern as we can make it clear anything downloaded requires total
assumption of risk on the part of the downloader.  Maybe it's never
going to be a significant concern (dockerhub has a whole lot of
content, and nobody has sued them AFAIK).

I would like to work with the OpenStack infra team to migrate the CDN
components to swift managed by the community (similar to what I'm
working on for the web site itself).  Handling this content
programmatically would be excellent, but I'm not sure of the best way to
proceed.  Do we continue hosting on RAX and managing manually while we
work on a more thorough approach hosted on OpenStack infra?

At any rate, Serg, to directly answer your question on how to update
binary assets - I would suggest a new CR with a link to where the new
binary can be found (including the associated md5 or sha), with a
clear note in the commit message that this updates an existing binary
asset living on the CDN.  As long as the volume does not become
tremendous, this seems to me like a workable solution for now.
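
(For submitters, producing those checksums is trivial; the file name below is
just a hypothetical example, not an actual catalog asset.)

    # Compute checksums to paste into the commit message for an updated
    # binary asset; the path is a hypothetical example.
    import hashlib

    def checksums(path, chunk_size=1 << 20):
        md5, sha256 = hashlib.md5(), hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                md5.update(chunk)
                sha256.update(chunk)
        return md5.hexdigest(), sha256.hexdigest()

    md5sum, sha256sum = checksums('murano-app-image.qcow2')
    print('md5: %s' % md5sum)
    print('sha256: %s' % sha256sum)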

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Retrieval of the secret in text/plain format generated from Barbican order resource

2015-06-08 Thread Asha Seshagiri
Sure, John. Thanks a lot for your response.

I would like Barbican to support the retrieval of the secret in text/plain
format generated from the order resource, since it is very important for our
encryption use case, which depends on the key generated by Barbican.

I would like to know your opinion.

Thanks and Regards,
Asha Seshagiri




On Mon, Jun 8, 2015 at 8:36 AM, John Wood john.w...@rackspace.com wrote:

  Hello Asha,

  Barbican is not yet supporting the conversion of secrets of one format
 to another. If you have thoughts on desired conversions however, please
 mention them in this thread, or else consider mentioning them in our
 weekly IRC meeting (freenode #openstack-meeting-alt at 3pm CDT).

  Thanks,
 John



   From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Monday, June 8, 2015 at 12:17 AM
 To: John Wood john.w...@rackspace.com
 Cc: openstack-dev openstack-dev@lists.openstack.org, Douglas Mendizabal
 douglas.mendiza...@rackspace.com, Reller, Nathan S. 
 nathan.rel...@jhuapl.edu, Adam Harwell adam.harw...@rackspace.com,
 Paul Kehrer paul.keh...@rackspace.com

 Subject: Re: Barbican : Retrieval of the secret in text/plain format
 generated from Barbican order resource

   Thanks John for your response.
 I am aware that application/octet-stream works for the retrieval of the secret.
 We are utilizing the key generated from Barbican in our AES encryption
 algorithm. Hence we wanted the response in text/plain format from
 Barbican, since the AES encryption algorithm would need the key in ASCII format,
 which should be either 16, 24 or 32 bytes.

  The AES encryption algorithms would not accept the binary format, and even
 if the binary is converted into ASCII, encoding fails for a few of the
 keys because some characters exceed the range of ASCII, and for some keys
 the encoded length exceeds 32 bytes, which is the maximum length for
 AES encryption.

  Would like to know the reason behind Barbican not supporting
 the retrieval, in text/plain format, of secrets generated from the order
 resource.

  Thanks and Regards,
 Asha Seshagiri

 On Sun, Jun 7, 2015 at 11:43 PM, John Wood john.w...@rackspace.com
 wrote:

  Hello Asha,

  The AES type key should require an application/octet-stream Accept
 header to retrieve the secret as it is a binary type. Please replace
 ‘text/plain’ with ‘application/octet-stream’ in your curl calls below.

  Thanks,
 John


   From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Friday, June 5, 2015 at 2:42 PM
 To: openstack-dev openstack-dev@lists.openstack.org
 Cc: Douglas Mendizabal douglas.mendiza...@rackspace.com, John Wood 
 john.w...@rackspace.com, Reller, Nathan S. nathan.rel...@jhuapl.edu,
 Adam Harwell adam.harw...@rackspace.com, Paul Kehrer 
 paul.keh...@rackspace.com
 Subject: Re: Barbican : Retrieval of the secret in text/plain format
 generated from Barbican order resource

   Hi All ,

  I am currently working on use cases for database and file encryption. It
 is really important for us to know, since my encryption use case would be
 using the key generated by Barbican through the order resource as the key.
 The encryption algorithms would not accept the binary format, and even if it is
 converted into ASCII, encoding fails for a few of the keys because some
 characters exceed the range of ASCII, and for some keys the encoded
 length exceeds 32 bytes, which is the maximum length for AES
 encryption.
 It would be great if someone could respond to the query, since it would
 block my further investigations on encryption use cases using Barbican.

  Thanks and Regards,
 Asha Seshagiri


 On Wed, Jun 3, 2015 at 3:51 PM, Asha Seshagiri asha.seshag...@gmail.com
 wrote:

   Hi All,

  Unable to retrieve the secret in text/plain format  generated from
 Barbican order resource

  Please find the curl command and responses for

  *Order creation with payload content type as text/plain* :

 [root@barbican-automation ~]# curl -X POST -H 'content-type:application/json' \
  -H X-Auth-Token:9b211b06669249bb89665df068828ee8 \
  -d '{"type": "key", "meta": {"name": "secretname2", "algorithm": "aes",
       "bit_length": 256, "mode": "cbc", "payload_content_type": "text/plain"}}' \
  -k https://169.53.235.102:9311/v1/orders

 {"order_ref": "https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680"}

  Retrieval of the order by ORDER ID in order to get to know the secret
 generated by Barbican:

 [root@barbican-automation ~]# curl -H 'Accept: application/json' \
  -H X-Auth-Token:9b211b06669249bb89665df068828ee8 \
  -k https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
 {"status": "ACTIVE", "sub_status": "Unknown", "updated": "2015-06-03T19:08:13",
 "created": "2015-06-03T19:08:12", "order_ref":
 "https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680",
 "secret_ref": 

[openstack-dev] [third-party] Weekly Third Party CI Working Group meeting Wednesday June 10th 1500UTC

2015-06-08 Thread Kurt Taylor
Hi all,

The weekly Third Party CI Working Group team meeting is at 1500UTC this
Wednesday, June 10th in #openstack-meeting-4.

The agenda for the meeting is here:
https://wiki.openstack.org/wiki/Meetings/ThirdParty

We will be discussing the CI systems monitoring dashboard, among other
things. Please try to attend if at all possible.

As always, feel free to add items to the agenda and we will get to them as
time permits.

Thanks!
Kurt Taylor (krtaylor)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Retrieval of the secret in text/plain format generated from Barbican order resource

2015-06-08 Thread Asha Seshagiri
Thanks Nate for your response.
I would need Barbican to generate the key in text/plain format, which is the
human-readable form, so that I can use that key in standard cryptography
libraries in Python which take the key as an argument.
Yeah, text/plain format means the bytes are in base64 format.
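
(To make the dependency concrete, a hedged sketch of fetching the generated
key as raw bytes and deriving a printable form on the client side; the URL and
token are placeholders copied from the curl examples earlier in this thread,
and whether the consuming library wants raw bytes, hex or base64 is an
assumption to verify.)

    # Illustrative only: retrieve the binary secret and derive printable
    # forms client-side. URL and token are placeholders from this thread.
    import base64
    import requests

    secret_url = ('https://169.53.235.102:9311/v1/secrets/'
                  '5c25525d-a162-4b0b-9954-90c4ce426c4e/payload')
    headers = {
        'Accept': 'application/octet-stream',   # binary retrieval works today
        'X-Auth-Token': '9b211b06669249bb89665df068828ee8',
    }
    resp = requests.get(secret_url, headers=headers, verify=False)
    resp.raise_for_status()

    key = resp.content                  # 32 raw bytes for a 256-bit AES key
    print(len(key))                     # -> 32
    print(base64.b64encode(key).decode('ascii'))   # 44 printable characters
    print(key.hex())                               # 64 hex characters

Note that the raw 32 bytes are what an AES API ultimately needs; any printable
encoding (base64 or hex) is longer than 32 characters, which is why using the
encoded string itself as the key runs into the length limits described above.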

Thanks and Regards,
Asha Seshagiri

On Mon, Jun 8, 2015 at 8:37 AM, Nathan Reller nathan.s.rel...@gmail.com
wrote:

 Asha,

 When you say you want your key in ASCII does that also mean putting
 the bytes in hex or base64 format? Isn't ASCII only 7 bits?

 -Nate

 On Mon, Jun 8, 2015 at 1:17 AM, Asha Seshagiri asha.seshag...@gmail.com
 wrote:
  Thanks John for your response.
  I am aware that application/octet-stream works for the retrieval of
 secret .
  We are utilizing the key generated from Barbican in our AES encryption
  algorithm . Hence we  wanted the response in text/plain format from
 Barbican
  since AES encryption algorithm would need the key of ASCII format which
  should be either 16,24 or 32 bytes.
 
  The AES encyption algorithms would not accept the binary format and even
 if
  binary  is converted into ascii , encoding is failing for few of the keys
  because some characters exceeeds the range of ASCII and for some keys
 after
  encoding length exceeds 32 bytes  which is the maximum length for doing
 AES
  encryption.
 
  Would like to know the reason behind Barbican not supporting the
 retrieval
  of the secret in text/plain format generated from the order resource in
  plain/text format.
 
  Thanks and Regards,
  Asha Seshagiri
 
  On Sun, Jun 7, 2015 at 11:43 PM, John Wood john.w...@rackspace.com
 wrote:
 
  Hello Asha,
 
  The AES type key should require an application/octet-stream Accept
 header
  to retrieve the secret as it is a binary type. Please replace
 ‘text/plain’
  with ‘application/octet-stream’ in your curl calls below.
 
  Thanks,
  John
 
 
  From: Asha Seshagiri asha.seshag...@gmail.com
  Date: Friday, June 5, 2015 at 2:42 PM
  To: openstack-dev openstack-dev@lists.openstack.org
  Cc: Douglas Mendizabal douglas.mendiza...@rackspace.com, John Wood
  john.w...@rackspace.com, Reller, Nathan S. 
 nathan.rel...@jhuapl.edu,
  Adam Harwell adam.harw...@rackspace.com, Paul Kehrer
  paul.keh...@rackspace.com
  Subject: Re: Barbican : Retrieval of the secret in text/plain format
  generated from Barbican order resource
 
  Hi All ,
 
  I am currently working on use cases for database and file Encryption.It
 is
  really important for us to know since my Encryption use case would be
 using
  the key generated by Barbican through order resource as the key.
  The encyption algorithms would not accept the binary format and even if
  converted into ascii , encoding is failing for few of the keys because
 some
  characters exceeeds the range of ASCII and for some key  after encoding
  length exceeds 32 bytes  which is the maximum length for doing AES
  encryption.
  It would be great if  someone could respond to the query ,since it would
  block my further investigations on Encryption usecases using Babrican
 
  Thanks and Regards,
  Asha Seshagiri
 
 
  On Wed, Jun 3, 2015 at 3:51 PM, Asha Seshagiri 
 asha.seshag...@gmail.com
  wrote:
 
  Hi All,
 
  Unable to retrieve the secret in text/plain format  generated from
  Barbican order resource
 
  Please find the curl command and responses for
 
  Order creation with payload content type as text/plain :
 
  [root@barbican-automation ~]# curl -X POST -H
  'content-type:application/json' -H
  X-Auth-Token:9b211b06669249bb89665df068828ee8 \
   -d '{type : key, meta: {name: secretname2,algorithm:
 aes,
   bit_length:256,  mode: cbc, payload_content_type:
 text/plain}}'
   -k https://169.53.235.102:9311/v1/orders
 
  {order_ref:
  
 https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
 }
 
  Retrieval of the order by ORDER ID in order to get to know the secret
  generated by Barbican
 
  [root@barbican-automation ~]# curl -H 'Accept: application/json' -H
  X-Auth-Token:9b211b06669249bb89665df068828ee8 \
   -k
  
 https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
  {status: ACTIVE, sub_status: Unknown, updated:
  2015-06-03T19:08:13, created: 2015-06-03T19:08:12, order_ref:
  
 https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
 ,
  secret_ref:
  
 https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e
 ,
  creator_id: cedd848a8a9e410196793c601c03b99a, meta: {name:
  secretname2, algorithm: aes, payload_content_type:
 text/plain,
  mode: cbc, bit_length: 256, expiration: null},
 sub_status_message:
  Unknown, type: key}[root@barbican-automation ~]#
 
 
  Retrieval of the secret failing with the content type text/plain
 
  [root@barbican-automation ~]# curl -H 'Accept:text/plain' -H
  X-Auth-Token:9b211b06669249bb89665df068828ee8 -k
 
 https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e/payload
  {code: 500, description: Secret payload 

Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-08 Thread Jay Pipes

On 06/05/2015 10:56 AM, Neil Jerram wrote:

I guess that's why the GNU autoconf/configure system has always advised
testing for particular wanted features, instead of looking at versions
and then relying on carnal knowledge to know what those versions imply.


I'm pretty sure you meant tribal knowledge, not carnal knowledge :)

-jay

http://en.wikipedia.org/wiki/Carnal_knowledge
http://en.wikipedia.org/wiki/Tribal_knowledge

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack][Neutron] PHYSICAL_NETWORK vs. PUBLIC_PHYSICAL_NETWORK - rant

2015-06-08 Thread Jay Pipes

On 06/08/2015 12:16 PM, Sean M. Collins wrote:

I'd expect the above, where we pull the setting from PHYSICAL_NETWORK.
Then perhaps in the lib/neutron-legacy define

PHYSICAL_NETWORK=${PHYSICAL_NETWORK:-public}


Based on your original post, wouldn't you want the above to be this instead?

PHYSICAL_NETWORK=${PHYSICAL_NETWORK:-default}

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-08 Thread Chris Friesen

On 06/08/2015 12:30 PM, Robert Collins wrote:

On 9 June 2015 at 03:50, Chris Friesen chris.frie...@windriver.com wrote:

On 06/07/2015 04:22 PM, Robert Collins wrote:



Hi, original reporter here.

There's no LB involved.  The issue was noticed in a test lab that is tight
on disk space.  When an instance failed to boot the person using the lab
tried to delete some images to free up space, at which point it was noticed
that space didn't actually free up.  (For at least half an hour, exact time
unknown.)

I'm more of a nova guy, so could you elaborate a bit on the GC?  Is
something going to delete the ChunkedFile object after a certain amount of
inactivity? What triggers the GC to run?


Ok, updating my theory...  I'm not super familiar with glances
innards, so I'm going to call out my assumptions.

The ChunkedFile object is in the Nova process. It reads from a local
file, so its the issue - and it has a handle onto the image because
glance arranged for it to read from it.


As I understand it, the ChunkedFile object is in the glance-api process.  (At 
least, it's the glance-api process that is holding the open file descriptor.)




Anyhow - to answer your question: the iterator is only referenced by
the for loop, nothing else *can* hold a reference to it (without nasty
introspection hacks) and so as long as the iterator has an appropriate
try:finally:, which the filesystem ChunkedFile one does- the file will
be closed automatically.


From what I understand, the iterator (in the glance-api process) normally 
breaks out of the while loop once the whole file has been read and the read() 
call returns an empty string.


It's not clear to me how an error in the nova process (which causes it to stop 
reading the file from glance-api)  will cause glance-api to break out of the 
while loop in ChunkedFile.__iter__().
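
(A toy model of the mechanics under discussion - it assumes nothing about
glance beyond "a generator with try/finally that reads chunks", and it only
illustrates the in-process case: the finally block, and hence the close, runs
when the consuming loop exhausts the generator or the generator object is
closed/garbage-collected. It says nothing about when the WSGI layer in
glance-api actually closes the iterator after a remote client stops reading,
which is the open question here.)

    # Toy model of a ChunkedFile-style iterator; not glance code.
    def chunked_file(path, chunk_size=4):
        fobj = open(path, 'rb')
        try:
            while True:
                chunk = fobj.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            fobj.close()
            print('file closed')

    with open('/tmp/blob', 'wb') as f:
        f.write(b'0123456789')

    it = chunked_file('/tmp/blob')
    next(it)        # consumer reads one chunk...
    del it          # ...then drops the iterator without finishing; finalizing
                    # the generator runs the finally block and prints 'file closed'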


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] MOS 6.1 Release - Hard Code Freeze in action

2015-06-08 Thread Eugene Bogdanov

Hello everyone,

Please be informed that Hard Code Freeze for MOS 6.1 Release is 
officially in action. As mentioned earlier, stable/6.1 branch was 
created for the following repos:


  fuel-astute
  fuel-docs
  fuel-library
  fuel-main
  fuel-ostf
  fuel-qa
  fuel-web

Bug reporters, please do not forget to target both the master and 6.1 
(stable/6.1) milestones from now on. Also, please remember that all fixes 
for the stable/6.1 branch should first be applied to master and then 
cherry-picked to stable/6.1. As always, please ensure that you do NOT 
merge changes to the stable branch first. It always has to be a backport 
with the same Change-Id. Please see more on this at [1].


--
EugeneB

[1] 
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-08 Thread Robert Collins
On 9 June 2015 at 03:48, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-06-08 10:54:32 +1200 (+1200), Robert Collins wrote:
 On 8 June 2015 at 10:14, Alan Pevec ape...@gmail.com wrote:
  2015-06-06 19:08 GMT+02:00 Ian Cordasco ian.corda...@rackspace.com:
  Not exactly. PBR/OpenStack follow SemVer and 2015.1.0.38 isn't valid
  SemVer (http://semver.org/)
 
  Right, so semver compatible versioning on stable/kilo would be 2015.1.N
  but PBR doesn't support that.

 PBR supports it fine. Whats the issue?

 Having PBR automatically increment the N in 2015.1.N for each patch
 applied on the branch rather than needing it to be tagged.

I don't think pbr needs to do that to have the proposal work. We could
tag from cron and use the actual tag pipeline to do release-related
stuff.

E.g. it's layer conflation to have pbr start assuming that untagged
code isn't a release.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-08 Thread Thomas Goirand
On 06/08/2015 10:39 AM, James Page wrote:
 On 02/06/15 23:41, James E. Blair wrote:
 3) What are the plans for repositories and their contents?
 
 What repos will be created, and what will be in them.  When will
 new ones be created, and is there any process around that.
 
 Having taken some time to think about this over the weekend, I'm keen
 to ensure that any packaging repositories that move upstream are
 packaging for OpenStack and other OpenStack umbrella projects.
 
 Thomas - how many of the repositories under the pkg-openstack team in
 Debian fall into this category - specifically projects under
 /openstack or /stackforge namespaces?
 
 I don't think we should be upstreaming packaging for the wider
 OpenStack dependency chain - the Debian Python modules team is a much
 larger team of interested contributors and better place for this sort
 of work.

Ok, let's work this list of packages out.

The full list of packages currently maintained within the Debian
OpenStack PKG team is here:
https://qa.debian.org/developer.php?login=openstack-devel%40lists.alioth.debian.org

I have sorted this list into categories, and sorted these categories in
an increasing order of likelihood to be maintained in upstream gerrit.

On the below list, I believe we should have in upstream gerrit, at least:
- OpenStack maintained libraries and clients
- Debian-specific packages (because those are the tools needed for building and
running a Debian-package-based CI)
- server packages

All the 3rd party Python modules could either stay within the PKG
OpenStack Debian team, or move to the DPMT (Debian Python Module Team).
Though I will *refuse* to have these packages switched from Git to SVN,
so we will have to wait until the DPMT finishes the switch. I've heard
that Tumbleweed (that's a nickname...) is close to having this migration
finished, though.

Also, probably it would make sense to keep some of the tooling within
the PKG OpenStack group. I'm thinking about all the unit test stuff,
like testr, subunit, and all of its dependencies (testtools,
testscenarios, etc.). Maybe it's a good fit for upstream packaging too?
Please voice your opinion here.

I would then like to keep side packages and Key dependencies within
the PKG OpenStack group in alioth.debian.org.

This overall means that we'd push 107 repositories to Gerrit, and even
119 if we include TripleO. And of course, this list would grow over time
(because that's OpenStack you know... things always grow, and never
shrink...).

It took me some time to produce this list below. I hope that's useful.

Cheers,

Thomas Goirand (zigo)

side packages (7):

cobbler
ftp-cloudfs
git-review
ntpstat
q-text-as-data
sftpcloudfs
sheepdog

Key dependencies (4):
-
alembic
migrate
novnc
rabbitmq-server

3rd party Python modules (101):
---
cliff-tablib
factory-boy
python-aioeventlet
python-autobahn
python-cloudfiles
python-coffin
python-colander
python-concurrent.futures
python-couleur
python-crcmod
python-croniter
python-daemonize
python-ddt
python-django-appconf
python-django-bootstrap-form
python-django-compressor
python-django-discover-runner
python-django-pyscss
python-dogpile.cache
python-dogpile.core
python-eventlet
python-extras
python-falcon
python-gabbi
python-greenio
python-happybase
python-httpretty
python-hurry.filesize
python-ibm-db-sa
python-invocations
python-invoke
python-jingo
python-json-patch
python-json-pointer
python-jsonpath-rw
python-jsonrpclib
python-jsonschema
python-kafka
python-ldappool
python-lesscpy
python-logutils
python-misaka
python-mockito
python-mox3
python-nose-exclude
python-nose-parameterized
python-nose-testconfig
python-nose-timer
python-nosehtmloutput
python-pecan
python-pint
python-posix-ipc
python-proboscis
python-protorpc-standalone
python-pyghmi
python-pygit2
python-pykmip
python-pymemcache
python-pymysql
python-pysaml2
python-pyvmomi
python-rednose
python-requestbuilder
python-requests-kerberos
python-requests-mock
python-retrying
python-rfc3986
python-rtslib-fb
python-rudolf
python-scciclient
python-seamicroclient
python-semantic-version
python-semver
python-sockjs-tornado
python-sphinxcontrib.plantuml
python-steadymark
python-sure
python-sysv-ipc
python-tablib
python-tasklib
python-termcolor
python-termstyle
python-testscenarios
python-trollius
python-txaio
python-warlock
python-wrapt
python-wsgi-intercept
python-wsme
python-xmlbuilder
python-xvfbwrapper
python-yaql
python-zake
sphinxcontrib-docbookrestapi
sphinxcontrib-httpdomain
sphinxcontrib-pecanwsme
sphinxcontrib-programoutput
spice-html5
subunit
testresources
websockify

TripleO (12):
-
python-dib-utils
python-diskimage-builder
python-os-apply-config
python-os-client-config
python-os-cloud-config
python-os-collect-config
python-os-net-config
python-os-refresh-config
tripleo-heat-templates
tripleo-image-elements
tuskar
tuskar-ui

server packages (25):
-
barbican
ceilometer
cinder
designate
glance

Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-08 Thread Robert Kukura
From a driver's perspective, it would be simpler, and I think 
sufficient, to change ML2 to call initialize() on drivers after the 
forking, rather than requiring drivers to know about forking.
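
(Pending an ML2 change along those lines, a crude sketch of how a driver could
at least detect that the fork has happened, by comparing PIDs. This is purely
illustrative, not an existing Neutron hook, and it does not solve the other
half of the problem, i.e. electing a single worker to do the work.)

    # Illustrative only -- not an existing ML2 hook.
    import os

    class ForkAwareDriver(object):
        def initialize(self):
            # Called in the parent before the API workers fork: record the
            # PID and do no network-facing work yet.
            self._init_pid = os.getpid()
            self._started = False

        def _maybe_start_sync(self):
            forked = os.getpid() != self._init_pid
            if forked and not self._started:
                self._started = True
                self._start_background_sync()
            # Note: with api_workers == 0 there is no fork at all, so a real
            # driver would need to handle that case separately.

        def create_port_postcommit(self, context):
            self._maybe_start_sync()
            # ... normal port handling ...

        def _start_background_sync(self):
            pass   # the audit / green-thread work described above starts here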


-Bob

On 6/8/15 2:59 PM, Armando M. wrote:

Interestingly, [1] was filed a few moments ago:

[1] https://bugs.launchpad.net/neutron/+bug/1463129

On 2 June 2015 at 22:48, Salvatore Orlando sorla...@nicira.com 
wrote:


I'm not sure if you can test this behaviour on your own because it
requires the VMware plugin and the eventlet handling of backend
response.

But the issue was manifesting and had to be fixed with this
mega-hack [1]. The issue was not about several workers executing
the same code - the loopingcall was always started on a single
thread. The issue I witnessed was that the other API workers just
hang.

There's probably something we need to understand about how
eventlet can work safely with an os.fork (I just think they're not
really made to work together!).
Regardless, I did not spend too much time on it, because I thought
that the multiple workers code might have been rewritten anyway by
the pecan switch activities you're doing.

Salvatore


[1] https://review.openstack.org/#/c/180145/

On 3 June 2015 at 02:20, Kevin Benton blak...@gmail.com
wrote:

Sorry about the long delay.

Even the LOG.error(KEVIN PID=%s network response: %s %
(os.getpid(), r.text)) line?  Surely the server would have
forked before that line was executed - so what could prevent
it from executing once in each forked process, and hence
generating multiple logs?

Yes, just once. I wasn't able to reproduce the behavior you
ran into. Maybe eventlet has some protection for this? Can you
provide small sample code for the logging driver that does
reproduce the issue?

On Wed, May 13, 2015 at 5:19 AM, Neil Jerram
neil.jer...@metaswitch.com
wrote:

Hi Kevin,

Thanks for your response...

On 08/05/15 08:43, Kevin Benton wrote:

I'm not sure I understand the behavior you are seeing.
When your
mechanism driver gets initialized and kicks off
processing, all of that
should be happening in the parent PID. I don't know
why your child
processes start executing code that wasn't invoked.
Can you provide a
pointer to the code or give a sample that reproduces
the issue?


https://github.com/Metaswitch/calico/tree/master/calico/openstack

Basically, our driver's initialize method immediately
kicks off a green thread to audit what is now in the
Neutron DB, and to ensure that the other Calico components
are consistent with that.

I modified the linuxbridge mech driver to try to
reproduce it:
http://paste.openstack.org/show/216859/

In the output, I never received any of the init code
output I added more
than once, including the function spawned using eventlet.


Interesting.  Even the LOG.error(KEVIN PID=%s network
response: %s % (os.getpid(), r.text)) line?  Surely the
server would have forked before that line was executed -
so what could prevent it from executing once in each
forked process, and hence generating multiple logs?

Thanks,
Neil

The only time I ever saw anything executed by a child
process was actual
API requests (e.g. the create_port method).




On Thu, May 7, 2015 at 6:08 AM, Neil Jerram
neil.jer...@metaswitch.com
mailto:neil.jer...@metaswitch.com
mailto:neil.jer...@metaswitch.com
mailto:neil.jer...@metaswitch.com wrote:

Is there a design for how ML2 mechanism drivers
are supposed to cope
with the Neutron server forking?

What I'm currently seeing, with api_workers = 2, is:

- my mechanism driver gets instantiated and
initialized, and
immediately kicks off some processing that
involves communicating
over the network

- the Neutron server process then forks into
multiple copies

- multiple copies of my driver's network
processing then continue,
and interfere badly with each other :-)

I think what I should do is:

- wait 

Re: [openstack-dev] [Neutron] Missing allowed address pairs?

2015-06-08 Thread Kevin Benton
It looks like security groups aren't enabled. Make sure you don't have a
config option setting 'enable_security_group' to False.

Check the startup log, you should see something like "Driver configuration
doesn't match with enable_security_group" and "Disabled
allowed-address-pairs extension".
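
For reference, the knob lives in the [securitygroup] section of the ML2/agent configuration; a sketch of an enabled setup (values here are illustrative, adjust to your deployment):

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver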

On Mon, Jun 8, 2015 at 7:50 AM, Ken D'Ambrosio k...@jots.org wrote:

 You have better chances of getting an answer if you asked the -dev list
 and add  [Neutron] to the subject (done here).

 That said, can you tell us a bit more about your deployment? You can also
 hop
 on #openstack-neutron on Freenode to look for neutron developers who can
 help
 you more interactively.

 Cheers,
 Armando


 Hi.  As per Armando's suggestion, e-mailing openstack-dev for advice, and
 have pasted files and command output, below.  Our Ubuntu-based Openstack
 installations do not seem to be enabling allowed address pairs.  It seems
 that we (or Ubuntu) are disabling them somehow, and we were wondering if
 you might have advice on where to look.  If there's any additional
 information you need, please let us know.

 Thanks kindly,

 -Ken

  files and output below 


 ubuntu@magnificent-hill:~$ neutron ext-list
 +---+---+
 | alias | name  |
 +---+---+
 | service-type  | Neutron Service Type Management   |
 | ext-gw-mode   | Neutron L3 Configurable external gateway mode |
 | l3_agent_scheduler| L3 Agent Scheduler|
 | lbaas_agent_scheduler | Loadbalancer Agent Scheduler  |
 | external-net  | Neutron external network  |
 | binding   | Port Binding  |
 | metering  | Neutron Metering  |
 | agent | agent |
 | quotas| Quota management support  |
 | dhcp_agent_scheduler  | DHCP Agent Scheduler  |
 | multi-provider| Multi Provider Network|
 | fwaas | Firewall service  |
 | router| Neutron L3 Router |
 | vpnaas| VPN service   |
 | extra_dhcp_opt| Neutron Extra DHCP opts   |
 | provider  | Provider Network  |
 | lbaas | LoadBalancing service |
 | extraroute| Neutron Extra Route   |
 +---+---+


 neutron.conf:
 ##
 # [ WARNING ]
 # Configuration file maintained by Juju. Local changes may be overwritten.
 ##
 [DEFAULT]
 verbose = False
 debug = False
 lock_path = /var/lock/neutron
 core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
 rabbit_userid = neutron
 rabbit_virtual_host = openstack
 rabbit_password = myhashhere
 rabbit_host = 10.10.3.6
 control_exchange = neutron
 notification_driver = neutron.openstack.common.notifier.list_notifier
 list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier
 [agent]
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
  end neutron.conf ---

 ml2_conf.ini:

 ###
 # [ WARNING ]
 # Configuration file maintained by Juju. Local changes may be overwritten.

 ###
 [ml2]
 type_drivers = gre,vxlan,vlan,flat
 tenant_network_types = gre,vxlan,vlan,flat
 mechanism_drivers = openvswitch,l2population

 [ml2_type_gre]
 tunnel_id_ranges = 1:1000

 [ml2_type_vxlan]
 vni_ranges = 1001:2000

 [ml2_type_vlan]
 network_vlan_ranges = physnet1:1000:2000

 [ml2_type_flat]
 flat_networks =

 [ovs]
 enable_tunneling = True
 local_ip = 10.10.3.8
 bridge_mappings = physnet1:br-data

 [agent]
 tunnel_types = gre
 l2_population = False


 [securitygroup]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  end ml2_conf.ini ---

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage 

[openstack-dev] [neutron] New RFE process and specs for Liberty

2015-06-08 Thread Kyle Mestery
tl;dr: Neutron has a new process for filing feature requests and dealing
with specs. See [1] and [2] for more specific details. We no longer have
spec deadlines. Existing specs will be reviewed until Liberty-1.

The long version:
Before the summit, a bunch of Neutron community members began the process
of evaluating where we were with regards to the specs process and landing
new features in Neutron. It became apparent the process we were using
wasn't working for Neutron and we decided to make changes. The new process
is built around RFEs (Request for Feature Enhancements). We are now
focusing on Use Cases and trying to tease out the "What" instead of the
"How" (thanks Maru) when evaluating these. We're also moving more detailed
documentation (still required) in-tree instead of in neutron-specs.

Please see the in-tree devref documentation [1] and a blog I wrote [2] for
more details on this process. The bottom line is that we'd like to improve
the process for adding features and open it up more to operators. The
current process required a deep understanding of Neutron to submit a useful
spec with a chance of landing. The new process allows for anyone to file a
feature request without understanding the code at a deep level.

I'll reiterate that we no longer have deadlines to file RFEs. It doesn't
mean you can file one in August and expect it to have code landed in the
Liberty release.

Thanks!
Kyle

[1]
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
[2] http://www.siliconloons.com/posts/2015-06-01-new-neutron-rfe-process/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-08 Thread Russell Bryant
Right, I think there are use cases for both.  I don't think it's a huge
burden to have to know about it.  I think it's actually quite important
to understand when the initialization happens.

-- 
Russell Bryant

On 06/08/2015 05:02 PM, Kevin Benton wrote:
 This depends on what initialize is supposed to be doing. If it's just a
 one-time sync with a back-end, then I think calling it once in each
 child process might not be what we want.
 
 I left a comment on Terry's patch. I think we should just use the
 callback manager to have a pre-fork and post-fork event to let
 drivers/plugins do whatever is appropriate for them.
 
 On Mon, Jun 8, 2015 at 1:00 PM, Robert Kukura kuk...@noironetworks.com
 mailto:kuk...@noironetworks.com wrote:
 
 From a driver's perspective, it would be simpler, and I think
 sufficient, to change ML2 to call initialize() on drivers after the
 forking, rather than requiring drivers to know about forking.
 
 -Bob
 
 
 On 6/8/15 2:59 PM, Armando M. wrote:
 Interestingly, [1] was filed a few moments ago:

 [1] https://bugs.launchpad.net/neutron/+bug/1463129

 On 2 June 2015 at 22:48, Salvatore Orlando sorla...@nicira.com
 mailto:sorla...@nicira.com wrote:

 I'm not sure if you can test this behaviour on your own
 because it requires the VMware plugin and the eventlet
 handling of backend response.

 But the issue was manifesting and had to be fixed with this
 mega-hack [1]. The issue was not about several workers
 executing the same code - the loopingcall was always started
 on a single thread. The issue I witnessed was that the other
 API workers just hang.

 There's probably something we need to understand about how
 eventlet can work safely with a os.fork (I just think they're
 not really made to work together!).
 Regardless, I did not spend too much time on it, because I
 thought that the multiple workers code might have been
 rewritten anyway by the pecan switch activities you're doing.

 Salvatore


 [1] https://review.openstack.org/#/c/180145/

 On 3 June 2015 at 02:20, Kevin Benton blak...@gmail.com
 mailto:blak...@gmail.com wrote:

 Sorry about the long delay.

 Even the LOG.error(KEVIN PID=%s network response: %s %
 (os.getpid(), r.text)) line?  Surely the server would have
 forked before that line was executed - so what could
 prevent it from executing once in each forked process, and
 hence generating multiple logs?

 Yes, just once. I wasn't able to reproduce the behavior
 you ran into. Maybe eventlet has some protection for this?
 Can you provide small sample code for the logging driver
 that does reproduce the issue? 

 On Wed, May 13, 2015 at 5:19 AM, Neil Jerram
 neil.jer...@metaswitch.com
 mailto:neil.jer...@metaswitch.com wrote:

 Hi Kevin,

 Thanks for your response...

 On 08/05/15 08:43, Kevin Benton wrote:

 I'm not sure I understand the behavior you are
 seeing. When your
 mechanism driver gets initialized and kicks off
 processing, all of that
 should be happening in the parent PID. I don't
 know why your child
 processes start executing code that wasn't
 invoked. Can you provide a
 pointer to the code or give a sample that
 reproduces the issue?


 
 https://github.com/Metaswitch/calico/tree/master/calico/openstack

 Basically, our driver's initialize method immediately
 kicks off a green thread to audit what is now in the
 Neutron DB, and to ensure that the other Calico
 components are consistent with that.

 I modified the linuxbridge mech driver to try to
 reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init
 code output I added more
 than once, including the function spawned using
 eventlet.


 Interesting.  Even the LOG.error(KEVIN PID=%s network
 response: %s % (os.getpid(), r.text)) line?  Surely
 the server would have forked before that line was
 executed - so what could prevent it from executing
 once in each forked process, and hence generating
 multiple logs?

 Thanks,
 Neil

 The only time I ever saw anything executed by a
 child process 

Re: [openstack-dev] [Security] [Bandit] Using multiprocessing/threading to speed up analysis

2015-06-08 Thread Ian Cordasco


On 6/8/15, 12:17, Finnigan, Jamie jamie.finni...@hp.com wrote:

On 6/8/15, 8:26 AM, Ian Cordasco ian.corda...@rackspace.com wrote:

Hey everyone,

I drew up a blueprint
(https://blueprints.launchpad.net/bandit/+spec/use-threading-when-running-checks) to add the ability to use multiprocessing (or threading) to
Bandit.
This essentially means that each thread will be fed a file and analyze
it and return the results. (A file will only ever be analyzed by one
thread.)

This has lead to significant speed improvements in Flake8 when running
against a project like Nova and I think the same improvements could be
made to Bandit.

We skipped parallel processing earlier in Bandit development to keep
things simple, but if we can speed up execution against the bigger code
bases with minimal additional complexity (still needs to be 'easy' to add
new checks) then that would be a nice win.

I don't think we'd lose anything by processing in parallel vs. serial.  If
we do ever add additional state tracking more broadly than per-file,
checks would need to be run at the end of execution anyway to take
advantage of the full state.

Yeah, I don't see this affecting the current state of checks or plugins.
So unless something drastically changes in how we write them, this
shouldn't affect them.


I'd love some feedback on the following points:

1. Should this be on by default?

   Technically, this is backwards incompatible (unless we decide to order
the output before printing results) but since we're still in the 0.x
release series of Bandit, SemVer allows backwards incompatible releases.
I
don't know if we want to take advantage of that or not though.

It looks like flake8 default behavior is off ("1 thread"), which makes
sense to me...

It's actually "auto", which is whatever number of CPUs you have:
https://gitlab.com/pycqa/flake8/blob/master/flake8/engine.py#L63
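
(Roughly, resolving an 'auto' value like that is a couple of lines -- a sketch, not Flake8's or Bandit's actual code:)

# Illustrative only: map an 'auto' jobs setting to the local CPU count.
import multiprocessing

def resolve_jobs(jobs):
    if jobs == 'auto':
        try:
            return multiprocessing.cpu_count()
        except NotImplementedError:
            return 1  # fall back to serial if the platform can't tell us
    return int(jobs)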


2. Is output ordering significant/important to people?

Output ordering is important - output for repeated runs against an
unchanged code base should be predictable/repeatable. We also want to
continue to support aggregating output by filename or by issue. Under this
model though, we'd just collect results then order/output at completion
rather than during execution.

Right. Ordering would be done after execution.


3. If this is off by default, should the flag accept a special value,
e.g., 'auto', to tell Bandit to always use the number of CPUs present on
the machine?

That seems like a tidy way to do things.


Overall, this feels like a nice evolutionary step.  Not opposed (in fact,
I'd support it), but would want to make sure it doesn't overly complicate
things.

I can probably work on a PoC implementation this weekend to give y'all a
better idea of what it would look like.

What library/ies would you suggest using?  I still like the idea of
keeping as few external dependencies as possible.

No extra dependencies. Flake8's dependencies are also very light. This
will just take advantage of the multiprocessing library that has been
included in the standard library since Python 2.5 or 2.6.
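
(To make the shape of the change concrete, a rough sketch of the per-file fan-out being discussed; run_checks_on_file() is a stand-in, not an existing Bandit entry point:)

# Sketch: feed one file per task to a worker pool, then sort the results
# so output stays stable regardless of worker scheduling.
import multiprocessing

def run_checks_on_file(path):
    # ... parse the file and run every check against it ...
    return (path, [])  # (filename, list of issues)

def run(files, jobs):
    if jobs <= 1:
        results = [run_checks_on_file(f) for f in files]
    else:
        pool = multiprocessing.Pool(processes=jobs)
        results = pool.map(run_checks_on_file, files)
        pool.close()
        pool.join()
    return sorted(results, key=lambda r: r[0])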



Jamie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-08 Thread David Chadwick
I agree, there are two very different models for getting a third party
to do something for you: delegation of authority and sub-contracting.
Your examples of the kids at daycare and buying something online neatly
show up the differences between the two models. So let's be sure we use
the right model for the right task. Using the wrong model will lead to
either over-complication (DoA when subcontracting is sufficient) or
security vulnerability (subcontracting when DoA is needed).

regards

David

On 08/06/2015 15:10, Adam Young wrote:
 On 06/06/2015 06:00 AM, David Chadwick wrote:
 In order to do this fully, you will need to work out what all the
 possible supply chains are in OpenStack, so that when any customer
 starts any chain of events, there is sufficient information in the
 message passed to the supplier that allows that supplier to order from
 its supplier. This has to be ensured all the way down the chain, so that
 important information that one supplier needs was not omitted by a
 previous supplier higher up the chain. I suspect that the name of the
 initial requestor (at least) will need to be passed all the way down the
 chain.
 Yes, I think so.  This is in keeping with how I understand we need to
 unify delegation across Keystone constructs.
 
 1.  Tokens live too long.  They should be short and ephemeral, and if a
 user needs a new one to complete an action, getting one should be
 trivial.  They should be bound to the endpoint that is the target of the
 operation.  5 minutes makes sense as a length of life, as that is
 essentially "now" when you factor in clock skew. Revocations on tokens
 do not make sense.
 
 2.  Delegations are long-lived affairs.  If anything is going to take
 longer than the duration of the token, it should be in the context of a
 delegation, and the user should re-authenticate to prove identity.
 Delegations need to be focused on workflow, not "all operations the
 user can do", which means that the Glance case discussed here is a good
 start at documenting "what do you need to get this job done?"
 
 
 We need to keep it light (not fill up a database) for normal operations,
 but maintain the chain of responsibility on a given operation.  Trusts
 are the closest thing we have to this model.
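
(For context, this kind of long-lived delegation is what the v3 OS-TRUST API creates today; a rough sketch using python-requests, where the endpoint, token and every ID are placeholders:)

# Sketch only: create a trust so a service user can act on the trustor's
# project with a limited role set until it expires.
import json
import requests

body = {"trust": {
    "trustor_user_id": "TRUSTOR_ID",
    "trustee_user_id": "GLANCE_SERVICE_USER_ID",
    "project_id": "PROJECT_ID",
    "impersonation": True,
    "expires_at": "2015-07-01T00:00:00Z",
    "roles": [{"name": "Member"}],
}}
resp = requests.post("http://keystone:5000/v3/OS-TRUST/trusts",
                     headers={"X-Auth-Token": "TRUSTOR_TOKEN",
                              "Content-Type": "application/json"},
                     data=json.dumps(body))
print(resp.json()["trust"]["id"])  # hand this trust id to the consumer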
 
 In the supply chain example, if one company exchanges a good with
 another company, they don't care about the end user, because there is no
 real connection between the good and the user yet;  if there is a
 problem, the original manufacturer can produce another car identical to
 the first and replace it without the user being any the wiser.
 
 Contrast this with picking kids up from daycare:  as a parent, I have to
 sign a form saying that my mother-in-law will be picking up the kids on
 a specific day.  My Mother-in-law is not authorized to sign a form that
 will allow her friend to pick the kids up.  My kids are very, very
 specific to me, and should be carefully handed off from approved
 supervisor to approved supervisor.
 
 Fetching an image from Glance may well be a casual or a protected
 operation, depending on the image.
 
 Casual if it is a global image: anyone in the world can do it, so no big
 deal.
 
 Protected if, for example, the image is pre-populated with an enrollment
 secret.  Only the owners should be able to get at it
 
 
 

 regards

 David

 On 06/06/2015 03:25, Bhandaru, Malini K wrote:
 Continuing with David’s example and the need to control access to a
 Swift object that Adam points out,

  
 How about using the Glance token from glance-API service to
 glance-registry but carry along extra data in the call, namely user-id,
 domain, and public/private information, so the object can be access
 controlled.

  
 Alternately and encapsulating token

  
 Glance-token user-token -- keeping it simple, only two levels.  This
 protects from on the cusp expired user-tokens.

 Could check user quota before attempting the storage.

  
 Should user not have paid dues, Glance knows which objects to garbage
 collect!

  
 Regards

 Malini

  
 *From:*Adam Young [mailto:ayo...@redhat.com]
 *Sent:* Friday, June 05, 2015 4:11 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [Glance][Keystone] Glance and trusts

  
 On 06/05/2015 10:39 AM, Dolph Mathews wrote:

  
  On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick
  d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:

  I did suggest another solution to Adam whilst we were in
 Vancouver, and
  this mirrors what happens in the real world today when I order
 something
  from a supplier and a whole supply chain is involved in creating
 the end
  product that I ordered. This is not too dissimilar to a user
 requesting
  a new VM. Essentially each element in the supply chain trusts
 the two
  adjacent elements. It has contracts with both its customers and its
  suppliers to define the obligations of each party. When
 something is
  ordered from it, it trusts 

[openstack-dev] [Neutron] API Extensions - Namespace URLs

2015-06-08 Thread Sean M. Collins
Hi,

Within each API extension in the neutron tree, there is a method:

def get_namespace(cls):

Which returns a string, containing a URL. 

A quick survey:

agent.py:def get_namespace(cls):
agent.py-return http://docs.openstack.org/ext/agent/api/v2.0;
--
allowedaddresspairs.py:def get_namespace(cls):
allowedaddresspairs.py-return 
http://docs.openstack.org/ext/allowedaddresspairs/api/v2.0;
--
dhcpagentscheduler.py:def get_namespace(cls):
dhcpagentscheduler.py-return 
http://docs.openstack.org/ext/dhcp_agent_scheduler/api/v1.0;
--
dvr.py:def get_namespace(cls):
dvr.py-return (http://docs.openstack.org/ext/;
--
external_net.py:def get_namespace(cls):
external_net.py-return 
http://docs.openstack.org/ext/neutron/external_net/api/v1.0;
--
external_net.py:return {l3.L3.get_alias(): l3.L3.get_namespace()}
--
extra_dhcp_opt.py:def get_namespace(cls):
extra_dhcp_opt.py-return 
http://docs.openstack.org/ext/neutron/extra_dhcp_opt/api/v1.0;
--
extraroute.py:def get_namespace(cls):
extraroute.py-return 
http://docs.openstack.org/ext/neutron/extraroutes/api/v1.0;
--
flavor.py:def get_namespace(cls):
flavor.py-return http://docs.openstack.org/ext/flavor/api/v1.0;
--
l3.py:def get_namespace(cls):
l3.py-return http://docs.openstack.org/ext/neutron/router/api/v1.0;
--
l3_ext_gw_mode.py:def get_namespace(cls):
l3_ext_gw_mode.py-return 
http://docs.openstack.org/ext/neutron/ext-gw-mode/api/v1.0;
--
l3_ext_ha_mode.py:def get_namespace(cls):
l3_ext_ha_mode.py-return 
--
l3agentscheduler.py:def get_namespace(cls):
l3agentscheduler.py-return 
http://docs.openstack.org/ext/l3_agent_scheduler/api/v1.0;
--
metering.py:def get_namespace(cls):
metering.py-return 
http://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth#API;
--
multiprovidernet.py:def get_namespace(cls):
multiprovidernet.py-return 
http://docs.openstack.org/ext/multi-provider/api/v1.0;
--
netmtu.py:def get_namespace(cls):
netmtu.py-return http://docs.openstack.org/ext/net_mtu/api/v1.0;
--
portbindings.py:def get_namespace(cls):
portbindings.py-return http://docs.openstack.org/ext/binding/api/v1.0;
--
portsecurity.py:def get_namespace(cls):
portsecurity.py-return 
http://docs.openstack.org/ext/portsecurity/api/v1.0;
--
providernet.py:def get_namespace(cls):
providernet.py-return http://docs.openstack.org/ext/provider/api/v1.0;
--
quotasv2.py:def get_namespace(cls):
quotasv2.py-return 
http://docs.openstack.org/network/ext/quotas-sets/api/v2.0;
--
routerservicetype.py:def get_namespace(cls):
routerservicetype.py-return 
--
securitygroup.py:def get_namespace(cls):
securitygroup.py-# todo
--
servicetype.py:def get_namespace(cls):
servicetype.py-return 
http://docs.openstack.org/ext/neutron/service-type/api/v1.0;
--
subnetallocation.py:def get_namespace(cls):
subnetallocation.py-return (http://docs.openstack.org/ext/;
--
vlantransparent.py:def get_namespace(cls):
vlantransparent.py-return 
http://docs.openstack.org/ext/vlantransparent/api/v1.0;

I believe that they all 404.

A dumb question to start, then progressively smarter questions:

* What is the purpose of the URLs?
* Should the URL point to documentation?
* What shall we do about the actual URLs 404'ing?

Thanks!

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-08 Thread Derek Higgins



On 03/06/15 17:28, Haïkel wrote:

2015-06-03 17:23 GMT+02:00 Thomas Goirand z...@debian.org:

On 06/03/2015 12:41 AM, James E. Blair wrote:

Hi,

This came up at the TC meeting today, and I volunteered to provide an
update from the discussion.


I've just read the IRC logs. And there's one thing I would like to make
super clear.



I still haven't read the logs as we had our post-mortem meeting today,
but I'll try to address your points.


We, ie: Debian  Ubuntu folks, are very much clear on what we want to
achieve. The project has been maturing in our heads for like more than 2
years. We would like that ultimately, only a single set of packages Git
repositories exist. We already worked on *some* convergence during the
last years, but now we want a *full* alignment.

We're not 100% sure what the implementation details will look like for
the core packages (like about using the Debconf interface for
configuring packages), but it will eventually happen. For all the rest
(ie: Python module packaging), which represent the biggest work, we're
already converging and this has zero controversy.

Now, the Fedora/RDO/Suse people jumped on the idea to push packaging on
the upstream infra. Great. That's socially tempting. But technically, I
don't really see the point, apart from some of the infra tooling (super
cool if what Paul Belanger does works for both Deb+RPM). Finally,
indeed, this is not totally baked. But let's please not delay the
Debian+Ubuntu upstream Gerrit collaboration part because of it. We would
like to get started, and for the moment, nobody is approving the
/stackforge/deb-openstack-pkg-tools [1] new repository because we're
waiting on the TC decision.



First, we all agree that we should move packaging recipes (to use a
neutral term)
and reviewing to upstream gerrit. That should *NOT* be delayed.
We (RDO) are even willing to transfer full control of the openstack-packages
namespace on github. If you want to use another namespace, it's also
fine with us.

Then, about the infra/tooling things, it looks like a misunderstanding.
If we don't find an agreement on these topics, it's perfectly fine and
should not
prevent moving to upstream gerrit

So let's break the discussion in two parts.

1. upstream gerrit shared by everyone and get this started asap


In an attempt to document how this would look for RDO, I've started a 
patch [1] that I'll iterate on while this discussion converges on a 
solution that will work.


This patch would result in 80 packaging repositories being pulled into 
gerrit.


I've left a TODO in the commit message to track questions I believe we 
still have to answer, most notably


o exactly what namespace/prefix to use in the naming, I've seen lots of 
opinions but I'm not clear if we have come to a decision


o Should we use rdo in the packaging repo names and not rpm? I think 
this ultimately depends on whether the packaging can be shared between RDO 
and Suse or not.


o Do the RDO packaging repos fall under this project [2], or do they form 
their own group?



[1] https://review.openstack.org/#/c/189497
[2] https://review.openstack.org/#/c/185187





2. continue discussion about infra/tooling within the new project, without
presuming the outcome.

Does it look like a good compromise to you?

Regards,
H.



Cheers,

Thomas Goirand (zigo)

[1] https://review.openstack.org/#/c/185164/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-08 Thread Kevin Benton
This depends on what initialize is supposed to be doing. If it's just a
one-time sync with a back-end, then I think calling it once in each child
process might not be what we want.

I left a comment on Terry's patch. I think we should just use the callback
manager to have a pre-fork and post-fork event to let drivers/plugins do
whatever is appropriate for them.
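
(To sketch what that could look like -- the registry and event names below are illustrative only, not an existing Neutron API:)

# Purely illustrative pre-fork/post-fork notification, so a driver can
# defer its background work until it is running in a single worker.
_callbacks = {'before-fork': [], 'after-fork': []}

def subscribe(event, func):
    _callbacks[event].append(func)

def notify(event):
    for func in _callbacks[event]:
        func()

# A driver that must not start its sync loop until after forking:
def start_background_sync():
    print("starting per-worker sync thread")

subscribe('after-fork', start_background_sync)

# The service would call notify('before-fork') in the parent and
# notify('after-fork') in each child once os.fork() has happened.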

On Mon, Jun 8, 2015 at 1:00 PM, Robert Kukura kuk...@noironetworks.com
wrote:

  From a driver's perspective, it would be simpler, and I think sufficient,
 to change ML2 to call initialize() on drivers after the forking, rather
 than requiring drivers to know about forking.

 -Bob


 On 6/8/15 2:59 PM, Armando M. wrote:

 Interestingly, [1] was filed a few moments ago:

  [1] https://bugs.launchpad.net/neutron/+bug/1463129

 On 2 June 2015 at 22:48, Salvatore Orlando sorla...@nicira.com wrote:

 I'm not sure if you can test this behaviour on your own because it
 requires the VMware plugin and the eventlet handling of backend response.

  But the issue was manifesting and had to be fixed with this mega-hack
 [1]. The issue was not about several workers executing the same code - the
 loopingcall was always started on a single thread. The issue I witnessed
 was that the other API workers just hang.

  There's probably something we need to understand about how eventlet can
 work safely with a os.fork (I just think they're not really made to work
 together!).
 Regardless, I did not spend too much time on it, because I thought that
 the multiple workers code might have been rewritten anyway by the pecan
 switch activities you're doing.

  Salvatore


  [1] https://review.openstack.org/#/c/180145/

 On 3 June 2015 at 02:20, Kevin Benton blak...@gmail.com wrote:

 Sorry about the long delay.

  Even the LOG.error(KEVIN PID=%s network response: %s %
 (os.getpid(), r.text)) line?  Surely the server would have forked before
 that line was executed - so what could prevent it from executing once in
 each forked process, and hence generating multiple logs?

  Yes, just once. I wasn't able to reproduce the behavior you ran into.
 Maybe eventlet has some protection for this? Can you provide small sample
 code for the logging driver that does reproduce the issue?

 On Wed, May 13, 2015 at 5:19 AM, Neil Jerram neil.jer...@metaswitch.com
  wrote:

 Hi Kevin,

 Thanks for your response...

 On 08/05/15 08:43, Kevin Benton wrote:

 I'm not sure I understand the behavior you are seeing. When your
 mechanism driver gets initialized and kicks off processing, all of that
 should be happening in the parent PID. I don't know why your child
 processes start executing code that wasn't invoked. Can you provide a
 pointer to the code or give a sample that reproduces the issue?


 https://github.com/Metaswitch/calico/tree/master/calico/openstack

 Basically, our driver's initialize method immediately kicks off a green
 thread to audit what is now in the Neutron DB, and to ensure that the other
 Calico components are consistent with that.

  I modified the linuxbridge mech driver to try to reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init code output I added
 more
 than once, including the function spawned using eventlet.


  Interesting.  Even the LOG.error(KEVIN PID=%s network response: %s %
 (os.getpid(), r.text)) line?  Surely the server would have forked before
 that line was executed - so what could prevent it from executing once in
 each forked process, and hence generating multiple logs?

 Thanks,
 Neil

  The only time I ever saw anything executed by a child process was
 actual
 API requests (e.g. the create_port method).




  On Thu, May 7, 2015 at 6:08 AM, Neil Jerram 
 neil.jer...@metaswitch.com
  mailto:neil.jer...@metaswitch.com wrote:

 Is there a design for how ML2 mechanism drivers are supposed to
 cope
 with the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and
 immediately kicks off some processing that involves communicating
 over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue,
 and interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off
 that processing, and do that.

 But how can a mechanism driver know when the Neutron server forking
 has happened?

 Thanks,
  Neil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

[openstack-dev] [neutron] - L3 scope-aware security groups

2015-06-08 Thread Kevin Benton
There is a bug in security groups here:
https://bugs.launchpad.net/neutron/+bug/1359523

In the example scenario, it's caused by conntrack zones not being isolated.
But it also applies to the following scenario that can't be solved by zones:

create two networks with same 10.0.0.0/24
create port1 in SG1 on net1 with IP 10.0.0.1
create port2 in SG1 on net2 with IP 10.0.0.2
create port3 in SG2 on net1 with IP 10.0.0.2
create port4 in SG2 on net2 with IP 10.0.0.1

port1 can communicate with port3 because of the allow rule for port2's IP
port2 can communicate with port4 because of the allow rule for port1's IP

The solution will require the security groups processing code to understand
that a member of a security group is not actually reachable by another
member and skip the allow rule for that member.

With the current state of things, it will take a ton of kludgy code to
check for routers and router interfaces to see if two IPs can communicate
without NAT. However, if we end up with the concept of address-scopes, it
just becomes a simple address scope comparison.

Implement address scopes.
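
(As a sketch of the comparison that would enable -- the address_scope attribute below is hypothetical, since scopes don't exist yet:)

# Hypothetical sketch: skip allow rules for members a port cannot actually
# reach without NAT, by comparing address scopes instead of walking routers.
def reachable_without_nat(port_a, port_b):
    return (port_a.address_scope is not None and
            port_a.address_scope == port_b.address_scope)

def build_allow_rules(port, members):
    rules = []
    for member in members:
        if not reachable_without_nat(port, member):
            continue  # the 10.0.0.2 in another scope is a different host
        rules.append({'direction': 'ingress', 'remote_ip': member.ip})
    return rules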


Cheers!
-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] MOS 6.1 Release - Hard Code Freeze in action

2015-06-08 Thread Mike Scherbakov
Eugene, thanks for communication.

Fuel DevOps team, when should we expect changes to builds page [1]? Namely,
I'd like to ensure 6.1 builds are switched to stable/6.1 branch, and new
Jenkins jobs created to make builds from master (7.0).

Everyone - we have to be extremely careful now and ensure that no patch is
missed from stable/6.1 branch, which was intended to be fixed in 6.1. QA
team, count on you to pay special attention to Fix Committed bugs which
became closed after branches were created.

Thanks!

[1] https://ci.fuel-infra.org/

On Mon, Jun 8, 2015 at 10:50 AM, Eugene Bogdanov ebogda...@mirantis.com
wrote:

  Hello everyone,

 Please be informed that Hard Code Freeze for MOS 6.1 Release is
 officially in action. As mentioned earlier, stable/6.1 branch was created
 for the following repos:

   fuel-astute
   fuel-docs
   fuel-library
   fuel-main
   fuel-ostf
   fuel-qa
   fuel-web

 Bug reporters, please do not forget to target both master and 6.1
 (stable/6.1) milestones from now on. Also, please remember that all fixes for
 stable/6.1 branch should first be applied to master and then cherry-picked
 to stable/6.1. As always, please ensure that you do NOT merge changes to
 stable branch first. It always has to be a backport with the same
 Change-ID. Please see more on this at [1]
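
 (For reference, the usual flow for such a backport is roughly the following; the topic name and SHA are placeholders.)

 git fetch origin
 git checkout -b backport/my-fix origin/stable/6.1
 git cherry-pick -x sha-of-the-change-already-merged-to-master
 # keep the original Change-Id footer so Gerrit links the two reviews
 git review stable/6.1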

 --
 EugeneB

 [1]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Neil Jerram
My team has seen a problem that could be related: in a churn test where 
VMs are created and terminated at a constant rate - but so that the 
number of active VMs should remain roughly constant - the size of the 
host and addn_hosts files keeps increasing.


In other words, it appears that the config for VMs that have actually 
been terminated is not being removed from the config file.  Clearly, if 
you have a limited pool of IP addresses, this can eventually lead to the 
problem that you have described.


For your case - i.e. with Icehouse - the problem might be 
https://bugs.launchpad.net/neutron/+bug/1192381.  I'm not sure if the 
fix for that problem - i.e. sending port-create and port-delete 
notifications to DHCP agents even when the server thinks they are down - 
was merged before the Icehouse release, or not.


But there must be at least one other cause as well, because my team was 
seeing this with Juno-level code.


Therefore I, too, would be interested in any other insights about this 
problem.


Regards,
Neil



On 08/06/15 16:26, Daniel Comnea wrote:

Any help, ideas please?

Thx,
Dani

On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com
mailto:comnea.d...@gmail.com wrote:

+ Operators

Much thanks in advance,
Dani




On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com
mailto:comnea.d...@gmail.com wrote:

Hi all,

I'm running IceHouse (build using Fuel 5.1.1) on Ubuntu where
dnsmask version 2.59-4.
I have a very basic network layout where i have a private net
which has 2 subnets

2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24
                                     |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24

and i'm creating VMs via HEAT.
What is happening is that sometimes i get duplicated entries in
[1] and because of that the VM which was spun up doesn't get an ip.
The Dnsmask processes are running okay [2] and i can't see
anything special/ wrong in it.

Any idea why this is happening? Or are you aware of any bugs
around this area? Do you see a problems with having 2 subnets
mapped to 1 private-net?



Thanks,
Dani

[1]
/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts

[2]

nobody5664 1  0 Jun02 ?00:00:08 dnsmasq
--no-hosts --no-resolv --strict-order --bind-interfaces
--interface=tapc9164734-0c --except-interface=lo

--pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid

--dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host

--addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts

--dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
--leasefile-ro --dhcp-authoritative
--dhcp-range=set:tag0,192.168.110.0,static,86400s
--dhcp-range=set:tag1,192.168.111.0,static,86400s
--dhcp-lease-max=512 --conf-file= --server=10.0.0.31
--server=10.0.0.32 --domain=openstacklocal





___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Neil Jerram

Two further thoughts on this:

1. Another DHCP agent problem that my team noticed is that its 
call_driver('reload_allocations') takes a bit of time (to regenerate the 
Dnsmasq config files, and to spawn a shell that sends a HUP signal) - 
enough so that if there is a fast steady rate of port-create and 
port-delete notifications coming from the Neutron server, these can 
build up in DHCPAgent's RPC queue, and then they still only get 
dispatched one at a time.  So the queue and the time delay become longer 
and longer.


I have a fix pending for this, which uses an extra thread to read those 
notifications off the RPC queue onto an internal queue, and then batches 
the call_driver('reload_allocations') processing when there is a 
contiguous sequence of such notifications - i.e. only does the config 
regeneration and HUP once, instead of lots of times.
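
(Roughly, the coalescing pattern looks like this -- a sketch with a simplified payload shape, not the pending patch itself:)

# Sketch: drain bursts of port notifications and reload dnsmasq config once
# per affected network instead of once per message. (Python 2 era Queue.)
import Queue
import threading

pending = Queue.Queue()

def on_port_notification(payload):
    # Called from the RPC layer; never block on dnsmasq here.
    pending.put(payload)

def worker(call_driver):
    while True:
        batch = [pending.get()]            # block until something arrives
        while not pending.empty():
            batch.append(pending.get())    # drain the rest of the burst
        for net_id in set(p['network_id'] for p in batch):
            call_driver('reload_allocations', net_id)

def start(call_driver):
    t = threading.Thread(target=worker, args=(call_driver,))
    t.daemon = True
    t.start()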


I don't think this is directly related to what you are seeing - but 
perhaps there actually is some link that I am missing.


2. There is an interesting and vaguely similar thread currently being 
discussed about the L3 agent (subject L3 agent rescheduling issue) - 
about possible RPC/threading issues between the agent and the Neutron 
server.  You might like to review that thread and see if it describes 
any problems analogous to your DHCP one.


Regards,
Neil


On 08/06/15 17:53, Neil Jerram wrote:

My team has seen a problem that could be related: in a churn test where
VMs are created and terminated at a constant rate - but so that the
number of active VMs should remain roughly constant - the size of the
host and addn_hosts files keeps increasing.

In other words, it appears that the config for VMs that have actually
been terminated is not being removed from the config file.  Clearly, if
you have a limited pool of IP addresses, this can eventually lead to the
problem that you have described.

For your case - i.e. with Icehouse - the problem might be
https://bugs.launchpad.net/neutron/+bug/1192381.  I'm not sure if the
fix for that problem - i.e. sending port-create and port-delete
notifications to DHCP agents even when the server thinks they are down -
was merged before the Icehouse release, or not.

But there must be at least one other cause as well, because my team was
seeing this with Juno-level code.

Therefore I, too, would be interested in any other insights about this
problem.

Regards,
 Neil



On 08/06/15 16:26, Daniel Comnea wrote:

Any help, ideas please?

Thx,
Dani

On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com
mailto:comnea.d...@gmail.com wrote:

+ Operators

Much thanks in advance,
Dani




On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com
mailto:comnea.d...@gmail.com wrote:

Hi all,

I'm running IceHouse (build using Fuel 5.1.1) on Ubuntu where
dnsmask version 2.59-4.
I have a very basic network layout where i have a private net
which has 2 subnets

  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net
|
e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24
http://192.168.110.0/24 |
|
| |
f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24
http://192.168.111.0/24 |

and i'm creating VMs via HEAT.
What is happening is that sometimes i get duplicated entries in
[1] and because of that the VM which was spun up doesn't get
an ip.
The Dnsmask processes are running okay [2] and i can't see
anything special/ wrong in it.

Any idea why this is happening? Or are you aware of any bugs
around this area? Do you see a problems with having 2 subnets
mapped to 1 private-net?



Thanks,
Dani

[1]

/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts

[2]

nobody5664 1  0 Jun02 ?00:00:08 dnsmasq
--no-hosts --no-resolv --strict-order --bind-interfaces
--interface=tapc9164734-0c --except-interface=lo

--pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid

--dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host


--addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts


--dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts

--leasefile-ro --dhcp-authoritative
--dhcp-range=set:tag0,192.168.110.0,static,86400s
--dhcp-range=set:tag1,192.168.111.0,static,86400s
--dhcp-lease-max=512 --conf-file= --server=10.0.0.31
--server=10.0.0.32 --domain=openstacklocal





___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators 

[openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-08 Thread Steven Dake (stdake)
Folks,

Several people have messaged me from EMEA timezones that 1600UTC fits right 
into the middle of their family life (ferrying kids from school and what-not) 
and 1700UTC while not perfect, would be a better fit time-wise.

For all people that intend to attend the 1600 UTC, could I get your feedback on 
this thread if a change of the 1600UTC timeslot to 1700UTC would be acceptable? 
 If it wouldn’t be acceptable, please chime in as well.

Thanks
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] [Bandit] Using multiprocessing/threading to speed up analysis

2015-06-08 Thread Ian Cordasco


On 6/8/15, 11:38, Clark, Robert Graham robert.cl...@hp.com wrote:

Interesting work,

I guess my initial thought would be - does it need to be faster?

That depends on how we expect people to use Bandit. Keystone is using it
at their gate. I expect some people will want to run it locally before
sending a patch set. They'll probably be less likely to bother if it takes
too long.

That said, it's totally a user experience thing. Did Flake8 need to be
faster? Probably not. Have we received any complaints about it being
faster? No. The only problem we had has been output ordering, which I
alluded to below.

Will this work make maintenance and the addition of features more
difficult?

It hasn't made maintenance or new feature addition for Flake8 harder.

Everything is written as you would expect. There wasn't any change in
Flake8 to any of the checks or how they work. Flake8 did have to make a
few options mutually exclusive. For example, if you use pep8's diff
capabilities then multiprocessing is turned off by default. It's unlikely
that you'll need it either.

-Rob


On 08/06/2015 08:26, Ian Cordasco ian.corda...@rackspace.com wrote:

Hey everyone,

I drew up a blueprint
(https://blueprints.launchpad.net/bandit/+spec/use-threading-when-running-checks) to add the ability to use multiprocessing (or threading) to
Bandit.
This essentially means that each thread will be fed a file and analyze
it and return the results. (A file will only ever be analyzed by one
thread.)

This has lead to significant speed improvements in Flake8 when running
against a project like Nova and I think the same improvements could be
made to Bandit.

I'd love some feedback on the following points:

1. Should this be on by default?

   Technically, this is backwards incompatible (unless we decide to order
the output before printing results) but since we're still in the 0.x
release series of Bandit, SemVer allows backwards incompatible releases.
I
don't know if we want to take advantage of that or not though.

2. Is output ordering significant/important to people?

3. If this is off by default, should the flag accept a special value,
e.g., 'auto', to tell Bandit to always use the number of CPUs present on
the machine?

Cheers,
Ian

_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] [Bandit] Using multiprocessing/threading to speed up analysis

2015-06-08 Thread Finnigan, Jamie
On 6/8/15, 8:26 AM, Ian Cordasco ian.corda...@rackspace.com wrote:

Hey everyone,

I drew up a blueprint
(https://blueprints.launchpad.net/bandit/+spec/use-threading-when-running-checks) to add the ability to use multiprocessing (or threading) to Bandit.
This essentially means that each thread will be fed a file and analyze
it and return the results. (A file will only ever be analyzed by one
thread.)

This has lead to significant speed improvements in Flake8 when running
against a project like Nova and I think the same improvements could be
made to Bandit.

We skipped parallel processing earlier in Bandit development to keep
things simple, but if we can speed up execution against the bigger code
bases with minimal additional complexity (still needs to be 'easy' to add
new checks) then that would be a nice win.

I don't think we'd lose anything by processing in parallel vs. serial.  If
we do ever add additional state tracking more broadly than per-file,
checks would need to be run at the end of execution anyway to take
advantage of the full state.



I'd love some feedback on the following points:

1. Should this be on by default?

   Technically, this is backwards incompatible (unless we decide to order
the output before printing results) but since we're still in the 0.x
release series of Bandit, SemVer allows backwards incompatible releases. I
don't know if we want to take advantage of that or not though.

It looks like flake8 default behavior is off ("1 thread"), which makes
sense to me...



2. Is output ordering significant/important to people?

Output ordering is important - output for repeated runs against an
unchanged code base should be predictable/repeatable. We also want to
continue to support aggregating output by filename or by issue. Under this
model though, we'd just collect results then order/output at completion
rather than during execution.



3. If this is off by default, should the flag accept a special value,
e.g., 'auto', to tell Bandit to always use the number of CPUs present on
the machine?

That seems like a tidy way to do things.


Overall, this feels like a nice evolutionary step.  Not opposed (in fact,
I'd support it), but would want to make sure it doesn't overly complicate
things.

What library/ies would you suggest using?  I still like the idea of
keeping as few external dependencies as possible.


Jamie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Kevin Benton
I'm having difficulty reproducing the issue. The bug that Neil referenced (
https://bugs.launchpad.net/neutron/+bug/1192381) looks like it was fixed in
Icehouse well before the 2014.1.3 release that Fuel 5.1.1 appears to be
using.

I tried setting the agent report interval to something higher than the
downtime to make it seem like the agent is failing sporadically to the
server, but it's not impacting the notifications.

Neil, does your testing where you saw something similar have a lot of
concurrent creation/deletion?

On Mon, Jun 8, 2015 at 12:21 PM, Andrew Woodward awoodw...@mirantis.com
wrote:

 Daniel,

 This sounds familiar, see if this matches [1]. IIRC, there was another
 issue like this that was might already address this in the updates into
 Fuel 5.1.2 packages repo [2]. You can either update the neutron packages
 from [2] Or try one of community builds for 5.1.2 [3]. If this doesn't
 resolve the issue, open a bug against MOS dev [4].

 [1] https://bugs.launchpad.net/bugs/1295715
 [2] http://fuel-repository.mirantis.com/fwm/5.1.2/ubuntu/pool/main/
 [3] https://ci.fuel-infra.org/
 [4] https://bugs.launchpad.net/mos/+filebug

 On Mon, Jun 8, 2015 at 10:15 AM Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Two further thoughts on this:

 1. Another DHCP agent problem that my team noticed is that it
 call_driver('reload_allocations') takes a bit of time (to regenerate the
 Dnsmasq config files, and to spawn a shell that sends a HUP signal) -
 enough so that if there is a fast steady rate of port-create and
 port-delete notifications coming from the Neutron server, these can
 build up in DHCPAgent's RPC queue, and then they still only get
 dispatched one at a time.  So the queue and the time delay become longer
 and longer.

 I have a fix pending for this, which uses an extra thread to read those
 notifications off the RPC queue onto an internal queue, and then batches
 the call_driver('reload_allocations') processing when there is a
 contiguous sequence of such notifications - i.e. only does the config
 regeneration and HUP once, instead of lots of times.

 I don't think this is directly related to what you are seeing - but
 perhaps there actually is some link that I am missing.

 2. There is an interesting and vaguely similar thread currently being
 discussed about the L3 agent (subject L3 agent rescheduling issue) -
 about possible RPC/threading issues between the agent and the Neutron
 server.  You might like to review that thread and see if it describes
 any problems analogous to your DHCP one.

 Regards,
 Neil


 On 08/06/15 17:53, Neil Jerram wrote:
  My team has seen a problem that could be related: in a churn test where
  VMs are created and terminated at a constant rate - but so that the
  number of active VMs should remain roughly constant - the size of the
  host and addn_hosts files keeps increasing.
 
  In other words, it appears that the config for VMs that have actually
  been terminated is not being removed from the config file.  Clearly, if
  you have a limited pool of IP addresses, this can eventually lead to the
  problem that you have described.
 
  For your case - i.e. with Icehouse - the problem might be
  https://bugs.launchpad.net/neutron/+bug/1192381.  I'm not sure if the
  fix for that problem - i.e. sending port-create and port-delete
  notifications to DHCP agents even when the server thinks they are down -
  was merged before the Icehouse release, or not.
 
  But there must be at least one other cause as well, because my team was
  seeing this with Juno-level code.
 
  Therefore I, too, would be interested in any other insights about this
  problem.
 
  Regards,
   Neil
 
 
 
  On 08/06/15 16:26, Daniel Comnea wrote:
  Any help, ideas please?
 
  Thx,
  Dani
 
  On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com
  mailto:comnea.d...@gmail.com wrote:
 
  + Operators
 
  Much thanks in advance,
  Dani
 
 
 
 
  On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea 
 comnea.d...@gmail.com
  mailto:comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm running IceHouse (build using Fuel 5.1.1) on Ubuntu where
  dnsmask version 2.59-4.
  I have a very basic network layout where i have a private net
  which has 2 subnets
 
2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net
  |
  e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24
  http://192.168.110.0/24 |
  |
  | |
  f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24
  http://192.168.111.0/24 |
 
  and i'm creating VMs via HEAT.
  What is happening is that sometimes i get duplicated entries in
  [1] and because of that the VM which was spun up doesn't get
  an ip.
  The Dnsmask processes are running okay [2] and i can't see
  anything special/ wrong in it.
 
 

Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-08 Thread Adam Young

On 06/08/2015 02:10 PM, Steve Lewis wrote:

Monday, June 8, 2015 07:10, Adam Young wrote:

2.  Delegation are long lived affairs.  If anything is going to take
longer than the duration of the token, it should be in the context of a
delegation, and the user should re-authenticate to prove identity.

Requiring re-authentication to perform many tasks that involve delegation (a 
distinction that users don't understand, or care to) is a sure way to convince 
users to use short and weak passwords. Please, no.
Requiring re-authentication is not the same as requiring the user to 
retype their password.  The user's agent re-authenticates, not the user 
him/herself.  In the case of the CLI, that is using Env Vars, and in the 
case of Horizon, it is using the unscoped token that the user has in 
their session.  For Service users, it should be X509 or Kerberos, but it 
will be the service password.  Don't confuse the one with the other, please.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Kevin Benton
Hi Daniel,

I'm concerned that we are encountering out-of-order port events on the DHCP
agent side, so the delete message is processed before the create message.
Would you be willing to apply a small patch to your DHCP agent to see if it
fixes the issue?

If it does fix the issue, you should see occasional warnings in the DHCP
agent log that show "Received message for port that was already deleted".
If it doesn't fix the issue, we may be losing the delete event entirely. If
that's the case, it would be great if you could enable debugging on the agent
and upload a log of a run when it happens.

Cheers,
Kevin Benton

Here is the patch:

diff --git a/neutron/agent/dhcp_agent.py b/neutron/agent/dhcp_agent.py
index 71c9709..9b9b637 100644
--- a/neutron/agent/dhcp_agent.py
+++ b/neutron/agent/dhcp_agent.py
@@ -71,6 +71,7 @@ class DhcpAgent(manager.Manager):
         self.needs_resync = False
         self.conf = cfg.CONF
         self.cache = NetworkCache()
+        self.deleted_ports = set()
         self.root_helper = config.get_root_helper(self.conf)
         self.dhcp_driver_cls = importutils.import_class(self.conf.dhcp_driver)
         ctx = context.get_admin_context_without_session()
@@ -151,6 +152,7 @@
         LOG.info(_('Synchronizing state'))
         pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
         known_network_ids = set(self.cache.get_network_ids())
+        self.deleted_ports = set()

         try:
             active_networks = self.plugin_rpc.get_active_networks_info()
@@ -302,6 +304,10 @@
     @utils.synchronized('dhcp-agent')
     def port_update_end(self, context, payload):
         """Handle the port.update.end notification event."""
+        if payload['port']['id'] in self.deleted_ports:
+            LOG.warning(_("Received message for port that was "
+                          "already deleted: %s"), payload['port']['id'])
+            return
         updated_port = dhcp.DictModel(payload['port'])
         network = self.cache.get_network_by_id(updated_port.network_id)
         if network:
@@ -315,6 +321,7 @@
     def port_delete_end(self, context, payload):
         """Handle the port.delete.end notification event."""
         port = self.cache.get_port_by_id(payload['port_id'])
+        self.deleted_ports.add(payload['port_id'])
         if port:
             network = self.cache.get_network_by_id(port.network_id)
             self.cache.remove_port(port)








On Mon, Jun 8, 2015 at 8:26 AM, Daniel Comnea comnea.d...@gmail.com wrote:

 Any help, ideas please?

 Thx,
 Dani

 On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 + Operators

 Much thanks in advance,
 Dani




 On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Hi all,

 I'm running IceHouse (built using Fuel 5.1.1) on Ubuntu, where the dnsmasq
 version is 2.59-4.
 I have a very basic network layout where I have a private net which has
 2 subnets

  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24
                                       |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24

 and I'm creating VMs via HEAT.
 What is happening is that sometimes I get duplicated entries in [1] and
 because of that the VM which was spun up doesn't get an IP.
 The dnsmasq processes are running okay [2] and I can't see anything
 special/wrong in them.

 Any idea why this is happening? Or are you aware of any bugs around this
 area? Do you see a problem with having 2 subnets mapped to 1 private-net?



 Thanks,
 Dani

 [1]
 /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 [2]

 nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
 --no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
 --except-interface=lo
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
 --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 --dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
 --leasefile-ro --dhcp-authoritative
 --dhcp-range=set:tag0,192.168.110.0,static,86400s
 --dhcp-range=set:tag1,192.168.111.0,static,86400s --dhcp-lease-max=512
 --conf-file= --server=10.0.0.31 --server=10.0.0.32 --domain=openstacklocal




 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Ironic] [Inspector] Toward 2.0.0 release

2015-06-08 Thread Yuiko Takada
Hi, Dmitry,

Thank you for notifying.

I've updated our summit etherpad [3] with whatever priorities I remembered,
 please have a look. I've also untargeted a few things in launchpad [4] (and
 will probably untarget more later on). Please assign yourself, if you want
 something done in this release time frame.

I've assigned one item to myself in [3], and also I added one BP to [4],
so please take a look.
https://blueprints.launchpad.net/ironic-inspector/+spec/delete-db-api

BTW, what do you think about ironic-inspector's release model?
You wrote "Version released with Ironic Liberty" as
ironic-inspector version 2.1.0 in the etherpad [3],
but as you know, Ironic's release model has changed to feature
releases (right?).
Should we make our release model the same as Ironic's?


Best Regards,
Yuiko Takada(Inspector team member)

2015-06-08 20:38 GMT+09:00 Dmitry Tantsur dtant...@redhat.com:

 Hello, Inspector team!

 The renaming process is going pretty well, the last thing we need to do is
 to get Infra approval and actual rename [1][2].

 I'd like to allow people (e.g. myself) to start packaging inspector under
 its new name, so I'd like to make the 2.0.0 release as soon as possible (as
 opposed to scheduling it for a particular date). All breaking changes should
 land by this release - I don't expect 3.0.0 soon :)

 I've updated our summit etherpad [3] with whatever priorities I
 remembered, please have a look. I've also untargeted a few things in
 launchpad [4] (and will probably untarget more later on). Please assign
 yourself, if you want something done in this release time frame.

 I would like 2.1.0 to be released with Ironic Liberty and be properly
 supported.

 Let me know what you think.

 Cheers,
 Dmitry

 [1] https://review.openstack.org/#/c/188030/
 [2] https://review.openstack.org/#/c/188798/
 [3] https://etherpad.openstack.org/p/liberty-ironic-discoverd
 [4] https://bugs.launchpad.net/ironic-inspector/+milestone/2.0.0

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-08 Thread Robert Collins
On 9 June 2015 at 07:53, Chris Friesen chris.frie...@windriver.com wrote:
 On 06/08/2015 12:30 PM, Robert Collins wrote:

 On 9 June 2015 at 03:50, Chris Friesen chris.frie...@windriver.com
 wrote:

 On 06/07/2015 04:22 PM, Robert Collins wrote:


 Hi, original reporter here.

 There's no LB involved.  The issue was noticed in a test lab that is
 tight
 on disk space.  When an instance failed to boot the person using the lab
 tried to delete some images to free up space, at which point it was
 noticed
 that space didn't actually free up.  (For at least half an hour, exact
 time
 unknown.)

 I'm more of a nova guy, so could you elaborate a bit on the GC?  Is
 something going to delete the ChunkedFile object after a certain amount
 of
 inactivity? What triggers the GC to run?


 Ok, updating my theory...  I'm not super familiar with glance's
 innards, so I'm going to call out my assumptions.

 The ChunkedFile object is in the Nova process. It reads from a local
 file, so it's the issue - and it has a handle on the image because
 glance arranged for it to read from it.


 As I understand it, the ChunkedFile object is in the glance-api process.
 (At least, it's the glance-api process that is holding the open file
 descriptor.)


 Anyhow - to answer your question: the iterator is only referenced by
 the for loop, nothing else *can* hold a reference to it (without nasty
 introspection hacks) and so as long as the iterator has an appropriate
 try:finally:, which the filesystem ChunkedFile one does- the file will
 be closed automatically.


 From what I understand, the iterator (in the glance-api process) normally
 breaks out of the while loop once the whole file has been read and the
 read() call returns an empty string.

 It's not clear to me how an error in the nova process (which causes it to
 stop reading the file from glance-api)  will cause glance-api to break out
 of the while loop in ChunkedFile.__iter__().

AIUI the conclusion of our IRC investigation was:
 - with caching middleware, the fd is freed, just after ~4m.
 - without caching middleware, the fd is freed after ~90s.
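
For anyone following along, here is a minimal, self-contained sketch (not
Glance code - the reader function and file path are made up for illustration)
of the mechanism described above: as long as the iterator wraps its read loop
in try:finally:, abandoning the iteration still closes the file, because
closing or garbage-collecting the generator raises GeneratorExit inside it.

    import os
    import tempfile

    # Illustrative stand-in for a ChunkedFile-style iterator.
    def chunked_reader(path, chunk_size=65536):
        fh = open(path, 'rb')
        try:
            while True:
                chunk = fh.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            # Runs when iteration completes OR when the abandoned generator
            # is closed / garbage collected, so the fd is always released.
            fh.close()

    fd, path = tempfile.mkstemp()
    os.write(fd, b'x' * (1024 * 1024))
    os.close(fd)

    reader = chunked_reader(path)
    next(reader)     # the consumer reads one chunk...
    reader.close()   # ...then aborts early; the finally block closes the file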

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-08 Thread Luo Gangyi
Hi, Chris,


Currently, ceilometer loads pollsters by agent namespace. So do you mean you
want to load pollsters one by one by their name (maybe defined in
pipeline.yaml)?


If loading all pollsters at once does not cost much, I think your change is a
bit unnecessary.
But if it does cost much, your change is meaningful.


BTW, I like the idea of "Separate polling and publishing/transforming into
separate workers/processes".
--
Luo gangyiluogan...@cmss.chinamobile.com



 




-- Original --
From:  Chris Dent;chd...@redhat.com;
Date:  Mon, Jun 8, 2015 09:04 PM
To:  openstack-operatorsopenstack-operat...@lists.openstack.org; 
OpenStack-devOpenStack-dev@lists.openstack.org; 

Subject:  [openstack-dev] [ceilometer] polling agent configuration speculation




(Posting to the mailing list rather than writing a spec or making
code because I think it is important to get some input and feedback
before going off on something wild. Below I'm talking about
speculative plans and seeking feedback, not reporting decisions
about the future. Some of this discussion is intentionally naive
about how things are because that's not really relevant, what's
relevant is how things should be or could be.

tl;dr: I want to make the configuration of the pollsters more explicit
and not conflate and overlap the entry_points.txt and pipeline.yaml
in confusing and inefficient ways.

* entry_points.txt should define what measurements are possible, not
   what measurements are loaded
* something new should define what measurements are loaded and
   polled (and their intervals) (sources in pipeline.yaml speak)
* pipeline.yaml should define transformations and publishers

Would people like something like this?)

The longer version:

Several of the outcomes of the Liberty Design Summit were related to
making changes to the agents which gather or hear measurements and
events. Some of these changes have pending specs:

* Ceilometer Collection Agents Split
   https://review.openstack.org/#/c/186964/

   Splitting the collection agents into their own repo to allow
   use and evolution separate from the rest of Ceilometer.

* Adding Meta-Data Caching Spec
   https://review.openstack.org/#/c/185084/

   Adding metadata caching to the compute agent so the Nova-API is
   less assaulted than it currently is.

* Declarative notification handling
   https://review.openstack.org/#/c/178399/

   Be able to hear and transform a notification to an event without
   having to write code.

Reviewing these and other specs and doing some review of the code
points out that we have an opportunity to make some architectural and
user interface improvements (while still maintaining existing
functionality). For example:

The current ceilometer polling agent has an interesting start up
process:

1 It determines which namespaces it is operating in ('compute',
   'central', 'ipmi').
2 Using entry_points defined in setup.cfg it initializes all the
   polling extensions and all the discovery extensions (independent
   of sources defined in pipeline.yaml)
3 Every source in pipeline.yaml is given a list of pollsters that
   match the meters defined by the source, creating long running
   tasks to do the polling.
4 Each task does resource discovery and partitioning coordination.
5 measurements/samples are gathered and are transformed and published
   according to the sink rules in pipeline.yaml

A couple things about this seem less than ideal:

* 2 means we load redundant stuff unless we edit entry_points.txt.
   We do not want to encourage this sort of behavior. entry_points is
   not configuration[1]. We should configure elsewhere to declare "I
   care about things X" (including the option of "all things") and
   then load the tools to do so, on demand.

* Two things are happening in the same context in step 5 and that
   seems quite limiting with regard to opportunities for effective
   maintenance and optimizing.

My intuition (which often needs to sanity checked, thus my posting
here) tells me there are some things we could change:

* Separate polling and publishing/transforming into separate
   workers/processes.

* Extract the definition of sources to be polled from pipeline.yaml
   to its own file and use that to be the authority of which
   extensions are loaded for polling and discovery.

What do people think?
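
To make that "load only what is configured" idea concrete, here is a rough
sketch (the config list, meter names, and wiring are made up for illustration,
this is not a spec) of driving pollster loading from an explicit list instead
of from every registered entry point:

    from stevedore import named

    # Hypothetical: the operator lists the pollsters they care about in a
    # separate polling definition file; only those extensions get loaded.
    wanted = ['cpu', 'memory.usage', 'disk.read.bytes']

    mgr = named.NamedExtensionManager(
        namespace='ceilometer.poll.compute',   # existing pollster namespace
        names=wanted,                          # load on demand, not everything
        invoke_on_load=True,
    )

    for ext in mgr:
        print(ext.name, ext.obj)   # each ext.obj is an instantiated pollster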

[1] This is really the core of my concern and the main part I want
to see change.
-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Fuel] MOS 6.1 Release - Hard Code Freeze in action

2015-06-08 Thread Mike Scherbakov
The right way, I believe, should be the following:

   1. Change the current development focus to 7.0 in LP
   2. Ensure that every single commit which was merged into master after the
   branches were created targets the correct milestones (it can be only 7.0 or
   both 7.0 and 6.1)
   3. Ensure that remaining 6.1-targeted bugs which have to be landed in
   6.1 target both 6.1 and 7.0 now

Then, if someone merges a fix to master, only the 7.0 target will be closed as
Fix Committed. 6.1 will stay open unless we backport the patch to
stable/6.1.

DevOps team, can you help in executing this?

On Mon, Jun 8, 2015 at 2:42 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Eugene, thanks for communication.

 Fuel DevOps team, when should we expect changes to builds page [1]?
 Namely, I'd like to ensure 6.1 builds are switched to stable/6.1 branch,
 and new Jenkins jobs created to make builds from master (7.0).

 Everyone - we have to be extremely careful now and ensure that no patch
 intended to be fixed in 6.1 is missed from the stable/6.1 branch. QA
 team, we count on you to pay special attention to Fix Committed bugs which
 became closed after the branches were created.

 Thanks!

 [1] https://ci.fuel-infra.org/

 On Mon, Jun 8, 2015 at 10:50 AM, Eugene Bogdanov ebogda...@mirantis.com
 wrote:

  Hello everyone,

 Please be informed that Hard Code Freeze for MOS 6.1 Release is
 officially in action. As mentioned earlier, stable/6.1 branch was created
 for the following repos:

   fuel-astute
   fuel-docs
   fuel-library
   fuel-main
   fuel-ostf
   fuel-qa
   fuel-web

 Bug reporters, please do not forget to target both the master and 6.1
 (stable/6.1) milestones from now on. Also, please remember that all fixes for
 stable/6.1 branch should first be applied to master and then cherry-picked
 to stable/6.1. As always, please ensure that you do NOT merge changes to
 stable branch first. It always has to be a backport with the same
 Change-ID. Please see more on this at [1]

 --
 EugeneB

 [1]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen




-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [akanda] work breakdown for milestone 1

2015-06-08 Thread sean roberts
Each of us has a few blueprints due the third week in June. I need each of us
to write up a description, proposed changes, and work items. The work items
need to be broken into 1, 2, 3, 5, 8, 13 approximate-hour chunks. I'm okay with
this in the blueprint summary if it makes sense; otherwise make the extra
effort for a spec.

~ sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][i18n] Ordering of PO files

2015-06-08 Thread Thai Q Tran
Hi folks,

In the midst of shifting to angular, we are making use of babel for extracting
messages. This would then allow us to write a custom extractor for angular
templates.

Here's the patch that compares PO files:
https://review.openstack.org/#/c/189502/
It looks worse than reality; if you compare the django vs babel makemessages,
they are nearly identical, only the ordering is different.

Which leads me to my next point. If the ordering of the translatable strings
is not the same, how does that affect translation (if at all)?
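
For reference, a custom Babel extractor is just a callable that follows
Babel's extraction protocol: it receives (fileobj, keywords, comment_tags,
options) and yields (lineno, funcname, message, comments) tuples. The sketch
below is purely illustrative - the regex and names are placeholders, not the
actual horizon extractor:

    import re

    # Hypothetical matcher for {$ 'some string' | translate $} in templates.
    _TRANSLATE_RE = re.compile(
        r"\{\$\s*'(?P<msg>[^']+)'\s*\|\s*translate\s*\$\}")

    def extract_angular(fileobj, keywords, comment_tags, options):
        for lineno, line in enumerate(fileobj, start=1):
            if isinstance(line, bytes):
                line = line.decode(options.get('encoding', 'utf-8'))
            for match in _TRANSLATE_RE.finditer(line):
                yield (lineno, 'gettext', match.group('msg'), [])

It would then be wired into the Babel extraction mapping so that "pybabel
extract" picks it up for the template file pattern.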


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 scope-aware security groups

2015-06-08 Thread Salvatore Orlando
Kevin,

On 8 June 2015 at 23:52, Kevin Benton blak...@gmail.com wrote:

 There is a bug in security groups here:
 https://bugs.launchpad.net/neutron/+bug/1359523

 In the example scenario, it's caused by conntrack zones not being
 isolated. But it also applies to the following scenario that can't be
 solved by zones:

 create two networks with same 10.0.0.0/24
 create port1 in SG1 on net1 with IP 10.0.0.1
 create port2 in SG1 on net2 with IP 10.0.0.2
 create port3 in SG2 on net1 with IP 10.0.0.2
 create port4 in SG2 on net2 with IP 10.0.0.1


 port1 can communicate with port3 because of the allow rule for port2's IP
 port2 can communicate with port4 because of the allow rule for port1's IP


So this would be a scenario where bug 1359523 hits even with conntrack zone
separation, with the subtle, and terrible difference that there is a way to
enable cross-network plugging? For instance to reach port1 on net1, all I
have to do is create a network with a CIDR with some overlap with net1's,
and then wait until a VM is created with an IP that exists also on net1 -
and then jackpot, that VM will basically have access to all of net1's
instances?

The scenario you're describing is quite concerning from a security
perspective. Shouldn't there be L2 isolation to prevent something like this?

The solution will require the security groups processing code to understand
 that a member of a security group is not actually reachable by another
 member and skip the allow rule for that member.


The paragraph above is a bit obscure to me.



 With the current state of things, it will take a ton of kludgy code to
 check for routers and router interfaces to see if two IPs can communicate
 without NAT. However, if we end up with the concept of address-scopes, it
 just becomes a simple address scope comparison.


This is fine, but I wonder how's that related to what you described
earlier. Is the vulnerability triggered by the fact that both networks can
be attached to the same router? In that case I think that if the l3 mgmt
code works as expected it would reject adding an interface for a subnet
with an overlap with another already attached subnet, thus implementing an
implicit address scope of 0.0.0.0/0 (for v4).



 Implement address scopes.


Sure, my master.



 Cheers!
 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 scope-aware security groups

2015-06-08 Thread Kevin Benton
For instance to reach port1 on net1, all I have to do is create a network
with a CIDR with some overlap with net1's, and then wait until a VM is
created with an IP that exists also on net1 - and then jackpot, that VM
will basically have access to all of net1's instances?

No, it's not quite that bad. You still have to be plugged into net1. The
two networks are not connected together via a tenant router. You could
assume that they each have their own tenant router each with a connection
to an external network.

What needs to happen is that ports from the same security group are being
created on two disjoint networks with overlapping address space. Then the
allow rules that correspond to ports on the opposite network will end up
allowing traffic on the current network from those addresses.

The issue is that an allow rule is added for each member port in a security
group, even if those networks are not reachable from the port that the rule
is being added to. That becomes a problem with overlapping IPs where the IP
address being allowed may actually belong to a different port.

Does that make sense?

The paragraph above is a bit obscure to me.

Let me try again. We need a way to prevent rules from being installed that
reference other security group members that are unreachable. This is
because the unreachable members might have the same IP address as other
ports on your network.
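
As a purely illustrative sketch of that address-scope comparison (none of
these helpers exist today, the names and structures are made up), the
member-filtering step would boil down to something like:

    # Hypothetical pseudo-implementation of "skip allow rules for
    # unreachable members": only emit an allow rule when the two ports
    # share an address scope, i.e. their IPs live in one routable space.
    def allowed_member_ips(port, members, address_scope_of):
        ips = []
        for member in members:
            if member['id'] == port['id']:
                continue
            if address_scope_of(member) == address_scope_of(port):
                # Reachable without NAT, so the allow rule is meaningful.
                ips.append(member['ip_address'])
            # Different scope: the member's IP may collide with an
            # unrelated port on this network, so no allow rule is added.
        return ips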






On Mon, Jun 8, 2015 at 4:07 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 Kevin,

 On 8 June 2015 at 23:52, Kevin Benton blak...@gmail.com wrote:

 There is a bug in security groups here:
 https://bugs.launchpad.net/neutron/+bug/1359523

 In the example scenario, it's caused by conntrack zones not being
 isolated. But it also applies to the following scenario that can't be
 solved by zones:

 create two networks with same 10.0.0.0/24
 create port1 in SG1 on net1 with IP 10.0.0.1
 create port2 in SG1 on net2 with IP 10.0.0.2
 create port3 in SG2 on net1 with IP 10.0.0.2
 create port4 in SG2 on net2 with IP 10.0.0.1


 port1 can communicate with port3 because of the allow rule for port2's IP
 port2 can communicate with port4 because of the allow rule for port1's IP


 So this would be a scenario when bug 1359523 hits even with conntrack zone
 separation, with the subtle, and terrible difference that there is a way to
 enable cross-network plugging? For instance to reach port1 on net1, all I
 have to do is create a network with a CIDR with some overlap with net1's,
 and then wait until a VM is created with an IP that exists also on net1 -
 and then jackpot, that VM will basically have access to all of net1's
 instances?

 The scenario you're describing is quite concerning from a security
 perspective. Shouldn't there be L2 isolation to prevent something like this?

 The solution will require the security groups processing code to
 understand that a member of a security group is not actually reachable by
 another member and skip the allow rule for that member.


 The paragraph above is a bit obscure to me.



 With the current state of things, it will take a ton of kludgy code to
 check for routers and router interfaces to see if two IPs can communicate
 without NAT. However, if we end up with the concept of address-scopes, it
 just becomes a simple address scope comparison.


 This is fine, but I wonder how's that related to what you described
 earlier. Is the vulnerability triggered by the fact that both networks can
 be attached to the same router? In that case I think that if the l3 mgmt
 code works as expected it would reject adding an interface for a subnet
 with an overlap with another already attached subnet, thus implementing an
 implicit address scope of 0.0.0.0/0 (for v4).



 Implement address scopes.


 Sure, my master.



 Cheers!
 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-08 Thread Adam Young

On 06/08/2015 01:22 PM, Jay Pipes wrote:

On 06/05/2015 10:56 AM, Neil Jerram wrote:

I guess that's why the GNU autoconf/configure system has always advised
testing for particular wanted features, instead of looking at versions
and then relying on carnal knowledge to know what those versions imply.


I'm pretty sure you meant tribal knowledge, not carnal knowledge :)


And I am just as sure he knew what he was saying and intentionally said
"carnal".




-jay

http://en.wikipedia.org/wiki/Carnal_knowledge
http://en.wikipedia.org/wiki/Tribal_knowledge

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group IRC meeting - Agenda 6/9

2015-06-08 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (9:00AM MDT)

1) New meeting time
2) Liberty specs - https://wiki.openstack.org/wiki/Gantt/liberty
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] [Bandit] Using multiprocessing/threading to speed up analysis

2015-06-08 Thread Murphy, Grant
If you guys go down this road I would suggest using
https://docs.python.org/2/library/multiprocessing.html rather than Python
threads, if that is what is being proposed.





On 6/8/15, 10:17 AM, Finnigan, Jamie jamie.finni...@hp.com wrote:

On 6/8/15, 8:26 AM, Ian Cordasco ian.corda...@rackspace.com wrote:

Hey everyone,

I drew up a blueprint
(https://blueprints.launchpad.net/bandit/+spec/use-threading-when-running-checks)
to add the ability to use multiprocessing (or threading) to Bandit.
This essentially means that each thread will be fed a file, analyze
it, and return the results. (A file will only ever be analyzed by one
thread.)

This has lead to significant speed improvements in Flake8 when running
against a project like Nova and I think the same improvements could be
made to Bandit.

We skipped parallel processing earlier in Bandit development to keep
things simple, but if we can speed up execution against the bigger code
bases with minimal additional complexity (still needs to be 'easy' to add
new checks) then that would be a nice win.

I don't think we'd lose anything by processing in parallel vs. serial.  If
we do ever add additional state tracking more broadly than per-file,
checks would need to be run at the end of execution anyway to take
advantage of the full state.



I'd love some feedback on the following points:

1. Should this be on by default?

   Technically, this is backwards incompatible (unless we decide to order
the output before printing results) but since we're still in the 0.x
release series of Bandit, SemVer allows backwards incompatible releases.
I
don't know if we want to take advantage of that or not though.

It looks like flake8 default behavior is off ("1 thread"), which makes
sense to me...



2. Is output ordering significant/important to people?

Output ordering is important - output for repeated runs against an
unchanged code base should be predictable/repeatable. We also want to
continue to support aggregating output by filename or by issue. Under this
model though, we'd just collect results then order/output at completion
rather than during execution.
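
As a rough illustration of that model (stdlib multiprocessing only;
run_checks_on_file is a made-up stand-in for Bandit's per-file analysis, not
a real Bandit function), something like:

    import multiprocessing

    def run_checks_on_file(path):
        # Stand-in for the per-file analysis; would return a list of
        # (filename, line, issue) tuples for the given file.
        return []

    def run_parallel(files, jobs=None):
        # jobs=None lets the pool default to the CPU count (the 'auto' case).
        pool = multiprocessing.Pool(processes=jobs)
        try:
            per_file = pool.map(run_checks_on_file, files)
        finally:
            pool.close()
            pool.join()
        # Flatten and sort once everything is back so the output stays
        # deterministic no matter which worker finished first.
        return sorted(issue for results in per_file for issue in results)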



3. If this is off by default, should the flag accept a special value,
e.g., 'auto', to tell Bandit to always use the number of CPUs present on
the machine?

That seems like a tidy way to do things.


Overall, this feels like a nice evolutionary step.  Not opposed (in fact,
I'd support it), but would want to make sure it doesn't overly complicate
things.

What library/ies would you suggest using?  I still like the idea of
keeping as few external dependencies as possible.


Jamie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?

2015-06-08 Thread Kun Feng
Hi all,

I'm using ceph as the storage backend for Nova and Glance, and merged the
rbd-ephemeral-clone patch into Nova. As VM disks are copy-on-write clones
of an image, I have some concerns about this:

1. Since hundreds of VM disks are based on one base image, are there any
performance problems from IOs all landing on this one particular base image?

2. Is it possible that the data of the base image gets damaged, or a PG/OSD
containing data of this base image goes out of service, resulting in all the
VMs based on that base image malfunctioning?

If so, flattening the copy-on-write clones may help. Is it necessary to
do it?
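
(For anyone who wants to experiment with flattening, a minimal sketch using
the python-rbd bindings; the pool and image names below are placeholders:)

    import rados
    import rbd

    # Illustrative only: flattening copies the parent's data into the clone,
    # after which the clone no longer depends on the base image.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')              # placeholder pool name
    try:
        image = rbd.Image(ioctx, 'instance-disk')  # placeholder image name
        try:
            image.flatten()
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()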
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] Linters

2015-06-08 Thread Tripp, Travis S
We've adopted the John Papa style guide for Angular in horizon [0]. On
cursory inspection ESLint seems to have an angular-specific plugin [1]
that could be very useful to us, but we'd need to evaluate it in depth. It
looks like there was some discussion on the style guide on this not too
long ago [2]. The jscs rules we have [3] are very generic code formatting
type rules that are helpful, but don't really provide any angular-specific
help. Here are the jshint rules [4]. It would be quite nice to put all
this goodness across tools into a single tool configuration if possible.

[0] http://docs.openstack.org/developer/horizon/contributing.html#john-papa-style-guide
[1] https://www.npmjs.com/package/eslint-plugin-angular
[2] https://github.com/johnpapa/angular-styleguide/issues/194
[3] https://github.com/openstack/horizon/blob/master/.jscsrc
[4] https://github.com/openstack/horizon/blob/master/.jshintrc

On 6/8/15, 9:59 PM, gustavo panizzo (gfa) g...@zumbi.com.ar wrote:



On 2015-06-06 03:26, Michael Krotscheck wrote:
 Right now, there are several JS linters in use in OpenStack: JSHint,
 JSCS, and Eslint. I really would like to only use one of them, so that I
 can figure out how to sanely share the configuration between projects.

 Can all those who have a strong opinion please stand up and state their
 opinions?

what about https://bitbucket.org/dcs/jsmin/ ? its license is free


-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

keybase: http://keybase.io/gfa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?

2015-06-08 Thread Clint Byrum
Excerpts from Kun Feng's message of 2015-06-08 20:34:51 -0700:
 Hi all,
 
 I'm using ceph as storage backend for Nova and Glance, and merged the
 rbd-ephemeral-clone patch into Nova. As VM disks are Copy-on-write clones
 of a image, I have some concerns about this:
 
 1. Since hundreds of vm disks based on one base file, is there any
 performance problems that IOs are loaded on this one paticular base file?
 

Unless you have no RAM available for the VFS cache on the OSDs, this is
fine. Blocks will be evenly spread to each OSD, and since these are
likely to be super popular blocks, they'll likely all be served from
RAM most of the time.

 2. Is it possible that the data of base file is damaged or PG/OSD
 containing data of this base file out of service, resulting as all the VMs
 based on that base file malfunctioned?

You probably will want to read this and see if that answers your
question:

http://ceph.com/docs/master/architecture/#data-consistency

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-08 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Saturday, 6 June, 2015 6:01:10 PM
 Subject: Re: [openstack-dev] [keystone][reseller] New way to get a project 
 scoped token by name
 
 
 
 On 06/06/2015 00:24, Adam Young wrote:
  On 06/05/2015 01:15 PM, Henry Nash wrote:
  I am sure I have missed something along the way, but can someone
  explain to me why we need this at all.  Project names are unique
  within a domain, with the exception of the project that is acting as
  its domain (i.e. they can only every be two names clashing in a
  hierarchy at the domain level and below).  So why isn’t specifying
  “is_domain=True/False” sufficient in an auth scope along with the
  project name?
  
  The limitation of  Project names are unique within a domain is
  artificial and somethi8ng we should not be enforcing.  Names should only
  be unique within parent project.
 
 +++1

I said the exact same thing as Henry in the other thread that seems to be on
the same topic. You're correct that the limitation of "Project names are unique
within a domain" is completely artificial, but it's a constraint that allows us
to maintain the auth systems we currently have and will not harm the reseller
model (because they would be creating new domains).

It's also a constraint that we can relax later, when multitenancy is a bit more
established and someone has a real issue with the limitation - it's not
something we can ever claw back again if we allow looking up projects by
name with delimiters.

I think for the time being it's an artificial constraint we should maintain.



  
  This whole thing started by trying to distinguish a domain from a
  project within that domain that both have the same name. We can special
  case that, but it is not a great solution.
  
  
  
 
  Henry
 
  On 5 Jun 2015, at 18:02, Adam Young ayo...@redhat.com
  mailto:ayo...@redhat.com wrote:
 
  On 06/03/2015 05:05 PM, Morgan Fainberg wrote:
  Hi David,
 
  There needs to be some form of global hierarchy delimiter - well
  more to the point there should be a common one across OpenStack
  installations to ensure we are providing a good and consistent (and
  more to the point inter-operable) experience to our users. I'm
  worried a custom defined delimiter (even at the domain level) is
  going to make it difficult to consume this data outside of the
  context of OpenStack (there are applications that are written to use
  the APIs directly).
  We have one already.  We are working in JSON, and so instead of the project
  name being a string, it can be an array.
 
  Nothing else is backwards compatible.  Nothing else will ensure we
  don't break existing deployments.
 
  Moving forward, we should support DNS notation, but it has to be an
  opt in
 
 
  The alternative is to explicitly list the delimiter in the project (
  e.g. {hierarchy: {delim: ., domain.project.project2}} ). The
  additional need to look up the delimiter / set the delimiter when
  creating a domain is likely to make for a worse user experience than
  selecting one that is not different across installations.
 
  --Morgan
 
  On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick
  d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:
 
 
 
  On 03/06/2015 14:54, Henrique Truta wrote:
   Hi David,
  
   You mean creating some kind of delimiter attribute in the domain
   entity? That seems like a good idea, although it does not
  solve the
   problem Morgan's mentioned that is the global hierarchy delimiter.
 
  There would be no global hierarchy delimiter. Each domain would
  define
  its own and this would be carried in the JSON as a separate
  parameter so
  that the recipient can tell how to parse hierarchical names
 
  David
 
  
   Henrique
  
   Em qua, 3 de jun de 2015 às 04:21, David Chadwick
   d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk escreveu:
  
  
  
   On 02/06/2015 23:34, Morgan Fainberg wrote:
Hi Henrique,
   
I don't think we need to specifically call out that we
  want a
   domain, we
should always reference the namespace as we do today.
  Basically, if we
ask for a project name we need to also provide it's
  namespace (your
option #1). This clearly lines up with how we handle
  projects in
   domains
today.
   
I would, however, focus on how to represent the
  namespace in a single
(usable) string. We've been delaying the work on this
  for a while
   since
we have historically not provided a clear way to delimit the
   hierarchy.
If we solve the issue with what is the delimiter
  between domain,
project, and 

Re: [openstack-dev] [horizon][i18n] Ordering of PO files

2015-06-08 Thread Akihiro Motoki
Hi Thai,

At the moment, translation efforts are done on Transifex, so I think it is
not a big problem
even if the ordering of translatable strings is changed (as long as the
strings themselves are not changed).
Even when using the current extractor from Django, the order of
translatable strings is sometimes changed,
and the ordering changes after uploading and then downloading
translations to/from Transifex.

Akihiro

2015-06-09 8:28 GMT+09:00 Thai Q Tran tqt...@us.ibm.com:

 Hi folks,

 In the midst of shifting to angular, we are making use of babel for
 extracting messages. This would then allow us to write a custom extractor
 for angular templates.

 Here's the patch that compare PO files:
 https://review.openstack.org/#/c/189502/
 It looks worse than reality; if you compare the django vs babel
 makemessages, they are nearly identical, only the ordering is different.

 Which leads me to my next point. If the ordering of the translatable
 strings is not the same, how does that affect translation (if at all)?


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [pbr] pbr 1.1.0 release

2015-06-08 Thread Robert Collins
We are content to announce the release of:

pbr 1.1.0: Python Build Reasonableness

With source available at:

http://git.openstack.org/cgit/openstack-dev/pbr

For more details, please see the git log history below and:

http://launchpad.net/pbr/+milestone/1.1.0

Please report issues through launchpad:

http://bugs.launchpad.net/pbr

Changes in pbr 1.0.1..1.1.0
---

1ccd6cd Fix test case to be runnable with gnupg 2.1
9298ddc More explicit data_files install location docs
609488e Move CapturedSubprocess fixture to base
1dfe9ef Remove sphinx_config.init_values() manual call
bb83819 Updated from global requirements
e943f76 builddoc: allow to use fnmatch-style exclusion for autodoc
3cec7c8 doc: add some basic doc about pbr doc options
f6cd7b7 Add home-page into sample setup.cfg
946cf80 Make setup.py --help-commands work without testrepository
e2ac0e0 Add kerberos deps to build the kerberos wheel.

Diffstat (except docs and test files)
-

pbr/builddoc.py   | 24 +
pbr/packaging.py  | 24 +++--
pbr/testr_command.py  | 27 ---
test-requirements.txt |  7 +++--
tools/integration.sh  |  2 +-
10 files changed, 150 insertions(+), 100 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 6e4521c..97f3a79 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -0,0 +1,3 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
@@ -7 +10 @@ python-subunit>=0.0.18
-sphinx>=1.1.2,<1.2
+sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
@@ -12 +15 @@ testscenarios>=0.4
-testtools>=0.9.34
+testtools>=0.9.36,!=1.2.0



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend support

2015-06-08 Thread Gandhi, Kunal
Hi All

I have created an initial draft of the GSLB use cases and uploaded it to Google 
Doc 

https://docs.google.com/document/d/1016shVnPaK8l8HxMpjADiYtY2O94p4g7JxjuIA-qv7w/edit?usp=sharing

Please provide feedback on this doc. We can discuss more about it at tomorrow’s 
meeting - 9 AM PST on irc chat, #openstack-meeting-4.

Regards
Kunal

 On Jun 2, 2015, at 9:52 AM, Doug Wiegley doug...@parksidesoftware.com wrote:
 
 Hi all,
 
 The initial meeting logs can be found at 
 http://eavesdrop.openstack.org/meetings/gslb/2015/ , and we will be having 
 another meeting next week, same time, same channel.
 
 Thanks,
 doug
 
 
 On May 31, 2015, at 1:27 AM, Samuel Bercovici samu...@radware.com wrote:
 
 Good for me - Tuesday at 1600UTC
 
 
 -Original Message-
 From: Doug Wiegley [mailto:doug...@parksidesoftware.com] 
 Sent: Thursday, May 28, 2015 10:37 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend 
 support
 
 
 On May 28, 2015, at 12:47 PM, Hayes, Graham graham.ha...@hp.com wrote:
 
 On 28/05/15 19:38, Adam Harwell wrote:
 I haven't seen any responses from my team yet, but I know we'd be 
 interested as well - we have done quite a bit of work on this in the 
 past, including dealing with the Designate team on this very subject. 
 We can be available most hours between 9am-6pm Monday-Friday CST.
 
 --Adam
 
 https://keybase.io/rm_you
 
 
 From: Rakesh Saha rsahaos...@gmail.com 
 mailto:rsahaos...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Thursday, May 28, 2015 at 12:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and 
 backend support
 
  Hi Kunal,
  I would like to participate as well.
  Mon-Fri morning US Pacific time works for me.
 
  Thanks,
  Rakesh Saha
 
  On Tue, May 26, 2015 at 8:45 PM, Vijay Venkatachalam
  vijay.venkatacha...@citrix.com
  mailto:vijay.venkatacha...@citrix.com wrote:
 
  We would like to participate as well.
 
 
  Monday-Friday Morning US time works for me..
 
 
  Thanks,
 
  Vijay V.
 
 
  *From:*Samuel Bercovici [mailto:samu...@radware.com
  mailto:samu...@radware.com]
  *Sent:* 26 May 2015 21:39
 
 
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
  v.jain...@gmail.com mailto:v.jain...@gmail.com;
  do...@a10networks.com mailto:do...@a10networks.com
  *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
  API and backend support
 
 
  Hi,
 
 
  I would also like to participate.
 
  Friday is a non-working day in Israel (same as Saturday for most
  of you).
 
  So Monday- Thursday works best for me.
 
 
  -Sam.
 
 
  *From:*Doug Wiegley [mailto:doug...@parksidesoftware.com]
  *Sent:* Saturday, May 23, 2015 8:45 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
  v.jain...@gmail.com mailto:v.jain...@gmail.com;
  do...@a10networks.com mailto:do...@a10networks.com
  *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
  API and backend support
 
 
  Of those two options, Friday would work better for me.
 
 
  Thanks,
 
  doug
 
 
  On May 22, 2015, at 9:33 PM, ki...@macinnes.ie
  mailto:ki...@macinnes.ie wrote:
 
 
  Hi Kunal,
 
  Thursday/Friday works for me - early morning PT works best,
  as I'm based in Ireland.
 
  I'll find some specific times the Designate folks are
  available over the next day or two and provide some
  options.. 
 
  Thanks,
  Kiall
 
  On 22 May 2015 7:24 pm, Gandhi, Kunal
  kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com
  wrote:
 
  Hi All
 
 
  I wanted to start a discussion about adding support for GSLB
  to neutron-lbaas and designate. To be brief for folks who
  are new to GLB, GLB stands for Global Load Balancing and we
  use it for load balancing traffic across various
  geographical regions. A more detail description of GLB can
  be found at my talk at the summit this week here
  https://www.youtube.com/watch?v=fNR0SW3vj_s.
 
 
 

Re: [openstack-dev] [javascript] Linters

2015-06-08 Thread gustavo panizzo (gfa)



On 2015-06-06 03:26, Michael Krotscheck wrote:

Right now, there are several JS linters in use in OpenStack: JSHint,
JSCS, and Eslint. I really would like to only use one of them, so that I
can figure out how to sanely share the configuration between projects.

Can all those who have a strong opinion please stand up and state their
opinions?


what about https://bitbucket.org/dcs/jsmin/ ? its license is free


--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

keybase: http://keybase.io/gfa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-08 Thread Daniel Comnea
+ Operators

Much thanks in advance,
Dani



On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea comnea.d...@gmail.com wrote:

 Hi all,

 I'm running IceHouse (built using Fuel 5.1.1) on Ubuntu, where the dnsmasq
 version is 2.59-4.
 I have a very basic network layout where I have a private net which has 2
 subnets

  2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24
                                       |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24

 and I'm creating VMs via HEAT.
 What is happening is that sometimes I get duplicated entries in [1] and
 because of that the VM which was spun up doesn't get an IP.
 The dnsmasq processes are running okay [2] and I can't see anything
 special/wrong in them.

 Any idea why this is happening? Or are you aware of any bugs around this
 area? Do you see a problem with having 2 subnets mapped to 1 private-net?



 Thanks,
 Dani

 [1] /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 [2]

 nobody5664 1  0 Jun02 ?00:00:08 dnsmasq --no-hosts
 --no-resolv --strict-order --bind-interfaces --interface=tapc9164734-0c
 --except-interface=lo
 --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
 --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
 --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
 --dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
 --leasefile-ro --dhcp-authoritative
 --dhcp-range=set:tag0,192.168.110.0,static,86400s
 --dhcp-range=set:tag1,192.168.111.0,static,86400s --dhcp-lease-max=512
 --conf-file= --server=10.0.0.31 --server=10.0.0.32 --domain=openstacklocal


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-08 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Thomas

On 03/06/15 16:23, Thomas Goirand wrote:
 This came up at the TC meeting today, and I volunteered to
 provide an update from the discussion.
 I've just read the IRC logs. And there's one thing I would like to
 make super clear.
 
 We, ie: Debian & Ubuntu folks, are very much clear on what we want
 to achieve. The project has been maturing in our heads for like
 more than 2 years. We would like that ultimately, only a single set
 of packaging Git repositories exists. We already worked on *some*
 convergence during the last years, but now we want a *full*
 alignment.

Actually I think we agreed to work towards alignment on the dependency
chain for OpenStack at the summit; alignment on the core packages
(bits of OpenStack you actually install) is going to be much more
challenging and might be something that we can't get full alignment on
between Ubuntu and Debian - so let's not jump the gun on having that
as a core objective for the packaging team.

As I've stated before, we will have to have some way of managing that
delta from the outset of moving any core packaging under the OpenStack
umbrella.

The Ubuntu packaging is used widely by end-users and a number of other
projects including the OpenStack Puppet and Chef modules as well as
the Juju charms for OpenStack - any changes to structure and behaviour
are going to have much wider impact and I'm keen to ensure that we
consider the requirements of our end users before we consider any full
convergence objectives.

- -- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJVdVGyAAoJEL/srsug59jDs8YP/2dJJHaXpn++80fOGg3S2z3Q
0ifzNvgMQNPDGGLnfZ95Q/8iWXhqF03waNcd33MLI0+HKFtuBaTZ7P/W3ImDDpEE
B7TdwNGLzFN76Gz8Q9q0K+/6SxYGuwiWwlHrzJLaK4mEer83oojQ2v3Jxgw2SNx2
3UWamoFJm5o1s3Nh84QkpiXOQLZ51J1YjWXS0zz7gfqtgeiQvCr67l6dqJ/RaKGv
B1oWeV4nT+yAogWx/7VX5Vzywab5Vo1PmIRLAC6BX9mKEeqoFOAZC7bd+DUNNp/J
Rzg3KKQbSvXhL+xO0eByuWt4JE3EBJrI2bUz3LzutvWET5eJWnMY2gm1RmPcjguu
LFjNKF/Bmjgzk88kTF3k8kBgghhR0FKJyFfYi14j8RshgIh6ghtzfnHcyamsiaQe
8fitDq/k5p0u6F9zplJ2U4wYptV0COkwlcJTSOrvACQnziCOBM+k6VE2bcLqS5wC
kFnw/0I0iIycKFYxqvSBhR+fnWHQIXD9Swvh6EhF/VT6TQRKxIY2pOci/JYo+/Zv
rLTAfoKxmtlw8HOjifcLQSem7YoxF2O9qgUNqVkzg6g6PzpXAK4S8LnEq3on9v7p
wRJvFhmzfuo3xvtlsdvbTdea0VzRXCG0SM+XfEzB4FSvk4jhWI4TOuqiMxStq4SQ
QyZ3zzq0M0f5uEQ/xQZV
=zgNr
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-08 Thread Eugene Nikanorov
Yes, 50-100 networks received by the DHCP agent on startup could cause the 2nd
state report to be sent seconds after it should be sent.
In my tests, if I recall correctly, it was ~70 networks and the delay between
the 1st and 2nd state reports was around 25 seconds (while 5 sec was configured).

Eugene.

On Sun, Jun 7, 2015 at 11:11 PM, Kevin Benton blak...@gmail.com wrote:

 Well a greenthread will only yield when it makes a blocking call like
 writing to a network socket, file, etc. So once the report_state
 greenthread starts executing, it won't yield until it makes a call like
 that.

 I looked through the report_state code for the DHCP agent and the only
 blocking call it seems to make is the AMQP report_state call/cast itself.
 So even with a bunch of other workers, the report_state thread should get
 execution fairly quickly since most of our workers should yield very
 frequently when they make process calls, etc. That's why I assumed that
 there must be something actually stopping it from sending the message.

 Do you have a way to reproduce the issue with the DHCP agent?
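
(For anyone who wants to poke at this scheduling behaviour outside of Neutron,
here is a tiny self-contained eventlet sketch of the pattern being discussed -
a periodic reporter competing with many mostly-CPU-bound workers; none of this
is Neutron code and the numbers are arbitrary:)

    import time
    import eventlet

    def report_state(interval=5):
        while True:
            print('state report at %.1f' % time.time())
            eventlet.sleep(interval)

    def busy_worker():
        # A worker that only rarely hits a cooperative yield point keeps the
        # hub busy, so the reporter above can miss its 5 second deadline when
        # enough of these are running.
        while True:
            sum(range(200000))   # pure CPU work, no yield
            eventlet.sleep(0)    # the only yield point

    eventlet.spawn(report_state)
    pool = eventlet.GreenPool()
    for _ in range(100):
        pool.spawn_n(busy_worker)
    eventlet.sleep(30)           # let it run and watch the report intervals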

 On Sun, Jun 7, 2015 at 9:21 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 No, I think the greenthread itself doesn't do anything special; it's just that
 when there are too many threads, the state_report thread can go too long
 without getting control, since there is no prioritization of greenthreads.

 Eugene.

 On Sun, Jun 7, 2015 at 8:24 PM, Kevin Benton blak...@gmail.com wrote:

 I understand now. So the issue is that the report_state greenthread is
 just blocking and yielding whenever it tries to actually send a message?

 On Sun, Jun 7, 2015 at 8:10 PM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 Salvatore,

 By 'fairness' I meant the chances for the state report greenthread to get
 control. In the DHCP case, each network is processed by a separate greenthread,
 so the more greenthreads the agent has, the lower the chance that the report
 state greenthread will be able to report in time.

 Thanks,
 Eugene.

 On Sun, Jun 7, 2015 at 4:15 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 On 5 June 2015 at 01:29, Itsuro ODA o...@valinux.co.jp wrote:

 Hi,

  After trying to reproduce this, I'm suspecting that the issue is
 actually
  on the server side from failing to drain the agent report state
 queue in
  time.

 I have seen this before.
 I thought the scenario at that time was as follows:
 * a lot of create/update resource API calls issued
 * the rpc_conn_pool_size pool is exhausted by sending notifications, blocking
   further sending on the RPC side
 * the rpc_thread_pool_size pool is exhausted by waiting on the
   rpc_conn_pool_size pool for replying to RPC
 * receiving state_report is blocked because the rpc_thread_pool_size pool
   is exhausted


 I think this could be a good explanation couldn't it?
 Kevin proved that the periodic tasks are not mutually exclusive and
 that long process times for sync_routers are not an issue.
 However, he correctly suspected a server-side involvement, which could
 actually be a lot of requests saturating the RPC pool.

 On the other hand, how could we use this theory to explain why this
 issue tends to occur when the agent is restarted?
 Also, Eugene, what do you mean by stating that the issue could be in
 the agent's fairness?

 Salvatore



 Thanks
 Itsuro Oda

 On Thu, 4 Jun 2015 14:20:33 -0700
 Kevin Benton blak...@gmail.com wrote:

  After trying to reproduce this, I'm suspecting that the issue is
 actually
  on the server side from failing to drain the agent report state
 queue in
  time.
 
  I set the report_interval to 1 second on the agent and added a
 logging
  statement and I see a report every 1 second even when sync_routers
 is
  taking a really long time.
 
  On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
   Ann,
  
   Thanks for bringing this up.  It has been on the shelf for a
 while now.
  
   Carl
  
   On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando 
 sorla...@nicira.com
   wrote:
One reason for not sending the heartbeat from a separate
 greenthread
   could
be that the agent is already doing it [1].
The current proposed patch addresses the issue blindly - that
 is to say
before declaring an agent dead let's wait for some more time
 because it
could be stuck doing stuff. In that case I would probably make
 the
multiplier (currently 2x) configurable.
   
The reason for which state report does not occur is probably
 that both it
and the resync procedure are periodic tasks. If I got it right
 they're
   both
executed as eventlet greenthreads but one at a time. Perhaps
 then adding
   an
initial delay to the full sync task might ensure the first
 thing an agent
does when it comes up is sending a heartbeat to the server?
   
On the other hand, while doing the initial full resync, is the
 agent
   able
to process updates? If not perhaps it makes sense to have it
 down until
   it
finishes synchronisation.
  
   Yes, it can!  The agent prioritizes updates from RPC over full
 resync
   

[openstack-dev] [pulp] use of coinor.pulp rather than pulp

2015-06-08 Thread Robert Collins
We have coinor.pulp in our global-requirements, added by:

commit 9c4314cbaf77dadc5c6938d3ab38201b1e52c67d
Author: Yathiraj Udupi yud...@cisco.com
Date:   Wed Oct 23 13:18:43 2013 -0700

Added a requirement to COIN PULP LP Modeler module

This requirement is needed for a reference constraint solver implementation
included as part of the solver-scheduler blueprint.

Change-Id: I50177485d6d2034f2c15121cc2a56b720ff006fd
Implements: blueprint solver-scheduler


But this module is actually not PuLP at all: it's a namespace package
whose only reason to exist is to do:

from pulp import *
__doc__ = pulp.__doc__


in its __init__.py.

And it has an evil setup.py that downloads distribute (yes, that
temporary fork of setuptools) from the internet every time, and it
currently fails to install for me on Python 3.4...

I'd like to replace that with just PuLP
(https://pypi.python.org/pypi/PuLP/1.5.9), which is the actual package,
and which appears to be maintained and tested on current Pythons.
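(As a quick, illustrative smoke test of the real package -- not OpenStack
code, just a check that the import and solver work:)

    from pulp import LpMaximize, LpProblem, LpStatus, LpVariable

    # Maximize 3x + 2y subject to x + y <= 4 and x <= 3, with x, y >= 0.
    prob = LpProblem("smoke_test", LpMaximize)
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)
    prob += 3 * x + 2 * y        # objective
    prob += x + y <= 4           # constraint
    prob += x <= 3               # constraint
    prob.solve()
    print(LpStatus[prob.status], x.varValue, y.varValue)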

Any objections?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Swift as Glance backend in multi-region scenario

2015-06-08 Thread joehuang
Hello, Swifter,

It's often recommended to use a shared Glance service with shared Swift as the
backend to support image replication across multiple sites.

The question is: how many sites can Swift support while working as the image store?
Is there any limitation on multi-site image replication?
Can Swift be improved to support image replication across 15 or more sites?

Thanks in advance.

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-08 Thread Andreas Jaeger

On 06/04/2015 01:25 AM, Thomas Goirand wrote:

On 06/03/2015 08:07 PM, Andreas Jaeger wrote:

On 06/03/2015 03:57 PM, James Page wrote:

[...]
After some discussion with Thomas on IRC, I think this is more than
one effort; The skills and motivation for developers reviewing
proposed packaging changes needs to be aligned IMO - so I think it
makes sense to split the packaging teams between:

   Debian/Ubuntu + derivatives
   CentOS/Fedora/RHEL + derivatives


So, would this be two packaging teams - on DEB team and one RPM team?


Yes.


How would those two teams collaborate - or is no collaboration needed?


I don't see what there would be collaboration on. Even the thing we work
on is called differently (ie: specs in Red Hat, and source packages
in Debian/Ubuntu). Though if there are things on which we can converge
(like package names maybe?), it would be a good thing for final users.


The control files (specs, deb) are indeed different, the question is 
what they can share.


I see collaboration possibilities on package names and layout - like how
to split a package up - configuration files, defaults... This doesn't
need to be a first step but it is something to consider when packaging new
repositories.




How are you handling Debian and Ubuntu differences?


There's not so much difference already on the Python module
dependencies, as Ubuntu imports packages from Debian. They imported 100%
of my work for Juno, then diverged for Kilo since I didn't upload (as
Jessie was frozen). Now that's the only difference we need to fix, and
James is already working on that.

We already work on some packages together (like James Page and myself
are co-maintaining rabbitmq-server (he did most of it, shame on me)).

As for the core OpenStack packages, what I wrote already works on Ubuntu
if you rebuild the package, though it may need a few fixes here and there
for version dependencies (as Jessie was frozen after Trusty), but that's
easy to fix. The harder part will be Neutron and Nova. In case we don't
agree at all, we can still use dpkg-vendor --derives-from ubuntu to check
which distribution we are on, and act from there. There's already such a
mechanism in the debian/rules file of the Debian Nova package, for example
(to check the group name for libvirt, which differs between Debian and
Ubuntu). It's not in the Ubuntu version of the package, but I'm sure that
it will soon land there! :)

The only things we can't fix will be the Maintainer: and Uploaders:
fields, but even for those, maybe we can find a way so we don't have to
fix them either.

So, in fewer words: I'm sure we can find a way to reduce our differences
to absolute zero at the source package level, even though the result may
look different thanks to some conditionals where that can't be avoided.


Understood,
Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-08 Thread Kevin Benton
Well a greenthread will only yield when it makes a blocking call like
writing to a network socket, file, etc. So once the report_state
greenthread starts executing, it won't yield until it makes a call like
that.

I looked through the report_state code for the DHCP agent and the only
blocking call it seems to make is the AMQP report_state call/cast itself.
So even with a bunch of other workers, the report_state thread should get
execution fairly quickly since most of our workers should yield very
frequently when they make process calls, etc. That's why I assumed that
there must be something actually stopping it from sending the message.

Do you have a way to reproduce the issue with the DHCP agent?

On Sun, Jun 7, 2015 at 9:21 PM, Eugene Nikanorov enikano...@mirantis.com
wrote:

 No, I think greenthreads themselves don't do anything special; it's just
 that when there are too many threads, the state_report thread can't get
 control for too long, since there is no prioritization of greenthreads.

 Eugene.

 On Sun, Jun 7, 2015 at 8:24 PM, Kevin Benton blak...@gmail.com wrote:

 I understand now. So the issue is that the report_state greenthread is
 just blocking and yielding whenever it tries to actually send a message?

 On Sun, Jun 7, 2015 at 8:10 PM, Eugene Nikanorov enikano...@mirantis.com
  wrote:

 Salvatore,

 By 'fairness' I meant the chances for the state report greenthread to get
 control. In the DHCP case, each network is processed by a separate
 greenthread, so the more greenthreads the agent has, the lower the chances
 that the report state greenthread will be able to report in time.

 Thanks,
 Eugene.

 On Sun, Jun 7, 2015 at 4:15 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 On 5 June 2015 at 01:29, Itsuro ODA o...@valinux.co.jp wrote:

 Hi,

  After trying to reproduce this, I'm suspecting that the issue is
 actually
  on the server side from failing to drain the agent report state
 queue in
  time.

 I have seen this before.
 I thought the scenario at that time was as follows.
 * a lot of create/update resource API calls were issued
 * the rpc_conn_pool_size pool was exhausted by sending notifications,
   which further blocked the sending side of RPC.
 * the rpc_thread_pool_size pool was exhausted waiting on the
   rpc_conn_pool_size pool to reply to RPCs.
 * receiving state_report was blocked because the rpc_thread_pool_size
   pool was exhausted.


 I think this could be a good explanation couldn't it?
 Kevin proved that the periodic tasks are not mutually exclusive and
 that long process times for sync_routers are not an issue.
 However, he correctly suspected a server-side involvement, which could
 actually be a lot of requests saturating the RPC pool.

 On the other hand, how could we use this theory to explain why this
 issue tends to occur when the agent is restarted?
 Also, Eugene, what do you mean by stating that the issue could be in
 the agent's fairness?

 Salvatore



 Thanks
 Itsuro Oda

 On Thu, 4 Jun 2015 14:20:33 -0700
 Kevin Benton blak...@gmail.com wrote:

  After trying to reproduce this, I'm suspecting that the issue is
 actually
  on the server side from failing to drain the agent report state
 queue in
  time.
 
  I set the report_interval to 1 second on the agent and added a
 logging
  statement and I see a report every 1 second even when sync_routers is
  taking a really long time.
 
  On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
   Ann,
  
   Thanks for bringing this up.  It has been on the shelf for a while
 now.
  
   Carl
  
   On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando 
 sorla...@nicira.com
   wrote:
One reason for not sending the heartbeat from a separate
 greenthread
   could
be that the agent is already doing it [1].
The current proposed patch addresses the issue blindly - that is
 to say
before declaring an agent dead let's wait for some more time
 because it
could be stuck doing stuff. In that case I would probably make
 the
multiplier (currently 2x) configurable.
   
The reason for which state report does not occur is probably
 that both it
and the resync procedure are periodic tasks. If I got it right
 they're
   both
executed as eventlet greenthreads but one at a time. Perhaps
 then adding
   an
initial delay to the full sync task might ensure the first thing
 an agent
does when it comes up is sending a heartbeat to the server?
   
On the other hand, while doing the initial full resync, is the
 agent
   able
to process updates? If not perhaps it makes sense to have it
 down until
   it
finishes synchronisation.
  
   Yes, it can!  The agent prioritizes updates from RPC over full
 resync
   activities.
  
   I wonder if the agent should check how long it has been since its
 last
   state report each time it finishes processing an update for a
 router.
   It normally doesn't take very long (relatively) to process an
 update
   to a single router.
  
   I still would like to know why the thread to report state is being
   starved.  Anyone have any insight on 

Re: [openstack-dev] [puppet] openstacklib::db::sync proposal

2015-06-08 Thread Yanis Guenane


On 06/03/2015 02:32 PM, Martin Mágr wrote:

 On 06/02/2015 07:05 PM, Mathieu Gagné wrote:
 On 2015-06-02 12:41 PM, Yanis Guenane wrote:
 The openstacklib::db::sync[2] is currently only a wrapper around an exec
 that does the actual db sync; this allows any modification to the exec
 to be made in a single place. The main advantage IMO is that a
 contributor is provided with the same experience, which is not the case
 today across all modules.

 The amount of possible change to an exec resource is very limited. [1] I
 don't see a value in this change which outweighs the code churn and
 review load needed to put it in place. Unless we have real use cases or
 outrageously genius feature to add to it, I'm not in favor of this
 change.

 Furthermore, any change to the public interface of
 openstacklib::db::sync would require changes across all our modules
 anyway to benefit from this latest hypothetical feature. I think we are
 starting to nitpick over as little generic code we could possibly find
 to put in openstacklib.

 [1] https://docs.puppetlabs.com/references/latest/type.html#exec


 Wearing my consistency hat I must say I like this change. On the other
 hand I agree with Mathieu that delegating single resource from several
 modules to single module is necessary in this case.

 Regards,
 Martin

Mathieu, Martin, thank you for replying.

Regarding the usefulness of the wrapper around exec, I understand your
concerns. As a note, I was trying to follow the current use of
openstacklib.

If we look at openstacklib::db::postgresql[1] or
openstacklib::db::mysql[2], they are simple wrappers around Puppet
resources with no extra logic, but a common resource across all modules.

Also, to move forward on this topic I will submit one of the following
three propositions to a vote during our next meeting:

 1. Abandon this change
 2. Move everything to X::db::sync, but just run the exec there, no
openstacklib::db::sync
 3. Move forward with current implementation

Thanks again for the feedback,

[1]
https://github.com/stackforge/puppet-openstacklib/blob/master/manifests/db/postgresql.pp
[2]
https://github.com/stackforge/puppet-openstacklib/blob/master/manifests/db/mysql.pp

--
Yanis Guenane

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-08 Thread James Page

On 02/06/15 23:41, James E. Blair wrote:
 3) What are the plans for repositories and their contents?
 
 What repos will be created, and what will be in them.  When will
 new ones be created, and is there any process around that.

Having taken some time to think about this over the weekend, I'm keen
to ensure that any packaging repositories that move upstream are
packaging for OpenStack and other OpenStack umbrella projects.

Thomas - how many of the repositories under the pkg-openstack team in
Debian fall into this category - specifically projects under
/openstack or /stackforge namespaces?

I don't think we should be upstreaming packaging for the wider
OpenStack dependency chain - the Debian Python modules team is a much
larger team of interested contributors and better place for this sort
of work.

-- 
James Page
Technical Architect
OpenStack Engineering Team
james.p...@canonical.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-08 Thread Serg Melikyan
Hi Vahid,

Your diagrams perfectly describe what we have now and what we want
to have with TOSCA; I think it's time to start working on the
specification!

I've updated corresponding blueprint with new status and assignee:
https://blueprints.launchpad.net/murano/+spec/support-tosca-format


On Fri, Jun 5, 2015 at 3:43 AM, Vahid S Hashemian
vahidhashem...@us.ibm.com wrote:

 I'm attaching two sequence diagrams, one for package creation and import; and 
 the other for environment deployment.
 In each case I compare my understanding of how Murano performs the use case 
 for a HOT template, and how it can perform it for a TOSCA template.
 Please let me know what you think.

 Thanks.

 Regards,
 -
 Vahid Hashemian, Ph.D.
 Advisory Software Engineer, IBM Cloud Labs

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-08 Thread stuart . mclaren



On Jun 5, 2015, at 19:25, Bhandaru, Malini K malini.k.bhand...@intel.com 
wrote:

Continuing with David's example and the need to control access to a Swift
object that Adam points out,

How about using the Glance token from the glance-api service to glance-registry
but carrying along extra data in the call, namely user-id, domain, and
public/private information, so the object can be access controlled?

Alternately, an encapsulating token:

Glance-token [user-token] -- keeping it simple, only two levels.  This
protects from on-the-cusp expired user-tokens.
It could check the user quota before attempting the storage.


We already went over this type of design and determined it was sub-optimal. 
Instead we allow for passing the X-SERVICE-TOKEN, which is being passed in 
addition to the auth token.
Right now I do not believe that X-SERVICE-TOKEN is being used anywhere.


Support was recently added to the Swift server. (Glance-to-Swift support
should land in Liberty.)


In the near future we are looking to make all inter-service communication
(e.g. Nova-to-Glance) carry the service token.

This design was specifically implemented for the case you're describing.

In theory it would be possible to allow the quota check / read with only a
service token (and a proper policy.json), but writes would require the
user's token.
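(For illustration, a read request carrying both tokens might look like the
sketch below -- the endpoint and token values are placeholders, not a
statement about any particular deployment:)

    import requests

    # Hypothetical values, for illustration only.
    SWIFT_URL = 'http://swift.example.com:8080/v1/AUTH_tenant/images/obj'
    user_token = '...'       # the end user's token
    service_token = '...'    # the calling service's own token

    resp = requests.get(
        SWIFT_URL,
        headers={
            'X-Auth-Token': user_token,        # identifies the user
            'X-Service-Token': service_token,  # identifies the service
        })
    print(resp.status_code)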




Should user not have paid dues, Glance knows which objects to garbage collect!

Regards
Malini

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Friday, June 05, 2015 4:11 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Keystone] Glance and trusts

On 06/05/2015 10:39 AM, Dolph Mathews wrote:

On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
I did suggest another solution to Adam whilst we were in Vancouver, and
this mirrors what happens in the real world today when I order something
from a supplier and a whole supply chain is involved in creating the end
product that I ordered. This is not too dissimilar to a user requesting
a new VM. Essentially each element in the supply chain trusts the two
adjacent elements. It has contracts with both its customers and its
suppliers to define the obligations of each party. When something is
ordered from it, it trusts the purchaser, and on the strength of this,
it will order from its suppliers. Each element may or may not know who
the original customer is, but if it needs to know, it trusts the
purchaser to tell it. Furthermore the customer does not need to delegate
any of his/her permissions to his/her supplier. If we used such a system
of trust between Openstack services, then we would not need delegation
of authority and trusts as they are implemented today. It could
significantly simplify the interactions between OpenStack services.

+1! I feel like this is the model that we started with in OpenStack, and have 
grown additional complexity over time without much benefit.

We could roll Glance into Nova, too, and get the same benefit.  There is a
reason we have separate services.  Glance should not trust Nova for all
operations, just some.

David's example elides the fact that there are checks built into the supply
chain system to prevent cheating.







regards
David

On 03/06/2015 21:03, Adam Young wrote:

On 06/02/2015 12:57 PM, Mikhail Fedosin wrote:

Hello! I think it's a good time to discuss the implementation of trusts in
the Glance v2 and v3 APIs.

Currently we have two different situations during image creation where
our token may expire, which leads to unsuccessful operation.

The first is the connection between glance-api and glance-registry. In
this case we have a solution (https://review.openstack.org/#/c/29967/) -
the use_user_token parameter in glance-api.conf, but it is True by
default. If it's changed to False then glance-api will use its own
credentials to authenticate to glance-registry, which prevents many
possible issues with user token expiration. So, I'm interested in whether
there is any performance degradation if we change use_user_token to
False, and what the reasons are against making it the default value.
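(Purely as an illustration of the option being discussed -- the exact option
names and the credentials glance-api should use vary by release, so treat
this as a sketch rather than a reference configuration:)

    [DEFAULT]
    # Use glance's own credentials (below) instead of the incoming user
    # token when glance-api talks to glance-registry.
    use_user_token = False
    admin_user = glance
    admin_password = SECRET
    admin_tenant_name = service
    auth_url = http://keystone.example.com:35357/v2.0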

The second one is linked with Swift. The current implementation uploads
chunks one by one and requires authorization each time. This may lead to
problems: for example, we have to upload 100 chunks; after the 99th one
the token expires and Glance can't upload the last one, so it catches an
exception and tries to remove the stale chunks from storage. Of course
that will fail, because the token is not valid anymore, and that's why 99
garbage objects will be left in the storage.
With single-tenant mode Glance uses its own credentials to upload files,
so it's possible to create a new connection on each chunk upload, or to
catch the Unauthorized exception and recreate the connection only in that
case. But with multi-tenant mode there is no way to do this, because user
credentials are required. So it seems that trusts are the only solution
here.
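(A rough sketch of the single-tenant workaround described above -- the helper
names and chunk naming are made up, and this is not the Glance store code:)

    from swiftclient.exceptions import ClientException

    def upload_chunks(get_connection, container, name, chunks):
        # get_connection() is assumed to return a freshly authenticated
        # swiftclient.client.Connection using the service's own credentials.
        conn = get_connection()
        for index, chunk in enumerate(chunks):
            try:
                conn.put_object(container, '%s-%05d' % (name, index), chunk)
            except ClientException as exc:
                if exc.http_status != 401:
                    raise
                # Token expired mid-upload: re-authenticate, retry the chunk.
                conn = get_connection()
                conn.put_object(container, '%s-%05d' % (name, index), chunk)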

The problem with using trusts is that it would need to be created
per-user, and that is going to 

Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-08 Thread Sean Dague
On 06/05/2015 11:03 AM, David Kranz wrote:
 On 06/05/2015 07:32 AM, Sean Dague wrote:
 One of the things we realized at the summit was that we'd been working
 through a better future for the Nova API for the past 5 cycles, gotten
 somewhere quite useful, but had really done a poor job on communicating
 what was going on and why, and where things are headed next.

 I've written a bunch of English to explain it (which should be on the
 planet shortly as well) -
 https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/ (with
 lots of help from Ed Leaf, John Garbutt, and Matt Gillard on content and
 copy editing).

 Yes, this is one of those terrible mailing list posts that points people
 to read a thing not on the list (I appologize). But at 2700 words, I
 think you'll find it more comfortable to read not in email.

 Discussion is welcome here for any comments folks have. Some details
 were trimmed for the sake of it not being a 6000 word essay, and to make
 it accessible to people that don't have a ton of Nova internals
 knowledge. We'll do our best to field questions, all of which will be
 integrated into the eventual dev ref version of this.

 Thanks for your time,

 -Sean

 Thanks, Sean. Great writeup. There are two issues I think might need
 more clarification/amplification:
 
 1. Does the microversion methodology, and the motivation for true
 interoperability, imply that there needs to be a new version for every
 bug fix that could be detected by users of an api? There was back and
 forth about that in the review about the ip6 server list filter bug you
 referenced. If so, this is a pretty strong constraint that will need
 more guidance for reviewers about which kinds of changes need new
 versions and which don't.

Correct, Alex is working on this recommendation for the API WG. But,
yes, user-visible changes should be reflected in a version bump, so the
user knows that feature A is supported at version X.Y.

 2. What is the policy for making incompatible changes, now that
 versioning allows such changes to be made? If someone doesn't like
 the name of one of the keys in a returned dict and submits a change
 with a new microversion, how should that be evaluated? IIRC, this was an
 issue that inspired some dislike about the original v3 work.

That's a per project call. And it's about the tradeoffs in cost vs.
benefit. Key renames are probably not worth it unless they are really
confusing.

For instance, the Nova team is currently considering (though has in no
way decided) renaming the evacuate action to resurrect, because *so*
much confusion has been created by the name evacuate (and so many
incorrectly designed applications assume it will do things it doesn't)
that the pain of making people change might be worth it. Might.
We'll see how it plays out.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pulp] use of coinor.pulp rather than pulp

2015-06-08 Thread Robert Collins
On 8 June 2015 at 19:50, Robert Collins robe...@robertcollins.net wrote:
 We have coinor.pulp in our global-requirements, added by:

 I'd like to replace that with just PuLP
 (https://pypi.python.org/pypi/PuLP/1.5.9) which is the actual thing,
 and appears to be maintained and tested on current Pythons.
..
 Any objections?

https://review.openstack.org/189248

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] updating client requirements

2015-06-08 Thread Sergey Lukjanov
Re the sahara client - I will release it today once the requirements
update CR is merged.

On Fri, Jun 5, 2015 at 6:40 PM, Doug Hellmann d...@doughellmann.com wrote:

 We have a bunch of client libraries with out-dated requirements to
 varying degrees [1], and some of them are causing dependency conflicts
 in gate jobs. It would be good if project teams could prioritize
 those reviews so we can release updates early next week.

 Thanks,
 Doug

 [1]
 https://review.openstack.org/#/q/owner:%22OpenStack+Proposal+Bot%22+status:open+branch:master+project:%255Eopenstack/python-.*client,n,z

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-08 Thread Sean Dague
On 06/05/2015 10:34 AM, Chris Dent wrote:
 On Fri, 5 Jun 2015, Sean Dague wrote:
 
 Thanks for your time,
 
 Thanks for writing that up.
 
 I recognize that microversions exist and are as they are so I don't
 want to derail, but my curiosity was piqued:
 
 Riddle me this: If Microversions are kind of like content-negotiation
 (and we love content-negotiation) for APIs, why not just use content-
 negotiation instead of a header? Instead of:
 
X-OpenStack-Nova-API-Version: 2.114
 
 do (media type made up and not suggesting it as canonical):
 
Accept: application/openstack-nova-api-2.114+json
 
 or even
 
Accept: application/vnd.openstack-nova-api+json; version=2.114
 
 (and similar on the content-type header). There is precedent for
 this sort of thing in, for example, the github api.
 
 (I'll not[1] write about srsly, can we please stop giving Jackson the
 Absent so much freaking power.)
 
 [1] whoops

Realistically, this was in one of the early proposals. But one of the
challenges with content negotiation is that if we actually did this as
content negotiation then you would also need to support quality levels like:

Accept: application/vnd.openstack-nova-api+json; q=0.8; version=2.114,
application/vnd.openstack-nova-api+json; q=0.5; version=2.110,
application/vnd.openstack-nova-api+json; q=0.2; version=2.100

The complexities in thinking through the programming for that on the
client side were a little odd. Enough so, that with other concepts we
were trying to get out there, this seemed like it was just going to
confuse everyone quite a bit. Better to err on the explicit side.
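(For comparison, the header form ends up being one explicit line in a
client -- the endpoint and token below are placeholders:)

    import requests

    NOVA_URL = 'http://nova.example.com:8774/v2.1'   # placeholder endpoint
    TOKEN = '...'                                     # placeholder token

    resp = requests.get(
        NOVA_URL + '/servers',
        headers={
            'X-Auth-Token': TOKEN,
            # Opt in to exactly the contract the client was written against.
            'X-OpenStack-Nova-API-Version': '2.114',
        })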

It also meant that the fallback case semantics were less clear, because
existing clients would just be requesting application/json, which isn't
application/vnd.openstack-nova-api+json at all.

The Jackson the absent case is kind of important, because we were told
by public cloud providers that turning off v2.0 is not an option for
them in any foreseeable future. Major upgrades are hard, especially when
you have a lot of people using an interface. See Python 3 for how to
do major version bumps in a way that no one moves onto your new stuff in
a timely manner. We're trying to explicitly avoid that by allowing
people to move forward on their own timetable, and actually not have to
move everything at once.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] Linters

2015-06-08 Thread Matthias Runge
On Fri, Jun 05, 2015 at 07:26:24PM +, Michael Krotscheck wrote:
 Right now, there are several JS linters in use in OpenStack: JSHint, JSCS,
 and Eslint. I really would like to only use one of them, so that I can
 figure out how to sanely share the configuration between projects.
 
 Can all those who have a strong opinion please stand up and state their
 opinions?
 
 Michael

jshint: still has a non-free license [1]

eslint seems to require signing a CLA if we come across an issue and
want to fix it.

jscs seems to be the utility the cool kids are using. As expected, it
requires Node. It has seen 3 releases within 3 months, or 5 in 4 months.
My question here would be: how are we going to handle changes here over
a release (and how do we keep this running on the gate for released
versions)?

Matthias






[1] https://github.com/jshint/jshint/blob/master/src/jshint.js#L19

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Matthias Runge mru...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] RabbitMQ deprecation

2015-06-08 Thread Bogdan Dobrelya
 I've spent some time trying to eliminate the number of deprecation warnings
 we have in preparation for our upgrade to Kilo.  One of the ones that I got
 stuck on is the nova::rabbitmq::rabbitmq_class parameter.
 
 If this parameter is specified, and it is by default, then the class will
 include the rabbitmq class given by name, but it also sets up useful
 dependencies between nova and RabbitMQ.  For example, it will ensure that
 RabbitMQ is running before nova is started.
 
 It seems to me that this needs to be split into two sets of functionality.

If you specify rabbitmq_class=false [0], the puppet-nova module will
only create the required RabbitMQ entities, like the user and vhost,
but will not install and configure RabbitMQ itself. This is the
expected and desired behavior for the Nova and other Puppet
modules for OpenStack, AFAIK.

What would be the pros if the split were introduced? I can see only cons:
increased complexity in the setting of parameters.

 
 *rabbitmq_class*
 * Not deprecated
 * if provided then it will be used to set up dependencies.
 * Defaults to rabbitmq.
 * This should perhaps just be hard coded to be rabbitmq now.
 
 *manage_rabbitmq*
 * Deprecated
 * If specified, configures and sets up rabbitmq, including the class.
 * Defaults to ???
 
 Thoughts?

[0]
https://github.com/stackforge/puppet-nova/blob/master/manifests/rabbitmq.pp#L32-L38

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] HCF: branching is in progress - do not merge anything

2015-06-08 Thread Aleksandra Fedorova
Hi, everyone,

The Fuel Devops team is working on stable/6.1 branching. Please don't
merge anything into the fuel-* repositories until the devops team confirms
that the branches are ready and the master branch is open.

For any questions please use #fuel-devops IRC channel.

-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Oslo][RabbitMQ][Shovel] Deprecate mirrored queues from HA AMQP cluster scenario

2015-06-08 Thread Bogdan Dobrelya
Hello, stackers.

I'd like to start a poll about deprecating RabbitMQ mirrored queues for
the HA layout and replacing AMQP clustering with shovel [0], [1]. I guess
federation would not be a good option, but let's consider it as well.

Why must this be done? The answer is that the rabbit cluster cannot
detect and survive micro-outages well: it just ends up with some queues
stuck and, as a result, the rabbitmqctl control plane hangs completely
unresponsive (until the rabbit node erases and recovers its cluster
membership). These outages can be caused either by the network *or* by
CPU load spikes. See, for example, this bug in the Fuel project [2] and
this mail thread [3].

So, let's please vote and discuss.

But the questions also are:
a) Would changes in Oslo.messaging be required as well in order to
support the underlying AMQP layer architecture changes?
b) Are there any volunteers to do this research for the
Oslo.messaging AMQP rabbit driver?

PS. Note, I'm not bringing up RabbitMQ versions here, as the issue seems
unresolved for any of the existing ones. This seems to be a generic
Erlang Mnesia clustering issue rather than something that could simply be
fixed in RabbitMQ, unless Mnesia-based clustering were dropped
completely ;)

[0] https://www.rabbitmq.com/shovel-dynamic.html
[1] https://www.rabbitmq.com/shovel.html
[2] https://bugs.launchpad.net/fuel/+bug/1460762
[3] https://groups.google.com/forum/#!topic/rabbitmq-users/iZWokxvhlaU

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Cancelling today's team meeting - 06/08/2015

2015-06-08 Thread Nikolay Makhotkin
Team,

We decided to cancel today’s meeting because a number of key members won’t
be able to attend.

-- 
Best Regards,
Nikolay
@Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-08 Thread Sean Dague
On 06/05/2015 10:56 AM, Neil Jerram wrote:
 On 05/06/15 12:32, Sean Dague wrote:
 https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
 
 This is really informative and useful, thanks.
 
 A few comments / questions, with bits of your text in quotes:
 
 Even figuring out what a cloud could do was pretty terrible. You could
 approximate it by listing the extensions of the API, then having a bunch
 of logic in your code to realize which extensions turned on or off
 certain features, or added new data to payloads.
 
 I guess that's why the GNU autoconf/configure system has always advised
 testing for particular wanted features, instead of looking at versions
 and then relying on carnal knowledge to know what those versions imply.
  Is that feature-testing-based approach impractical for OpenStack?

It shouldn't be carnal knowledge. If we are talking about building an
ecosystem we need to be really explicit about this version gives you
this contract. We have protocol versioning all through our internal RPC
mechanisms, because if we didn't you'd need to write an order of
magnitude more code to have an application work.

You can always give your user a terrible contract, and make them do a
ton of extra work on their side to figure out what's available. See...
present day. But the firm belief is we should do better than that if we
want to encourage an application ecosystem.

 Then she runs her code against another cloud, which runs a version
 of Nova that predates this change. She's now effectively gone back in
 time. Her code now returns thousands of records instead of 1, and she's
 terribly confused why. She also has no way to figure out if random cloud
 Z is going to support this feature or not. So the only safe thing to do
 is implement the filtering client side instead, which means the server
 side filtering actually gained her very little. It's not something she
 can ever determine will work ahead of time. It's an API that is
 untrustworthy, so it's something that's best avoided.
 
 Except that she still has to do all this anyway - i.e. write the
 client-side filtering, and figure out when to use it instead of
 server-side - even if there was an API version change accompanying the
 filtering feature.  Doesn't she?

Not if she's comfortable with a minimum supported version in her code.
People abandon old systems all the time. No one is still writing
IE6-supporting JavaScript code. The important thing is that an
untrustworthy API, i.e. one that could return different results seemingly
at random, is terrible. It makes developers pull out their hair and curse
your name, and figure out if they can get off your platform entirely.

Really stable contracts are a key feature in building an ecosystem above
you. It's why your house has a concrete slab under it, not just a pile
of sand.
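(Concretely, "a minimum supported version in her code" can be checked up
front against the version document -- a sketch only, with a placeholder
endpoint and the min_version/max_version fields from the microversion
design:)

    import requests

    doc = requests.get('http://nova.example.com:8774/v2.1/').json()
    version = doc['version']
    # Older clouds report empty strings here, which is itself a signal.
    print(version['min_version'], version['max_version'])

    REQUIRED = (2, 3)   # the microversion this hypothetical app targets
    maximum = tuple(int(p) for p in version['max_version'].split('.'))
    if maximum < REQUIRED:
        raise SystemExit('this cloud is too old for this application')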

 The difference is just between making the switch based on a version
 number, and making it based on detected feature support.
 
 If you want features in the 2.3 microversion, ...
 
 I especially appreciate this part, as I've been seeing all the chat
 about microversions go past, and not really understanding it.
 
 FWIW, though - and maybe this is just me - when I hear microversion,
 I'm thinking of the Z in an X.Y.Z version number.  (With X = major
 and Y = minor.)  So it's counterintuitive for me that 2.3 is a
 microversion; it just sounds like a perfectly normal major/minor version
 number.  Are 2.0 and 2.1 microversions too?

So, we used the word microversion in contrast to previous versioning
in OpenStack which required a new endpoint. The reality of the
implementation is we're working with a monotonically increasing counter.
Y is going to increase forever.

This *is not* semver. There is no semantic meaning / value judgement on
each change. It is better thought about as a sequence number.

 But this is just bikeshedding really, so feel free to ignore...
 
 without building a giant autoconf-like system
 
 Aha, so you probably did consider that option, then. :-)

Just consider the thought exercise of an autoconf-like system for the
API. In order to fully explore an API you are looking at basically
running something like Tempest against it. So you would need to run
about 20K API calls and build and destroy 100 guests, volumes, objects,
etc. That kind of run would probably take a couple of hours. When you
are building against a public cloud, every API call has a cost.

Also, it's a service. It can be upgraded at any time. Without something
clear like a microversion declaration, you'd never know the cloud
updated, worked differently, and now your application fails to work.

Also, find me more than a handful of application developers that like
writing autoconf tests. :)

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Fuel][Oslo][RabbitMQ][Shovel] Deprecate mirrored queues from HA AMQP cluster scenario

2015-06-08 Thread Bogdan Dobrelya
 RabbitMQ team member here. 

Thank you for a quick response, Michael!

 
 Neither Shovel nor Federation will replace mirroring. Shovel moves messages
 from a queue to an exchange (within a single node or between remote nodes 
 and/or clusters).
 It doesn't replicate anything.

Yes, the idea was not just to replace it, but to redesign the OpenStack
libraries to use cluster-less messaging as well. They should assume that
some messages from RPC conversations may be lost, and that messages
aren't synced between the different AMQP nodes specified in the config of
OpenStack services (rabbit_hosts=).

 
 Federation has two parts to it:
 
  * Queue federation: does not replicate; it distributes messages from a
 single logical queue between N nodes or clusters when there are no local
 consumers to consume them.
  * Exchange federation replicates a stream of messages going through an 
 exchange.
As messages are consumed upstream, downstream has no way of knowing about 
 it.
 
 
 The right thing to do here is introduce timeouts to rabbitmqctl, which was 
 99% finished
 in the past but some RabbitMQ team members felt it should produce more 
 detailed
 error messages, which extended the scope of the change significantly.
 
 
 While Mnesia indeed needs to be replaced to introduce AP (as in CAP) style 
 mirroring,
 the issue you're bringing up here has nothing to do with Mnesia.
 Mnesia is not used by rabbitmqctl, and it is not used to store messages.
 It's a rabbitmqctl
 issue, and potentially a hint that you may want to reduce net_ticktime value 
 (say, to 5-10 seconds)
 to make queue master unavailability detected faster.
 
 

Thank you, I updated the bug comments [0]. We will test this option as well.

[0] https://bugs.launchpad.net/fuel/+bug/1460762/comments/23
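(For reference, that tuning goes in the Erlang kernel section of the RabbitMQ
configuration file; the value below is only an example:)

    [{kernel, [{net_ticktime, 10}]}].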

 
 1. http://www.rabbitmq.com/nettick.html
 --  
 MK  
 
 Staff Software Engineer, Pivotal/RabbitMQ  


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] Team meeting Tuesday, June 9th @ 1600 UTC

2015-06-08 Thread Paul Michali
Will hold weekly meetings to discuss VPNaaS topics.  Please check out the
meeting page for proposed agenda, time, and IRC channel (
https://wiki.openstack.org/wiki/Meetings/VPNaaS).

Also, there is an Etherpad for VPN info, where we hope to collect use-cases
and workflow information to (hopefully) create a shared understanding of
the various VPN proposals. Please contribute to the page (
https://etherpad.openstack.org/p/vpn-flavors).

Regards,

PCM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New meeting time for nova-scheduler - vote!

2015-06-08 Thread Ed Leafe
Please note that if you are interested in the scheduler meeting times, today is 
the last day to get in your vote. We will be choosing a new time at tomorrow's 
meeting, based on the results of the vote.

 On Jun 2, 2015, at 4:09 PM, Ed Leafe e...@leafe.com wrote:
 
 Signed PGP part
 Hi,
 
 The current meeting time of 1500 UTC on Tuesday is presenting some
 conflicts, so we're trying to come up with an alternative. I've
 created a Doodle for anyone who is interested in attending these
 meetings to indicate what times work and which don't.
 
 There is no way to say Tuesdays on Doodle, so I created it for next
 Tuesday and Wednesday, but please reply with your general availability
 for that time every week. The link is:
 
 http://doodle.com/akuv4b4ftv68q3me
 
 Once we have a few acceptable times, we'll see which IRC rooms are
 available and update the calendar.
 
 --
 
 -- Ed Leafe
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Tricircle] Polling for weekly team meeting

2015-06-08 Thread Zhipeng Huang
Hi Iben, if you are interested don't forget to participate in the time poll :)
On Jun 5, 2015 11:03 PM, Rodriguez, Iben iben.rodrig...@spirent.com
wrote:

  Hello Joe,

 This is very cool. A few questions...

 Tricircle seems to deal mostly with the use of many environments and not
 their setup or configuration, is that right?

 We have a few multi-site projects already like pharos and IPv6. Can we
 make an assessment to understand where each one fits in the workflow of the
 platform lifecycle?

 Do the underclouds need to be similar for Tricircle to function properly?
 What if some OpenStack deployments are different from others? Are we just
 looking for API compatibility, or will certain features or functions also
 be needed?

 Many thanks for sharing this project. It is good to understand the
 concepts.

 I b e n
 4087824726

 From: Zhipeng Huang
 Sent: Friday, June 5, 07:44
 Subject: [opnfv-tech-discuss] [openstack-dev][Tricircle] Polling for
 weeklyteam meeting
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: opnfv-tech-disc...@lists.opnfv.org

 Hi All,

 The Tricircle Project has been on stackforge for a while, without much
 activity.

 Now we will completely restructure the code base to make it more
 community-open-source friendly and hopefully less corporate-PoC looking :P

 In the meantime, I want to call on people who are interested in this
 project to participate in a time poll for our weekly meeting:

 http://doodle.com/d7fvmgvrwv8y3bqv

 I would recommend UTC 13:00 because it is one of the few time periods when
 all the continents are able to be awake (tougher on the US though).

 Please find more info on Tricircle at
 https://github.com/stackforge/tricircle (a new code base will come in the
 next few weeks). It mainly aims to address OpenStack deployment across
 multiple sites.

 Also depending on OPNFV Multisite Project's decision, Tricircle might be
 one of the upstream projects of Multisite, which aims at developing
 requirements for NFV multi-NFVI-PoPs VIM deployment. More info :
 https://wiki.opnfv.org/multisite  https://www.opnfv.org/arno

 --

 Zhipeng (Howard) Huang

 Standard Engineer

 IT Standard  Patent/IT Prooduct Line

 Huawei Technologies Co,. Ltd

 Email: huangzhip...@huawei.com

 Office: Huawei Industrial Base, Longgang, Shenzhen

 (Previous)

 Research Assistant

 Mobile Ad-Hoc Network Lab, Calit2

 University of California, Irvine

 Email: zhipe...@uci.edu

 Office: Calit2 Building Room 2402

 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

  Spirent Communications e-mail confidentiality.
 
 This e-mail contains confidential and / or privileged information
 belonging to Spirent Communications plc, its affiliates and / or
 subsidiaries. If you are not the intended recipient, you are hereby
 notified that any disclosure, copying, distribution and / or the taking of
 any action based upon reliance on the contents of this transmission is
 strictly forbidden. If you have received this message in error please
 notify the sender by return e-mail and delete it from your system.

 Spirent Communications plc
 Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN, United
 Kingdom.
 Tel No. +44 (0) 1293 767676
 Fax No. +44 (0) 1293 767677

 Registered in England Number 470893
 Registered at Northwood Park, Gatwick Road, Crawley, West Sussex, RH10
 9XN, United Kingdom.

 Or if within the US,

 Spirent Communications,
 27349 Agoura Road, Calabasas, CA, 91301, USA.
 Tel No. 1-818-676- 2300

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tracking bugs in apps on launchpad

2015-06-08 Thread Dmitro Dovbii
Hi, Serg!
Nice!
Why only bug tracking?
I'm looking forward to the first blueprint submitting :)

Regards,
Dmytro Dovbii

2015-06-08 16:34 GMT+03:00 Serg Melikyan smelik...@mirantis.com:

 We originally created https://launchpad.net/murano-applications, and I
 misspelled the address in my first e-mail, but after the mistake was
 pointed out to me I decided to create a new project with the URL
 https://launchpad.net/murano-apps, which corresponds to the repository
 name.

 On Mon, Jun 8, 2015 at 4:01 PM, Serg Melikyan smelik...@mirantis.com
 wrote:
  Hi folks,
 
  We used to track bugs in the applications published in the
  openstack/murano-apps repository directly on launchpad.net/murano, but
  sometimes it's really inconvenient:
 
  * applications are not a part of murano
  * it's hard to properly prioritize bugs, because a critical bug for an
  app is not critical at all for murano
 
  We created the murano-apps project on launchpad some time ago, but never
  truly used it. I propose to move the existing bugs for applications to
  https://launchpad.net/murano-apps and use this project as the place for
  tracking bugs in the openstack/murano-apps repository. Folks, what do you
  think about that?
  --
  Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
  http://mirantis.com | smelik...@mirantis.com



 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] RTNETLINK permission denied on setting IPv6 on br-ex

2015-06-08 Thread Sean M. Collins
On Fri, Jun 05, 2015 at 04:34:18PM EDT, Angela Smith wrote:
 We have been having this issue with devstack installation since Tuesday 6/2.  
 On trying to add IPv6 address to br-ex, it fails with permission denied.

I've been seeing instances where br-ex is created, but the link state is
down, which really messes things up when you try and add the FIXED_RANGE route.

I'm looking into that issue, perhaps it will shed light on yours as
well.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-08 Thread Adam Young

On 06/06/2015 06:00 AM, David Chadwick wrote:

In order to do this fully, you will need to work out what all the
possible supply chains are in OpenStack, so that when any customer
starts any chain of events, there is sufficient information in the
message passed to the supplier that allows that supplier to order from
its supplier. This has to be ensured all the way down the chain, so that
important information that one supplier needs was not omitted by a
previous supplier higher up the chain. I suspect that the name of the
initial requestor (at least) will need to be passed all the way down the
chain.
Yes, I think so.  This is in keeping with how I understand we need to 
unify delegation across Keystone constructs.


1.  Tokens live too long.  They should be short and ephemeral, and if a
user needs a new one to complete an action, getting one should be
trivial.  They should be bound to the endpoint that is the target of the
operation.  5 minutes makes sense as a lifetime, as that is essentially
"now" when you factor in clock skew. Revocations on tokens do not make
sense.


2.  Delegations are long-lived affairs.  If anything is going to take
longer than the duration of the token, it should be in the context of a
delegation, and the user should re-authenticate to prove identity.
Delegations need to be focused on workflow, not all operations the user
can do, which means that the Glance case discussed here is a good start
at documenting "what do you need to get this job done?"



We need to keep it light (not fill up a database) for normal operations, 
but maintain the chain of responsibility on a given operation.  Trusts 
are the closest thing we have to this model.


In the supply chain example, if one company exchanges a good with
another company, they don't care about the end user, because there is no
real connection between the good and the user yet; if there is a
problem, the original manufacturer can produce another car identical to
the first and replace it without the user being any the wiser.


Contrast this with picking kids up from daycare:  as a parent, I have to 
sign a form saying that my mother-in-law will be picking up the kids on 
a specific day.  My Mother-in-law is not authorized to sign a form that 
will allow her friend to pick the kids up.  My kids are very, very 
specific to me, and should be carefully handed off from approved 
supervisor to approved supervisor.


Fetching an image from Glance may well be a casual or a protected
operation, depending on the image.


Casual if it is a global image: anyone in the world can do it, so no big 
deal.


Protected if, for example, the image is pre-populated with an enrollment 
secret.  Only the owners should be able to get at it






regards

David

On 06/06/2015 03:25, Bhandaru, Malini K wrote:

Continuing with David’s example and the need to control access to a
Swift object that Adam points out,

  


How about using the Glance token from glance-API service to
glance-registry but carry along extra data in the call, namely user-id,
domain, and public/private information, so the object can be access
controlled.

  


Alternately and encapsulating token

  


Glance-token wrapping the user-token -- keeping it simple, only two levels.  This
protects against user tokens that are on the cusp of expiring.

Could check user quota before attempting the storage.

  


Should the user not have paid their dues, Glance knows which objects to garbage
collect!

  


Regards

Malini

  


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Friday, June 05, 2015 4:11 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Keystone] Glance and trusts

  


On 06/05/2015 10:39 AM, Dolph Mathews wrote:

  


 On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick
 d.w.chadw...@kent.ac.uk wrote:

 I did suggest another solution to Adam whilst we were in Vancouver, and
 this mirrors what happens in the real world today when I order something
 from a supplier and a whole supply chain is involved in creating the end
 product that I ordered. This is not too dissimilar to a user requesting
 a new VM. Essentially each element in the supply chain trusts the two
 adjacent elements. It has contracts with both its customers and its
 suppliers to define the obligations of each party. When something is
 ordered from it, it trusts the purchaser, and on the strength of this,
 it will order from its suppliers. Each element may or may not know who
 the original customer is, but if it needs to know, it trusts the
 purchaser to tell it. Furthermore the customer does not need to delegate
 any of his/her permissions to his/her supplier. If we used such a system
 of trust between Openstack services, then we would not need delegation
 of authority and trusts as they are implemented today. It could
 significantly simplify the interactions between OpenStack services.

  



Re: [openstack-dev] Barbican : Retrieval of the secret in text/plain format generated from Barbican order resource

2015-06-08 Thread John Wood
Hello Asha,

Barbican does not yet support converting secrets from one format to 
another. If you have thoughts on desired conversions, however, please mention 
them in this thread, or else consider mentioning them in our weekly IRC meeting 
(freenode #openstack-meeting-alt at 3pm CDT).

Thanks,
John
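For what it's worth, a minimal sketch (assuming the Python requests and 
cryptography libraries, with a placeholder endpoint and token rather than the 
official client) of consuming a generated AES key as raw bytes, via the 
octet-stream retrieval discussed below, instead of converting it to text:

    # sketch only: placeholder URL and token, not the official barbican client
    import os
    import requests
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    resp = requests.get(
        'https://barbican.example.com:9311/v1/secrets/SECRET_ID/payload',
        headers={'Accept': 'application/octet-stream',
                 'X-Auth-Token': 'PLACEHOLDER_TOKEN'},
        verify=False)                      # mirrors the -k in the curl examples
    key = resp.content                     # 32 raw bytes for a 256-bit key

    iv = os.urandom(16)
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv),
                    backend=default_backend())
    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(b'exactly sixteen.') + encryptor.finalize()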



From: Asha Seshagiri asha.seshag...@gmail.com
Date: Monday, June 8, 2015 at 12:17 AM
To: John Wood john.w...@rackspace.com
Cc: openstack-dev openstack-dev@lists.openstack.org, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, Reller, Nathan S. nathan.rel...@jhuapl.edu, 
Adam Harwell adam.harw...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com
Subject: Re: Barbican : Retrieval of the secret in text/plain format generated 
from Barbican order resource

Thanks John for your response.
I am aware that application/octet-stream works for the retrieval of the secret.
We are utilizing the key generated by Barbican in our AES encryption 
algorithm. Hence we wanted the response in text/plain format from Barbican, 
since the AES encryption algorithm would need the key in ASCII format, which should 
be either 16, 24 or 32 bytes.

The AES encryption algorithms would not accept the binary format, and even if 
the binary is converted into ASCII, encoding fails for a few of the keys 
because some characters exceed the range of ASCII, and for some keys the 
encoded length exceeds 32 bytes, which is the maximum key length for AES 
encryption.

We would like to know the reason why Barbican does not support retrieving, in 
text/plain format, a secret generated from an order resource whose payload 
content type is text/plain.

Thanks and Regards,
Asha Seshagiri

On Sun, Jun 7, 2015 at 11:43 PM, John Wood 
john.w...@rackspace.com wrote:
Hello Asha,

The AES type key should require an application/octet-stream Accept header to 
retrieve the secret as it is a binary type. Please replace ‘text/plain’ with 
‘application/octet-stream’ in your curl calls below.

Thanks,
John


From: Asha Seshagiri asha.seshag...@gmail.com
Date: Friday, June 5, 2015 at 2:42 PM
To: openstack-dev openstack-dev@lists.openstack.org
Cc: Douglas Mendizabal douglas.mendiza...@rackspace.com, John Wood 
john.w...@rackspace.com, Reller, Nathan S. nathan.rel...@jhuapl.edu, Adam 
Harwell adam.harw...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com
Subject: Re: Barbican : Retrieval of the secret in text/plain format generated 
from Barbican order resource

Hi All ,

I am currently working on use cases for database and file encryption. It is 
really important for us to know, since my encryption use case would be using the 
key generated by Barbican through the order resource as the key.
The encryption algorithms would not accept the binary format, and even if it is 
converted into ASCII, encoding fails for a few of the keys because some 
characters exceed the range of ASCII, and for some keys the encoded length 
exceeds 32 bytes, which is the maximum key length for AES encryption.
It would be great if someone could respond to the query, since it would block 
my further investigations on encryption use cases using Barbican.

Thanks and Regards,
Asha Seshagiri


On Wed, Jun 3, 2015 at 3:51 PM, Asha Seshagiri 
asha.seshag...@gmail.commailto:asha.seshag...@gmail.com wrote:
Hi All,

Unable to retrieve the secret in text/plain format  generated from Barbican 
order resource

Please find the curl command and responses for

Order creation with payload content type as text/plain :

[root@barbican-automation ~]# curl -X POST -H 'content-type:application/json' 
-H X-Auth-Token:9b211b06669249bb89665df068828ee8 \
 -d '{"type": "key", "meta": {"name": "secretname2", "algorithm": "aes", 
 "bit_length": 256, "mode": "cbc", "payload_content_type": "text/plain"}}'  -k 
 https://169.53.235.102:9311/v1/orders

{order_ref: 
https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680}

Retrieval of the order by ORDER ID in order to get to know the secret generated 
by Barbican

[root@barbican-automation ~]# curl -H 'Accept: application/json' -H 
X-Auth-Token:9b211b06669249bb89665df068828ee8 \
 -k  https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
{status: ACTIVE, sub_status: Unknown, updated: 2015-06-03T19:08:13, 
created: 2015-06-03T19:08:12, order_ref: 
https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680;, 
secret_ref: 
https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e;, 
creator_id: cedd848a8a9e410196793c601c03b99a, 

Re: [openstack-dev] [Openstack-operators] [Fuel][Oslo][RabbitMQ][Shovel] Deprecate mirrored queues from HA AMQP cluster scenario

2015-06-08 Thread Michael Klishin
On 8 June 2015 at 15:10:15, Davanum Srinivas (dava...@gmail.com) wrote:
 I'd like to bring up a poll about deprecating the RabbitMQ mirrored 
 queues for the HA layout and replacing AMQP clustering with shovel [0], 
 [1]. I guess federation would not be a good option, but let's 
 consider it as well.

RabbitMQ team member here. 

Neither Shovel nor Federation will replace mirroring. Shovel moves messages
from a queue to an exchange (within a single node or between remote nodes 
and/or clusters).
It doesn't replicate anything.

Federation has two parts to it:

 * Queue federation: does not replicate; it distributes messages from a single
   logical queue between N nodes or clusters, when there are no local consumers
   to consume them.
 * Exchange federation replicates the stream of messages going through an
   exchange. As messages are consumed upstream, downstream has no way of knowing
   about it.

 Why must this be done? The answer is that the rabbit cluster cannot 
 detect and survive micro outages well, and just ends up with some 
 queues stuck and, as a result, the rabbitmqctl control plane hung and 
 completely unresponsive (until the rabbit node is erased and has recovered 
 its cluster membership). These outages could be caused either by the network 
 *or* by CPU load spikes -- see, for example, this bug in the Fuel project [2] 
 and this mail thread [3].

The right thing to do here is introduce timeouts to rabbitmqctl, which was 99% 
finished
in the past but some RabbitMQ team members felt it should produce more detailed
error messages, which extended the scope of the change significantly.

 This seems to be a generic Erlang Mnesia clustering issue rather 
 than something that could just be fixed in RabbitMQ, unless the 
 Mnesia-based clustering were dropped completely ;)

While Mnesia indeed needs to be replaced to introduce AP (as in CAP) style 
mirroring, the issue you're bringing up here has nothing to do with Mnesia.
Mnesia is not used by rabbitmqctl, and it is not used to store messages.
It's a rabbitmqctl issue, and potentially a hint that you may want to reduce 
the net_ticktime value (say, to 5-10 seconds) so that queue master 
unavailability is detected faster.
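For reference, net_ticktime is an Erlang kernel setting; a minimal 
rabbitmq.config sketch (classic Erlang-term format, value to be tuned per 
environment -- see the nettick link below):

    %% rabbitmq.config -- sketch only
    [
      {kernel, [{net_ticktime, 10}]}
    ].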



1. http://www.rabbitmq.com/nettick.html
--  
MK  

Staff Software Engineer, Pivotal/RabbitMQ  





[openstack-dev] Tracking bugs in apps on launchpad

2015-06-08 Thread Serg Melikyan
Hi folks,

We used to track bugs in the applications published in the
openstack/murano-apps repository directly on launchpad.net/murano, but
sometimes it's really inconvenient:

* applications are not part of murano itself
* it's hard to properly prioritize bugs, because a critical bug for an app is
not critical at all for murano

We created the murano-apps project on Launchpad some time ago, but never
truly used it. I propose moving the existing bugs for applications
to https://launchpad.net/murano-apps and using that project as the place for
tracking bugs in the openstack/murano-apps repository. Folks, what do you think
about that?
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com


[openstack-dev] [Neutron] Flavor framework

2015-06-08 Thread Paul Carver
What is the status of the flavor framework? Is this the right spec? 
https://blueprints.launchpad.net/neutron/+spec/neutron-flavor-framework


I'm trying to sort through how the ML3 proposal 
https://review.openstack.org/#/c/105078/ fits in with requirements for 
high performance (high throughput, high packets per second, low latency) 
needed for workloads like VoIP, video and other NFV functions.


It seems to be the consensus that a significant part of the intent of 
ML3 is flavors for routers, which makes it natural that ML3 should 
depend on the flavor framework, but the blueprint referenced above 
doesn't have a series or milestone target.




Re: [openstack-dev] Barbican : Retrieval of the secret in text/plain format generated from Barbican order resource

2015-06-08 Thread Nathan Reller
Asha,

When you say you want your key in ASCII, does that also mean putting
the bytes in hex or base64 format? Isn't ASCII only 7 bits?

-Nate
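To illustrate the size question, a small generic sketch (nothing
Barbican-specific):

    import base64, binascii, os

    key = os.urandom(32)                  # a 256-bit key is 32 raw bytes
    print(len(binascii.hexlify(key)))     # 64 ASCII characters
    print(len(base64.b64encode(key)))     # 44 ASCII characters
    # either text form is pure ASCII, but it is no longer 32 bytes long;
    # an AES implementation wants the raw 32 bytes, not the encoded text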

On Mon, Jun 8, 2015 at 1:17 AM, Asha Seshagiri asha.seshag...@gmail.com wrote:
 Thanks John for your response.
 I am aware that application/octet-stream works for the retrieval of the secret.
 We are utilizing the key generated by Barbican in our AES encryption
 algorithm. Hence we wanted the response in text/plain format from Barbican,
 since the AES encryption algorithm would need the key in ASCII format, which
 should be either 16, 24 or 32 bytes.

 The AES encryption algorithms would not accept the binary format, and even if
 the binary is converted into ASCII, encoding fails for a few of the keys
 because some characters exceed the range of ASCII, and for some keys the
 encoded length exceeds 32 bytes, which is the maximum key length for AES
 encryption.

 We would like to know the reason why Barbican does not support retrieving,
 in text/plain format, a secret generated from an order resource whose
 payload content type is text/plain.

 Thanks and Regards,
 Asha Seshagiri

 On Sun, Jun 7, 2015 at 11:43 PM, John Wood john.w...@rackspace.com wrote:

 Hello Asha,

 The AES type key should require an application/octet-stream Accept header
 to retrieve the secret as it is a binary type. Please replace ‘text/plain’
 with ‘application/octet-stream’ in your curl calls below.

 Thanks,
 John


 From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Friday, June 5, 2015 at 2:42 PM
 To: openstack-dev openstack-dev@lists.openstack.org
 Cc: Douglas Mendizabal douglas.mendiza...@rackspace.com, John Wood
 john.w...@rackspace.com, Reller, Nathan S. nathan.rel...@jhuapl.edu,
 Adam Harwell adam.harw...@rackspace.com, Paul Kehrer
 paul.keh...@rackspace.com
 Subject: Re: Barbican : Retrieval of the secret in text/plain format
 generated from Barbican order resource

 Hi All ,

 I am currently working on use cases for database and file encryption. It is
 really important for us to know, since my encryption use case would be using
 the key generated by Barbican through the order resource as the key.
 The encryption algorithms would not accept the binary format, and even if it
 is converted into ASCII, encoding fails for a few of the keys because some
 characters exceed the range of ASCII, and for some keys the encoded length
 exceeds 32 bytes, which is the maximum key length for AES
 encryption.
 It would be great if someone could respond to the query, since it would
 block my further investigations on encryption use cases using Barbican.

 Thanks and Regards,
 Asha Seshagiri


 On Wed, Jun 3, 2015 at 3:51 PM, Asha Seshagiri asha.seshag...@gmail.com
 wrote:

 Hi All,

 Unable to retrieve the secret in text/plain format  generated from
 Barbican order resource

 Please find the curl command and responses for

 Order creation with payload content type as text/plain :

 [root@barbican-automation ~]# curl -X POST -H
 'content-type:application/json' -H
 X-Auth-Token:9b211b06669249bb89665df068828ee8 \
  -d '{"type": "key", "meta": {"name": "secretname2", "algorithm": "aes",
  "bit_length": 256, "mode": "cbc", "payload_content_type": "text/plain"}}'
  -k https://169.53.235.102:9311/v1/orders

 {order_ref:
 https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680}

 Retrieval of the order by ORDER ID in order to get to know the secret
 generated by Barbican

 [root@barbican-automation ~]# curl -H 'Accept: application/json' -H
 X-Auth-Token:9b211b06669249bb89665df068828ee8 \
  -k
  https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
 {status: ACTIVE, sub_status: Unknown, updated:
 2015-06-03T19:08:13, created: 2015-06-03T19:08:12, order_ref:
 https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680;,
 secret_ref:
 https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e;,
 creator_id: cedd848a8a9e410196793c601c03b99a, meta: {name:
 secretname2, algorithm: aes, payload_content_type: text/plain,
 mode: cbc, bit_length: 256, expiration: null}, sub_status_message:
 Unknown, type: key}[root@barbican-automation ~]#


 Retrieval of the secret failing with the content type text/plain

 [root@barbican-automation ~]# curl -H 'Accept:text/plain' -H
 X-Auth-Token:9b211b06669249bb89665df068828ee8 -k
 https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e/payload
 {code: 500, description: Secret payload retrieval failure seen -
 please contact site administrator., title: Internal Server Error}

 I would like to know whether this is a bug on the Barbican side, since
 Barbican allows creating the order resource with text/plain as the
 payload_content_type but does not allow retrieving the secret payload with
 content type text/plain.

 Any help would highly be appreciated.
 --
 Thanks and Regards,
 Asha Seshagiri




 --
 Thanks and Regards,
 Asha Seshagiri




 --
 Thanks and Regards,
 Asha Seshagiri

 

[openstack-dev] [Ironic] [Inspector] Toward 2.0.0 release

2015-06-08 Thread Dmitry Tantsur

Hello, Inspector team!

The renaming process is going pretty well; the last thing we need to do 
is get Infra approval and the actual rename [1][2].


I'd like to allow people (e.g. myself) to start packaging inspector 
under its new name, so I'd like to make the 2.0.0 release as soon as 
possible (as opposed to scheduling it for a particular date). All breaking 
changes should land by this release - I don't expect 3.0.0 soon :)


I've updated our summit etherpad [3] with whatever priorities I 
remembered, please have a look. I've also untargeted a few things in 
launchpad [4] (and will probably untarget more later on). Please assign 
yourself, if you want something done in this release time frame.


I would like 2.1.0 to be released with Ironic Liberty and be properly 
supported.


Let me know what you think.

Cheers,
Dmitry

[1] https://review.openstack.org/#/c/188030/
[2] https://review.openstack.org/#/c/188798/
[3] https://etherpad.openstack.org/p/liberty-ironic-discoverd
[4] https://bugs.launchpad.net/ironic-inspector/+milestone/2.0.0



Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-08 Thread Kuvaja, Erno
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: Friday, June 05, 2015 1:46 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [stable] No longer doing stable point
 releases
 
 So.. summarizing the various options again:
 
 Plan A
 Just drop stable point releases.
 (-) No more release notes
 (-) Lack of reference points to compare installations
 
 Plan B
 Push date-based tags across supported projects from time to time.
 (-) Encourages to continue using same version across the board
 (-) Almost as much work as making proper releases
 
 Plan C
 Let projects randomly tag point releases whenever
 (-) Still a bit costly in terms of herding cats
 
 Plan D
 Drop stable point releases, publish per-commit tarballs
 (-) Requires some infra changes, takes some storage space
 
 Plans B, C and D also require some release note / changelog generation from
 data maintained *within* the repository.
 
 Personally I think the objections raised against plan A are valid. I like 
 plan D,
 since it's more like releasing every commit than not releasing anymore. I
 think it's the most honest trade-off. I could go with plan C, but I think it's
 added work for no additional value to the user.
 
 What would be your preferred option ?
 
 --
 Thierry Carrez (ttx)
 

I don't think plans B & C are of any benefit to anyone, based on the statements 
discussed earlier, so I won't repeat those. One thing I like about plan D is 
that it would also give an indicator of how much the stable branch has moved in 
each individual project.

Yes, this can be seen from git, but having a simple running number on the 
stable release for each commit would give a really quick glimpse, as opposed to 
SHAs or counting the changes from the git log.

- Erno



[openstack-dev] [app-catalog] Updating existing artifacts on storage.apps.openstack.org

2015-06-08 Thread Serg Melikyan
Hi folks,

We found a few issues with the images used by murano applications and we would
like to update them. What is the procedure for updating existing images/apps
hosted on storage.apps.openstack.org?
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com


Re: [openstack-dev] [Openstack-operators] [Fuel][Oslo][RabbitMQ][Shovel] Deprecate mirrored queues from HA AMQP cluster scenario

2015-06-08 Thread Davanum Srinivas
Bogdan,

I'd also like to ask:
c) Does anyone have any experience with shovel in a realistic
openstack environment? (or even a devstack one)

Thanks,
dims

On Mon, Jun 8, 2015 at 6:24 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:
 Hello, stackers.

 I'd like to bring up a poll about deprecating the RabbitMQ mirrored
 queues for the HA layout and replacing AMQP clustering with shovel [0],
 [1]. I guess federation would not be a good option, but let's
 consider it as well.

 Why must this be done? The answer is that the rabbit cluster cannot
 detect and survive micro outages well, and just ends up with some
 queues stuck and, as a result, the rabbitmqctl control plane hung and
 completely unresponsive (until the rabbit node is erased and has recovered its
 cluster membership). These outages could be caused either by the network
 *or* by CPU load spikes -- see, for example, this bug in the Fuel project [2]
 and this mail thread [3].

 So, let's please vote and discuss.

 But the questions also are:
 a) Would there be changes required in Oslo.messaging as well in order to
 support the underlying AMQP layer architecture changes?
 b) Are there any volunteers to do this research for the
 Oslo.messaging AMQP rabbit driver?

 PS. Note, I'm not bringing up RabbitMQ versions here, as the issue seems
 unresolved in all existing ones. This seems to be a generic Erlang Mnesia
 clustering issue rather than something that could just be fixed
 in RabbitMQ, unless the Mnesia-based clustering were dropped
 completely ;)

 [0] https://www.rabbitmq.com/shovel-dynamic.html
 [1] https://www.rabbitmq.com/shovel.html
 [2] https://bugs.launchpad.net/fuel/+bug/1460762
 [3] https://groups.google.com/forum/#!topic/rabbitmq-users/iZWokxvhlaU

 --
 Best regards,
 Bogdan Dobrelya,
 Skype #bogdando_at_yahoo.com
 Irc #bogdando

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-08 Thread Chris Dent


(Posting to the mailing list rather than writing a spec or making
code because I think it is important to get some input and feedback
before going off on something wild. Below I'm talking about
speculative plans and seeking feedback, not reporting decisions
about the future. Some of this discussion is intentionally naive
about how things are because that's not really relevant, what's
relevant is how things should be or could be.

tl;dr: I want to make the configuration of the pollsters more explicit
and not conflate and overlap the entry_points.txt and pipeline.yaml
in confusing and inefficient ways.

* entry_points.txt should define what measurements are possible, not
  what measurements are loaded
* something new should define what measurements are loaded and
  polled (and their intervals) (sources in pipeline.yaml speak)
* pipeline.yaml should define transformations and publishers

Would people like something like this?)

The longer version:

Several of the outcomes of the Liberty Design Summit were related to
making changes to the agents which gather or hear measurements and
events. Some of these changes have pending specs:

* Ceilometer Collection Agents Split
  https://review.openstack.org/#/c/186964/

  Splitting the collection agents into their own repo to allow
  use and evolution separate from the rest of Ceilometer.

* Adding Meta-Data Caching Spec
  https://review.openstack.org/#/c/185084/

  Adding metadata caching to the compute agent so the Nova-API is
  less assaulted than it currently is.

* Declarative notification handling
  https://review.openstack.org/#/c/178399/

  Be able to hear and transform a notification to an event without
  having to write code.

Reviewing these and other specs and doing some review of the code
points out that we have an opportunity to make some architectural and
user interface improvements (while still maintaining existing
functionality). For example:

The current ceilometer polling agent has an interesting start up
process:

1 It determines which namespaces it is operating in ('compute',
  'central', 'ipmi').
2 Using entry_points defined in setup.cfg it initializes all the
  polling extensions and all the discovery extensions (independent
  of sources defined in pipeline.yaml)
3 Every source in pipeline.yaml is given a list of pollsters that
  match the meters defined by the source, creating long running
  tasks to do the polling.
4 Each task does resource discovery and partitioning coordination.
5 Measurements/samples are gathered and are transformed and published
  according to the sink rules in pipeline.yaml

A couple things about this seem less than ideal:

* Step 2 means we load redundant stuff unless we edit entry_points.txt.
  We do not want to encourage this sort of behavior. entry_points is
  not configuration[1]. We should configure elsewhere to declare "I
  care about things X" (including the option of all things) and
  then load the tools to do so, on demand.

* Two things are happening in the same context in step 5, and that
  seems quite limiting with regard to opportunities for effective
  maintenance and optimization.

My intuition (which often needs to be sanity checked, thus my posting
here) tells me there are some things we could change:
* Separate polling and publishing/transforming into separate
  workers/processes.

* Extract the definition of sources to be polled from pipeline.yaml
  into its own file and use that as the authority for which
  extensions are loaded for polling and discovery.
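Purely as a strawman (a hypothetical file with made-up names, mirroring the
source fields pipeline.yaml already carries, minus the sinks), such a polling
definition might look like:

    ---
    sources:
      - name: cpu_source
        interval: 60
        meters:
          - "cpu"
          - "cpu_util"
      - name: everything_else
        interval: 600
        meters:
          - "*"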

What do people think?

[1] This is really the core of my concern and the main part I want
to see change.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



  1   2   >