Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-16 Thread Marek Denis

+1 from me.

On 13.02.2015 22:19, Morgan Fainberg wrote:
On February 13, 2015 at 11:51:10 AM, Lance Bragstad 
(lbrags...@gmail.com) wrote:

Hello all,


I'm proposing the Authenticated Encryption (AE) Token specification 
[1] as an SPFE. AE tokens increase the scalability of Keystone by 
removing token persistence. This provider has been discussed prior 
to, and at, the Paris summit [2]. There is an implementation that is 
currently up for review [3], that was built off a POC. Based on the 
POC, there has been some performance analysis done with respect to 
the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4].


The Keystone team spent some time discussing limitations of the 
current POC implementation at the mid-cycle. One case that still 
needs to be addressed (and is currently being worked), is federated 
tokens. When requesting unscoped federated tokens, the token contains 
unbound groups which would need to be carried in the token. This case 
can be handled by AE tokens, but it would be possible for an unscoped 
federated AE token to exceed an acceptable AE token length (i.e. more 
than 255 characters). Long story short, a federation migration could be 
used to ensure federated AE tokens never exceed a certain length.
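For a concrete sense of how a non-persistent token can work, here is a minimal stdlib sketch of the idea: the token *is* its payload, serialized and authenticated with a server-side key, so no token table is needed and token length grows with payload size (which is exactly why unbound federated groups are a concern). This is illustrative only: it authenticates but does not encrypt, whereas the actual AE proposal uses authenticated encryption, and every name and the key below are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical server-side key shared by all Keystone nodes.
SIGNING_KEY = b"example-signing-key-not-for-production"

def issue_token(payload: dict) -> str:
    """Serialize and MAC the payload; nothing is written to a database."""
    body = json.dumps(payload, separators=(",", ":")).encode()
    tag = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + tag).rstrip(b"=").decode()

def validate_token(token: str) -> dict:
    """Any node holding the key can validate without a token-table lookup."""
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    body, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid token")
    return json.loads(body)

scoped = issue_token({"user_id": "abc123", "project_id": "def456",
                      "methods": ["password"]})
print(len(scoped))  # payload size drives token length; unbound groups inflate it
```

Validation is just a MAC check plus deserialization, which is what removes the persistence requirement.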


Feel free to leave your comments on the AE Token spec.

Thanks!

Lance

[1] https://review.openstack.org/#/c/130050/
[2] https://etherpad.openstack.org/p/kilo-keystone-authorization
[3] https://review.openstack.org/#/c/145317/
[4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I am for granting this exception as long as the 
following is clear/true:


* All current use-cases for tokens (including federation) will be 
supported by the new token provider.


* The federation tokens being possibly over 255 characters can be 
addressed in the future if they are not addressed here (a “federation 
migration” does not clearly state what is meant).


I am also ok with the AE token work being re-ordered ahead of the 
provider cleanup to ensure it lands. Fixing the AE Token provider 
along with PKI and UUID providers should be minimal extra work in the 
cleanup.


This addresses a very, very big issue within Keystone as scaling 
up happens. There has been demand for solving token 
persistence for ~3 cycles. The POC code makes this exception possible 
to land within Kilo, whereas without the POC this would almost 
assuredly need to be held until the L-Cycle.



TL;DR, I am for the exception if the AE Tokens support 100% of the 
current use-cases of tokens (UUID or PKI) today.



—Morgan





Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-16 Thread ruby.krishnaswamy
Hi Tim

What I'd like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they've 
written is on the solver-scheduler's list of options, we use the hard-coded 
implementation, but if the policy isn't on that list we translate directly to 
LP.


- How (calling the hard-coded implementation)? Through the message bus?

- Is it Congress that will send out the data, or should each implementation (of 
a policy) read it in directly?

Ruby

De : Tim Hinrichs [mailto:thinri...@vmware.com]
Envoyé : jeudi 12 février 2015 19:03
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

Hi Debo and Yathiraj,

I took a third look at the solver-scheduler docs and code with your comments in 
mind.  A few things jumped out.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we're highly aligned on the choice of underlying solver.

2) User control over VM-placement.

To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We're investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer's 
perspective, with the Congress approach we don't attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user's 
perspective, the difference is that if the option they want isn't on the 
solver-scheduler's list, they're out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I'd like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they've 
written is on the solver-scheduler's list of options, we use the hard-coded 
implementation, but if the policy isn't on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.
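As a toy illustration of the kind of problem both approaches ultimately produce, the two constraints named above can be checked against a tiny placement instance. The data and names here are hypothetical, and brute-force enumeration stands in for the Pulp LP solve:

```python
from itertools import product

# Hypothetical toy instance: two VMs to place across two hosts.
vms = {"vm1": 2048, "vm2": 1024}                      # RAM demand in MB
hosts = {"host1": {"ram": 3072, "active": True},
         "host2": {"ram": 4096, "active": False}}

def feasible(assign):
    """Encode the two named constraints: ActiveHostConstraint and
    MaxRamAllocationPerHostConstraint (with a 1.0 allocation ratio)."""
    used = {h: 0 for h in hosts}
    for vm, host in assign.items():
        if not hosts[host]["active"]:
            return False                              # ActiveHostConstraint
        used[host] += vms[vm]
    return all(used[h] <= hosts[h]["ram"] for h in hosts)

# Brute force stands in for the LP solver at this toy size.
solutions = [dict(zip(vms, combo))
             for combo in product(hosts, repeat=len(vms))
             if feasible(dict(zip(vms, combo)))]
print(solutions)  # -> [{'vm1': 'host1', 'vm2': 'host1'}]
```

A Datalog-to-LP translator would emit these same constraints as linear inequalities over 0/1 placement variables instead of a Python predicate.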

3) API and architecture.

Today the solver-scheduler's VM-placement policy is defined at config-time 
(i.e. not run-time).  Am I correct that this limitation is only because there's 
no API call to set the solver-scheduler's policy?  Or is there some other 
reason the policy is set at config-time?

Congress policies change at runtime, so we'll definitely need a VM-placement 
engine whose policy can be changed at run-time as well.

If we focus on just migration (and not provisioning), we can build a 
VM-placement engine that sits outside of Nova that has an API call that allows 
us to set policy at runtime.  We can also set up that engine to get data 
updates that influence the policy.  We were planning on creating this kind of 
VM-placement engine within Congress as a node on the DSE (our message bus).  
This is convenient because all nodes on the DSE run in their own thread, any 
node on the DSE can subscribe to any data from any other node (e.g. 
ceilometer's data), and the algorithms for translating Datalog to LP look to be 
quite similar to the algorithms we're using in our domain-agnostic policy 
engine.

Tim


On Feb 11, 2015, at 4:50 PM, Debojyoti Dutta 
(ddu...@gmail.com) wrote:


Hi Tim: moving our thread to the mailer. Excited to collaborate!



From: Debo~ Dutta (dedu...@cisco.com)
Date: Wednesday, February 11, 2015 at 4:48 PM
To: Tim Hinrichs (thinri...@vmware.com)
Cc: Yathiraj Udupi (yudupi) (yud...@cisco.com), Gokul B Kandiraju 
(go...@us.ibm.com), Prabhakar Kudva (ku...@us.ibm.com), 
ruby.krishnasw...@orange.com, dilik...@in.ibm.com, Norival Figueira 
(nfigu...@brocade.com), Ramki Krishnan (r...@brocade.com), Xinyuan Huang 
(xinyuahu) (xinyu...@cisco.com), Rishabh Jain -X (rishabja - AAP3 INC at 
Cisco) (risha...@cisco.com)
Subject: Re: Nova solver scheduler and Congress

Hi Tim

To address your particular questions:

  1.  Translate some policy language into constraints for the LP/CVP: we had 
left that to Congress, hoping to integrate when the policy efforts in OpenStack 
were ready (our initial effort was pre-Congress).
  2.  For migrations: we are currently doing that - it's about incremental 

[openstack-dev] [all] oslo.i18n 1.4.0 released

2015-02-16 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.i18n 1.4.0: oslo.i18n library

For more details, please see the git log history below and:

http://launchpad.net/oslo.i18n/+milestone/1.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

Changes in /home/dhellmann/repos/openstack/oslo.i18n 1.3.1..1.4.0
-

9a2cde2 Add test fixture to prefix lazily translated messages

Diffstat (except docs and test files)
-

oslo_i18n/fixture.py| 76 +
2 files changed, 126 insertions(+)


Re: [openstack-dev] Missing column in Gerrit overview page showing a -1 review after/with a +2 review

2015-02-16 Thread Jeremy Stanley
On 2015-02-16 15:29:55 +0100 (+0100), Christian Berendt wrote:
 This way I can display review requests with -1 reviews, yes. On the
 overview page a +2 review will still hide any -1 reviews. But a -1
 review will hide all +1 reviews. I think -1 reviews should be visible
 regardless of +2 reviews.

As far as I can tell, changing that will require modifications in
Gerrit, so probably worth opening a feature request bug at
http://code.google.com/p/gerrit/issues/list if someone hasn't
already.
-- 
Jeremy Stanley



Re: [openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-16 Thread Ihar Hrachyshka

On 02/16/2015 04:13 PM, James Page wrote:
 Hi Folks
 
 The split-out drivers for vpn/fw/lb as-a-service all make use of a 
 generated egg of the neutron git repository as part of their unit
 test suite dependencies.
 
 This presents a bit of a challenge for us downstream in
 distributions,

I am packaging neutron for RDO, but I fail to understand your issue.

 as we can't really pull in a full source egg of neutron from 
 git.openstack.org; we have the code base for neutron core
 available (python-neutron), but that does not appear to be enough
 (see [0]).

Don't you ship egg files with python-neutron package? I think this
should be enough to get access to entry points.
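For instance, the entry-point metadata a packaged dist exposes can be inspected with the stdlib alone; `console_scripts` below is just a ubiquitous group standing in for neutron's own groups:

```python
from importlib.metadata import entry_points

# Distro packages that ship a dist's egg-info/dist-info expose the same
# entry-point metadata that the *aas test suites consume from a neutron egg.
eps = entry_points()
if hasattr(eps, "select"):          # Python 3.10+
    scripts = eps.select(group="console_scripts")
else:                               # Python 3.8/3.9: plain dict of groups
    scripts = eps.get("console_scripts", [])
names = sorted({e.name for e in scripts})
print(names[:3])
```

If the packaged egg-info is present, lookups like this succeed without the full source egg.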

 
 I would appreciate if devs working in this area could a) review
 the bug and the problems we are seeing and b) think about how this
 can work for distributions - I'm happy to create a new
 'neutron-testing' type package from the neutron codebase to support
 this stuff, but right now I'm a bit unclear on exactly what it
 needs to contain!
 
 Cheers
 
 James
 
 
 [0] https://bugs.launchpad.net/neutron/+bug/1422376
 
 
 



Re: [openstack-dev] [docs] rethinking docs jobs in the gate

2015-02-16 Thread Sean Dague
On 02/16/2015 10:38 AM, Christian Berendt wrote:
 On 02/16/2015 04:29 PM, Andreas Jaeger wrote:
 For documentation projects we should discuss this separately as well,
 
 Is it possible to keep all environments like they are and to add a meta
 environment (called docs) calling the existing environments? This way we
 can reduce the number of jobs on the gates (entry point for the gates is
 the meta environment) but can keep the existing environments for local
 tests.

Why do you want them all in different venvs? Are there a lot of
instances where you only run one of them? And if so, is there a reason
not to do as we do with unit tests on projects and support a {posargs}
to select a subset?
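A hypothetical sketch of what that could look like in tox.ini, with {posargs} selecting a subset of guides (the env name, tool, and paths here are made up for illustration):

```ini
[testenv:docs]
# Single gate entry point for all guides; a subset can be chosen locally,
# e.g. `tox -e docs -- doc/source/user-guide`
commands = doc8 {posargs:doc/source}
```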

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Steven Hardy
On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
Yeah, clarification from keystone folks would be really helpful.
If Nikolay's info is correct (I believe it is) then I actually don't
understand why trusts are needed at all, they seem to be useless. My
assumption is that they can be used only if we send requests directly to
OpenStack services (w/o using clients) with trust scoped token included in
headers; that might work although I didn't check that yet myself.
So please help us understand which one of my following assumptions is
correct?
 1. We don't understand what trusts are.
 2. We use them in a wrong way. (If yes, then what's the correct usage?)

One or both of these seems likely, possibly combined with bugs in the
clients where they try to get a new token instead of using the one you
provide (this is a common pattern in the shell case, as the token is
re-requested to get a service catalog).

This provides some (heat specific) information which may help somewhat:

http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html

 3. Trust mechanism itself is in development and can't be used at this
point.

IME trusts work fine, Heat has been using them since Havana with few
problems.

 4. OpenStack clients need to be changed in some way to somehow bypass
this keystone limitation?

AFAICS it's not a keystone limitation, the behavior you're seeing is
expected, and the 403 mentioned by Nikolay is just trusts working as
designed.

The key thing from a client perspective is:

1. If you pass a trust-scoped token into the client, you must not request
another token, normally this means you must provide an endpoint as you
can't run the normal auth code which retrieves the service catalog.

2. If you could pass a trust ID in, with a non-trust-scoped token, or
username/password, the above limitation is removed, but AFAIK none of the
CLI interfaces support a trust ID yet.

3. If you're using a trust scoped token, you cannot create another trust
(unless you've enabled chained delegation, which only landed recently in
keystone).  This means, for example, that you can't create a heat stack
with a trust scoped token (when heat is configured to use trusts), unless
you use chained delegation, because we create a trust internally.

When you understand these constraints, it's definitely possible to create a
trust and use it for requests to other services, for example, here's how
you could use a trust-scoped token to call heat:

heat --os-auth-token trust-scoped-token --os-no-client-auth
--heat-url http://192.168.0.4:8004/v1/project-id stack-list

The pattern heat uses internally to work with trusts is:

1. Use a trust_id and service user credentials to get a trust scoped token
2. Pass the trust-scoped token into python clients for other projects,
using the endpoint obtained during (1)

This works fine, what you can't do is pass the trust scoped token in
without explicitly defining the endpoint, because this triggers
reauthentication, which as you've discovered, won't work.
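The failure mode described above can be illustrated with a tiny stub (this is not the real keystoneclient; every name here is invented): a client given a token but no endpoint must re-authenticate to fetch the service catalog, and that re-authentication is exactly what a trust-scoped token cannot do.

```python
class FakeKeystone:
    """Toy stand-in for Keystone: a trust-scoped token cannot be
    exchanged for a new token (the 403 mentioned above)."""
    def authenticate(self, token):
        if token.startswith("trust-"):
            raise PermissionError("403: trust-scoped token cannot re-authenticate")
        return {"catalog": {"orchestration": "http://192.168.0.4:8004/v1"}}

class ToyClient:
    """Mimics the client behaviour: without an explicit endpoint it
    re-authenticates in order to fetch the service catalog."""
    def __init__(self, token, endpoint=None, keystone=None):
        if endpoint is None:
            ks = keystone or FakeKeystone()
            endpoint = ks.authenticate(token)["catalog"]["orchestration"]
        self.token, self.endpoint = token, endpoint

ToyClient("trust-abc123", endpoint="http://192.168.0.4:8004/v1")  # fine
try:
    ToyClient("trust-abc123")   # catalog fetch -> re-auth -> 403
except PermissionError as exc:
    print(exc)
```

Supplying the endpoint up front, as in the heat command above, is what sidesteps the re-authentication path.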

Hope that helps!

Steve



[openstack-dev] [kolla][tripleo][containers] New technical direction for Kolla

2015-02-16 Thread Steven Dake (stdake)
Hey folks,

A few weeks ago I sent the mailing list a broad outline [1] for our new 
approach to deploying OpenStack using container technology.  Between then and 
now, the Kolla core team has approved a new technical direction [2].

In summary, our new approach is to use fig [3] to provide single-node 
orchestration of containers.  The container content will still be produced 
based upon the existing containers developed in the first two milestones of 
Kolla.  We plan to tackle some of the more difficult problems of deploying 
OpenStack in containers using super-privileged containers [4].
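For readers unfamiliar with fig, a hypothetical fig.yml along these lines describes the single-node container set declaratively (the image names, link, and ports are made up for illustration, not Kolla's actual images):

```yaml
mariadb:
  image: kollaglue/centos-rdo-mariadb
keystone:
  image: kollaglue/centos-rdo-keystone
  privileged: true        # a super-privileged container [4]
  links:
    - mariadb
  ports:
    - "5000:5000"
    - "35357:35357"
```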

We are drawing a line in the sand around what Kolla is and is not responsible 
for.  Specifically, the activity of running fig on the single node is the 
responsibility of some third-party deployment tool such as TripleO, Fuel, 
SpinalStack, and the list goes on ;)  It doesn’t take a great leap of 
imagination to design a system to do this job using something like Ansible or 
Puppet, but it is outside the scope of Kolla at this time.

Best Regards,
-steve

[1] https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg45234.html
[2] https://review.openstack.org/#/c/153798/
[3] http://www.fig.sh/yml.html
[4] 
http://developerblog.redhat.com/2014/11/06/introducing-a-super-privileged-container-concept/


[openstack-dev] pyCADF 0.8.0 released

2015-02-16 Thread gordon chung
The pyCADF team is pleased to announce the release of pyCADF 0.8.0.
This release includes several bug fixes and, more importantly, the deprecation 
of audit middleware in favor of keystonemiddleware's version.  Additional 
changes include:

c834ddd Add deprecation message to Audit API
a568c12 Do not depend on endpoint id existing in the service catalog
d71697f Fix oslo.messaging link in docs
d130ada Use oslo_context
ebd5539 Use oslo namespaces
1aa3490 Updated from global requirements
d0d6b3b Add a new CADF type for keystone trusts
8529fa0 add helper module
For more details, please see the git log history below and 
https://launchpad.net/pycadf/+milestone/0.8.0
Please report issues through launchpad: https://launchpad.net/pycadf
gord


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Alexander Makarov
We could soften this limitation a little by returning the token the client
tries to authenticate with.
I think we need to discuss it in the community.

On Mon, Feb 16, 2015 at 6:47 PM, Steven Hardy sha...@redhat.com wrote:

 On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
 Yeah, clarification from keystone folks would be really helpful.
 If Nikolay's info is correct (I believe it is) then I actually
 don't
 understand why trusts are needed at all, they seem to be useless. My
 assumption is that they can be used only if we send requests directly
 to
 OpenStack services (w/o using clients) with trust scoped token
 included in
 headers; that might work although I didn't check that yet myself.
 So please help us understand which one of my following assumptions is
 correct?
  1. We don't understand what trusts are.
  2. We use them in a wrong way. (If yes, then what's the correct
 usage?)

 One or both of these seems likely, possibly combined with bugs in the
 clients where they try to get a new token instead of using the one you
 provide (this is a common pattern in the shell case, as the token is
 re-requested to get a service catalog).

 This provides some (heat specific) information which may help somewhat:


 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html

  3. Trust mechanism itself is in development and can't be used at
 this
 point.

 IME trusts work fine, Heat has been using them since Havana with few
 problems.

  4. OpenStack clients need to be changed in some way to somehow bypass
 this keystone limitation?

 AFAICS it's not a keystone limitation, the behavior you're seeing is
 expected, and the 403 mentioned by Nikolay is just trusts working as
 designed.

 The key thing from a client perspective is:

 1. If you pass a trust-scoped token into the client, you must not request
 another token, normally this means you must provide an endpoint as you
 can't run the normal auth code which retrieves the service catalog.

 2. If you could pass a trust ID in, with a non-trust-scoped token, or
 username/password, the above limitation is removed, but AFAIK none of the
 CLI interfaces support a trust ID yet.

 3. If you're using a trust scoped token, you cannot create another trust
 (unless you've enabled chained delegation, which only landed recently in
 keystone).  This means, for example, that you can't create a heat stack
 with a trust scoped token (when heat is configured to use trusts), unless
 you use chained delegation, because we create a trust internally.

 When you understand these constraints, it's definitely possible to create a
 trust and use it for requests to other services, for example, here's how
 you could use a trust-scoped token to call heat:

 heat --os-auth-token trust-scoped-token --os-no-client-auth
 --heat-url http://192.168.0.4:8004/v1/project-id stack-list

 The pattern heat uses internally to work with trusts is:

 1. Use a trust_id and service user credentials to get a trust scoped token
 2. Pass the trust-scoped token into python clients for other projects,
 using the endpoint obtained during (1)

 This works fine, what you can't do is pass the trust scoped token in
 without explicitly defining the endpoint, because this triggers
 reauthentication, which as you've discovered, won't work.

 Hope that helps!

 Steve





-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Alexander Makarov
https://blueprints.launchpad.net/keystone/+spec/trust-scoped-re-authentication

On Mon, Feb 16, 2015 at 7:57 PM, Alexander Makarov amaka...@mirantis.com
wrote:

 We could soften this limitation a little by returning the token the client
 tries to authenticate with.
 I think we need to discuss it in the community.

 On Mon, Feb 16, 2015 at 6:47 PM, Steven Hardy sha...@redhat.com wrote:

 On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
 Yeah, clarification from keystone folks would be really helpful.
 If Nikolay's info is correct (I believe it is) then I actually
 don't
 understand why trusts are needed at all, they seem to be useless. My
 assumption is that they can be used only if we send requests
 directly to
 OpenStack services (w/o using clients) with trust scoped token
 included in
 headers; that might work although I didn't check that yet myself.
 So please help us understand which one of my following assumptions is
 correct?
  1. We don't understand what trusts are.
  2. We use them in a wrong way. (If yes, then what's the correct
 usage?)

 One or both of these seems likely, possibly combined with bugs in the
 clients where they try to get a new token instead of using the one you
 provide (this is a common pattern in the shell case, as the token is
 re-requested to get a service catalog).

 This provides some (heat specific) information which may help somewhat:


 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html

  3. Trust mechanism itself is in development and can't be used at
 this
 point.

 IME trusts work fine, Heat has been using them since Havana with few
 problems.

  4. OpenStack clients need to be changed in some way to somehow
 bypass
 this keystone limitation?

 AFAICS it's not a keystone limitation, the behavior you're seeing is
 expected, and the 403 mentioned by Nikolay is just trusts working as
 designed.

 The key thing from a client perspective is:

 1. If you pass a trust-scoped token into the client, you must not request
 another token, normally this means you must provide an endpoint as you
 can't run the normal auth code which retrieves the service catalog.

 2. If you could pass a trust ID in, with a non-trust-scoped token, or
 username/password, the above limitation is removed, but AFAIK none of the
 CLI interfaces support a trust ID yet.

 3. If you're using a trust scoped token, you cannot create another trust
 (unless you've enabled chained delegation, which only landed recently in
 keystone).  This means, for example, that you can't create a heat stack
 with a trust scoped token (when heat is configured to use trusts), unless
 you use chained delegation, because we create a trust internally.

 When you understand these constraints, it's definitely possible to create
 a
 trust and use it for requests to other services, for example, here's how
 you could use a trust-scoped token to call heat:

 heat --os-auth-token trust-scoped-token --os-no-client-auth
 --heat-url http://192.168.0.4:8004/v1/project-id stack-list

 The pattern heat uses internally to work with trusts is:

 1. Use a trust_id and service user credentials to get a trust scoped token
 2. Pass the trust-scoped token into python clients for other projects,
 using the endpoint obtained during (1)

 This works fine, what you can't do is pass the trust scoped token in
 without explicitly defining the endpoint, because this triggers
 reauthentication, which as you've discovered, won't work.

 Hope that helps!

 Steve





 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP




-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP


[openstack-dev] [mistral] Team meeting minutes 02/16/2015

2015-02-16 Thread Nikolay Makhotkin
Thanks for joining our team meeting today!

 * Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.html
 * Meeting log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-16-16.00.log.html

The next meeting is scheduled for Feb 23 at 16.00 UTC.
-- 
Best Regards,
Nikolay


[openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Ian Cordasco
Hey everyone,

The os-ansible-deployment team was working on updates to add support for
the latest version of juno and noticed some interesting version specifiers
introduced into global-requirements.txt in January. It introduced some
version specifiers that seem a bit impossible like the one for requests
[1]. There are others that equate presently to pinning the versions of the
packages [2, 3, 4].
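To see why such specifiers read as pins, consider an illustrative range check (the version numbers below are made up for illustration, not the actual specifiers from the linked commit):

```python
# A ">=lo,<=hi" specifier whose bounds nearly touch admits almost nothing.
def admits(version, lo, hi, excluded=()):
    """Does `version` satisfy >=lo,<=hi minus any != exclusions?"""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return (as_tuple(lo) <= as_tuple(version) <= as_tuple(hi)
            and version not in excluded)

candidates = ["2.2.0", "2.2.1", "2.3.0", "2.4.0", "2.5.1"]
allowed = [v for v in candidates if admits(v, "2.2.0", "2.2.1")]
print(allowed)  # -> ['2.2.0', '2.2.1']: a two-version range is effectively a pin
```

Spelling such a range as an explicit `==` pin would say the same thing to pip while being honest to the human eye.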

I understand fully and support the commit because of how it improves
pretty much everyone’s quality of life (no fires to put out in the middle
of the night on the weekend). I’m also aware that a lot of the downstream
redistributors tend to work from global-requirements.txt when determining
what to package/support.

It seems to me like there’s room to clean up some of these requirements to
make them far more explicit and less misleading to the human eye (even
though tooling like pip can easily parse/understand these).

I also understand that stable-maint may want to occasionally bump the caps
to see if newer versions will not break everything, so what is the right
way forward? What is the best way to both maintain a stable branch with
known working dependencies while helping out those who do so much work for
us (downstream and stable-maint) and not permanently pinning to certain
working versions?

I’ve CC’d -operators too since I think their input will be invaluable on
this as well (since I doubt everyone is using distro packages and some may
be doing source-based installations).

Cheers,
Ian

[1]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R126
[2]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R128
[3]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R70
[4]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R189



Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-16 Thread Lance Bragstad
On Mon, Feb 16, 2015 at 1:21 PM, Samuel Merritt s...@swiftstack.com wrote:

 On 2/14/15 9:49 PM, Adam Young wrote:

 On 02/13/2015 04:19 PM, Morgan Fainberg wrote:

 On February 13, 2015 at 11:51:10 AM, Lance Bragstad
  (lbrags...@gmail.com) wrote:

 Hello all,


 I'm proposing the Authenticated Encryption (AE) Token specification
  [1] as an SPFE. AE tokens increase the scalability of Keystone by
 removing token persistence. This provider has been discussed prior
 to, and at the Paris summit [2]. There is an implementation that is
 currently up for review [3], that was built off a POC. Based on the
 POC, there has been some performance analysis done with respect to
 the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4].

 The Keystone team spent some time discussing limitations of the
 current POC implementation at the mid-cycle. One case that still
 needs to be addressed (and is currently being worked), is federated
 tokens. When requesting unscoped federated tokens, the token contains
 unbound groups which would need to be carried in the token. This case
 can be handled by AE tokens but it would be possible for an unscoped
 federated AE token to exceed an acceptable AE token length (i.e. over
 255 characters). Long story short, a federation migration could be
 used to ensure federated AE tokens never exceed a certain length.

 Feel free to leave your comments on the AE Token spec.

 Thanks!

 Lance

 [1] https://review.openstack.org/#/c/130050/
 [2] https://etherpad.openstack.org/p/kilo-keystone-authorization
 [3] https://review.openstack.org/#/c/145317/
 [4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 I am for granting this exception as long as the following is
 clear/true:

 * All current use-cases for tokens (including federation) will be
 supported by the new token provider.

 * The federation tokens being possibly over 255 characters can be
 addressed in the future if they are not addressed here (a “federation
 migration” does not clearly state what is meant).

  I think the length of the token is not a real issue.  We need to keep
 them within header lengths.  That is 8k.  Anything smaller than that
 will work.


 I'd like to respectfully disagree here. Large tokens can dramatically
 increase the overhead for users of Swift with small objects since the token
 must be passed along with every request.

 For example, I have a small static web site: 68 files, mean file size 5508
 bytes, median 636 bytes, total 374517 bytes. (It's an actual site; these
 are genuine data.)

 If I upload these things to Swift using a UUID token, then I incur maybe
 400 bytes of overhead per file in the HTTP request, which is a 7.3% bloat.
 On the other hand, if the token + other headers is 8K, then I'm looking at
 149% bloat, so I've more than doubled my transfer requirements just from
 tokens. :/

 I think that, for users of Swift and any other OpenStack data-plane APIs,
 token size is a definite concern. I am very much in favor of anything that
 shrinks token sizes while keeping the scalability benefits of PKI tokens.


Ideally, what's the threshold you consider acceptable for token length from
a non-persistent perspective? Does under 255 work or do you need something
smaller?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] rethinking docs jobs in the gate

2015-02-16 Thread Andreas Jaeger
On 02/16/2015 04:36 PM, Doug Hellmann wrote:
 When we talked about this in the review of the governance change to set
 tox -e docs as part of the testing interface for a project, jeblair
 (and maybe others) pointed out that we didn't want projects running
 extra steps when their docs were built. So maybe we want to continue to
 use tox -e venv -- python setup.py build_sphinx for the real docs, and
 allow a tox -e docs job for the check queue for testing.
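The two invocations being contrasted map onto tox environments roughly like this (a sketch only; environment names, deps, and extra steps vary per project):

```ini
# Illustrative tox.ini fragment, not any specific project's config.

# "Real" docs build: a bare venv running the canonical sphinx command,
# invoked as: tox -e venv -- python setup.py build_sphinx
[testenv:venv]
commands = {posargs}

# Check-queue docs job: free to run extra validation steps
# (the linter shown here is an example, not a required step).
[testenv:docs]
deps = sphinx
commands =
    python setup.py build_sphinx
    doc8 doc/source
```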

trove has XML docs and sphinx (AFAIK), so calling build_sphinx will not
do everything.

Let me work on a patch for trove now...

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Doug Hellmann


On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the middle
 of the night on the weekend). I’m also aware that a lot of the downstream
 redistributors tend to work from global-requirements.txt when determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).

I think that's the idea. These requirements were generated
automatically, and fixed issues that were holding back several projects.
Now we can apply updates to them by hand, to either move the lower
bounds down (as in the case Ihar pointed out with stevedore) or clean up
the range definitions. We should not raise the limits of any Oslo
libraries, and we should consider raising the limits of third-party
libraries very carefully.

We should make those changes on one library at a time, so we can see
what effect each change has on the other requirements.
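To make the pinning effect concrete: a range like `>=X,<X+epsilon` of the kind the bot generated can admit only one released version. A toy illustration (naive dotted-integer parser; real pip resolves specifiers per PEP 440):

```python
# Toy illustration of why a range like ">=1.6.0,<1.6.1" effectively pins
# a package: only one released version can satisfy it. The version
# numbers here are made up for the example.
def parse(version):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, lower, upper):
    """True when lower <= version < upper."""
    return parse(lower) <= parse(version) < parse(upper)

print(in_range("1.6.0", "1.6.0", "1.6.1"))  # True  -- the pinned version
print(in_range("1.6.1", "1.6.0", "1.6.1"))  # False -- upper bound excluded
print(in_range("1.7.0", "1.6.0", "1.6.1"))  # False
```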

 
 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the right
 way forward? What is the best way to both maintain a stable branch with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to certain
 working versions?

Managing the upper bounds is still under discussion. Sean pointed out
that we might want hard caps so that updates to stable branch were
explicit. I can see either side of that argument and am still on the
fence about the best approach.

 
 I’ve CC’d -operators too since I think their input will be invaluable on
 this as well (since I doubt everyone is using distro packages and some
 may
 be doing source-based installations).

I've not copied the operators list, since we try not to cross-post
threads. We should ask them to respond here on the dev list, or maybe
someone can summarize any responses from the other list.

Doug

 
 Cheers,
 Ian
 
 [1]: 
 https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2b
 c024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R126
 [2]: 
 https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2b
 c024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R128
 [3]: 
 https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2b
 c024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R70
 [4]: 
 https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2b
 c024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R189
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-16 Thread Samuel Merritt

On 2/14/15 9:49 PM, Adam Young wrote:

On 02/13/2015 04:19 PM, Morgan Fainberg wrote:

On February 13, 2015 at 11:51:10 AM, Lance Bragstad
(lbrags...@gmail.com mailto:lbrags...@gmail.com) wrote:

Hello all,


I'm proposing the Authenticated Encryption (AE) Token specification
[1] as an SPFE. AE tokens increase scalability of Keystone by
removing token persistence. This provider has been discussed prior
to, and at the Paris summit [2]. There is an implementation that is
currently up for review [3], that was built off a POC. Based on the
POC, there has been some performance analysis done with respect to
the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4].

The Keystone team spent some time discussing limitations of the
current POC implementation at the mid-cycle. One case that still
needs to be addressed (and is currently being worked), is federated
tokens. When requesting unscoped federated tokens, the token contains
unbound groups which would need to be carried in the token. This case
can be handled by AE tokens but it would be possible for an unscoped
federated AE token to exceed an acceptable AE token length (i.e. over
255 characters). Long story short, a federation migration could be
used to ensure federated AE tokens never exceed a certain length.

Feel free to leave your comments on the AE Token spec.

Thanks!

Lance

[1] https://review.openstack.org/#/c/130050/
[2] https://etherpad.openstack.org/p/kilo-keystone-authorization
[3] https://review.openstack.org/#/c/145317/
[4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I am for granting this exception as long as the following is
clear/true:

* All current use-cases for tokens (including federation) will be
supported by the new token provider.

* The federation tokens being possibly over 255 characters can be
addressed in the future if they are not addressed here (a “federation
migration” does not clearly state what is meant).


I think the length of the token is not a real issue.  We need to keep
them within header lengths.  That is 8k.  Anything smaller than that
will work.


I'd like to respectfully disagree here. Large tokens can dramatically 
increase the overhead for users of Swift with small objects since the 
token must be passed along with every request.


For example, I have a small static web site: 68 files, mean file size 
5508 bytes, median 636 bytes, total 374517 bytes. (It's an actual site; 
these are genuine data.)


If I upload these things to Swift using a UUID token, then I incur maybe 
400 bytes of overhead per file in the HTTP request, which is a 7.3% 
bloat. On the other hand, if the token + other headers is 8K, then I'm 
looking at 149% bloat, so I've more than doubled my transfer 
requirements just from tokens. :/
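The percentages above check out; here is the arithmetic spelled out (the 400-byte and 8K per-request header sizes are the estimates from the message, not measured values):

```python
# Re-deriving the overhead percentages quoted above. The per-request
# header sizes (400 bytes with a UUID token, 8 KiB worst case) are the
# message's estimates, not measured values.
files = 68
total_payload = 374517            # bytes across all 68 files

uuid_headers = files * 400        # ~400 bytes of headers per request
big_headers = files * 8192        # token + headers near the 8k limit

print(round(100.0 * uuid_headers / total_payload, 1))  # 7.3  (the "7.3% bloat")
print(round(100.0 * big_headers / total_payload, 1))   # 148.7 (the "~149% bloat")
```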


I think that, for users of Swift and any other OpenStack data-plane 
APIs, token size is a definite concern. I am very much in favor of 
anything that shrinks token sizes while keeping the scalability benefits 
of PKI tokens.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Renat Akhmerov
Steve, I saw a couple of things in what you wrote that we might be doing wrong. 
We’ll check them when we wake up and let you know what we discovered. 

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 16 Feb 2015, at 21:47, Steven Hardy sha...@redhat.com wrote:
 
 On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
   Yeah, clarification from keystone folks would be really helpful.
   If Nikolay's info is correct (I believe it is) then I actually don't
   understand why trusts are needed at all, they seem to be useless. My
   assumption is that they can be used only if we send requests directly to
   OpenStack services (w/o using clients) with trust scoped token included in
   headers, that might work although I didn't check that yet myself.
   So please help us understand which one of my following assumptions is
   correct?
1. We don't understand what trusts are.
2. We use them in a wrong way. (If yes, then what's the correct usage?)
 
 One or both of these seems likely, possibly combined with bugs in the
 clients where they try to get a new token instead of using the one you
 provide (this is a common pattern in the shell case, as the token is
 re-requested to get a service catalog).
 
 This provides some (heat specific) information which may help somewhat:
 
 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html
 
3. Trust mechanism itself is in development and can't be used at this
   point.
 
 IME trusts work fine, Heat has been using them since Havana with few
 problems.
 
4. OpenStack clients need to be changed in some way to somehow bypass
   this keystone limitation?
 
 AFAICS it's not a keystone limitation, the behavior you're seeing is
 expected, and the 403 mentioned by Nikolay is just trusts working as
 designed.
 
 The key thing from a client perspective is:
 
 1. If you pass a trust-scoped token into the client, you must not request
 another token, normally this means you must provide an endpoint as you
 can't run the normal auth code which retrieves the service catalog.
 
 2. If you could pass a trust ID in, with a non-trust-scoped token, or
 username/password, the above limitation is removed, but AFAIK none of the
 CLI interfaces support a trust ID yet.
 
 3. If you're using a trust scoped token, you cannot create another trust
 (unless you've enabled chained delegation, which only landed recently in
 keystone).  This means, for example, that you can't create a heat stack
 with a trust scoped token (when heat is configured to use trusts), unless
 you use chained delegation, because we create a trust internally.
 
 When you understand these constraints, it's definitely possible to create a
 trust and use it for requests to other services, for example, here's how
 you could use a trust-scoped token to call heat:
 
 heat --os-auth-token trust-scoped-token --os-no-client-auth
 --heat-url http://192.168.0.4:8004/v1/project-id stack-list
 
 The pattern heat uses internally to work with trusts is:
 
 1. Use a trust_id and service user credentials to get a trust scoped token
 2. Pass the trust-scoped token into python clients for other projects,
 using the endpoint obtained during (1)
 
 This works fine, what you can't do is pass the trust scoped token in
 without explicitly defining the endpoint, because this triggers
 reauthentication, which as you've discovered, won't work.
 
 Hope that helps!
 
 Steve
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-16 Thread Ivar Lazzaro
I agree with Kevin that we should adopt veth pairs for fixing the issue in
the short term, at least until CT gets merged and distributed in OVS. At
that point the transition to an OVS-based solution will make a lot of sense,
given that the numbers show it's worth it, of course ;)

On Sun Feb 15 2015 at 7:17:39 AM Thomas Graf tg...@noironetworks.com
wrote:

 FYI: Ivar (CCed) is also working on collecting numbers to compare both
 architectures to kick of a discussion at the next summit. Ivar, can
 you link to the talk proposal?


Thanks for bringing this up Thomas! Here is the link to the talk proposal
[0].
Anyone with a suggestion, idea, or random comment is very welcome!

Ivar.

[0]
https://www.openstack.org/vote-paris/presentation/taking-security-groups-to-ludicrous-speed-with-open-vswitch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-16 Thread Adam Young

On 02/16/2015 02:21 PM, Samuel Merritt wrote:

On 2/14/15 9:49 PM, Adam Young wrote:

On 02/13/2015 04:19 PM, Morgan Fainberg wrote:

On February 13, 2015 at 11:51:10 AM, Lance Bragstad
(lbrags...@gmail.com mailto:lbrags...@gmail.com) wrote:

Hello all,


I'm proposing the Authenticated Encryption (AE) Token specification
[1] as an SPFE. AE tokens increase scalability of Keystone by
removing token persistence. This provider has been discussed prior
to, and at the Paris summit [2]. There is an implementation that is
currently up for review [3], that was built off a POC. Based on the
POC, there has been some performance analysis done with respect to
the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4].

The Keystone team spent some time discussing limitations of the
current POC implementation at the mid-cycle. One case that still
needs to be addressed (and is currently being worked), is federated
tokens. When requesting unscoped federated tokens, the token contains
unbound groups which would need to be carried in the token. This case
can be handled by AE tokens but it would be possible for an unscoped
federated AE token to exceed an acceptable AE token length (i.e. over
255 characters). Long story short, a federation migration could be
used to ensure federated AE tokens never exceed a certain length.

Feel free to leave your comments on the AE Token spec.

Thanks!

Lance

[1] https://review.openstack.org/#/c/130050/
[2] https://etherpad.openstack.org/p/kilo-keystone-authorization
[3] https://review.openstack.org/#/c/145317/
[4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I am for granting this exception as long as the following is
clear/true:

* All current use-cases for tokens (including federation) will be
supported by the new token provider.

* The federation tokens being possibly over 255 characters can be
addressed in the future if they are not addressed here (a “federation
migration” does not clearly state what is meant).


I think the length of the token is not a real issue.  We need to keep
them within header lengths.  That is 8k.  Anything smaller than that
will work.


I'd like to respectfully disagree here. Large tokens can dramatically 
increase the overhead for users of Swift with small objects since the 
token must be passed along with every request.


For example, I have a small static web site: 68 files, mean file size 
5508 bytes, median 636 bytes, total 374517 bytes. (It's an actual 
site; these are genuine data.)


If I upload these things to Swift using a UUID token, then I incur 
maybe 400 bytes of overhead per file in the HTTP request, which is a 
7.3% bloat. On the other hand, if the token + other headers is 8K, 
then I'm looking at 149% bloat, so I've more than doubled my transfer 
requirements just from tokens. :/


I think that, for users of Swift and any other OpenStack data-plane 
APIs, token size is a definite concern. I am very much in favor of 
anything that shrinks token sizes while keeping the scalability 
benefits of PKI tokens.


Actually, the only tokens that are going to be non-fixed size will be 
the ones for Federation, since they need groups.  They won't be within 
the 255 byte boundary, but they will be small.







__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-16 Thread Robert Collins
On 17 February 2015 at 03:31, Alexander Tivelkov ativel...@mirantis.com wrote:
 Hi Clint,

 Thanks for your input.

 We actually support the scenarios you speak about, yet in a slightly
 different way.  The authors of the Artifact Type (the plugin
 developers) may define their own custom field (or set of fields) to
 store their sequence information or any other type-specific
 version-related metadata. So, they may use the generic version field
 (which is defined in the base artifact type) to store their numeric
 version - and use their type-specific field for local client-side
 processing.

That sounds scarily like what Neutron did, leading to a different
schema for every configuration. The reason Clint brought up Debian
version numbers is that to sort them in a database you need a custom
field type - e.g.
http://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/view/head:/database/schema/launchpad-2209-00-0.sql#L25
. And that's quite a burden :)
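The database-sorting pain is easy to reproduce: a plain string column sorts versions lexicographically, which is wrong, so you need either a custom comparator in the schema (as in the Launchpad link above) or a normalized sort key. A rough sketch of the latter (not full SemVer precedence):

```python
versions = ["1.9.0", "1.10.0", "2.0.0-beta", "2.0.0"]

# What a plain VARCHAR "ORDER BY" gives you: lexicographic order, so
# 1.10.0 sorts before 1.9.0 and 2.0.0-beta sorts after the 2.0.0 final.
print(sorted(versions))
# ['1.10.0', '1.9.0', '2.0.0', '2.0.0-beta']

# A crude normalized key: numeric components compared as integers, and
# pre-releases ordered before the final release. Real SemVer precedence
# has more rules (dotted pre-release identifiers, build metadata, ...).
def semver_key(version):
    core, _, pre = version.partition("-")
    return tuple(int(p) for p in core.split(".")), (pre == "", pre)

print(sorted(versions, key=semver_key))
# ['1.9.0', '1.10.0', '2.0.0-beta', '2.0.0']
```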

We've had fairly poor results with the Neutron variation in schemas,
as it tightly couples things, making upgrades that change plugins
super tricky, as well as making it very hard to concurrently support
multiple plugins. I hope you don't mean you're doing the same thing :)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.concurrency 1.5.0 released

2015-02-16 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.concurrency 1.5.0: oslo.concurrency library

For more details, please see the git log history below and:

http://launchpad.net/oslo.concurrency/+milestone/1.5.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

Changes in /home/dhellmann/repos/openstack/oslo.concurrency 1.4.1..1.5.0


3d6e372 Ability to set working directory
2a9321e Add eventlet test check to new tests __init__.py
67181a8 Updated from global requirements
63c30e5 Updated from global requirements
da8176d Update Oslo imports to remove namespace package

Diffstat (except docs and test files)
-

oslo_concurrency/fixture/lockutils.py|  2 +-
oslo_concurrency/lockutils.py|  2 +-
oslo_concurrency/processutils.py |  8 ++--
requirements.txt |  2 +-
test-requirements.txt|  2 +-
8 files changed, 38 insertions(+), 8 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index fb89633..8180d8f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.config>=1.6.0  # Apache-2.0
-oslo.i18n>=1.0.0  # Apache-2.0
+oslo.i18n>=1.3.0  # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 32cdaae..e715139 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -14 +14 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-eventlet>=0.15.2
+eventlet>=0.16.1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-16 Thread Jay Pipes

On 02/16/2015 01:39 PM, Jordan Pittier wrote:

 So, I don't understand what allowing the HTTP backend to support add()
gives the user of Glance.
It doesn't give anything to the user.

glance_store is all about different backends, such as the VMWare
datastore or the Sheepdog data store. Having several backends/drivers
allows the cloud operator/administrator to choose among several options
when he deploys and operates his cloud. Currently the HTTP store lacks
an 'add' method so it can't be used as a default store. But the cloud
provider may have an existing storage solution/infrastructure that has
an HTTP gateway and that understands basic PUT/GET/DELETE operations.
So having a full blown HTTP store makes sense, imo, because it gives
more deployment options.

Is that clearer ? What do you think ?


I understand what you're saying. However, if you look at the Swift 
driver, you'll see that it's really nothing more than the HTTP driver 
that you're referring to above. It's just that the swift driver knows 
the Swift HTTP API semantics.


Why not just add a scality driver that performs HTTP operations to 
store image bits in your backend storage?


Best,
-jay
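For readers unfamiliar with the interface under discussion: a store driver essentially has to honor an add/get/delete contract. Here is a toy in-memory sketch of that contract (illustrative only; glance_store's real driver API differs, taking locations, chunked readers, configuration, etc.):

```python
# Illustrative only: a toy in-memory "store driver" showing the
# add/get/delete contract being discussed. Not glance_store's real API.
class MemoryStore:
    def __init__(self):
        self._blobs = {}

    def add(self, image_id, data):
        """Store image bits; return a (location, size) pair."""
        self._blobs[image_id] = bytes(data)
        return "mem://%s" % image_id, len(data)

    def get(self, image_id):
        """Return the stored image bits."""
        return self._blobs[image_id]

    def delete(self, image_id):
        """Remove the stored image bits."""
        del self._blobs[image_id]

store = MemoryStore()
location, size = store.add("abc", b"image bits")
print(location, size)  # mem://abc 10
```

A backend without add() can only serve images registered elsewhere, which is why it can't act as a default store.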

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.serialization 1.3.0 released

2015-02-16 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.serialization 1.3.0: oslo.serialization library

For more details, please see the git log history below and:

http://launchpad.net/oslo.serialization/+milestone/1.3.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

Changes in /home/dhellmann/repos/openstack/oslo.serialization 1.2.0..1.3.0
--

c89b3f8 add dependency warning to requirements.txt
2857ee3 Correctly load and dump items with datetime.date(s)
9f2a2ed Avoid using strtime for serializing datetimes
22cea10 jsonutils: add set() tests and simplify recursive code
73f7155 jsonutils: support UUID encoding
2c244b2 Use default in dumps()
b17b786 Updated from global requirements
402abdb Update Oslo imports to remove namespace package
2a30128 Add a messagepack utils helper module
12bd905 Bump to hacking 0.10
d153781 Updated from global requirements
23e24d7 fix bug tracker link in README.rst

Diffstat (except docs and test files)
-

README.rst |   2 +-
oslo_serialization/jsonutils.py|  26 +++--
oslo_serialization/msgpackutils.py | 169 +
requirements.txt   |   9 +-
test-requirements.txt  |   4 +-
8 files changed, 378 insertions(+), 17 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d840b54..ae6cc7c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,0 +5,5 @@
+# NOTE(harlowja): Because oslo.serialization is used by the client libraries,
+# we do not want to add a lot of dependencies to it. If you find that
+# adding a new feature to oslo.serialization means adding a new dependency,
+# that is a likely indicator that the feature belongs somewhere else.
+
@@ -7,0 +13 @@ six>=1.7.0
+msgpack-python>=0.4.0
@@ -11 +17,2 @@ iso8601>=0.1.9
-oslo.utils>=1.1.0   # Apache-2.0
+oslo.utils>=1.2.0   # Apache-2.0
+pytz>=2013.6
diff --git a/test-requirements.txt b/test-requirements.txt
index f4c82b9..af6a92e 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4 +4 @@
-hacking>=0.9.2,<0.10
+hacking>=0.10.0,<0.11
@@ -14 +14 @@ simplejson>=2.2.0
-oslo.i18n>=1.0.0  # Apache-2.0
+oslo.i18n>=1.3.0  # Apache-2.0
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Jamie Lennox


- Original Message -
 From: Alexander Makarov amaka...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 4:00:05 AM
 Subject: Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work 
 by design?
 
 https://blueprints.launchpad.net/keystone/+spec/trust-scoped-re-authentication
 
 On Mon, Feb 16, 2015 at 7:57 PM, Alexander Makarov  amaka...@mirantis.com 
 wrote:
 
 
 
 We could soften this limitation a little by returning the token the client tries to
 authenticate with.
 I think we need to discuss it in community.
 
 On Mon, Feb 16, 2015 at 6:47 PM, Steven Hardy  sha...@redhat.com  wrote:
 
 
 On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
  Yeah, clarification from keystone folks would be really helpful.
  If Nikolay's info is correct (I believe it is) then I actually don't
  understand why trusts are needed at all, they seem to be useless. My
  assumption is that they can be used only if we send requests directly to
  OpenStack services (w/o using clients) with trust scoped token included in
  headers, that might work although I didn't check that yet myself.
  So please help us understand which one of my following assumptions is
  correct?
  1. We don't understand what trusts are.
  2. We use them in a wrong way. (If yes, then what's the correct usage?)
 
 One or both of these seems likely, possibly combined with bugs in the
 clients where they try to get a new token instead of using the one you
 provide (this is a common pattern in the shell case, as the token is
 re-requested to get a service catalog).
 
 This provides some (heat specific) information which may help somewhat:
 
 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html
 
  3. Trust mechanism itself is in development and can't be used at this
  point.
 
 IME trusts work fine, Heat has been using them since Havana with few
 problems.
 
  4. OpenStack clients need to be changed in some way to somehow bypass
  this keystone limitation?
 
 AFAICS it's not a keystone limitation, the behavior you're seeing is
 expected, and the 403 mentioned by Nikolay is just trusts working as
 designed.
 
 The key thing from a client perspective is:
 
 1. If you pass a trust-scoped token into the client, you must not request
 another token, normally this means you must provide an endpoint as you
 can't run the normal auth code which retrieves the service catalog.
 
 2. If you could pass a trust ID in, with a non-trust-scoped token, or
 username/password, the above limitation is removed, but AFAIK none of the
 CLI interfaces support a trust ID yet.
 
 3. If you're using a trust scoped token, you cannot create another trust
 (unless you've enabled chained delegation, which only landed recently in
 keystone). This means, for example, that you can't create a heat stack
 with a trust scoped token (when heat is configured to use trusts), unless
 you use chained delegation, because we create a trust internally.
 
 When you understand these constraints, it's definitely possible to create a
 trust and use it for requests to other services, for example, here's how
 you could use a trust-scoped token to call heat:
 
 heat --os-auth-token trust-scoped-token --os-no-client-auth
 --heat-url http://192.168.0.4:8004/v1/project-id stack-list
 
 The pattern heat uses internally to work with trusts is:
 
 1. Use a trust_id and service user credentials to get a trust scoped token
 2. Pass the trust-scoped token into python clients for other projects,
 using the endpoint obtained during (1)
 
 This works fine, what you can't do is pass the trust scoped token in
 without explicitly defining the endpoint, because this triggers
 reauthentication, which as you've discovered, won't work.
 
 Hope that helps!
 
 Steve
 

So I think what you are seeing, and what heat has come up against in the past, 
is a limitation of the various python-*clients, not a problem with the actual 
delegation mechanism from the keystone point of view. This is a result of the 
basic authentication code being copied around between clients and then not 
being kept updated since... probably Havana.

The good news is that if you go with the session-based approach then you can 
share these tokens amongst clients without the hacks. 

The identity authentication plugins that keystoneclient offers (v2 and v3 API, 
for Token and Password) both accept a trust_id to scope to, and the plugin can 
then be shared amongst all the clients that support it (AFAIK that's almost 
everyone - the big exceptions being glance and swift). 

Here's an example (untested - off the top of my head):

from keystoneclient import session 
from keystoneclient.auth.identity import v3 
from cinderclient.v2 import client as c_client
from keystoneclient.v3 import client as k_client
from novaclient.v1_1 import client as n_client

a = v3.Password(auth_url='http://keystone:5000/v3',
                username='user',
                password='pass',
                user_domain_name='default',
                trust_id='the-trust-id')  # placeholder values

s = session.Session(auth=a)

keystone = k_client.Client(session=s)
cinder = c_client.Client(session=s)
nova = n_client.Client(session=s)

Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-16 Thread Stefano Maffulli
On Mon, 2015-02-16 at 09:44 +, Daniel P. Berrange wrote:
Why did we change the rules on summit passes in this way?

Because there is always a balance to strike between open participation
and effective conversations, while the balance point keeps changing. 

In Paris we noticed fairly high attendance at the Design sessions from
people who didn't really have anything to contribute to them. We
received quite a few complaints that the Design sessions in Paris felt
overcrowded, so we decided to experiment, like we always do. 

By sending an invite automatically only to the most recent contributors
we're hoping to catch the most involved ones, expecting them to be
contributing more to the sessions. Other invites can be sent to
non-contributors, as we've always done for translators, operators and
users or developers of other projects that PTLs and other leaders
believe would contribute to Design conversations.

We may change the policy again in the future, we'll see how things go in
Vancouver.

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pci_alias config

2015-02-16 Thread Harish Patil
Hello,

Do we still need the "pci_alias" config under /etc/nova/nova.conf for SR-IOV PCI 
passthrough?

I have the Juno release, 1:2014.2.1-0ubuntu1.

Thanks,

Harish





Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-16 Thread Jordan Pittier
So, I don't understand what allowing the HTTP backend to support add()
gives the user of Glance.
It doesn't give anything to the user.

glance_store is all about different backends, such as the VMWare datastore
or the Sheepdog data store. Having several backends/drivers allows the
cloud operator/administrator to choose among several options when
deploying and operating the cloud. Currently the HTTP store lacks an 'add'
method so it can't be used as a default store. But the cloud provider may
have an existing storage solution/infrastructure that has an HTTP gateway
and that understands basic PUT/GET/DELETE operations. So having a
full-blown HTTP store makes sense, imo, because it gives more deployment options.

Is that clearer? What do you think?
Jordan
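
To make the idea concrete, here is a rough sketch of what such a store's
add() could look like, with the transport injected so the PUT can target
any HTTP gateway (hypothetical names, not the glance_store driver
interface):

```python
class SimpleHTTPStore:
    """Hypothetical HTTP-backed store supporting add() via an injected
    transport callable (method, url, body) -> HTTP status code."""

    def __init__(self, base_url, transport):
        self.base_url = base_url.rstrip("/")
        self.transport = transport

    def add(self, image_id, data):
        # PUT the image bytes to the gateway and return the location
        # Glance would record for later GETs.
        url = "%s/%s" % (self.base_url, image_id)
        status = self.transport("PUT", url, data)
        if status not in (200, 201, 204):
            raise IOError("upload failed with status %d" % status)
        return url
```

With a real transport (e.g. an HTTP client doing the PUT) this is enough
for the store to serve as a default store backed by any gateway that
speaks basic PUT/GET/DELETE.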

On Fri, Feb 13, 2015 at 7:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 11:55 AM, Jordan Pittier wrote:

 Jay, I am afraid I didn't understand your point.

 Could you rephrase/elaborate on What is the difference between just
 calling the Glance API to upload an image, versus adding add() please ?
 Currently, you can't call the Glance API to upload an image if the
 default_store is the HTTP store.


 No, you upload the image to a Glance server that has a backing data store
 like filesystem or swift. But the process of doing that (i.e. calling
 `glance image upload`) is the same as what you are describing -- it's all
 just POST'ing some data via HTTP through the Glance API endpoint.

 So, I don't understand what allowing the HTTP backend to support add()
 gives the user of Glance.

 Best,
 -jay





[openstack-dev] Cross-Project meeting, Tue February 17th, 21:00 UTC

2015-02-16 Thread Doug Hellmann
PTLs, cross-project liaisons, and other interested parties,

The weekly cross-project meeting will be tomorrow at 21:00 UTC. Thierry may not 
be able to be online reliably at the time, so he has asked me to chair the 
meeting this week.

We have one topic on the agenda at this point: the Testing Guidelines 
cross-project spec [1]. Please review the spec before the meeting tomorrow.

If you have anything else to add to the agenda, update the wiki [2].

Thanks,
Doug

[1] https://review.openstack.org/#/c/150653/
[2] https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting




Re: [openstack-dev] [ovs-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-16 Thread Thomas Graf
On 02/15/15 at 05:00pm, Kevin Benton wrote:
 What is the status of the conntrack integration with respect to
 availability in distributions? The lack of state tracking has blocked the
 ability for us to get rid of namespaces for the L3 agent (because of SNAT)
 and the filtering bridge between the VM and OVS (stateful firewall for
 security groups).
 
 It has been known for a long time that these are suboptimal, but our hands
 are sort of tied because we don't want to require kernel code changes to
 use Neutron.

 Are Ubuntu 14.04 or CentOS 7 shipping openvswitch kernel modules with
 conntrack integration? If not, I don't see a feasible way of eliminating
 any of these problems with a pure OVS solution. (faking a stateful firewall
 with flag matching doesn't count)

As soon as conntrack is merged in the upstream kernel it can be
backported. We can definitely provide support through the openvswitch.ko
in the git tree, which will give you conntrack on >= 2.6.32, but that might
not answer your question as you probably want to use the openvswitch.ko
that is shipped with your distribution. Given the interest in this, it
sounds like it makes sense to approach common distributions which do not
rebase kernels frequently to backport this feature.



[openstack-dev] [horizon]

2015-02-16 Thread David Lyle
A couple of high priority items for Horizon's Kilo release could use some
targeted attention to drive progress forward. These items are related to
angularJS based UX improvements, especially Launch Instance and the
conversion of the Identity Views.

These efforts are suffering from a few issues: education, consensus on
direction, working through blockers, and drawn-out dev and review cycles. To
help ensure these high-priority items have the best possible chance to land
in Kilo, I have proposed a virtual sprint this week. I created an etherpad
[1] with proposed dates and times. Anyone who is interested is welcome to
participate; please register your intent and availability in the etherpad.

David

[1] https://etherpad.openstack.org/p/horizon-kilo-virtual-sprint


Re: [openstack-dev] [nova] Feature Freeze Exception Request - bp/libvirt-kvm-systemz

2015-02-16 Thread Matt Riedemann



On 2/10/2015 4:27 AM, Daniel P. Berrange wrote:

On Mon, Feb 09, 2015 at 05:15:26PM +0100, Andreas Maier wrote:


Hello,
I would like to ask for the following feature freeze exceptions in Nova.

The patch sets below are all part of this blueprint:
https://review.openstack.org/#/q/status:open+project:openstack/nova
+branch:master+topic:bp/libvirt-kvm-systemz,n,z
and affect only the kvm/libvirt driver of Nova.

The decision for merging these patch sets by exception can be made one by
one; they are independent of each other.

1. https://review.openstack.org/149242 - FCP support

Title: libvirt: Adjust Nova to support FCP on System z systems

What it does: This patch set enables FCP support for KVM on System z.

Impact if we don't get this: FCP attached storage does not work for KVM
on System z.

Why we need it: We really depend on this particular patch set, because
FCP is our most important storage attachment.

Additional notes: The code in the libvirt driver that is updated by this
patch set is consistent with corresponding code in the Cinder driver,
and it has seen review by the Cinder team.

2. https://review.openstack.org/150505 - Console support

Title: libvirt: Enable serial_console feature for system z

What it does: This patch set enables the backing support in Nova for the
interactive console in Horizon.

Impact if we don't get this: Console in Horizon does not work. The
mitigation for a user would be to use the Log in Horizon (i.e. with
serial_console disabled), or the virsh console command in an ssh
session to the host Linux.

Why we need it: We'd like to have console support. Also, because the
Nova support for the Log in Horizon has been merged in an earlier patch
set as part of this blueprint, this remaining patch set makes the
console/log support consistent for KVM on System z Linux.

3. https://review.openstack.org/150497 - ISO/CDROM support

Title: libvirt: Set SCSI as the default cdrom bus on System z

What it does: This patch set enables that cdrom drives can be attached
to an instance on KVM on System z. This is needed for example for
cloud-init config files, but also for simply attaching ISO images to
instances. The technical reason for this change is that the IDE
attachment is not available on System z, and we need SCSI (just like
Power Linux).

Impact if we don't get this:
   - Cloud-init config files cannot be on a cdrom drive. A mitigation
  for a user would be to have such config files on a cloud-init
  server.
   - ISO images cannot be attached to instances. There is no mitigation.

Why we need it: We would like to avoid having to restrict cloud-init
configuration to just using cloud-init servers. We would like to be able
to support ISO images.

Additional notes: This patch is a one line change (it simply extends
what is already done in a platform specific case for the Power platform,
to be also used for System z).


I will happily sponsor exception on patches 2 & 3, since they are pretty
trivial & easily understood.


I will tentatively sponsor patch 1, if other reviewers feel able to do a
strong review of the SCSI stuff, since SCSI host setup is not
something I'm particularly familiar with.

Regards,
Daniel



2 of the 3 changes have been merged outside of the FFE process; the only 
remaining one is the FCP support:


https://review.openstack.org/#/c/149242/

The question is: what is the impact on s390x users without this? Does 
this make using cinder impossible for zKVM users?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Sean Dague
On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,

 The os-ansible-deployment team was working on updates to add support for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].

 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the middle
 of the night on the weekend). I’m also aware that a lot of the downstream
 redistributors tend to work from global-requirements.txt when determining
 what to package/support.

 It seems to me like there’s room to clean up some of these requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 

 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the right
 way forward? What is the best way to both maintain a stable branch with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.

History has shown that it's too much work keeping testing functioning
for stable branches if we leave dependencies uncapped. If particular
people are interested in bumping versions when releases happen, it's
easy enough to do with a requirements proposed update. It will even run
tests that in most cases will prove that it works.

It might even be possible for someone to build some automation that did
that as stuff from pypi released so we could have the best of both
worlds. But I think capping is definitely something we want as a
project, and it reflects the way that most deployments will consume this
code.
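
As a concrete illustration of what a cap buys you, here is a minimal
sketch of how a comma-separated specifier like the ones in
global-requirements.txt is evaluated (a toy parser; real tooling such as
pip implements the full PEP 440 grammar, pre-releases and all):

```python
def parse_version(v):
    # Toy version parser: dotted integers only.
    return tuple(int(p) for p in v.split("."))

def satisfies(version, spec):
    """Check a version against specifiers like '>=2.1.0,<=2.2.0'.

    Minimal sketch of capped-requirement matching; every clause must
    hold for the version to be installable.
    """
    ops = {
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        "<": lambda a, b: a < b,
        ">": lambda a, b: a > b,
    }
    for clause in spec.split(","):
        clause = clause.strip()
        # Try two-character operators before one-character ones.
        for op in (">=", "<=", "==", "!=", "<", ">"):
            if clause.startswith(op):
                if not ops[op](parse_version(version),
                               parse_version(clause[len(op):])):
                    return False
                break
    return True
```

Under a cap like '>=2.1.0,<=2.2.0', a new upstream 2.3.0 release simply
does not match, so it cannot break the stable branch until someone
proposes a requirements update raising the cap.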

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-16 Thread Vishvananda Ishaya
If this feature is going to be added, I suggest it gets a different name. Force 
host is an admin command to force an instance onto a host. If you want to make 
a user-facing command that respects filters, perhaps something like 
requested-host might work. In general, however, the names of hosts are not 
exposed to users, so this is moving far away from cloud and into virtualization 
management.
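
The distinction being drawn here can be sketched in a few lines
(illustrative scheduler logic; 'requested_host' is the hypothetical
user-facing variant, not an existing nova option):

```python
def schedule(hosts, filters, force_host=None, requested_host=None):
    """Contrast the two behaviors under discussion.

    force_host bypasses the filters entirely (current admin behavior),
    while a requested-host would still have to pass them, so an
    unsuitable host is rejected at scheduling time instead of failing
    later on nova-compute.
    """
    if force_host is not None:
        # Admin override: skip all filters.
        return [h for h in hosts if h["name"] == force_host]
    # Normal path: run every filter, then narrow to the requested host.
    candidates = [h for h in hosts if all(f(h) for f in filters)]
    if requested_host is not None:
        candidates = [h for h in candidates if h["name"] == requested_host]
    return candidates
```

With a RAM filter in place, forcing a too-small host still "succeeds" at
the scheduler, whereas requesting it yields no candidates up front.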

Vish

On Feb 12, 2015, at 1:05 AM, Rui Chen chenrui.m...@gmail.com wrote:

 Hi:
 
 If we boot an instance with 'force_hosts', the forced host will skip all 
  filters. It looks like intentional logic, but I don't know the reason.
  
 I'm not sure the skipping logic is appropriate; I think we should remove 
  it and have 'force_hosts' work with the scheduler, testing whether the 
  forced host is suitable as early as possible. Skipping the filters and 
  postponing the boot failure to nova-compute is not advisable.
 
 On the other side, more and more options have been added to flavors - NUMA, 
  CPU pinning, PCI and so on - so forcing a suitable host is more and more 
  difficult.
 
 
 Best Regards.




[openstack-dev] [cinder] some questions about bp filtering-weighing-with-driver-supplied-functions

2015-02-16 Thread Zhangli (ISSP)
Hi,
I noticed the following BP has been merged recently:
https://blueprints.launchpad.net/cinder/+spec/filtering-weighing-with-driver-supplied-functions
I have read the related spec 
(http://git.openstack.org/cgit/openstack/cinder-specs/tree/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst)
and have some questions.

For my understanding, this BP brings two benefits:
1) different admins can make various filtering/weighing configurations (by 
editing equations in cinder.conf) to meet their various requirements; the 
equation approach itself is much more flexible than single capability scheduling.
2) different backend drivers can perform vendor-specific evaluation.

The BP seems to focus more on the second target: letting drivers do the 
evaluation by themselves. In the spec, editing equations in cinder.conf is just 
an example of a driver implementation: it is up to the driver to determine how 
to generate the equations... Some choices a driver has are to use values 
defined in cinder.conf, hard-code the values in the driver, or not implement 
the properties at all.
But I think it is also a fact that a lot of devices have common 
capabilities/attributes (even thin provisioning can be taken as a common 
attribute today), so can we make editing equations in cinder.conf a base/common 
implementation for this new scheduler?
Which means:
1) this new scheduler has a built-in implementation of filter/goodness funtion;
2) drivers can supply their own functions as they do now; if a driver does not 
supply one, the built-in function will be used;
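
To make the fallback proposal concrete, here is a rough sketch with a
restricted eval standing in for the spec's equation evaluator (the
default equation is an assumption, purely illustrative):

```python
# Assumed built-in goodness equation: prefer less-utilized backends.
DEFAULT_GOODNESS = "100 - used_ratio * 100"

def goodness(backend_stats, driver_equation=None):
    """Evaluate a goodness equation against backend-reported stats.

    Sketch of the idea above: use the driver-supplied equation when the
    driver provides one, otherwise fall back to the scheduler's built-in
    default, so backends without vendor equations still get scored.
    """
    equation = driver_equation or DEFAULT_GOODNESS
    # Restricted eval over the stats dict; stand-in for a real evaluator.
    return eval(equation, {"__builtins__": {}}, dict(backend_stats))
```

A backend that reports no equation is scored by the common rule; a
vendor driver can still override it with its own expression.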

Another question:
Can we associate different volume types with different evaluation rules 
(i.e. different filter/goodness function pairs)? I think this would also be 
very useful.

Thanks.


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-16 Thread Renat Akhmerov
Along with the <% %> syntax, here are some other alternatives that I checked 
for YAML friendliness, with short comments:

p1: ${1 + $.var}    # Bad: the $ sign is used for two different things
p2: ~{1 + $.var}    # ~ is easy to miss in a text
p3: ^{1 + $.var}    # May be associated with regular expressions
p4: ?{1 + $.var}
p5: {1 + $.var}     # This is kinda crazy
p6: e{1 + $.var}    # A pretty interesting option to me; "e" could mean
                    # "expression" here
p7: yaql{1 + $.var} # Interesting because it would give a clear and easy
                    # mechanism to plug in other expression languages; "yaql"
                    # names the dialect used for the following expression
p8: y{1 + $.var}    # "y" here is just shortened "yaql"
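
Whatever delimiter wins, the parser's job is roughly the same; here is a
minimal sketch of extracting inline expressions, using the proposed
<% %> form (illustrative only, not Mistral's actual parser):

```python
import re

# Match <% ... %> non-greedily, trimming surrounding whitespace.
EXPR = re.compile(r"<%\s*(.*?)\s*%>")

def split_template(value):
    """Split a DSL string into literal text and <% %> expression parts.

    Illustrative sketch only; the real engine also has to decide result
    types when a whole value is a single expression (int vs string etc.).
    """
    parts = []
    pos = 0
    for m in EXPR.finditer(value):
        if m.start() > pos:
            parts.append(("text", value[pos:m.start()]))
        parts.append(("expr", m.group(1)))
        pos = m.end()
    if pos < len(value):
        parts.append(("text", value[pos:]))
    return parts
```

Because <% and %> are not YAML indicator characters, such values need no
extra quoting, which is the whole point of the change.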


Any ideas and thoughts would be really appreciated!

Renat Akhmerov
@ Mirantis Inc.



 On 17 Feb 2015, at 12:53, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Dmitri,
 
 I agree with all your reasonings and fully support the idea of changing the 
 syntax now as well as changing system’s API a little bit due to recently 
 found issues in the current engine design that don’t allow us, for example, 
 to fully implement ‘with-items’ (although that’s a little bit different 
 story).
 
 Just a general note about all changes happening now: Once we release kilo 
 stable release our API, DSL of version 2 must be 100% stable. I was hoping to 
 stabilize it much earlier but the start of production use revealed a number 
 of things (I think this is normal) which we need to address, but not later 
 than the end of Kilo.
 
 As far as the <% %> syntax goes, I see that it would solve a number of problems 
 (YAML friendliness, type ambiguity), but my only (not strong) argument is that 
 it doesn't look as elegant in YAML as it does, for example, in ERB templates. 
 It really reminds me of XML/HTML and looks like a bear in a grocery store (tried 
 to make it close to an old Russian saying :) ). So for this reason alone I'd 
 suggest we think about other alternatives, maybe not so familiar to 
 Ruby/Chef/Puppet users but looking better with YAML and at the same time 
 being YAML friendly.
 
 It would be good if we could hear more feedback on this, especially from 
 people who have started using Mistral.
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com 
 mailto:dzim...@stackstorm.com wrote:
 
 SUMMARY: 
 
 
 We are changing the syntax for inlining YAQL expressions in Mistral YAML 
 from {1+$.my.var} (or "{1+$.my.var}") to <% 1+$.my.var %>
 
 Below I explain the rationale and the criteria for the choice. Comments and 
 suggestions welcome.
 
 DETAILS: 
 -
 
 We faced a number of problems with using YAQL expressions in Mistral DSL: 
 [1] it must handle any YAQL, not only expressions starting with $; [2] it 
 must preserve types; and [3] it must comply with YAML. We fixed these problems 
 by applying Ansible-style syntax, requiring quotes around delimiters (e.g. 
 "{1+$.my.yaql.var}"). However, it led to unbearable confusion in DSL 
 readability with regard to types:
 
 publish:
    intvalue1: "{1+1}" # Confusing: you expect quotes to mean a string.
    intvalue2: "{int(1+1)}" # Even this doesn't clear up the confusion.
    whatisthis: "{$.x + $.y}" # What type would this return? 
 
 We got a very strong push back from users in the field on this syntax. 
 
 The crux of the problem is using { } as delimiters in YAML. It is plain wrong 
 to use reserved characters. The clean solution is to find a delimiter 
 that won't conflict with YAML.
 
 Criteria for selecting the best alternative are: 
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML 
 3) Familiar to the target user audience - openstack and devops
 
 We prefer using two-char delimiters to avoid requiring extra escaping within 
 the expressions.
 
 The current winner is <% %>. It fits YAML well. It is familiar to 
 openstack/devops folks as it is used for embedding Ruby expressions in Puppet 
 and Chef (for instance, [4]). It plays relatively well across all cases of 
 using expressions in Mistral (see examples in [5]):
 
 ALTERNATIVES considered:
 --
 
 1) Use Ansible-like syntax: http://docs.ansible.com/YAMLSyntax.html#gotchas 
 http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.
 
 2) Use functions, like Heat HOT or TOSCA:
 
 HOT templates and TOSCA don't seem to have a concept of typed variables to 
 borrow from (please correct me if I missed it). But they have functions: 
 function: { function_name: {foo: [parameter1, parameter2], bar: "xxx"}}. 
 Applied to Mistral, it would look like:
 
 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" } 
 
 Not bad, but currently rejected as it reads worse than delimiter-based 
 syntax, especially in the simplified one-line action invocation.
 
 3) Angle brackets paired with other symbols: PHP-style <? ?> 

Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper

2015-02-16 Thread Sean Roberts
I'm good for either time Tuesday. 

~sean

 On Feb 16, 2015, at 15:45, Tim Hinrichs thinri...@vmware.com wrote:
 
 Let's default to 3p/7a if we don't hear from Bryan.
 
 Tim
 
 P. S. Pardon the brevity. Sent from my mobile. 
 
 On Feb 15, 2015, at 10:15 PM, Zhipeng Huang zhipengh...@gmail.com wrote:
 
 2 - 4 pm Tuesday (which means 6 - 8 am China Wed) should work for me. Let's 
 see if Bryan is ok with the time slot. And yes you don't need an account :)
 
 On Mon, Feb 16, 2015 at 1:36 PM, Tim Hinrichs thinri...@vmware.com wrote:
 I just realized Monday is a holiday. What about 8a, 10a, 2-4p Tuesday?
 
 I'm happy to try out gotomeeting.  Looks like I don't need an account, 
 right?
 
 Tim
 
 P. S. Pardon the brevity. Sent from my mobile. 
 
 On Feb 13, 2015, at 4:25 PM, Zhipeng Huang zhipengh...@gmail.com wrote:
 
 Hi Tim,
 
 Monday 9am PST should be ok for me; it would be better if you guys could do 
 4 - 5 pm, but let's settle on 9am for now and see if Bryan and others are ok 
 with it. Regarding meeting tools, in OPNFV we use GoToMeeting a lot, would 
 you guys be ok with that? Or should we have a Google Hangout?
 
 On Feb 14, 2015 8:04 AM, Tim Hinrichs thinri...@vmware.com wrote:
 Hi Zhipeng,
 
 Sure we can talk online Mon/Tue.  If you come up with some times the 
 Copper team is available, I’ll do the same for a few of the Congress 
 team.  We’re usually available 9-4p Pacific, and for me Monday looks 
 better than Tuesday.
 
 If we schedule an hour meeting in Santa Rosa for Wed at say 1:30 or 2p, 
 I’ll do my best to make it there.  Even if I can’t make it, you’ll always 
 have Sean to talk with.
 
 Tim
 
 
 
 
 On Feb 12, 2015, at 6:49 PM, Zhipeng Huang zhipengh...@gmail.com wrote:
 
 THX Tim!
 
 I think it'd be great if we could have some online discussion ahead of the 
 F2F LF Collaboration Summit. We could have the crash course early next week 
 (Monday or Tuesday), and then Bryan could discuss with Sean in detail when 
 they meet, with specific questions. 
 
 Would this be ok for everyone? 
 
 On Fri, Feb 13, 2015 at 7:21 AM, Tim Hinrichs thinri...@vmware.com 
 wrote:
 Bryan and Zhipeng,
 
 Sean Roberts (CCed) is planning to be in Santa Rosa.   Sean’s 
 definitely there on Wed.  Less clear about Thu/Fri.
 
 I don’t know if I’ll make the trip yet, but I’m guessing Wed early 
 afternoon if I can.  
 
 Tim
 
 
 On Feb 11, 2015, at 9:04 PM, SULLIVAN, BRYAN L bs3...@att.com wrote:
 
 Hi Tim,
 
  
 It would be great to meet with members of the Congress project if 
 possible at our meetup in Santa Rosa. I plan by then to have a basic 
 understanding of Congress and some test driver apps / use cases to 
 demo at the meetup. The goal is to assess the current state of 
 Congress support for the use cases on the OPNFV wiki: 
 https://wiki.opnfv.org/copper/use_cases
 
  
 I would be doing the same with ODL but I’m not as far on getting ready 
 with it. So the opportunity to discuss the use cases under Copper and 
 the other policy-related projects
 
 (fault management, resource management, resource scheduler) with 
 Congress experts would be great.
 
  
 Once we understand the gaps in what we are trying to build in OPNFV, 
 the goal for our first OPNFV release is to create blueprints for new 
 work in Congress. We might also just find some bugs and get directly 
 involved in Congress to address them, or start a collaborative 
 development project in OPNFV for that. TBD
 
  
 Thanks,
 
 Bryan Sullivan | Service Standards | AT&T
 
  
 From: Tim Hinrichs [mailto:thinri...@vmware.com] 
 Sent: Wednesday, February 11, 2015 10:22 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang
 Subject: Re: [openstack-dev] [congress][Policy][Copper]Collaboration 
 between OpenStack Congress and OPNFV Copper
 
  
 
 Hi Zhipeng,
 
  
 
 We’d be happy to meet.  Sounds like fun!  
 
  
 
 I don’t know of anyone on the Congress team who is planning to attend 
 the LF collaboration summit.  But we might be able to send a couple of 
 people if it’s the only real chance to have a face-to-face.  
 Otherwise, there are a bunch of us in and around Palo Alto.  And of 
 course, phone/google hangout/irc are fine options as well.
 
  
 
 Tim
 
  
 
  
 
  
 
 On Feb 11, 2015, at 8:59 AM, Zhipeng Huang zhipengh...@gmail.com 
 wrote:
 
  
 
 Hi Congress Team,
 
  
 
 As you might already know, we have a project in OPNFV covering 
 deployment policy called Copper, in which we identified Congress as one 
 of the upstream projects that we need to bring our requirements to. Our 
 team has been working on setting up a simple OpenStack environment 
 with Congress integrated that could demo simple use cases for policy 
 deployment.
 
  
 
 Would it be possible for you guys and our team to find a time to do a 
 Copper/Congress interlock meeting, during which the Congress team could 
 introduce how to best integrate Congress with vanilla OpenStack? 
 Will some of you attend the LF Collaboration Summit?
 
  
 
 Thanks a lot :) 

[openstack-dev] [cinder] [oslo] MySQL connection shared between green threads concurrently

2015-02-16 Thread Mike Bayer
hi all -

I’ve been researching this cinder issue 
https://bugs.launchpad.net/cinder/+bug/1417018 and I’ve found something 
concerning.

Basically there’s a race condition here which is occurring because a single 
MySQL connection is shared between two green threads.  This occurs while Cinder 
is doing its startup work in cinder/volume/manager.py - init_host(), and at 
the same time a request comes in from a separate service call that seems to be 
part of the startup.

The log output at http://paste.openstack.org/show/175571/ shows this happening. 
 I can break it down:


1. A big query for volumes occurs as part of 
"self.db.volume_get_all_by_host(ctxt, self.host)". To reproduce the error 
more regularly I've placed it into a loop of 100 calls. We can see that the 
thread id is 68089648 and the MySQL connection is 3a9c5a0.

2015-02-16 19:32:47.236 INFO sqlalchemy.engine.base.Engine 
[req-ed3c0248-6ee5-4063-80b5-77c5c9a23c81 None None] tid: 68089648, connection: 
_mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt SELECT 
volumes.created_at AS 
2015-02-16 19:32:47.237 INFO sqlalchemy.engine.base.Engine 
[req-ed3c0248-6ee5-4063-80b5-77c5c9a23c81 None None] 
('localhost.localdomain@ceph', 'localhost.localdomain@ceph#%’)

2. A “ping” query comes in related to a different API call - different thread 
ID, *same* connection

2015-02-16 19:32:47.276 INFO sqlalchemy.engine.base.Engine 
[req-600ef638-cb45-4a34-a3ab-6d22d83cfd00 None None] tid: 68081456, connection: 
_mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt SELECT 1
2015-02-16 19:32:47.279 INFO sqlalchemy.engine.base.Engine 
[req-600ef638-cb45-4a34-a3ab-6d22d83cfd00 None None] ()

3. The first statement is still in the middle of invocation, so we get a 
failure, either a mismatch of the statement to the cursor, or a MySQL lost 
connection (stack trace begins)

Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 457, in 
fire_timers
 … more stack trace

4. another thread id, *same* connection.

2015-02-16 19:32:47.290 INFO sqlalchemy.engine.base.Engine 
[req-f980de7c-151d-4fed-b45e-d12b133859a6 None None] tid: 61238160, connection: 
_mysql.connection open to '127.0.0.1' at 3a9c5a0, stmt SELECT 1

rows = [process[0](row, None) for row in fetch]
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py, line 
363, in _instance
tuple([row[column] for column in pk_cols])
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 
331, in _key_fallback
expression._string_or_unprintable(key))
NoSuchColumnError: Could not locate column in row for column 'volumes.id'
2015-02-16 19:32:47.293 ERROR cinder.openstack.common.threadgroup [-] Could 
not locate column in row for column 'volumes.id'


So I'm not really sure what's going on here.  Cinder seems to have some 
OpenStack greenlet code of its own in cinder/openstack/common/threadgroup.py; 
I don't know the purpose of this code.   SQLAlchemy's connection pool has been 
tested a lot with eventlet / gevent and this has never been reported before. 
This is a very severe race, and I'd think that it would be happening all over 
the place.

Current status is that I’m continuing to try to determine why this is happening 
here, and seemingly nowhere else.
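As an aside, the failure class shown in steps 1-4 — a second statement starting on a connection while another is still in flight — can be modelled with a toy connection object. This is purely illustrative (not Cinder or SQLAlchemy code; the ToyConnection name is made up):

```python
class ToyConnection:
    """A DBAPI-ish connection that records interleaved statements.

    A real MySQL connection in this situation raises "commands out of
    sync", loses the connection, or hands rows to the wrong cursor.
    """
    def __init__(self):
        self.in_flight = None   # statement currently awaiting its result
        self.errors = []

    def execute(self, stmt):
        if self.in_flight is not None:
            # Another logical thread is mid-statement on this connection.
            self.errors.append((stmt, self.in_flight))
            return
        self.in_flight = stmt

    def finish(self):
        self.in_flight = None

conn = ToyConnection()                         # one connection (cf. 3a9c5a0)
conn.execute("SELECT volumes.created_at ...")  # step 1: the big volume query
conn.execute("SELECT 1")                       # step 2: the "ping" interleaves
conn.finish()

print(conn.errors)
# → [('SELECT 1', 'SELECT volumes.created_at ...')]
```

The invariant this violates is exactly the one the log shows being violated: a pooled connection must be used by one logical thread at a time.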




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-16 Thread Stefano Maffulli
On Sat, 2015-02-14 at 21:11 -0500, Nick Chase wrote:
 Does anybody know if a) ATC emails have started to go out yet, and b) 
 when proposal voting will start?


Voting started:

http://www.openstack.org/vote-vancouver


Hurry, voting closes at 5pm CT on Monday, February 23. 


Continue to visit openstack.org/summit for all Summit-related
information, including registration, visa letters, hotels and FAQ. 

/stef





Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Ian Cordasco
On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:

On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,

 The os-ansible-deployment team was working on updates to add support
for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].

 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
middle
 of the night on the weekend). I’m also aware that a lot of the
downstream
 redistributors tend to work from global-requirements.txt when
determining
 what to package/support.

 It seems to me like there’s room to clean up some of these requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 

 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
right
 way forward? What is the best way to both maintain a stable branch with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.

History has shown that it's too much work keeping testing functioning
for stable branches if we leave dependencies uncapped. If particular
people are interested in bumping versions when releases happen, it's
easy enough to do with a requirements proposed update. It will even run
tests that in most cases will prove that it works.

It might even be possible for someone to build some automation that did
that as stuff from pypi released so we could have the best of both
worlds. But I think capping is definitely something we want as a
project, and it reflects the way that most deployments will consume this
code.

   -Sean

-- 
Sean Dague
http://dague.net

Right. No one is disputing the very clear benefits of all of this.

I'm just wondering, for the example version identifiers that I gave in
my original message (and others that are very similar), whether we want to make
the strings much simpler for the people who tend to work from them (i.e.,
downstream re-distributors, whose jobs are already difficult enough). I've
offered to help at least one of them in the past who maintains all of
their distro's packages themselves, but they refused, so I'd like to help
them in any way possible. Especially if any of them chime in that this is
something that would be helpful.

Cheers,
Ian



Re: [openstack-dev] [oslo] liberty summit planning

2015-02-16 Thread Doug Hellmann


On Wed, Feb 11, 2015, at 12:05 PM, Doug Hellmann wrote:
 As I mentioned at the meeting Monday, we need to start thinking about the
 number and types of rooms we will need for the summit. I’ve started an
 etherpad to collect topics, just like we did for the kilo summit [1].
 Please add your ideas to the list there so we can review them.
 
 Thanks,
 Doug
 
 
 [1] https://etherpad.openstack.org/p/liberty-oslo-summit-planning

As discussed at the meeting today, I have opened the specs repository
for submissions for the liberty cycle. If you are planning a summit
session around a change for which we would want a spec, please go ahead
and file the spec when you add your proposal to the etherpad.

Doug



Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-16 Thread Samuel Merritt

On 2/16/15 11:48 AM, Lance Bragstad wrote:



On Mon, Feb 16, 2015 at 1:21 PM, Samuel Merritt (s...@swiftstack.com) wrote:

On 2/14/15 9:49 PM, Adam Young wrote:

On 02/13/2015 04:19 PM, Morgan Fainberg wrote:

On February 13, 2015 at 11:51:10 AM, Lance Bragstad
(lbrags...@gmail.com) wrote:

Hello all,

I'm proposing the Authenticated Encryption (AE) Token specification
[1] as an SPFE. AE tokens increase scalability of Keystone by
removing token persistence. This provider has been discussed prior
to, and at, the Paris summit [2]. There is an implementation that is
currently up for review [3], which was built off a POC. Based on the
POC, there has been some performance analysis done with respect to
the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4].

The Keystone team spent some time discussing limitations of the
current POC implementation at the mid-cycle. One case that still
needs to be addressed (and is currently being worked) is federated
tokens. When requesting unscoped federated tokens, the token contains
unbound groups which would need to be carried in the token. This case
can be handled by AE tokens, but it would be possible for an unscoped
federated AE token to exceed an acceptable AE token length (i.e. >
255 characters). Long story short, a federation migration could be
used to ensure federated AE tokens never exceed a certain length.

Feel free to leave your comments on the AE Token spec.

Thanks!

Lance

[1] https://review.openstack.org/#/c/130050/
[2] https://etherpad.openstack.org/p/kilo-keystone-authorization
[3] https://review.openstack.org/#/c/145317/
[4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/




I am for granting this exception as long as the following is
clear/true:

* All current use-cases for tokens (including federation) will be
supported by the new token provider.

* The federation tokens possibly being over 255 characters can be
addressed in the future if they are not addressed here (a
"federation migration" does not clearly state what is meant).

I think the length of the token is not a real issue.  We need to keep
them within header lengths.  That is 8k.  Anything smaller than that
will work.


I'd like to respectfully disagree here. Large tokens can
dramatically increase the overhead for users of Swift with small
objects since the token must be passed along with every request.

For example, I have a small static web site: 68 files, mean file
size 5508 bytes, median 636 bytes, total 374517 bytes. (It's an
actual site; these are genuine data.)

If I upload these things to Swift using a UUID token, then I incur
maybe 400 bytes of overhead per file in the HTTP request, which is a
7.3% bloat. On the other hand, if the token + other headers is 8K,
then I'm looking at 149% bloat, so I've more than doubled my
transfer requirements just from tokens. :/
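As a quick sanity check, the percentages above follow directly from the quoted figures (numbers copied from the paragraph; 400 bytes and 8 KiB are the per-request overheads being compared):

```python
files = 68
total_bytes = 374517            # total site size from the message

def bloat_pct(per_request_overhead_bytes):
    """HTTP request overhead across all uploads, as a % of the payload."""
    return 100.0 * files * per_request_overhead_bytes / total_bytes

print(round(bloat_pct(400), 1))    # ~400 B of token + headers → 7.3
print(round(bloat_pct(8192)))      # 8 KiB of token + headers → 149
```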

I think that, for users of Swift and any other OpenStack data-plane
APIs, token size is a definite concern. I am very much in favor of
anything that shrinks token sizes while keeping the scalability
benefits of PKI tokens.


Ideally, what's the 

Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-16 Thread Przemyslaw Kaminski


On 02/16/2015 01:55 PM, Jay Pipes wrote:
 On 02/16/2015 06:54 AM, Przemyslaw Kaminski wrote:
 Hello,
 
 This somehow relates to [1]: in integration tests we have a
 class called FakeThread. It is responsible for spawning threads
 to simulate asynchronous tasks in fake env. In
 BaseIntegrationTest class we have a method called
 _wait_for_threads that waits for all fake threads to terminate.
 
 In my understanding what these things actually do is that they
 just simulate Astute's responses. I'm thinking if this could be
 replaced by a better solution, I just want to start a discussion
 on the topic.
 
 My suggestion is to get rid of all this stuff and implement a 
 predictable solution: something along promises or coroutines
 that would execute synchronously. With either promises or
 coroutines we could simulate tasks responses any way we want
 without the need to wait using unpredictable stuff like sleeping,
 threading and such. No need for waiting or killing threads. It
 would hopefully make our tests easier to debug and get rid of the
 random errors that are sometimes getting into our master branch.
 
 Hi!
 
 For integration/functional tests, why bother faking out the threads
 at all? Shouldn't the integration tests be functionally testing the
 real code, not mocked or faked stuff?
 

Well, you'd need Rabbit/Astute etc. fully set up and working, so this was
made for less painful testing, I guess. These tests concern only the
Nailgun side, so I think it's OK to have fake tasks like this. Full
integration tests with all components are made by the QA team.
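The "promises or coroutines that would execute synchronously" idea quoted above could look roughly like this — a sketch only; FakeAstute and its API are invented for illustration, not actual Fuel code:

```python
class FakeAstute:
    """Delivers a canned task response synchronously: no threads, no sleeps."""
    def __init__(self, canned_response):
        self.canned_response = canned_response

    def run_task(self, task_name, on_done):
        # The "promise" resolves immediately: by the time run_task()
        # returns, a test can assert on the final state. Nothing to
        # wait for, nothing to kill.
        on_done(task_name, self.canned_response)

results = {}
astute = FakeAstute({"status": "ready", "progress": 100})
astute.run_task("deploy", lambda name, resp: results.update({name: resp}))

print(results["deploy"]["status"])   # → ready
```

Because the callback fires before run_task() returns, tests become deterministic and _wait_for_threads-style polling disappears.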

P.

 Best, -jay
 


Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-16 Thread Daniel P. Berrange
On Mon, Feb 16, 2015 at 04:31:21PM +0300, Dmitry Guryanov wrote:
 On 02/13/2015 05:50 PM, Jay Pipes wrote:
 On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:
 On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:
 On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:
 Historically Nova has had a bunch of code which mounted images on the
 host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that
 
 In general, without use of user namespaces, LXC can't really be
 considered secure in OpenStack, and this already requires libvirt
 version 1.1.1 and Nova Juno release.
 
As such I'd be surprised if anyone is running OpenStack with libvirt
LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
 but stranger things have happened.
 
 The general libvirt min requirement for LXC, QEMU and KVM currently
 is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
 but feel it is worth increasing the LXC min libvirt to 1.0.6
 
 So would anyone object if we increased min libvirt to 1.0.6 when
 running the LXC driver ?
 
 Thanks for raising the question, Daniel!
 
 Since there are no objections, I'd like to make 1.1.1 the minimal required
 version. Let's also make parameters uid_maps and gid_maps mandatory and
 always add them to libvirt XML.

I think it is probably not enough prior warning to actually turn on user
namespace by default in Kilo. So I think what we should do for Kilo is to
issue a warning message on nova startup if userns is not enabled in the
config, telling users that this will become mandatory in Liberty. Then
when Liberty dev opens, we make it mandatory.
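A minimum-version gate of this shape is straightforward because libvirt reports its version as a single integer, major * 1,000,000 + minor * 1,000 + micro. A rough sketch (function and constant names invented for illustration, not Nova's actual code):

```python
def version_to_int(version_tuple):
    """Encode (major, minor, micro) the way libvirt's getVersion() does."""
    major, minor, micro = version_tuple
    return major * 1000000 + minor * 1000 + micro

MIN_LIBVIRT_LXC = (1, 0, 6)     # libvirt can mount images itself
MIN_LIBVIRT_USERNS = (1, 1, 1)  # user namespaces available

def check_lxc_host(libvirt_version_int, userns_enabled):
    if libvirt_version_int < version_to_int(MIN_LIBVIRT_LXC):
        raise RuntimeError("libvirt too old for the Nova LXC driver")
    if not userns_enabled or libvirt_version_int < version_to_int(MIN_LIBVIRT_USERNS):
        # Kilo behaviour per the plan above: warn only, and make it
        # mandatory once Liberty development opens.
        print("warning: user namespaces not in use; LXC guests are insecure")

check_lxc_host(version_to_int((1, 0, 6)), userns_enabled=False)
```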

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-16 Thread Jay Pipes

On 02/16/2015 08:18 AM, Przemyslaw Kaminski wrote:

On 02/16/2015 01:55 PM, Jay Pipes wrote:

On 02/16/2015 06:54 AM, Przemyslaw Kaminski wrote:

Hello,

This somehow relates to [1]: in integration tests we have a
class called FakeThread. It is responsible for spawning threads
to simulate asynchronous tasks in fake env. In
BaseIntegrationTest class we have a method called
_wait_for_threads that waits for all fake threads to terminate.

In my understanding what these things actually do is that they
just simulate Astute's responses. I'm thinking if this could be
replaced by a better solution, I just want to start a discussion
on the topic.

My suggestion is to get rid of all this stuff and implement a
predictable solution: something along promises or coroutines
that would execute synchronously. With either promises or
coroutines we could simulate tasks responses any way we want
without the need to wait using unpredictable stuff like sleeping,
threading and such. No need for waiting or killing threads. It
would hopefully make our tests easier to debug and get rid of the
random errors that are sometimes getting into our master branch.


Hi!

For integration/functional tests, why bother faking out the threads
at all? Shouldn't the integration tests be functionally testing the
real code, not mocked or faked stuff?


Well, you'd need Rabbit/Astute etc. fully set up and working, so this was
made for less painful testing, I guess. These tests concern only the
Nailgun side, so I think it's OK to have fake tasks like this. Full
integration tests with all components are made by the QA team.


OK, so these are not integration tests, then. Sounds like they are 
merely unit tests, and as such should just mock out any 
cross-function-unit boundaries and not need any FakeThread class at all.


Best,
-jay



[openstack-dev] [horizon] - Add custom JS functions to dashboard

2015-02-16 Thread Marcos Fermin Lobo

Hi all,

I would like to add some (own) JavaScript functions to the image list, 
in Project dashboard.


I've followed up the documentation 
(http://docs.openstack.org/developer/horizon/topics/customizing.html#custom-javascript), 
but I think it is outdated, because that documentation refers to 
directory tree which is not the same in (for example) Juno release. I 
mean, there is no: 
openstack_dashboard/dashboards/project/templates/project/ (check 
https://github.com/cernops/horizon/tree/master-patches/openstack_dashboard/dashboards/project)


So, my question is: where should I write my own JavaScript functions to 
be able to use them in the image list (project dashboard)? The 
important point here is that my new JavaScript functions should be 
available to the compression process which is executed (by default) during 
RPM building.


Thank you for your time.

Regards,
Marcos.



Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-16 Thread Alexander Tivelkov
Hi Ian,

On Tue, Feb 10, 2015 at 11:17 PM, Ian Cordasco
ian.corda...@rackspace.com wrote:
 I think the fundamental disconnect is that not every column in a database
 needs to offer sorting to the user. Imposing that restriction here causes a
 cascade of further restrictions that will fundamentally cause this to be
 unusable by a large number of people.

I didn't say that every column needs to offer sorting.
However, ability to differentiate between different artifacts and
different versions of the same artifact, as well as ability to sort
and filter these different versions according to some criteria is one
of the core features we were looking for when designing the whole
artifact proposal. In fact, the initial request for this feature was
not even made by me:  I was asked about versions at the first design
summit where we initially suggested artifacts (Icehouse mid-cycle) and
in subsequent email and IRC discussions. The first design proposal
which was presented in Atlanta already included this concept and was
approved there, so this feature was always considered a natural
and important concept. I don't understand why it causes so much
confusion now, when the implementation is almost complete.

 Except for the fact that versions do typically mean more than the values
 SemVer attaches to them. SemVer is further incompatible with any
 versioning scheme using epochs and is so relatively recent compared to
 versioning practices as a whole that I don’t see how we can justify
 restricting what should be a very generic system to something so specific
 to recent history and favored almost entirely by *developers*.

I believe any person in the software industry may invent their own
versioning scheme - and most of them will be incompatible with each
other. If you attempt to build something which is compatible with all
of them at once, you will eventually end up with plain strings,
without any semantics or precedence rules. This definitely does not
satisfy our initial goal (see above).
Instead, I suggest choosing the scheme which is strict, unambiguous
and satisfies our initial goal, but has the maximum possible adoption
among developers, so most adopters will be able to use it.
Semver seems to be the best candidate here, but any other proposals
are welcome as well.
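For what it's worth, SemVer's precedence rules are mechanical enough to express as a small sort key. A simplified sketch that ignores build metadata and assumes well-formed version strings:

```python
def semver_key(version):
    """Sort key implementing SemVer precedence (simplified).

    Rules encoded: releases sort after their pre-releases; numeric
    pre-release identifiers compare numerically and rank below
    alphanumeric ones; a shorter identifier list ranks lower.
    """
    core, _, prerelease = version.partition("-")
    nums = tuple(int(part) for part in core.split("."))
    if not prerelease:
        return (nums, (1,))                     # plain release: highest
    ids = tuple(
        (0, int(p), "") if p.isdigit() else (1, 0, p)
        for p in prerelease.split(".")
    )
    return (nums, (0,) + ids)

versions = ["1.0.0", "1.0.0-alpha", "2.1.0", "1.0.0-rc.1", "1.0.0-alpha.1"]
print(sorted(versions, key=semver_key))
# → ['1.0.0-alpha', '1.0.0-alpha.1', '1.0.0-rc.1', '1.0.0', '2.1.0']
```

A key like this is also what makes DB-side ordering feasible: the tuple can be serialized into sortable columns.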

Meanwhile this does not prevent adopters from having their own schemes
if they want: Artifact Type developers may add their own
type-specific string metadata field, put some regexp- or code-based
constraints on it - and use it to store their own version. Yet they
will not be able to utilize the version-related features which are
built into Glance.

 Because up until now the only use case you’ve been referencing is CD
 software.

There is some disconnect here: I believe the message which started
this thread was the first time I mentioned the Artifact Repository
in the context of Continuous Delivery solutions, and I was talking
about a CD system built on top of it, not about the Artifact Repository
being a CD system on its own. All other use-cases and scenarios did
not mention CD at all: the Artifact Repository is definitely a generic
catalog, not a CD solution.

--
Regards,
Alexander Tivelkov



Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Daniel P. Berrange
On Mon, Feb 16, 2015 at 03:06:34PM +0100, Philipp Marek wrote:
 Hi all,
 
 Nikola just told me that I need an FFE for the code as well.
 
 Here it is: please grant a FFE for 
 
 https://review.openstack.org/#/c/149244/
 
 which is the code for the spec at
 
 https://review.openstack.org/#/c/134153/

The current feature freeze exceptions are being gathered for code
reviews whose specs/blueprints are already approved. Since your
spec does not appear to be approved, you would have to first
request a spec freeze exception, but I believe it is too late for
that now in Kilo.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-16 Thread Alexander Tivelkov
Donald,

Thanks for your comments, really useful!

I think I need to clarify a bit: I am not speaking about the actual
semantic: placing the meaning into the actual values is still up to
the end-users (or the developers of Artifact Types, if they build some
custom logic which processes version info somehow).

So, this thread is really about preferred syntax scheme - and the
rules to determine precedence.
I understand that pep440 has richer syntax with more capabilities
(epochs, unlimited number of version segments, development releases
etc.). My only concern is that, being a Python-only standard, it is less
generic (in terms of adoption) than the syntax of semver. The same goes
for Monty's pbr semver: it is OpenStack-only and thus may be confusing.
--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 11:32 PM, Donald Stufft don...@stufft.io wrote:

 On Feb 10, 2015, at 3:17 PM, Ian Cordasco ian.corda...@rackspace.com wrote:


 And of course, the chosen solution should be mappable to a database, so
 we may do sorting and filtering on the DB side.
 So, having it as a simple string and letting the user decide what
 it means is not an option.

 Except for the fact that versions do typically mean more than the values
 SemVer attaches to them. SemVer is further incompatible with any
 versioning scheme using epochs and is so relatively recent compared to
 versioning practices as a whole that I don’t see how we can justify
 restricting what should be a very generic system to something so specific
 to recent history and favored almost entirely by *developers*.

 Semver vs PEP 440 is largely a syntax question since PEP 440 purposely does 
 not
 have much of an opinion on how something like 2.0.0 and 2.1.0 are related 
 other
 than for sorting. We do have operators in PEP 440 that support treating these
 versions in a semver like way, and some that support treating them in other
 ways.

 The primary purpose of PEP 440 was to define a standard way to parse and sort
 and specify versions across the several hundred thousand versions that
 currently exist on PyPI. This means that it is more complicated to implement
 but it is much more powerful than semver ever could be. One example, as Ian
 mentioned, is the lack of the ability to do an Epoch; another example is that
 PEP 440 has explicit support for someone taking version 1.0, adding some
 unofficial patches to it, and then releasing that in their own distribution
 channels.

 The primary purpose of Semver was to be extremely opinionated in what meaning
 you place on the *content* of the version parts and the syntax is really a
 secondary concern which exists just to make it easier to parse. This means 
 that
 if you know ahead of time that something is Semver you can guess a lot more
 information about the relationship of two versions.

 It was our intention that PEP 440 would be (is?) aimed primarily at people
 implementing tools that work with versions, and that additional PEPs or other
 documentation would be written on top of PEP 440 to add opinions on what a
 version looks like within the framework that PEP 440 sets up. A great example
 is the pbr semver document that Monty linked.

 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Philipp Marek
Hi all,

Nikola just told me that I need an FFE for the code as well.

Here it is: please grant a FFE for 

https://review.openstack.org/#/c/149244/

which is the code for the spec at

https://review.openstack.org/#/c/134153/


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



[openstack-dev] [nova] Feature Freeze Exception Request (DRBD for Nova) WAS: Re: [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Nikola Đipanov
Re-titling the email so that it does not get missed as it does not have
the right subject line.

I looked at the code and it is quite straightforward - some small nits
inline but other than that - no reason to keep it out.

This is the first contribution to Nova by Philipp and his team, and
their experience navigating our bureaucracy-meritocracy seems far from a
happy one (from what I could gather on IRC at least) - one more reason
to not keep this feature out.

Thanks,
N.

On 02/16/2015 03:06 PM, Philipp Marek wrote:
 Hi all,
 
 Nikola just told me that I need an FFE for the code as well.
 
 Here it is: please grant a FFE for 
 
 https://review.openstack.org/#/c/149244/
 
 which is the code for the spec at
 
 https://review.openstack.org/#/c/134153/
 
 
 Regards,
 
 Phil
 




Re: [openstack-dev] Missing column in Gerrit overview page showing a -1 review after/with a +2 review

2015-02-16 Thread Christian Berendt
On 02/14/2015 04:46 PM, Anita Kuno wrote:
 Using gerrit queries might come in handy here.
 
 https://review.openstack.org/Documentation/user-search.html
 
 For a start, this gets me all reviews that are open for which I am not
 the author, where I voted +1 (or better) and someone else voted -1 (or
 lower). I just gave it a quick test to try it out, it might need refining.
 
 NOT owner:self status:open label:Code-Review>=+1,self label:Code-Review<=-1

This way I can display review requests with -1 reviews, yes. On the
overview page a +2 review will still hide any -1 reviews. But a -1
review will hide all +1 reviews. I think -1 reviews should be visible
regardless of +2 reviews.

Christian.



Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-16 Thread Alexander Tivelkov
Hi Clint,

Thanks for your input.

We actually support the scenarios you speak about, yet in a slightly
different way. The authors of an Artifact Type (the plugin
developers) may define their own custom field (or set of fields) to
store their sequence information or any other type-specific
version-related metadata. So, they may use the generic version field
(which is defined in the base artifact type) to store their numeric
version - and use their type-specific field for local client-side
processing.

--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 11:37 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Alexander Tivelkov's message of 2015-02-10 07:28:55 -0800:
 Hi folks,

 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.

 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.

 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.

 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.

 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.

 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not necessarily limited to
 being pythonic - the developers of those projects may not be Python
 developers! So, I'd really like to avoid any python-specific
 notations, such as PEP-440 for artifacts.

 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.

 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are you interested
 in having some generic utility which will map versions (in either
 format) to database columns? If so, which version format would you
 prefer?

 We are on a tight schedule here, as we want to begin landing
 artifact-related code soon. So, I would appreciate your feedback
 during this week: here in the ML or in the comments to [3] review.
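
The version-to-comparable-value mapping proposed in [3] could look roughly
like this (a sketch, not the actual spec implementation; the sort-key layout
here is assumed):

```python
import re

# Sketch, not the actual spec implementation: turn a SemVer 2.0.0 string
# into a tuple that sorts with SemVer precedence, suitable for mapping to
# indexed database columns. Build metadata is ignored for precedence.
_SEMVER = re.compile(
    r'^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?(?:\+[0-9A-Za-z.-]+)?$')

def semver_key(version):
    m = _SEMVER.match(version)
    if not m:
        raise ValueError('not a SemVer string: %s' % version)
    major, minor, patch = (int(m.group(i)) for i in (1, 2, 3))
    pre = m.group(4)
    if pre is None:
        # A release sorts after all of its pre-releases.
        return (major, minor, patch, 1, ())
    # Numeric pre-release identifiers compare numerically and rank lower
    # than alphanumeric ones, per the SemVer precedence rules.
    ids = tuple((0, int(p), '') if p.isdigit() else (1, 0, p)
                for p in pre.split('.'))
    return (major, minor, patch, 0, ids)

versions = ['2.0.0', '1.0.0-alpha', '1.0.0', '1.10.0', '1.2.0']
print(sorted(versions, key=semver_key))
# -> ['1.0.0-alpha', '1.0.0', '1.2.0', '1.10.0', '2.0.0']
```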


 Hi. This is really interesting work and I'm glad Glance is growing into
 an artifact catalog as I think it will assist cloud users and UI
 development at the same time.

 It seems to me that there are really only two reasons to care about the
 content of the versions: sorting, and filtering. You want to make sure
 if people upload artifacts named myapp like this:

 myapp:1.0 myapp:2.0 myapp:1.1

 That when they say "show me the newest myapp" they get 2.0, not 1.1.

 And if they say "show me the newest myapp in the 1.x series" they get 1.1.

 I am a little worried this is not something that can or should be made
 generic in a micro service.

 Here's a thought: You could just have the version, series, and sequence,
 and let users manage the sequencing themselves on the client side. This
 way if users want to use the _extremely_ difficult to program for Debian
 packaging version, you don't have to figure out how to make 1.0~special
 less than 1.0 and more than 0.9.

 To start with, you can have a default strategy of a single series, and
 max(sequence)+1000 if unspecified. Then teach the clients the various
 semvers/pep440's/etc. etc. and let them choose their own sequencing and
 series strategy.
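
 Clint's default sequencing strategy above can be sketched as follows
 (hypothetical names, not Glance code):

```python
# Hypothetical sketch of the series/sequence strategy described above:
# the server stores only (name, series, sequence, version) rows and orders
# by sequence; when the client does not pick a sequence, the default is
# max(sequence) + 1000.
artifacts = []  # (name, series, sequence, version)

def publish(name, version, series='default', sequence=None):
    if sequence is None:
        seqs = [s for (n, ser, s, _v) in artifacts
                if n == name and ser == series]
        sequence = (max(seqs) if seqs else 0) + 1000
    artifacts.append((name, series, sequence, version))

publish('myapp', '1.0')                  # gets sequence 1000
publish('myapp', '2.0')                  # gets sequence 2000
publish('myapp', '1.1', sequence=1500)   # client chose its own ordering

# "Show me the newest myapp" = the row with the highest sequence.
newest = max((a for a in artifacts if a[0] == 'myapp'), key=lambda a: a[2])
print(newest[3])  # -> 2.0
```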

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][grenade][tempest][qa][ceilometer] Database Upgrade Testing for Incubated Projects

2015-02-16 Thread Sean Dague
On 02/09/2015 12:17 PM, John Wood wrote:
 Hello folks,
 
 (Apologies for the numerous tags, but my question straddles multiple
 areas I believe)
 
 I’m a core developer on the Barbican team and we are interested in
 database upgrade testing via Grenade. In addition to Grenade
 documentation I’ve looked at two blueprints [1][2] the Ceilometer
 project merged in last year, and the CRs created for them. It would
 appear that in order to utilize Grenade testing for Barbican, we would
 need to submit CRs to both Grenade (to add an ‘update-barbican’ script) and
 to Tempest (to add Barbican-centric resources to the javelin.py module). 
 
 As Barbican is not yet out of incubation, would such Grenade and Tempest
 CRs need to wait until we are out of incubation?  If we do have to wait,
 is anyone aware of an alternative method of such upgrade testing without
 need to submit changes to Grenade and Tempest (similar to the DevStack
 gate hook back to the project)?
 
 [1] 
 http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/grenade-upgrade-testing.html
 [2] 
 http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/grenade-resource-survivability.html

Sorry, I missed this thread when initially published. I put out an ML
post on where I think we need to get with Grenade in Liberty -
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056738.html


Realistically I think additional project support needs to be postponed
until we can come up with a more modular upgrade and validation
structure in Grenade, which is probably going to be something for when
Liberty starts up.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-16 Thread Jay Pipes
Hi Mikal, sorry for top-posting. What was the final decision regarding 
the instance tagging work?


Thanks,
-jay

On 02/16/2015 09:44 PM, Michael Still wrote:

Hi,

we had a meeting this morning to try and work through all the FFE
requests for Nova. The meeting was pretty long -- two hours or so --
and we did it in the nova IRC channel in an attempt to be as open as
possible. The agenda for the meeting was the list of FFE requests at
https://etherpad.openstack.org/p/kilo-nova-ffe-requests

I recognise that this process is difficult for all, and that it is
frustrating when your FFE request is denied. However, we have tried
very hard to balance distractions from completing priority tasks and
getting as many features into Kilo as possible. I ask for your
patience as we work to finalize the Kilo release.

That said, here's where we ended up:

Approved:

 vmware: ephemeral disk support
 API: Keypair support for X509 public key certificates

We were also presented with a fair few changes which are relatively
trivial (single patch, not very long) and isolated to a small part of
the code base. For those, we've selected the ones with the greatest
benefit. These ones are approved so long as we can get the code merged
before midnight on 20 February 2015 (UTC). The deadline has been
introduced because we really are trying to focus on priority work and
bug fixes for the remainder of the release, so I want to time box the
amount of distraction these patches cause.

Those approved in this way are:

 ironic: Pass the capabilities to ironic node instance_info
 libvirt: Nova vif driver plugin for opencontrail
 libvirt: Quiescing filesystems with QEMU guest agent during image
snapshotting
 libvirt: Support vhost user in libvirt vif driver
 libvirt: Support KVM/libvirt on System z (S/390) as a hypervisor platform

It should be noted that there was one request which we decided didn't
need an FFE as it isn't feature work. That may proceed:

 hyperv: unit tests refactoring

Finally, there were a couple of changes we were uncomfortable merging
this late in the release as we think they need time to bed down
before a release we consider stable for a long time. We'd like to see
these merge very early in Liberty:

 libvirt: use libvirt storage pools
 libvirt: Generic Framework for Securing VNC and SPICE
Proxy-To-Compute-Node Connections

Thanks again to everyone for their patience with our process, and
helping to make Kilo an excellent Nova release.

Michael





[openstack-dev] [ceilometer] Domain information through ceilometer and authentication against keystone v3

2015-02-16 Thread Namita Chaudhari
Hi All,

I have explored ceilometer and have few queries regarding it:

1) *Getting domain information:* I haven't come across one, but is there any
ceilometer API which would provide domain information along with the usage
data?

2) *Ceilometer auth against keystone v3:* As domain feature is provided in
keystone v3 API, I am using that. Is there a way to configure ceilometer so
that it would use the keystone v3 API? I tried doing that but it didn't work for
me. Also, I came across a question forum (
https://ask.openstack.org/en/question/55353/ceilometer-v3-auth-against-keystone/
)
 which says that ceilometer can't use v3 for getting service tokens since
middleware doesn't support it.

Can you please help me with the above queries?


Thanks and Regards,
Namita


[openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-16 Thread Steven Dake (stdake)
Hi folks,

I am proposing Andre Martin to join the kolla-core team.  Andre has mostly been 
contributing code so far, but as a heavy contributor he has indicated he will 
get more involved in our peer review process.

He has contributed 30% of the commits for the Kilo development cycle, acting as 
our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=all&module=kolla&metric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any -1 vote is a 
veto.

Regards
-steve




Re: [openstack-dev] [cinder] volume replication

2015-02-16 Thread Zhipeng Huang
Great :)

On Mon, Feb 16, 2015 at 3:46 PM, Ronen Kat ronen...@il.ibm.com wrote:

 Good question, I have:

 https://etherpad.openstack.org/p/cinder-replication-redoc
 https://etherpad.openstack.org/p/cinder-replication-cg
 https://etherpad.openstack.org/p/volume-replication-fix-planning

 Jay seems to be the champion for moving replication forward, I will let
 Jay point the way.

 -- Ronen



 From: Zhipeng Huang zhipengh...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 16/02/2015 09:14 AM
 Subject: Re: [openstack-dev] [cinder] volume replication
 --



 Hi Ronen,

 Xingyang mentioned there's another etherpad on rep and CG, which etherpad
 should we mainly follow ?

 On Mon, Feb 16, 2015 at 2:38 PM, Ronen Kat ronen...@il.ibm.com wrote:
 Ruijing, hi,

 Are you discussing the network/fabric between Storage A and Storage B?
 If so, the assumption in Cinder is that this is done in advance by the storage
 administrator.
 The design discussions for replication concluded that the driver is
 fully responsible for replication and it is up to the driver to implement
 and manage replication on its own.
 Hence, all vendor specific setup actions like creating volume pools, setup
 network on the storage side are considered prerequisite actions and outside
 the scope of the Cinder flows.

 If someone feels that is not the case, or should not be the case, feel
 free to chime in.

 Or does this relate to setting up the data path for accessing both
 Storage A and Storage B?
 Should this be set up in advance? When we attach the primary volume to the
 VM? Or when promoting the replica to be primary?

 -- Ronen



 From: Guo, Ruijing ruijing@intel.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 16/02/2015 02:29 AM
 Subject: Re: [openstack-dev] [cinder] documenting volume
 replication
 --



 Hi, Ronen

 3) I mean storage-based replication. Normally, volume replication supports
 FC or iSCSI. We need to set up FC or iSCSI before we do volume replication.

 Case 1)

 Host --FC-- Storage A ---iSCSI--- Storage B --FC-- Host

 Case 2)

 Host --FC-- Storage A ---FC--- Storage B --FC-- Host

 As the diagram above shows, we need to set up a connection (iSCSI or FC) between
 storage A and Storage B.

 For FC, we need to zone storage A & storage B in the FC switch.

 Thanks,
 -Ruijing

 From: Ronen Kat [mailto:ronen...@il.ibm.com]
 Sent: Sunday, February 15, 2015 4:46 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [cinder] documenting volume replication

 Hi Ruijing,

 Thanks for the comments.
 Re (1) - the driver can implement replication by any means the driver sees fit.
 It can be exported and made available to the scheduler/driver via the
 capabilities or driver extra-spec prefixes.
 Re (3) - Not sure I see how this relates to storage side replication, do
 you refer to host side replication?

 Ronen



 From: Guo, Ruijing ruijing@intel.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 15/02/2015 03:41 AM
 Subject: Re: [openstack-dev] [cinder] documenting volume
 replication
 --





 Hi, Ronen,

 I don’t know how to edit
 https://etherpad.openstack.org/p/cinder-replication-redoc so I will add some
 comments in email.
 comments in email.

 1. We may add asynchronized and synchronized type for replication.
 2. We may add CG for replication
 3. We may add to initialize connection for replication

 Thanks,
 -Ruijing

 From: Ronen Kat [mailto:ronen...@il.ibm.com]
 Sent: Tuesday, February 3, 2015 9:41 PM
 To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
 Subject: [openstack-dev] [cinder] documenting volume replication

 As some of you are aware, the spec for replication is not up to date.
 The current developer documentation,
 http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
 covers replication, but some folks indicated that it needs additional details.

 In order to get the spec and documentation up to date I created an
 Etherpad to be a base for the update.
 The Etherpad page is at
 https://etherpad.openstack.org/p/cinder-replication-redoc

 I would appreciate if interested parties would take a look at the
 Etherpad, add comments, 

[openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-16 Thread Steven Dake (stdake)
The initial magnum core team was founded at a meeting where several people 
committed to being active in reviews and writing code for Magnum.  Nearly all 
of the folks that made that initial commitment have been active in IRC, on the 
mailing lists, or participating in code reviews or code development.

Out of our core team of 9 members [1], everyone has been active in some way 
except for Dmitry.  I propose removing him from the core team.  Dmitry is 
welcome to participate in the future if he chooses, and will be held to the same 
high standards as our last 4 new core members, who didn’t get an initial opt-in 
but were voted in by their peers.

Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 from any 
core acts as a veto meaning Dmitry will remain in the core team.

[1] https://review.openstack.org/#/admin/groups/473,members


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-16 Thread Steven Dake (stdake)
+1 \o/ yay

From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 8:07 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

Hi folks,

I am proposing Andre Martin to join the kolla-core team.  Andre has mostly been 
contributing code so far, but as a heavy contributor he has indicated he will 
get more involved in our peer review process.

He has contributed 30% of the commits for the Kilo development cycle, acting as 
our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=all&module=kolla&metric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any -1 vote is a 
veto.

Regards
-steve




[openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/17

2015-02-16 Thread Dugger, Donald D
Note: I won't be able to make the meeting this week (family medical issues) so 
everyone meet on the IRC channel and decide who wants to chair.



Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-16 Thread Adrian Otto
-1


 Original message 
From: Steven Dake (stdake)
Date:02/16/2015 7:23 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from 
magnum-core

The initial magnum core team was founded at a meeting where several people 
committed to being active in reviews and writing code for Magnum.  Nearly all 
of the folks that made that initial commitment have been active in IRC, on the 
mailing lists, or participating in code reviews or code development.

Out of our core team of 9 members [1], everyone has been active in some way 
except for Dmitry.  I propose removing him from the core team.  Dmitry is 
welcome to participate in the future if he chooses, and will be held to the same 
high standards as our last 4 new core members, who didn’t get an initial opt-in 
but were voted in by their peers.

Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 from any 
core acts as a veto meaning Dmitry will remain in the core team.

[1] https://review.openstack.org/#/admin/groups/473,members


Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-16 Thread Jay Pipes

On 02/16/2015 06:54 AM, Przemyslaw Kaminski wrote:

Hello,

This somehow relates to [1]: in integration tests we have a class
called FakeThread. It is responsible for spawning threads to simulate
asynchronous tasks in fake env. In BaseIntegrationTest class we have a
method called _wait_for_threads that waits for all fake threads to
terminate.

In my understanding what these things actually do is that they just
simulate Astute's responses. I'm thinking if this could be replaced by
a better solution, I just want to start a discussion on the topic.

My suggestion is to get rid of all this stuff and implement a
predictable solution: something along promises or coroutines that
would execute synchronously. With either promises or coroutines we
could simulate tasks responses any way we want without the need to
wait using unpredictable stuff like sleeping, threading and such. No
need for waiting or killing threads. It would hopefully make our tests
easier to debug and get rid of the random errors that are sometimes
getting into our master branch.
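
A minimal sketch of the promise-style idea (hypothetical classes, not Fuel's
actual FakeThread machinery): scripted responses are resolved synchronously,
so nothing sleeps, spawns, or joins threads:

```python
# Hypothetical sketch (not Fuel's FakeThread): scripted Astute responses
# are resolved synchronously, one after another, so the test never sleeps,
# spawns, or joins threads -- the outcome is fully deterministic.
class FakeAstuteTask(object):
    def __init__(self, responses):
        self._responses = list(responses)
        self.seen = []

    def run(self):
        # Resolve every scripted response in order, like a promise chain.
        for resp in self._responses:
            self.seen.append(resp)
        return self.seen[-1] if self.seen else None

task = FakeAstuteTask([
    {'status': 'running', 'progress': 50},
    {'status': 'ready', 'progress': 100},
])
result = task.run()
print(result['status'])  # -> ready
```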


Hi!

For integration/functional tests, why bother faking out the threads at 
all? Shouldn't the integration tests be functionally testing the real 
code, not mocked or faked stuff?


Best,
-jay



Re: [openstack-dev] [neutron][neutron-*aas] Is lockutils-wrapper needed for tox.ini commands?

2015-02-16 Thread Ihar Hrachyshka

On 02/13/2015 11:13 PM, Paul Michali wrote:
 I see that in tox.ini, several commands have a lockutils-wrapper
 prefix on them in the neutron-vpnaas repo. Seems like this was
 added as part of commit 88e2d801 for "Migration to
 oslo.concurrency".

Those would be interesting for unit tests only. And now that neutron
BaseTestCase uses proper fixture [1], we can just remove the wrapper
call from all *aas repos.

That was actually one of the things I was going to do as a oslo
liaison during Kilo [2] (see the 1st entry in the todo list.) But if
you want to go with this before I reach this cleanup, I will be glad
to review the changes. :)

[1]:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/base.py#n89
[2]:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054753.html

/Ihar



Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-16 Thread Dmitry Guryanov

On 02/13/2015 05:50 PM, Jay Pipes wrote:

On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:

On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:

On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:

Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that

In general, without use of user namespaces, LXC can't really be
considered secure in OpenStack, and this already requires libvirt
version 1.1.1 and Nova Juno release.

As such I'd be surprised if anyone is running OpenStack with libvirt
LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
but stranger things have happened.

The general libvirt min requirement for LXC, QEMU and KVM currently
is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
but feel it is worth increasing the LXC min libvirt to 1.0.6

So would anyone object if we increased min libvirt to 1.0.6 when
running the LXC driver ?


Thanks for raising the question, Daniel!

Since there are no objections, I'd like to make 1.1.1 the minimum 
required version. Let's also make the uid_maps and gid_maps parameters 
mandatory and always add them to the libvirt XML.
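
As a sketch of what such a version floor looks like (not Nova's actual code;
libvirt reports its version packed as major * 1,000,000 + minor * 1,000 + micro):

```python
# Sketch, not Nova's actual code: enforce a minimum libvirt version for
# the LXC driver. libvirt reports its version packed as an integer:
# major * 1,000,000 + minor * 1,000 + micro.
MIN_LIBVIRT_LXC = (1, 1, 1)  # assumed floor once user namespaces are mandatory

def version_to_int(ver):
    major, minor, micro = ver
    return major * 1000000 + minor * 1000 + micro

def lxc_supported(libvirt_version_int):
    return libvirt_version_int >= version_to_int(MIN_LIBVIRT_LXC)

print(lxc_supported(version_to_int((1, 0, 6))))  # -> False (pre-userns)
print(lxc_supported(version_to_int((1, 2, 2))))  # -> True
```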





Why not 1.1.1?


Well I was only going for what's the technical bare minimum to get
the functionality wrt disk image mounting.

If we wish to declare use of user namespace is mandatory with the
libvirt LXC driver, then picking 1.1.1 would be fine too.


Personally, I'd be +1 on 1.1.1. :)

-jay




--
Dmitry Guryanov




Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-16 Thread Dmitry Guryanov

On 02/16/2015 04:36 PM, Daniel P. Berrange wrote:

On Mon, Feb 16, 2015 at 04:31:21PM +0300, Dmitry Guryanov wrote:

On 02/13/2015 05:50 PM, Jay Pipes wrote:

On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:

On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:

On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:

Historically Nova has had a bunch of code which mounted images on the
host OS using qemu-nbd before passing them to libvirt to setup the
LXC container. Since 1.0.6, libvirt is able to do this itself and it
would simplify the codepaths in Nova if we can rely on that

In general, without use of user namespaces, LXC can't really be
considered secure in OpenStack, and this already requires libvirt
version 1.1.1 and Nova Juno release.

As such I'd be surprised if anyone is running OpenStack with libvirt
LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
but stranger things have happened.

The general libvirt min requirement for LXC, QEMU and KVM currently
is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
but feel it is worth increasing the LXC min libvirt to 1.0.6

So would anyone object if we increased min libvirt to 1.0.6 when
running the LXC driver ?

Thanks for raising the question, Daniel!

Since there are no objections, I'd like to make 1.1.1 the minimal required
version. Let's also make parameters uid_maps and gid_maps mandatory and
always add them to libvirt XML.

I think it is probably not enough prior warning to actually turn on user
namespace by default in Kilo. So I think what we should do for Kilo is to
issue a warning message on nova startup if userns is not enabled in the
config, telling users that this will become mandatory in Liberty. Then
when Liberty dev opens, we make it mandatory.

Regards,
Daniel


OK, seems reasonable.

--
Dmitry Guryanov




[openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Nikolay Makhotkin
Hello,

Decided to start a new thread due to too many technical details in the old
thread.
(You can see thread *[openstack-dev] [keystone] [nova]* )

*The problem:* Trusts can not be used to retrieve a token for further work
with python-projectclient.

I did some research on trust use cases. The main goal of trusts is
clear to me: delegation of one user's privileges to another for a specific
time (or indefinitely). But if I get a trust and then get a token from it, it
can not be used in any python client. The reason this happens is the
'authenticate' method in almost all python clients. This method asks
keystone for authentication and gets a new auth token. But in the case of a
trust-scoped token this doesn't work - the method always returns '403
Forbidden' [1]

*The question:* Is there a way to create a trust and use it for requests to
any other service? E.g., we can get a token from a trust, but in practice we
cannot use it.

Or am I misunderstanding the purpose of trusts? How should trusts work?


[1]
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
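
For reference, the Keystone v3 OS-TRUST request bodies look roughly like this
(a sketch that only builds the JSON payloads, with no network calls; all IDs
below are made up):

```python
# Sketch of the Keystone v3 OS-TRUST request bodies; this only builds the
# JSON payloads (no network calls), and all IDs below are made up.
def trust_create_body(trustor_id, trustee_id, project_id, role_names,
                      impersonation=True):
    """Body for POST /v3/OS-TRUST/trusts (made by the trustor)."""
    return {'trust': {
        'trustor_user_id': trustor_id,
        'trustee_user_id': trustee_id,
        'project_id': project_id,
        'impersonation': impersonation,
        'roles': [{'name': name} for name in role_names],
    }}

def trust_scoped_auth_body(trustee_id, password, trust_id):
    """Body for POST /v3/auth/tokens, scoping the token to the trust."""
    return {'auth': {
        'identity': {
            'methods': ['password'],
            'password': {'user': {'id': trustee_id, 'password': password}},
        },
        'scope': {'OS-TRUST:trust': {'id': trust_id}},
    }}

body = trust_create_body('trustor-uuid', 'trustee-uuid', 'project-uuid',
                         ['member'])
print(sorted(body['trust']))
```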


Best Regards,
Nikolay Makhotkin
@Mirantis


[openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-16 Thread Dmitri Zimine
SUMMARY: 


We are changing the syntax for inlining YAQL expressions in Mistral YAML from 
{1+$.my.var} (or “{1+$.my.var}”) to <% 1+$.my.var %>

Below I explain the rationale and the criteria for the choice. Comments and 
suggestions welcome.

DETAILS: 
-

We faced a number of problems with using YAQL expressions in Mistral DSL: [1] 
must handle any YAQL, not only the ones started with $; [2] must preserve types 
and [3] must comply with YAML. We fixed these problems by applying Ansible 
style syntax, requiring quotes around delimiters (e.g. “{1+$.my.yaql.var}”). 
However, it led to unbearable confusion in DSL readability with regard to 
types:

publish:
   intvalue1: “{1+1}” # Confusing: you expect quotes to mean a string.
   intvalue2: “{int(1+1)}” # Even this doesn’t clear up the confusion
   whatisthis: “{$.x + $.y}” # What type would this return? 

We got a very strong pushback from users in the field on this syntax. 

The crux of the problem is using { } as delimiters in YAML. It is plain wrong to 
use reserved characters. The clean solution is to find a delimiter that 
won’t conflict with YAML.

Criteria for selecting best alternative are: 
1) Consistently applies to all cases of using YAQL in the DSL
2) Complies with YAML 
3) Familiar to target user audience - openstack and devops

We prefer using two-char delimiters to avoid requiring extra escaping within 
the expressions.

The current winner is <% %>. It fits YAML well. It is familiar to 
openstack/devops as this is used for embedding Ruby expressions in Puppet and 
Chef (for instance, [4]). It plays relatively well across all cases of using 
expressions in Mistral (see examples in [5]):
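
A sketch of how the proposed <% ... %> delimiters could be recognized (not
Mistral's actual parser); distinguishing a whole-value expression from an
inline template is what allows types to be preserved:

```python
import re

# Sketch (not Mistral's actual parser): recognize <% ... %> expressions in
# a DSL value. A value that is exactly one expression can have its result
# type preserved; anything else is treated as a string template.
EXPR = re.compile(r'<%\s*(.*?)\s*%>')

def extract_expressions(value):
    return EXPR.findall(value)

def is_single_expression(value):
    return EXPR.fullmatch(value.strip()) is not None

print(extract_expressions('sum is <% 1+1 %> of <% $.total %>'))
print(is_single_expression('<% $.x + $.y %>'))  # -> True: keep YAQL type
print(is_single_expression('x=<% $.x %>'))      # -> False: render a string
```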

ALTERNATIVES considered:
--

1) Use Ansible-like syntax: http://docs.ansible.com/YAMLSyntax.html#gotchas
Rejected for confusion around types. See above.

2) Use functions, like Heat HOT or TOSCA:

HOT templates and TOSCA don’t seem to have a concept of typed variables to 
borrow from (please correct me if I missed it). But they have functions: 
function: { function_name: { foo: [parameter1, parameter2], bar: “xxx” } }. 
Applied to Mistral, it would look like:

publish:
 - bool_var: { yaql: “1+1+$.my.var < 100” } 

Not bad, but currently rejected as it reads worse than delimiter-based syntax, 
especially in simplified one-line action invocation.

3) < > paired with other symbols: php-style <? ... ?>


REFERENCES: 
--

[1] Allow arbitrary YAQL expressions, not just ones started with $ : 
https://github.com/stackforge/mistral/commit/5c10fb4b773cd60d81ed93aec33345c0bf8f58fd
[2] Use Ansible-like syntax to make YAQL expressions YAML compliant 
https://github.com/stackforge/mistral/commit/d9517333b1fc9697d4847df33d3b774f881a111b
[3] Preserving types in YAQL 
https://github.com/stackforge/mistral/blob/d9517333b1fc9697d4847df33d3b774f881a111b/mistral/tests/unit/test_expressions.py#L152-L184
[4]Using % % in Puppet 
https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby
 
[5] Etherpad with discussion 
https://etherpad.openstack.org/p/mistral-YAQL-delimiters
[6] Blueprint https://blueprints.launchpad.net/mistral/+spec/yaql-delimiters



Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-16 Thread Steven Dake (stdake)
-1

From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 8:20 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from 
magnum-core

The initial magnum core team was founded at a meeting where several people 
committed to being active in reviews and writing code for Magnum.  Nearly all 
of the folks that made that initial commitment have been active in IRC, on the 
mailing lists, or participating in code reviews or code development.

Out of our core team of 9 members [1], everyone has been active in some way 
except for Dmitry.  I propose removing him from the core team.  Dmitry is 
welcome to participate in the future if he chooses, and will be held to the 
same high standards as our last 4 new core members, who didn't get an initial 
opt-in but were voted in by their peers.

Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1 from any 
core acts as a veto meaning Dmitry will remain in the core team.

[1] https://review.openstack.org/#/admin/groups/473,members
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Need help on https://bugs.launchpad.net/ceilometer/+bug/1310580

2015-02-16 Thread ashish.jain14

Hello,

I am a newbie and I want to start with my first contribution to OpenStack. I 
have chosen the bug https://bugs.launchpad.net/ceilometer/+bug/1310580 to 
start with. I have added comments on the bug and need a developer to 
validate them. Additionally, I need some help on how to go about editing the 
wiki. Please help and advise.

Regards
Ashish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/17

2015-02-16 Thread Sylvain Bauza
No worries, I'll run the meeting. Hope your family is going better soon.

-Sylvain
On 17 Feb 2015 at 05:40, Dugger, Donald D donald.d.dug...@intel.com
wrote:

  Note: I won’t be able to make the meeting this week (family medical
 issues) so everyone meet on the IRC channel and decide who wants to chair.



 Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)



 1)  Remove direct nova DB/API access by Scheduler Filters -
 https://review.openstack.org/138444/

 2)  Status on cleanup work -
 https://wiki.openstack.org/wiki/Gantt/kilo





 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-16 Thread Renat Akhmerov
Dmitri,

I agree with all your reasoning and fully support the idea of changing the 
syntax now, as well as changing the system's API a little due to recently 
found issues in the current engine design that don't allow us, for example, to 
fully implement 'with-items' (although that's a slightly different story).

Just a general note about all the changes happening now: once we cut the Kilo 
stable release, our API and version 2 of the DSL must be 100% stable. I was 
hoping to stabilize them much earlier, but the start of production use 
revealed a number of things (I think this is normal) which we need to address 
no later than the end of Kilo.

As far as the <% %> syntax goes: I see that it would solve a number of 
problems (YAML friendliness, type ambiguity), but my only (not strong) 
argument against it is that it doesn't look as elegant in YAML as it does, for 
example, in ERB templates. It really reminds me of XML/HTML and looks like a 
bear in a grocery store (an attempt to render an old Russian saying :) ). So 
for this reason alone I'd suggest we think about other alternatives, maybe not 
as familiar to Ruby/Chef/Puppet users but looking better in YAML while still 
being YAML friendly.

It would be good if we could hear more feedback on this, especially from 
people who have started using Mistral.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 17 Feb 2015, at 03:06, Dmitri Zimine dzim...@stackstorm.com wrote:
 
 SUMMARY: 
 
 
 We are changing the syntax for inlining YAQL expressions in Mistral YAML from 
 {1+$.my.var} (or "{1+$.my.var}") to <% 1+$.my.var %>.
 
 Below I explain the rationale and the criteria for the choice. Comments and 
 suggestions welcome.
 
 DETAILS: 
 -
 
 We faced a number of problems with using YAQL expressions in the Mistral DSL: 
 [1] must handle any YAQL expression, not only ones starting with $; [2] must 
 preserve types; and [3] must comply with YAML. We fixed these problems by 
 applying Ansible-style syntax, requiring quotes around delimiters (e.g. 
 "{1+$.my.yaql.var}"). However, it led to unbearable confusion in DSL 
 readability with regard to types:
 
 publish:
    intvalue1: "{1+1}" # Confusing: you expect quotes to mean a string.
    intvalue2: "{int(1+1)}" # Even this doesn't clear up the confusion.
    whatisthis: "{$.x + $.y}" # What type would this return? 
 
 We got very strong push back from users in the field on this syntax. 
 
 The crux of the problem is using { } as delimiters in YAML. It is plain wrong 
 to use reserved characters. The clean solution is to find a delimiter that 
 won't conflict with YAML.
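The conflict is easy to see with any YAML parser. A quick illustration using PyYAML (the `<% %>` form is only the proposal under discussion here, not an implemented Mistral syntax):

```python
import yaml

# Bare { } delimiters collide with YAML's flow-mapping syntax: the parser
# treats {1+1} as a mapping with the single key "1+1", not as a string.
bare = yaml.safe_load("intvalue: {1+1}")
print(bare)  # {'intvalue': {'1+1': None}}

# Quoting forces a string, but then every value *looks* like a string,
# which is exactly the type-confusion problem described above.
quoted = yaml.safe_load('intvalue: "{1+1}"')
print(quoted)  # {'intvalue': '{1+1}'}

# <% %> has no special meaning in YAML, so the expression survives as a
# plain scalar without quotes -- YAML friendly and visually distinct.
proposed = yaml.safe_load("intvalue: <% 1+1 %>")
print(proposed)  # {'intvalue': '<% 1+1 %>'}
```

The last case is the key property: the delimiter choice keeps the expression a plain scalar, so no quoting (and no apparent string type) is needed.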
 
 Criteria for selecting best alternative are: 
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML 
 3) Familiar to target user audience - openstack and devops
 
 We prefer using two-char delimiters to avoid requiring extra escaping within 
 the expressions.
 
 The current winner is <% %>. It fits YAML well. It is familiar to 
 openstack/devops folks, as it is used for embedding Ruby expressions in 
 Puppet and Chef (for instance, [4]). It plays relatively well across all 
 cases of using expressions in Mistral (see examples in [5]):
 
 ALTERNATIVES considered:
 --
 
 1) Use Ansible-like syntax: http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.
 
 2) Use functions, like Heat HOT or TOSCA:
 
 HOT templates and TOSCA don't seem to have a concept of typed variables to 
 borrow from (please correct me if I missed it). But they have functions: 
 function: { function_name: { foo: [parameter1, parameter2], bar: "xxx" } }. 
 Applied to Mistral, it would look like:
 
 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" } 
 
 Not bad, but currently rejected as it reads worse than delimiter-based 
 syntax, especially in simplified one-line action invocation.
 
 3) Angle brackets paired with other symbols: PHP-style <? ... ?>
 
 
 REFERENCES: 
 --
 
 [1] Allow arbitrary YAQL expressions, not just ones starting with $: 
 https://github.com/stackforge/mistral/commit/5c10fb4b773cd60d81ed93aec33345c0bf8f58fd
 [2] Use Ansible-like syntax to make YAQL expressions YAML compliant 
 https://github.com/stackforge/mistral/commit/d9517333b1fc9697d4847df33d3b774f881a111b
 [3] Preserving types in YAQL 
 https://github.com/stackforge/mistral/blob/d9517333b1fc9697d4847df33d3b774f881a111b/mistral/tests/unit/test_expressions.py#L152-L184
 [4] Using <% %> in Puppet 
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby

[openstack-dev] Group Based Policy - Kilo-1 development milestone

2015-02-16 Thread Sumit Naiksatam
Hi All,

The first milestone release of the Kilo development cycle, "kilo-1", is
now available for the Group Based Policy project. It contains a bunch
of bug fixes and enhancements over the previous release. You can find
the full list of fixed bugs, features, as well as tarball downloads,
at:

https://launchpad.net/group-based-policy/kilo/kilo-gbp-1
https://launchpad.net/group-based-policy-automation/kilo/kilo-gbp-1

Many thanks to those who contributed towards this milestone. The next
development milestone, kilo-2, is scheduled for March 16th.

Best,
~Sumit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception Request (DRBD for Nova) WAS: Re: [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Nikola Đipanov
On 02/16/2015 03:27 PM, Nikola Đipanov wrote:
 Re-titling the email so that it does not get missed as it does not have
 the right subject line.
 

Ugh - as Daniel pointed out - the spec is not actually approved so
please disregard this email - I missed that bit.

Although - I still stand by the following paragraph:

 
 This is the first contribution to Nova by Philipp and his team, and
 their experience navigating our bureaucracy-meritocracy seems far from a
 happy one (from what I could gather on IRC at least) - one more reason
 to not keep this feature out.
 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception for vmware-driver-domain-metadata

2015-02-16 Thread Gary Kotton
Retitled 

From: Gary Kotton <gkot...@vmware.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Saturday, February 14, 2015 at 8:01 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [nova] FFE request for vmware-driver-domain-metadata

Hi,
This support was added to the libvirt driver in J. The support is isolated to 
the Vmware driver and enables admins to get a better understanding of what is 
happening with the running instances. Any chance of getting a sponsor for this? 
https://review.openstack.org/#/c/141028/
This also enables the admin to do queries on the VC about instances, for 
example by AZ, flavor, etc.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Renat Akhmerov
Yeah, clarification from keystone folks would be really helpful.

If Nikolay's info is correct (I believe it is), then I actually don't 
understand why trusts are needed at all; they seem to be useless. My 
assumption is that they can be used only if we send requests directly to 
OpenStack services (w/o using the clients) with a trust-scoped token included 
in the headers. That might work, although I haven't checked it myself yet.

So please help us understand which of my following assumptions is correct:

1. We don't understand what trusts are.
2. We use them in a wrong way. (If yes, then what's the correct usage?)
3. The trust mechanism itself is in development and can't be used at this point.
4. OpenStack clients need to be changed in some way to somehow bypass this 
keystone limitation.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 16 Feb 2015, at 19:10, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Hello, 
 
 Decided to start a new thread due to too much technical details in old 
 thread. 
 (You can see thread [openstack-dev] [keystone] [nova] )
 
 The problem: trusts cannot be used to retrieve a token for further work with 
 python-projectclient.
 
 I did some research on trusts' use cases. The main goal of trusts is clear 
 to me: delegation of one user's privileges to another for a specific time (or 
 limitlessly). But if I get a trust and then get a token from it, the token 
 cannot be used with any python client. The reason is the 'authenticate' 
 method in almost all python clients: it asks keystone for authentication and 
 gets a new auth token, but for a trust-scoped token this always returns 
 '403 Forbidden' [1].
 
 The question: is there a way to create a trust and use it for requests to any 
 other service? E.g., can we get a token from a trust and use it (because 
 actually, we cannot)?
 
 Or am I misunderstanding the purpose of trusts? How should trusts work?
 
 
 [1] 
 https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
  
 https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
 
  
 Best Regards,
 Nikolay Makhotkin
 @Mirantis
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-02-16 Thread Philipp Marek
Hi Daniel,
 
 The current feature freeze exceptions are being gathered for code
 reviews whose specs/blueprints are already approved. Since your
 spec does not appear to be approved, you would have to first
 request a spec freeze exception, but I believe it is too late for
 that now in Kilo.
I sent the Spec Freeze Exception request in January:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] List of all Feature Freeze Exception requests

2015-02-16 Thread Daniel P. Berrange
The following page contains a list of (hopefully) all the feature
freeze exception requests that have been made so far for Nova

  https://etherpad.openstack.org/p/kilo-nova-ffe-requests

Finding them all in the mailing list is not entirely easy, so if you
have requested FFE please do make sure we've got your request listed
on the page and not accidentally missed you out.

This isn't the place to add descriptions/justifications for your FFE,
keep that on this list in your mail request for FFE. We only want to
capture the metadata in the etherpad - ie title, blueprint & mail links
and the list of volunteered sponsors.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-16 Thread Assaf Muller


- Original Message -
 Has there been any work to use conntrack synchronization similar to L3 HA in
 DVR so failover is fast on the SNAT node?
 

https://review.openstack.org/#/c/139686/
https://review.openstack.org/#/c/143169/

These changes have taken a back seat to improving the DVR job reliability
and L3 agent refactoring. Michael and Rajeev could expand.

 On Sat, Feb 14, 2015 at 1:31 PM, Carl Baldwin  c...@ecbaldwin.net  wrote:
 
 
 
 
 
 On Feb 10, 2015 2:36 AM, Wilence Yao  wilence@gmail.com  wrote:
  
  
  Hi all,
  After OpenStack Juno, floating ip is handled by dvr, but SNAT is still
  handled by l3agent on network node. The distributed SNAT is in future
  plans for DVR. In my opinion, SNAT can move to DVR as well as floating ip.
  I have searched in blueprint, there is little about distributed SNAT. Is
  there any different between distributed floating ip and distributed SNAT?
 
 The difference is that a shared snat address is shared among instances on
 multiple compute nodes. A floating ip is exclusive to a single instance on
 one compute node. I'm interested to hear your ideas for distributing it.
 
 Carl
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Kevin Benton
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] rethinking docs jobs in the gate

2015-02-16 Thread Sean Dague
I've noticed a proliferation of docs jobs on projects in the gate now -
https://review.openstack.org/#/c/154737/

gate-trove-tox-checknicenessSUCCESS in 2m 34s
gate-trove-tox-checksyntax  SUCCESS in 2m 51s
gate-trove-tox-checkdeletions   SUCCESS in 2m 11s
gate-trove-tox-doc-publish-checkbuild   SUCCESS in 4m 46s

in addition to the base:

gate-trove-docs SUCCESS in 2m 31s

I'm not sure I understand why these docs jobs are separate, and not all
part of gate-trove-docs.


It seems like it would be good if 'tox -e docs' was the docs test entry
point, and if a project wanted to test for these various checks in the
docs those would be changed in tox.ini for the project for that docs target.

It also means you don't have to build and maintain multiple local venvs,
and would substantially reduce test nodes used in upstream testing.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.utils 1.3.0 released

2015-02-16 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.utils 1.3.0: Oslo Utility library

For more details, please see the git log history below and:

http://launchpad.net/oslo.utils/+milestone/1.3.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.utils

Changes in /home/dhellmann/repos/openstack/oslo.utils 1.2.1..1.3.0
--

ed4e2d7 Add a eventlet utils helper module
d5e2009 Add microsecond support to iso8601_from_timestamp
ff05ecc Updated from global requirements
9174de8 Update Oslo imports to remove namespace package
89d0c2a Add TimeFixture
659e12b Add microsecond support to timeutils.utcnow_ts()
942cf06 Make setup.cfg packages include oslo.utils
dbc5700 fix link to bug tracker in README
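The log entries above mention microsecond support added to `timeutils.utcnow_ts()`. The real implementation lives in `oslo_utils.timeutils`; the stdlib-only sketch below just illustrates what the new behavior means (whole-second int by default, fractional-second float on request):

```python
import calendar
from datetime import datetime


def utcnow_ts(microsecond=False):
    """Illustrative stdlib sketch of oslo.utils 1.3.0 behavior: return a
    UTC Unix timestamp, optionally with microsecond precision."""
    now = datetime.utcnow()
    # timegm() interprets the struct_time as UTC (unlike time.mktime).
    ts = calendar.timegm(now.utctimetuple())
    if microsecond:
        ts += now.microsecond / 1e6
    return ts


print(utcnow_ts())                   # e.g. 1424131200 (int, whole seconds)
print(utcnow_ts(microsecond=True))   # e.g. 1424131200.123456
```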

Diffstat (except docs and test files)
-

README.rst |   2 +-
oslo_utils/_i18n.py|   4 +-
oslo_utils/eventletutils.py|  98 +
oslo_utils/fixture.py  |  45 +
oslo_utils/timeutils.py|  51 ---
requirements.txt   |   2 +-
setup.cfg  |   1 +
13 files changed, 398 insertions(+), 14 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c508f12..06f022e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ iso8601=0.1.9
-oslo.i18n=1.0.0  # Apache-2.0
+oslo.i18n=1.3.0  # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-16 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi Folks

The split-out drivers for vpn/fw/lb as-a-service all make use of a
generated egg of the neutron git repository as part of their unit test
suite dependencies.

This presents a bit of a challenge for us downstream in distributions,
as we can't really pull in a full source egg of neutron from
git.openstack.org; we have the code base for neutron core available
(python-neutron), but that does not appear to be enough (see [0]).

I would appreciate it if devs working in this area could a) review the
bug and the problems we are seeing and b) think about how this can work
for distributions - I'm happy to create a new 'neutron-testing' type
package from the neutron codebase to support this stuff, but right now
I'm a bit unclear on exactly what it needs to contain!

Cheers

James


[0] https://bugs.launchpad.net/neutron/+bug/1422376

- -- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBCAAGBQJU4gj5AAoJEL/srsug59jD5agP/1NFgrLQjD7+d0OrxSByD+b0
DEwlbSwFGvV6sIbjzP8/9ibUmxnCOcwW9Enn4+Ukp4KuhuWKuiEZYdOARkuEupaz
IyDw3F9NzytnER2s+sn2+tddQloTjCk0vzk+e5uH19ovwoLBmFOd/g5d+yNYt/SB
ozzA3S4WTyG8vws2AOBcubJkg1wYzyUSGATceBqYLFMa7e7GuazRR/XohOwa7iux
T1+4t72juhXUiFiPn4GD2aWjl30Eer0+juHdlje6EHtSRnODJXnYeEHIw/ndmTCy
gEmMZ3c9fUoJC51HBeOSjX+Mg5Hq/AaGLLQHU+shklg6pgXKZ1ZKFAYD5rjWrXB2
jxPM0vFcJEh2yfMHTsgbgP6AnYF5g7/36izTdJsXWDgEJoE7Zt2J+NX5+SLTihtt
GbWIUh5ZstZXBD85u4o8iB+whhpzZd7rE/GRK2Ax/kY8WnB8xeiU5wA5AQN6nTMr
XPT/ObXsXKnyrLgn4KkRZymEeDO1yaaVrtGtLxF2Dap2CpH8so7hLQw/3KYxDsTP
8dptOS4EzVm+jZPdAHMHIqsyA2wnRfyauPAyYDEeVioCUkijinrt61x62OM5s8+X
MbAOyjGGOPVXq0tFChbB9ZdSkMDNvj98sv1xhZ1yHmoKvJ56EM1drh7HhcJWD6/v
dv9uUmY4DhVlvjYKwPgY
=C4vr
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.config 1.6.1 released

2015-02-16 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.config 1.6.1: Oslo Configuration API

For more details, please see the git log history below and:

http://launchpad.net/oslo/+milestone/1.6.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo

Changes in /home/dhellmann/repos/openstack/oslo.config 1.6.0..1.6.1
---

2241353 Clear up MultiStrOpt and related documentation
28fd2cc Reword DeprecatedOpt docstring
035636d Fix of wrong cli opts unregistration

Diffstat (except docs and test files)
-

oslo_config/cfg.py| 158 +++---
4 files changed, 181 insertions(+), 51 deletions(-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-16 Thread Gary Kotton
Hi,
The same issue exists for the backend drivers. I think that they always
need to be aligned with the master branch. When we cut stable, all of the
*aaS repos and drivers also need to be cut.
This was one of the pain points that we brought up with the split and the
consensus was: we'll deal with it when we get to the bridge.
Thanks
Gary

On 2/16/15, 5:13 PM, James Page james.p...@ubuntu.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi Folks

The split-out drivers for vpn/fw/lb as-a-service all make use of a
generated egg of the neutron git repository as part of their unit test
suite dependencies.

This presents a bit of a challenge for us downstream in distributions,
as we can't really pull in a full source egg of neutron from
git.openstack.org; we have the code base for neutron core available
(python-neutron), but that does not appear to be enough (see [0]).

I would appreciate it if devs working in this area could a) review the
bug and the problems we are seeing and b) think about how this can work
for distributions - I'm happy to create a new 'neutron-testing' type
package from the neutron codebase to support this stuff, but right now
I'm a bit unclear on exactly what it needs to contain!

Cheers

James


[0] https://bugs.launchpad.net/neutron/+bug/1422376

- -- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBCAAGBQJU4gj5AAoJEL/srsug59jD5agP/1NFgrLQjD7+d0OrxSByD+b0
DEwlbSwFGvV6sIbjzP8/9ibUmxnCOcwW9Enn4+Ukp4KuhuWKuiEZYdOARkuEupaz
IyDw3F9NzytnER2s+sn2+tddQloTjCk0vzk+e5uH19ovwoLBmFOd/g5d+yNYt/SB
ozzA3S4WTyG8vws2AOBcubJkg1wYzyUSGATceBqYLFMa7e7GuazRR/XohOwa7iux
T1+4t72juhXUiFiPn4GD2aWjl30Eer0+juHdlje6EHtSRnODJXnYeEHIw/ndmTCy
gEmMZ3c9fUoJC51HBeOSjX+Mg5Hq/AaGLLQHU+shklg6pgXKZ1ZKFAYD5rjWrXB2
jxPM0vFcJEh2yfMHTsgbgP6AnYF5g7/36izTdJsXWDgEJoE7Zt2J+NX5+SLTihtt
GbWIUh5ZstZXBD85u4o8iB+whhpzZd7rE/GRK2Ax/kY8WnB8xeiU5wA5AQN6nTMr
XPT/ObXsXKnyrLgn4KkRZymEeDO1yaaVrtGtLxF2Dap2CpH8so7hLQw/3KYxDsTP
8dptOS4EzVm+jZPdAHMHIqsyA2wnRfyauPAyYDEeVioCUkijinrt61x62OM5s8+X
MbAOyjGGOPVXq0tFChbB9ZdSkMDNvj98sv1xhZ1yHmoKvJ56EM1drh7HhcJWD6/v
dv9uUmY4DhVlvjYKwPgY
=C4vr
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] rethinking docs jobs in the gate

2015-02-16 Thread Doug Hellmann


On Mon, Feb 16, 2015, at 10:05 AM, Sean Dague wrote:
 I've noticed a proliferation of docs jobs on projects in the gate now -
 https://review.openstack.org/#/c/154737/
 
 gate-trove-tox-checknicenessSUCCESS in 2m 34s
 gate-trove-tox-checksyntax  SUCCESS in 2m 51s
 gate-trove-tox-checkdeletions   SUCCESS in 2m 11s
 gate-trove-tox-doc-publish-checkbuild   SUCCESS in 4m 46s
 
 in addition to the base:
 
 gate-trove-docs SUCCESS in 2m 31s
 
 I'm not sure I understand why these docs jobs are separate, and not all
 part of gate-trove-docs.
 
 
 It seems like it would be good if 'tox -e docs' was the docs test entry
 point, and if a project wanted to test for these various checks in the
 docs those would be changed in tox.ini for the project for that docs
 target.
 
 It also means you don't have to build and maintain multiple local venvs,
 and would substantially reduce test nodes used in upstream testing.

When we talked about this in the review of the governance change to set
tox -e docs as part of the testing interface for a project, jeblair
(and maybe others) pointed out that we didn't want projects running
extra steps when their docs were built. So maybe we want to continue to
use "tox -e venv -- python setup.py build_sphinx" for the real docs, and
allow a "tox -e docs" job for the check queue for testing.

Doug

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] rethinking docs jobs in the gate

2015-02-16 Thread Christian Berendt
On 02/16/2015 04:29 PM, Andreas Jaeger wrote:
 For documentation projects we should discuss this separately as well,

Is it possible to keep all environments like they are and to add a meta
environment (called docs) calling the existing environments? This way we
can reduce the number of jobs on the gates (entry point for the gates is
the meta environment) but can keep the existing environments for local
tests.

Christian.
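One way to sketch the meta-environment idea: keep the individual tox environments and have a single "docs" environment reuse their command lists via tox ini-value substitution, so the gate runs only one job while local workflows stay unchanged. The environment names and commands below are illustrative (borrowed from the Trove job names above), not anyone's actual tox.ini:

```ini
[testenv:checkniceness]
commands = doc8 doc/source

[testenv:checksyntax]
commands = sphinx-build -W -b html doc/source doc/build/html

# Meta environment: the gate runs only "tox -e docs", which expands to the
# command lists of the individual environments via ini-substitution.
# Developers can still run each environment on its own locally.
[testenv:docs]
commands =
    {[testenv:checkniceness]commands}
    {[testenv:checksyntax]commands}
```

This keeps one venv and one gate job per project while preserving the finer-grained targets for local use.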

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-16 Thread Kevin Benton
Or a pool of SNAT addresses ~= to the size of the hypervisor count.

This had originally come up as an option in the early DVR discussions. IIRC
it was going to be a tunable parameter since it results in a tradeoff
between spent public addresses and distributed-ness. However, due to time
constraints and complexity, the burning of extra IPs to distribute SNAT
wasn't implemented because it required changes to the data model (multiple
IPs per router gateway interface) and changes to when IPs were assigned
(dynamically adding more IPs to the gateway interface as tenant ports were
instantiated on new nodes). Someone from the DVR team can correct me if I'm
missing the reasons behind some of these decisions.


Conntrack synchronisation gets us HA on the SNAT node, but that's a long
way from distributed SNAT.

Definitely, I was not paying close attention and thought this thread was
just about the HA of the SNAT node.

It's basically very much like floating IPs, only you're handing out a
sub-slice of a floating-IP to each machine - if you like.

This requires participation of the upstream router (L4 policy routing
pointing to next hops that distinguish each L3 agent) or intervention on
the switches between the router an L3 agents (a few OpenFlow rules would
make this simple). Both approaches need to adapt to L3 agent changes so
static configuration is not adequate. Unfortunately, both of these are
outside of the control of Neutron so I don't see an easy way to push this
state in a generic fashion.
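The "sub-slice of a floating IP" idea can be sketched as a static partition of one SNAT address's ephemeral port range across hypervisors. This is purely illustrative arithmetic; as noted above, the hard part is steering the return traffic by destination-port range, which sits outside Neutron's control:

```python
def snat_port_slices(num_hypervisors, low=1024, high=65535):
    """Split one SNAT IP's ephemeral port range into non-overlapping
    slices, one per hypervisor. Ingress traffic would then be steered
    back by destination-port range (L4 policy routing or OpenFlow)."""
    total = high - low + 1
    size = total // num_hypervisors
    slices = []
    for i in range(num_hypervisors):
        start = low + i * size
        # The last slice absorbs the remainder so every port is covered.
        end = high if i == num_hypervisors - 1 else start + size - 1
        slices.append((start, end))
    return slices


# Four compute nodes sharing a single SNAT address:
print(snat_port_slices(4))
# [(1024, 17151), (17152, 33279), (33280, 49407), (49408, 65535)]
```

A static scheme like this wastes ports on idle nodes, which is why the dynamic variant (growing the per-node allocation as tenant ports appear) was the direction discussed, at the cost of data-model changes.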



On Mon, Feb 16, 2015 at 12:33 AM, Robert Collins robe...@robertcollins.net
wrote:

 On 16 February 2015 at 21:29, Angus Lees g...@inodes.org wrote:
  Conntrack synchronisation gets us HA on the SNAT node, but that's a long
 way
  from distributed SNAT.
 
  Distributed SNAT (in at least one implementation) needs a way to allocate
  unique [IP + ephemeral port ranges] to hypervisors, and then some sort of
  layer4 loadbalancer capable of forwarding the ingress traffic to that IP
  back to the right hypervisor/guest based on the ephemeral port range.
 It's
  basically very much like floating IPs, only you're handing out a
 sub-slice
  of a floating-IP to each machine - if you like.

 Or a pool of SNAT addresses ~= to the size of the hypervisor count.

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] authentication issues

2015-02-16 Thread Kashyap Chamarthy
On Mon, Feb 16, 2015 at 09:54:30AM +, Gary Kotton wrote:
 Hi, With the current master in devstack I am unable to create a
 neutron network. 

 It looks like there is an issue with keystone. Anyone
 else hit this?  Thanks Gary

Hmm, I don't see any such issue here, I'm using:
Nova+Neutron+Keystone+Glance with libvirt/KVM drivers.

And, these are the commits I'm at:

  DevStack
  commit 13c7ccc9d5d7ee8b88c2ee7d4af8990a075440a2
  Glance
  commit cd60a24a7d32d4ca0be36f7afa4d082193958989
  Merge: 803c540 4a78e85
  Keystone
  commit c06591dd470aaa595206100d0176ccc0575d58b7
  Author: OpenStack Proposal Bot openstack-in...@lists.openstack.org
  Neutron
  commit 12b0d4d9792d83fd559de59f2dd7b9f532f89398
  Merge: 7a0a2a1 a3ab3eb
  Nova
  commit 69f4b44691ddabc2cfa8c08539a51d255646e173
  Merge: ea1f1d6 353e823
  requirements
  commit ec1c788c2bfac3ef964396b92f8a090b60dbb4ef

$ neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id                                   | name    | subnets                                            |
+--------------------------------------+---------+----------------------------------------------------+
| ec39ff23-9319-4526-aa55-fdd0ec172bc5 | private | 2192c1b9-d1de-406d-ab7e-ba142a9b7d3d 10.1.0.0/24   |
| d2994e9c-649a-4668-a6bc-a71ff2362e74 | public  | 94cf7c03-7352-4b8d-8b00-21227c7414a4 172.24.4.0/24 |
+--------------------------------------+---------+----------------------------------------------------+

$ ps -ef | grep -i neutron
kashyapc 12983 12954  1 05:43 pts/7    00:00:03 python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
kashyapc 13040 13014  0 05:43 pts/8    00:00:02 python /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
kashyapc 13159 13049  0 05:43 pts/9    00:00:00 python /usr/bin/neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini
root     13252 13040  0 05:43 pts/8    00:00:00 sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
root     13254 13252  0 05:43 pts/8    00:00:00 /usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
kashyapc 13263 13175  0 05:43 pts/10   00:00:00 python /usr/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini
kashyapc 13348 13273  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13369 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13370 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13371 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13372 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13940     1  0 05:43 ?        00:00:00 /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/home/kashyapc/src/cloud/data/neutron/external/pids/ef5ae22b-86d2-42c0-8be4-95f75683bbfd.pid --metadata_proxy_socket=/home/kashyapc/src/cloud/data/neutron/metadata_proxy --router_id=ef5ae22b-86d2-42c0-8be4-95f75683bbfd --state_path=/home/kashyapc/src/cloud/data/neutron --metadata_port=9697 --metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug --verbose



-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] authentication issues

2015-02-16 Thread Gary Kotton


On 2/16/15, 12:48 PM, Kashyap Chamarthy kcham...@redhat.com wrote:

On Mon, Feb 16, 2015 at 09:54:30AM +, Gary Kotton wrote:
 Hi, With the current master in devstack I am unable to create a
 neutron network.

 It looks like there is an issue with keystone. Anyone
 else hit this?  Thanks Gary

Hmm, I don't see any such issue here, I'm using:
Nova+Neutron+Keystone+Glance with libvirt/KVM drivers.

Thanks. I will double check.


And, these are the commits I'm at:

  DevStack
  commit 13c7ccc9d5d7ee8b88c2ee7d4af8990a075440a2
  Glance
  commit cd60a24a7d32d4ca0be36f7afa4d082193958989
  Merge: 803c540 4a78e85
  Keystone
  commit c06591dd470aaa595206100d0176ccc0575d58b7
  Author: OpenStack Proposal Bot openstack-in...@lists.openstack.org
  Neutron
  commit 12b0d4d9792d83fd559de59f2dd7b9f532f89398
  Merge: 7a0a2a1 a3ab3eb
  Nova
  commit 69f4b44691ddabc2cfa8c08539a51d255646e173
  Merge: ea1f1d6 353e823
  requirements
  commit ec1c788c2bfac3ef964396b92f8a090b60dbb4ef

$ neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id                                   | name    | subnets                                            |
+--------------------------------------+---------+----------------------------------------------------+
| ec39ff23-9319-4526-aa55-fdd0ec172bc5 | private | 2192c1b9-d1de-406d-ab7e-ba142a9b7d3d 10.1.0.0/24   |
| d2994e9c-649a-4668-a6bc-a71ff2362e74 | public  | 94cf7c03-7352-4b8d-8b00-21227c7414a4 172.24.4.0/24 |
+--------------------------------------+---------+----------------------------------------------------+

$ ps -ef | grep -i neutron
kashyapc 12983 12954  1 05:43 pts/7    00:00:03 python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
kashyapc 13040 13014  0 05:43 pts/8    00:00:02 python /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
kashyapc 13159 13049  0 05:43 pts/9    00:00:00 python /usr/bin/neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini
root     13252 13040  0 05:43 pts/8    00:00:00 sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
root     13254 13252  0 05:43 pts/8    00:00:00 /usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
kashyapc 13263 13175  0 05:43 pts/10   00:00:00 python /usr/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini
kashyapc 13348 13273  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13369 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13370 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13371 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13372 13348  0 05:43 pts/11   00:00:00 python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
kashyapc 13940     1  0 05:43 ?        00:00:00 /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/home/kashyapc/src/cloud/data/neutron/external/pids/ef5ae22b-86d2-42c0-8be4-95f75683bbfd.pid --metadata_proxy_socket=/home/kashyapc/src/cloud/data/neutron/metadata_proxy --router_id=ef5ae22b-86d2-42c0-8be4-95f75683bbfd --state_path=/home/kashyapc/src/cloud/data/neutron --metadata_port=9697 --metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug --verbose



-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] authentication issues

2015-02-16 Thread Fawad Khaliq
I just happened to have a new setup with all services. Works for me.

Fawad Khaliq


On Mon, Feb 16, 2015 at 2:54 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 With the current master in devstack I am unable to create a neutron
 network. It looks like there is an issue with keystone. Anyone else hit
 this?
 Thanks
 Gary

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-16 Thread Robert Collins
On 16 February 2015 at 21:29, Angus Lees g...@inodes.org wrote:
 Conntrack synchronisation gets us HA on the SNAT node, but that's a long way
 from distributed SNAT.

 Distributed SNAT (in at least one implementation) needs a way to allocate
 unique [IP + ephemeral port ranges] to hypervisors, and then some sort of
 layer4 loadbalancer capable of forwarding the ingress traffic to that IP
 back to the right hypervisor/guest based on the ephemeral port range.  It's
 basically very much like floating IPs, only you're handing out a sub-slice
 of a floating-IP to each machine - if you like.

Or a pool of SNAT addresses ~= to the size of the hypervisor count.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-16 Thread Angus Lees
Conntrack synchronisation gets us HA on the SNAT node, but that's a long
way from distributed SNAT.

Distributed SNAT (in at least one implementation) needs a way to allocate
unique [IP + ephemeral port ranges] to hypervisors, and then some sort of
layer4 loadbalancer capable of forwarding the ingress traffic to that IP
back to the right hypervisor/guest based on the ephemeral port range.  It's
basically very much like floating IPs, only you're handing out a sub-slice
of a floating-IP to each machine - if you like.
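For concreteness, the slicing Angus describes could be sketched as below. This is purely illustrative: the range bounds, the allocator, and the reverse lookup are assumptions for the sake of the example, not an existing Neutron API.

```python
# Hypothetical sketch of per-hypervisor ephemeral-port-range allocation
# for distributed SNAT: one shared SNAT IP, with the ephemeral port
# space sliced into disjoint ranges, one per hypervisor. An ingress
# packet's destination port then identifies the owning hypervisor.

EPHEMERAL_MIN = 32768   # illustrative bounds, roughly the Linux default range
EPHEMERAL_MAX = 60999


def allocate_port_ranges(hypervisors):
    """Partition the ephemeral port space into equal disjoint slices."""
    total = EPHEMERAL_MAX - EPHEMERAL_MIN + 1
    slice_size = total // len(hypervisors)
    ranges = {}
    for i, hv in enumerate(hypervisors):
        start = EPHEMERAL_MIN + i * slice_size
        ranges[hv] = (start, start + slice_size - 1)
    return ranges


def owner_of(port, ranges):
    """Reverse-map an ingress destination port to its hypervisor, if any."""
    for hv, (start, end) in ranges.items():
        if start <= port <= end:
            return hv
    return None
```

A layer-4 balancer (or the L3 agent) would consult `owner_of()` to forward return traffic; Rob's alternative of a pool of SNAT addresses sized to the hypervisor count sidesteps the port slicing entirely.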

On Mon Feb 16 2015 at 6:12:33 PM Kevin Benton blak...@gmail.com wrote:

 Has there been any work to use conntrack synchronization similar to L3 HA
 in DVR so failover is fast on the SNAT node?

 On Sat, Feb 14, 2015 at 1:31 PM, Carl Baldwin c...@ecbaldwin.net wrote:


 On Feb 10, 2015 2:36 AM, Wilence Yao wilence@gmail.com wrote:
 
 
  Hi all,
After OpenStack Juno, floating ip is handled by dvr, but SNAT is
 still handled by the l3 agent on the network node. Distributed SNAT is in the
 future plans for DVR. In my opinion, SNAT can move to DVR just as floating ip did.
 I have searched the blueprints, and there is little about distributed SNAT. Is
 there any difference between distributed floating ip and distributed SNAT?

 The difference is that a shared snat address is shared among instances on
 multiple compute nodes.  A floating ip is exclusive to a single instance on
 one compute node.  I'm interested to hear your ideas for distributing it.

 Carl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-16 Thread Oleg Gelbukh
Julia,

It would be nice to add grouping by Status to the existing 'Grouping'
dropdown. It would save some time finding faulty/offline nodes in the list
and performing bulk actions (like Delete) on them.

Another useful feature for large deployments would be an ability to see IP
addresses of nodes (including Management and Public addresses) in the UI
and group/sort by those addresses.

--
Best regards,
Oleg Gelbukh
Mirantis Labs

On Sat, Feb 14, 2015 at 11:27 AM, Julia Aranovich jkirnos...@mirantis.com
wrote:

 Hi All,

 Currently we [Fuel UI team] are planning the features of *sorting and
 filtering of node list *to introduce it in 6.1 release.

 Now a user can filter nodes only by name or MAC address, and no sorters
 are available. That is rather poor UI for managing a 200+ node environment. So,
 the current suggestion is to filter and sort nodes by the following
 parameters:

1. name
2. manufacturer
3. IP address
4. MAC address
5. CPU
6. memory
7. disks total size (we need to think about less than/more than
representation)
8. interfaces speed
9. status (Ready, Pending Addition, Error, etc.)
10. roles


 It will be a form-based filter. Items [1-4] should go into a single text
 input and the others into separate controls.
 And also there is an idea to translate a user filter selection to a query
 and add it to a location string. Like it's done for the logs search:
 *#cluster/x/logs/type:local;source:api;level:info*.
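 The location-string encoding above could be sketched like this, in Python
 for illustration only (the Fuel UI itself is JavaScript, and the helper
 names here are hypothetical):

```python
# Sketch of the "type:local;source:api;level:info" style filter encoding
# mentioned for the logs search. serialize_filters/parse_filters are
# illustrative names, not actual Fuel code.

def serialize_filters(filters):
    """{'status': 'ready', 'roles': 'controller'} -> 'status:ready;roles:controller'"""
    return ';'.join('%s:%s' % (key, value) for key, value in filters.items())


def parse_filters(query):
    """Inverse: 'status:ready;roles:controller' -> dict of filter terms."""
    if not query:
        return {}
    return dict(part.split(':', 1) for part in query.split(';'))
```

 Keeping the encoding round-trippable means the filter state can live in
 the URL fragment, so a filtered node-list view is bookmarkable.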

 Please also note, that the changes we are thinking about should not affect
 backend code.


 I will be very grateful if you share your ideas about this or describe some of
 the cases that would be useful to you in work with real deployments.
 We would like to introduce really useful tools based on your feedback.


 Best regards,
 Julia

 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com jkirnos...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-02-16 Thread Daniel P. Berrange
On Mon, Feb 16, 2015 at 01:39:21PM +1300, Robert Collins wrote:
 On 19 June 2014 at 20:38, Daniel P. Berrange berra...@redhat.com wrote:
  On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
  I am concerned about how block migration functions when Cinder volumes are
  attached to an instance being migrated.  We noticed some unexpected
  behavior recently, whereby attached generic NFS-based volumes would become
  entirely unsparse over the course of a migration.  After spending some time
  reviewing the code paths in Nova, I'm more concerned that this was actually
  a minor symptom of a much more significant issue.
 
  For those unfamiliar, NFS-based volumes are simply RAW files residing on an
  NFS mount.  From Libvirt's perspective, these volumes look no different
  than root or ephemeral disks.  We are currently not filtering out volumes
  whatsoever when making the request into Libvirt to perform the migration.
   Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
  when a block migration is requested, which applied to the entire migration
  process, not differentiated on a per-disk basis.  Numerous guards exist within
  Nova to prevent a block-based migration from being allowed if the instance
  disks exist on the destination; yet volumes remain attached and within the
  defined XML during a block migration.
 
  Unless Libvirt has a lot more logic around this than I am lead to believe,
  this seems like a recipe for corruption.  It seems as though this would
  also impact any type of volume attached to an instance (iSCSI, RBD, etc.),
  NFS just happens to be what we were testing.  If I am wrong and someone can
  correct my understanding, I would really appreciate it.  Otherwise, I'm
  surprised we haven't had more reports of issues when block migrations are
  used in conjunction with any attached volumes.
 
  Libvirt/QEMU has no special logic. When told to block-migrate, it will do
  so for *all* disks attached to the VM in read-write-exclusive mode. It will
  only skip those marked read-only or read-write-shared mode. Even that
  distinction is somewhat dubious and so not reliably what you would want.
 
  It seems like we should just disallow block migrate when any cinder volumes
  are attached to the VM, since there is never any valid use case for doing
  block migrate from a cinder volume to itself.
 
  Regards,
  Daniel
 
 Just ran across this from bug
 https://bugs.launchpad.net/nova/+bug/1398999. Is there some way to
 signal to libvirt that some block devices shouldn't be migrated by it
 but instead are known to be networked etc? Or put another way, how can
 we have our cake and eat it too? It's not uncommon for a VM to be
 cinder-booted but have local storage for swap... and AIUI the fix we
 put in for this bug stops those VMs being migrated. Do you think it
 is tractable (but needs libvirt work), or is it something endemic to
 the problem (e.g. dirty page synchronisation with the VM itself) that
 will be in the way?

It is merely a missing feature in libvirt that no one has had the
time to address yet.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
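The guard Daniel proposed earlier in the thread (disallow block migration whenever cinder volumes are attached, since VIR_MIGRATE_NON_SHARED_INC applies to every writable disk) might look roughly like this. The block-device-mapping shape, exception, and function name are simplified illustrations, not actual Nova code:

```python
# Illustrative guard: refuse a block migration if the instance has any
# cinder volumes attached, because libvirt would block-copy those
# volumes onto themselves and risk corruption.

class MigrationError(Exception):
    pass


def check_block_migrate(block_device_mapping, block_migration):
    """Raise MigrationError if block migration is requested with volumes attached."""
    volume_ids = [bdm['volume_id'] for bdm in block_device_mapping
                  if bdm.get('volume_id') is not None]
    if block_migration and volume_ids:
        raise MigrationError(
            'Block migration is not allowed with attached cinder '
            'volumes: %s' % volume_ids)
```

As Robert notes, this coarse check also blocks the cinder-booted-with-local-swap case; a per-disk migration flag in libvirt would allow a finer-grained version.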

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova]

2015-02-16 Thread Nikolay Makhotkin
Well, if we use a trust-scoped token for getting the server list from nova
(simply nova.servers.list()),

novaclient somehow tries to get another token:
https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L690-L724

Actually, novaclient does this request: (found from debug of novaclient)

  REQ: curl -i 'http://my_host:5000/v2.0/tokens' -X POST -H "Accept:
application/json" -H "Content-Type: application/json" -H "User-Agent:
python-novaclient" -d '{"auth": {"token": {"id":
"78c71fb549244075b3a5d994baa326b3"}, "tenantName":
"b0c9bbb541d541b098c3c0a92412720d"}}'

I.e., this is a request for another auth token from keystone. Keystone
returns 403 here because the token in the request is trust-scoped.

Why can't I run this simple command using a trust-scoped token?

Note: running keystone --os-token 5483086d91094a3886ccce1442b538a0
--os-endpoint http://my_host:5000/v2.0 tenant-list returns the
tenant list (not 403).
Note2: making the server-list request directly to the api with the
trust-scoped token returns 200, not 403:

curl -H "X-Auth-Token: 5483086d91094a3886ccce1442b538a0" \
  http://192.168.0.2:8774/v3/servers

{
    "servers": [ list_of_servers ]
}

How can I use a trust-scoped token via the client?
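A minimal sketch of what the working Note2 curl implies: send the trust-scoped token directly to the Nova endpoint instead of letting novaclient exchange it for a new one (which keystone forbids for trust-scoped tokens). The endpoint constant and helper names are illustrative, not part of any client library:

```python
# Build the server-list request with the trust-scoped token attached
# as-is, mirroring the curl that returns 200 in the thread.
import json
import urllib.request

NOVA_ENDPOINT = 'http://192.168.0.2:8774/v3'  # placeholder from the thread


def build_server_list_request(token, endpoint=NOVA_ENDPOINT):
    """GET /servers with the trust token passed through unchanged."""
    req = urllib.request.Request(endpoint + '/servers')
    req.add_header('X-Auth-Token', token)
    req.add_header('Accept', 'application/json')
    return req


def list_servers(token):
    # Issues the request; equivalent to the working curl above.
    with urllib.request.urlopen(build_server_list_request(token)) as resp:
        return json.loads(resp.read())['servers']
```

The point is simply that nothing in the request re-authenticates: the token is forwarded verbatim, which is what Alexander suggests making the client do.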

On Fri, Feb 13, 2015 at 9:16 PM, Alexander Makarov amaka...@mirantis.com
wrote:

 Adam, Nova client does it for some reason during a call to
 nova.servers.list()


 On Thu, Feb 12, 2015 at 10:03 PM, Adam Young ayo...@redhat.com wrote:

  On 02/12/2015 10:40 AM, Alexander Makarov wrote:

 A trust token cannot be used to get another token:

 https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
 You have to make your Nova client use the very same trust scoped token
 obtained from authentication using trust without trying to authenticate
 with it one more time.



 Actually, there have been some recent changes to allow re-delegation of
 Trusts, but for older deployments, you are correct.  I hadn't seen anywhere
 here that he was trying to use a trust token to get another token, though.



 On Wed, Feb 11, 2015 at 9:10 PM, Adam Young ayo...@redhat.com wrote:

  On 02/11/2015 12:16 PM, Nikolay Makhotkin wrote:

 No, I just checked it. Nova receives trust token and raise this error.

  In my script, I see:

  http://paste.openstack.org/show/171452/

  And as you can see, token from trust differs from direct user's token.


  The original user needs to have the appropriate role to perform the
 operation on the specified project.  I see the admin role is created on the
 trust. If the trustor did not have that role, the trustee would not be able
 to execute the trust and get a token.  It looks like you were able to
 execute the trust and get a token,  but I would like you to confirm that,
 and not just trust the keystone client:  either put debug statements in
 Keystone or call the POST to tokens from curl with the appropriate options
 to get a trust token.  In short, make sure you have not fooled yourself.
 You can also look in the token table inside Keystone to see the data for
 the trust token, or validate the token  via curl to see the data in it.  In
 all cases, there should be an OS-TRUST stanza in the token data.


 If it is still failing, there might be some issue on the Policy side.  I
 have been assuming that you are running with the default policy for Nova.

 http://git.openstack.org/cgit/openstack/nova/tree/etc/nova/policy.json

 I'm not sure which rule matches for list servers (Nova developer input
 would be appreciated)  but I'm guessing it is executing the rule

 admin_or_owner: is_admin:True or project_id:%(project_id)s,

 Since that is the default. I am guessing that the project_id in question
 comes from the token here, as that seems to be common, but if not, it might
 be that the two values are mismatched. Perhaps there Proejct ID value from
 the client env var is sent, and matches what the trustor normally works as,
 not the project in question.  If these two values don't match, then, yes,
 the rule would fail.




 On Wed, Feb 11, 2015 at 7:55 PM, Adam Young ayo...@redhat.com wrote:

   On 02/11/2015 10:52 AM, Nikolay Makhotkin wrote:

 Hi !

  I investigated trust's use cases and encountered the problem: When I
 use auth_token obtained from keystoneclient using trust, I get *403*
 Forbidden error:  *You are not authorized to perform the requested
 action.*

  Steps to reproduce:

  - Import v3 keystoneclient (used keystone and keystoneclient from
 master, tried also to use stable/icehouse)
 - Import v3 novaclient
 - initialize the keystoneclient:
   keystone = keystoneclient.Client(username=username,
 password=password, tenant_name=tenant_name, auth_url=auth_url)

  - create a trust:
   trust = keystone.trusts.create(
 keystone.user_id,
 keystone.user_id,
 impersonation=True,
 role_names=['admin'],
 project=keystone.project_id
   )

  - initialize new keystoneclient:
client_from_trust = keystoneclient.Client(
 username=username, 

[openstack-dev] healthmon, inventory manager

2015-02-16 Thread Mandar Barve
Hi,
I am trying to find out what kind of monitoring support is available in
Openstack, especially for monitoring the health of services and physical devices. I
came across Cloud Inventory Manager and HealthMon. The article
https://wiki.openstack.org/wiki/CloudInventoryManager also talked about
how to enable this extension in nova. When I tried doing it, I saw an error
message in the nova api log file saying it failed to load this module.

Can someone please point me to the right place?

Thanks,
Mandar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Unshelve Instance Performance Optimization Questions

2015-02-16 Thread Kekane, Abhishek
Hi Devs,

Problem Statement: Performance and storage efficiency of shelving/unshelving
an instance booted from an image are far worse than for an instance booted from a volume.

When you unshelve hundreds of instances at the same time, instance spawning 
time varies and it mainly depends on the size of the instance snapshot and
the network speed between glance and nova servers.

If you have configured file store (shared storage) as a backend in Glance for 
storing images/snapshots, then it's possible to improve the performance of
unshelving an instance dramatically by configuring nova.image.download.FileTransfer
in nova. In this case, it simply copies the instance snapshot as if it is
stored on the local filesystem of the compute node. But then again, in this
case it is observed that the network traffic between the shared storage servers
and nova increases enormously, resulting in slow spawning of the instances.
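For reference, wiring up that file-transfer download handler looks roughly like the fragment below. The section and option names are recalled from the nova.image.download.file module of that era and the values are placeholders; verify both against your release's documentation before use:

```ini
# Illustrative nova.conf fragment (NOT verified against any specific
# release): tell nova the glance filesystem store is also mounted
# locally, so snapshots are copied instead of streamed over HTTP.
[image_file_url]
filesystems = shared_fs

[image_file_url:shared_fs]
# id must match the glance filesystem store's id; placeholder below.
id = <glance-filesystem-store-id>
mountpoint = /mnt/glance-images
```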

I would like to gather some thoughts about how we can improve the performance 
of unshelve api (booted from image only) in terms of downloading large
size instance snapshots from glance.

I have proposed a nova-specs [1] to address this performance issue. Please take 
a look at it.

During the last nova mid-cycle summit, Michael Still
(https://review.openstack.org/#/q/owner:mikal%2540stillhq.com+status:open,n,z)
suggested alternative solutions to tackle this issue.

Storage solutions like ceph (software based) and NetApp (hardware based) support
exposing images from glance to nova-compute and cinder-volume with a
copy-on-write feature. This way there is no need to download the instance
snapshot, and the unshelve api will be much faster than fetching it
from glance.

Do you think the above performance issue should be handled in the OpenStack
software as described in nova-specs [1], or should storage solutions like
ceph/NetApp be used in the production environment? Apart from ceph/NetApp,
are there any other options available on the market?

[1] https://review.openstack.org/#/c/135387/

Thank You,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-16 Thread Julia Aranovich
Oleg, thank you for the feedback. We'll definitely consider it.

Also, I'd like to share some basic mockups for node sort/filter/group
operations: http://postimg.org/gallery/2c32wcu8q/

On Mon, Feb 16, 2015 at 11:20 AM, Oleg Gelbukh ogelb...@mirantis.com
wrote:

 Julia,

 It would be nice to add grouping by Status to the existing 'Grouping'
 dropdown. It would save some time finding faulty/offline nodes in the list
 and performing bulk actions (like Delete) on them.

 Another useful feature for large deployments would be an ability to see IP
 addresses of nodes (including Management and Public addresses) in the UI
 and group/sort by those addresses.

 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs

 On Sat, Feb 14, 2015 at 11:27 AM, Julia Aranovich jkirnos...@mirantis.com
  wrote:

 Hi All,

 Currently we [Fuel UI team] are planning the features of *sorting and
 filtering of node list *to introduce it in 6.1 release.

  Now a user can filter nodes only by name or MAC address, and no sorters
  are available. That is rather poor UI for managing a 200+ node environment. So,
 the current suggestion is to filter and sort nodes by the following
 parameters:

1. name
2. manufacturer
3. IP address
4. MAC address
5. CPU
6. memory
7. disks total size (we need to think about less than/more than
representation)
8. interfaces speed
9. status (Ready, Pending Addition, Error, etc.)
10. roles


  It will be a form-based filter. Items [1-4] should go into a single text
  input and the others into separate controls.
 And also there is an idea to translate a user filter selection to a
 query and add it to a location string. Like it's done for the logs search:
 *#cluster/x/logs/type:local;source:api;level:info*.

 Please also note, that the changes we are thinking about should not
 affect backend code.


  I will be very grateful if you share your ideas about this or describe some
  of the cases that would be useful to you in work with real deployments.
  We would like to introduce really useful tools based on your feedback.


 Best regards,
 Julia

 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com jkirnos...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards,
Julia Aranovich,
Software Engineer,
Mirantis, Inc
+7 (905) 388-82-61 (cell)
Skype: juliakirnosova
www.mirantis.ru
jaranov...@mirantis.com jkirnos...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-16 Thread Daniel P. Berrange
On Sun, Feb 15, 2015 at 02:34:26PM +, Jeremy Stanley wrote:
 On 2015-02-15 07:00:53 + (+), Gary Kotton wrote:
  Yes, I think that they go out in batches. It would be best to check with
  Stefano if you have any issues.
 
 Also a reminder, you need to be the owner of a change in Gerrit
 which merged on or after October 16, 2014 (or have an unexpired
 entry in the extra-atcs file within the governance repo) to be in
 the list of people who automatically get complimentary pass
 discounts.
 
 The time period was scaled down so that you now have to be active in
 the current release cycle, rather than prior conferences where you
 could qualify by only having a change in the previous cycle and none
 in the current one.

Why did we change the rules on summit passes in this way ? I don't recall
seeing any announcement of this policy change, but perhaps I missed it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] authentication issues

2015-02-16 Thread Gary Kotton
Hi,
With the current master in devstack I am unable to create a neutron network. It 
looks like there is an issue with keystone. Anyone else hit this?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

