Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new mechanisms (websso, kerberos, k2k etc)

2015-03-17 Thread Douglas Fish

Steve Martinelli steve...@ca.ibm.com wrote on 03/17/2015 12:52:33 AM:

 From: Steve Martinelli steve...@ca.ibm.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/17/2015 12:55 AM
 Subject: Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth
 for new mechanisms (websso, kerberos, k2k etc)

 I like proposal 1 better, but only because I am already familiar
 with how plugins interact with keystoneclient. The websso work is (I
 think) pretty close to getting merged, and could easily be tweaked
 to use a token plugin (when it's ready). I think the same can be
 said for our k2k issue, but I'm not sure.

 Thanks,

 Steve Martinelli
 OpenStack Keystone Core

 Jamie Lennox jamielen...@redhat.com wrote on 03/15/2015 10:52:31 PM:

  From: Jamie Lennox jamielen...@redhat.com
  To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
  Date: 03/15/2015 10:59 PM
  Subject: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for
  new mechanisms (websso, kerberos, k2k etc)
 
  Hi All,
 
   Please note when reading this that I have no real knowledge of Django,
   so it is very possible I'm overlooking something obvious.
 
  ### Issue
 
  Django OpenStack Auth (DOA) has always been tightly coupled to the
  notion of a username and password.
   As keystone progresses and new authentication mechanisms become
   available to the project, we need a way to extend DOA to keep up with it.
   However, the basic process of DOA is going to be much the same: it
   still needs to fetch an unscoped token, list available projects and
   handle rescoping, and this is too much for every extension mechanism to
   reimplement.
  There is also a fairly tight coupling between how DOA populates the
  request and sets up a User object that we don't really want to reuse.
 
   There are a couple of authentication mechanisms currently being
   proposed that require this ability immediately.
 
  * websso: https://review.openstack.org/136178
  * kerberos: https://review.openstack.org/#/c/153910/ (patchset 2).
 
  and to a certain extent:
 
  * k2k: https://review.openstack.org/159910
 
   Enabling and using these different authentication mechanisms is going to
   need to be configured by an admin at deployment time.
 
  Given that we want to share the basic scoping/rescoping logic between
  these projects I can essentially see two ways to enable this.
 
  ### Proposal 1 - Add plugins to DOA
 
  The easiest way I can see of doing this is to add a plugin model to the
  existing DOA structure.
  The initial differentiating component for all these mechanisms is the
  retrieval of an unscoped token.
 
  We can take the existing DOA structure and simply make that part
  pluggable and add interfaces to that plugin as required in the future.
 
  Review: https://review.openstack.org/#/c/153910/
 
  Pros:
 
  * Fairly simple and extensible as required.
  * Small plugin interface.
 
  Cons:
 
  * Ignores that django already has an authentication plugin system.
  * Doesn't work well for adding views that run these backends.
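
   For illustration, a minimal sketch of what such a plugin interface
   might look like (the names here are hypothetical, not the actual
   proposal):

       import abc

       class UnscopedAuthPlugin(object):
           """Hypothetical plugin interface: each mechanism only has
           to produce an unscoped token, while DOA keeps the common
           scoping/rescoping and User-population logic."""

           __metaclass__ = abc.ABCMeta

           @abc.abstractmethod
           def get_unscoped_token(self, request, auth_url, **kwargs):
               """Return an unscoped keystone token for this request."""

   DOA's authenticate() would then look up the configured plugin, call
   get_unscoped_token(), and run its existing scoping logic on the
   result.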
 
  ### Proposal 2 - Make the existing DOA subclassable.
 
  The next method is to essentially re-use the existing Django
  authentication module architecture.
  We can extract into a base class all the current logic around token
  handling and develop new modules around that.
 
  Review: https://review.openstack.org/#/c/164071/
  An example of using it:
  https://github.com/jamielennox/django-openstack-auth-kerberos
 
  Pros:
 
  * Reusing Django concepts.
  * Seems easier to handle adding of views.
 
  Cons:
 
  * DOA has to start worrying about public interfaces.
 
  ### Required reviews:
 
  Either way I think these two reviews are going to be required to make
  this work:
 
  * Redirect to login page: https://review.openstack.org/#/c/153174/ - If
  we want apache modules to start handling parts of auth we need to mount
  those at dedicated paths, we can't put kerberos login at /
  * Additional auth urls: https://review.openstack.org/#/c/164068/ - We
  need to register additional views so that we can handle the output of
  these apache modules and call the correct authenticate() parameters.
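
   For example, registering such a view might look roughly like this
   (a hypothetical sketch using Django 1.x conventions; the
   kerberos_login view name is made up, this is not the actual review):

       # urls.py - sketch of an extra auth view that receives the
       # output of an apache module (e.g. mod_auth_kerb) and calls
       # authenticate() with the right parameters.
       from django.conf.urls import patterns, url

       urlpatterns = patterns(
           'openstack_auth.views',
           url(r'^login/$', 'login', name='login'),
           url(r'^login/kerberos/$', 'kerberos_login',
               name='kerberos_login'),
       )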
 
  ### Conclusion
 
  Essentially either of these could work and both will require some
  tweaking and extending to be useful in all situations.
 
  However I am kind of passing through on DOA and Django and would like
  someone with more experience in the field to comment on what feels more
  correct or any issues they see arising with the different approaches.
  Either way I think a clear approach on extensibility would be good
  before committing to any of the kerberos, websso and k2k definitions.
 
 
   Please let me know your opinion, as there are multiple patches that will
   depend upon it.
 
 
  Thanks,
 
  Jamie
 
 
 
 

Re: [openstack-dev] [nova] looking for feedback on object versioning

2015-03-17 Thread Chris Friesen

On 03/16/2015 03:23 AM, Sylvain Bauza wrote:


On 14/03/2015 01:13, Chris Friesen wrote:

1) Do I need to do anything special in obj_make_compatible()?


Yes, you need to make sure that you won't provide the new field to a previous
version of the Service object.
See how other objects do this for examples.


Okay, that makes sense.
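
For reference, the guard being described looks roughly like this (a
sketch assuming reported_at was added in version 1.12; it follows the
usual nova.objects pattern but is not the actual patch):

    def obj_make_compatible(self, primitive, target_version):
        super(Service, self).obj_make_compatible(primitive,
                                                 target_version)
        target_version = utils.convert_version_to_tuple(target_version)
        # reported_at was added in 1.12; don't hand it to older readers.
        if target_version < (1, 12) and 'reported_at' in primitive:
            del primitive['reported_at']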



2) Is what I've done in _from_db_object() correct?  If I don't do it like
this, how is the reported_at field handled when a node sends a v1.11 version
of the object (or the corresponding dict) to another node that wants a v1.12
version object?


_from_db_object is called for transforming a DB services table SQLA object (i.e.
a tuple) into a NovaObject Service object.
If you don't handle the field there, you won't have it persisted in
the DB table.


Would it be possible to have a case where a v1.12 Service object calls 
_from_db_object() where the argument is a DB object that hasn't been upgraded 
yet?  (Like maybe if we upgraded nova-compute before nova-conductor?)


If so, then it seems like either we implicitly set it as None in 
_from_db_object(), or else allow it to be lazy-loaded as None in obj_load_attr().




3) Is it okay to lazy-load a None value in obj_load_attr()?  The nice thing
about doing it this way is that a large number of unit/functional tests can
stay as-is.



No, that's not acceptable. The goal of obj_load_attr() is to lazy-load fields by
setting their values on the object. You should not return anything; instead,
make sure that the self.reported_at field is set.


Whoops, the return is a clear bug.  Would it be okay to set self.reported_at = 
None in obj_load_attr()?
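
Something like this, say (a sketch, assuming None is an acceptable
default for the field):

    def obj_load_attr(self, attrname):
        if attrname == 'reported_at':
            # Set the attribute on the object instead of returning it;
            # callers re-read the field once obj_load_attr() completes.
            self.reported_at = None
        else:
            raise exception.ObjectActionError(
                action='obj_load_attr',
                reason='attribute %s not lazy-loadable' % attrname)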



Honestly, the patch is really big to review. I also have some concerns
about how you plan to fix the bug by adding a new field which is not persisted,
but I prefer to leave my comments on Gerrit, that's what it's used for :-)


I'm not sure what you mean here.  Are you suggesting that it should be broken 
into multiple patches?


Thanks for taking a look at this.

Chris



Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-17 Thread Cindy Pallares

+1 for consistent time and a slightly higher preference for 1500 UTC

On 03/17/2015 02:15 AM, Koniszewski, Pawel wrote:

+1 for consistent time (I prefer 1400UTC)

*From:*Fei Long Wang [mailto:feil...@catalyst.net.nz]
*Sent:* Sunday, March 15, 2015 9:00 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance
meeting time.

+1 for consistent time

On 14/03/15 10:11, Nikhil Komawar wrote:

Here's what it looks like so far:-

1400UTC: 3 votes (all core reviewers)

1500UTC: 5 votes (one core reviewer)

Both: 4 votes (all core reviewers)

Let's wait another couple days to see if more people respond.

I have a feeling that the weight is slightly tilted towards 1400UTC
based on a few assumptions about the past activity of those
participants, their cross project inputs, etc.

Thanks,
-Nikhil



*From:* Mikhail Fedosin mfedo...@mirantis.com
*Sent:* Friday, March 13, 2015 3:07 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance
meeting time.

Both options are good, it's a little better at 1500 UTC.

+1 consistent time.

On Fri, Mar 13, 2015 at 9:23 PM, Steve Lewis steve.le...@rackspace.com wrote:

+1 consistent time

+1 for 1500 UTC since that has come up.

On 09/03/15 09:07, Nikhil Komawar wrote:

So, the new proposal is:
Glance meetings [1] to be conducted
weekly on
Thursdays at 1400UTC [2] on
#openstack-meeting-4









--

Cheers & Best regards,

Fei Long Wang (王飞龙)

--

Senior Cloud Software Engineer

Tel: +64-48032246

Email: flw...@catalyst.net.nz

Catalyst IT Limited

Level 6, Catalyst House, 150 Willis Street, Wellington

--









Re: [openstack-dev] [Neutron] IPAM reference driver status and other stuff

2015-03-17 Thread Salvatore Orlando
On 17 March 2015 at 14:44, Carl Baldwin c...@ecbaldwin.net wrote:


 On Mar 15, 2015 6:42 PM, Salvatore Orlando
  * the ML2 plugin overrides several methods from the base db class.
 From what I gather from the unit test results, we have not yet refactored it.
 I think to provide users something usable in Kilo we should ensure the ML2
 plugin at least works with the IPAM driver.

 Yes, agreed.

  * the current refactoring has ipam-driver-enabled and
 non-ipam-driver-enabled versions of some API operations. While this is the
 less ugly way to introduce the driver while keeping the old logic at the
 same time, it adds quite a bit of code duplication. I wonder if there is any
 effort we can make without too much yak shaving to reduce that code
 duplication, because in these conditions I suspect it would be a hard sell
 to the Neutron core team.

 This is a good thing to bring up.  It is a difficult trade-off.  On one
 hand, the way it has been done makes it easy to review and see that the
 existing implementation has not been disturbed, reducing the short term
 risk.  On the other hand, if left the way it is indefinitely, it will be a
 maintenance burden.  Given the current timing, could we take a two-phased
 approach?  First, merge it with duplication and immediately create a
 follow-on patch to deduplicate the code, to merge when that is ready?

The problem with duplication is that it will make maintenance troublesome.
For instance, if a bug is found in _test_fixed_ips, the bug fixer will have
to know that the same fix must be applied to _test_fixed_ips_for_ipam as
well. I'm not sure we can ask contributors to fix bugs in two places. But
if we plan to deduplicate with a follow-up patch I am on board. I know we'd
have the cycles for that.
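
Just to illustrate the shape of the deduplication I have in mind (all
names here are hypothetical, not actual Neutron code), the duplicated
paths could collapse into a single method that branches on the
configured driver:

    def _allocate_fixed_ips(self, context, subnet, fixed_ips):
        # One entry point instead of a *_for_ipam twin, so a bug fix
        # lands in exactly one place.
        driver = self._get_ipam_driver(context)
        if driver is not None:
            return self._allocate_ips_with_driver(context, driver,
                                                  subnet, fixed_ips)
        return self._allocate_ips_legacy(context, subnet, fixed_ips)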
That said, the decision lies with the rest of the core team (Carl's and my
votes do not count here!). If I were a reviewer I'd evaluate the tradeoff
between the benefits brought by this new feature, the risks of the
refactoring (which, as you say, are rather low), and the maintenance burden
(aka technical debt) it introduces.

I'm kind of sure the PTL would like to outline all of this, including
extensive details about testing, in an etherpad so that a call can be
made by the end of the week.
I am already taking care of that.

Salvatore

Carl





Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-17 Thread park



On 2015-03-17 13:31, Christopher Yeoh wrote:

On Tue, 17 Mar 2015 15:56:27 +1300
Robert Collins robe...@robertcollins.net wrote:


On 17 March 2015 at 14:27, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:


I am worried about SDKs making requests that have additional JSON
attributes that were previously ignored by v2, but will be
considered invalid by the v2.1 validation code. If we were to just
strip out the extra items, rather than error out the request (when
you don't specify a microversion), I would be less worried about
the transition. Maybe that's what we do?
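
(For illustration, stripping could be as simple as this sketch --
a hypothetical helper, not what the validation layer does today:

    def strip_unknown_properties(body, schema):
        # Drop request attributes the schema doesn't know about,
        # instead of rejecting the whole request.
        allowed = schema.get('properties', {})
        return dict((key, value) for key, value in body.items()
                    if key in allowed)

applied to the request body before schema validation runs.)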

Nice point.
That is the main difference in API behavior between the v2 and v2.1 APIs.
If SDKs pass additional JSON attributes to the Nova API now, developers
need to fix/remove these attributes because that is a bug on the SDK side.
These attributes are unused and meaningless, so SDK APIs that pass this
kind of attribute likely contain problems.

It was sometimes difficult to know which attributes were available before
the v2.1 API, so the full monty approach will clarify problems in SDKs
and make SDKs' quality better.

Thanks
Ken Ohmichi

Better at the cost of forcing all existing users to upgrade just to
keep using code of their own that already worked.

Not really 'better' IMO. Different surely.

We could (should) add Warning: headers to inform about this, but
breaking isn't healthy IMO.


It'd be up to the operators, but there is always the option of simply
editing the paste.ini file so /v2 is again produced by the old v2
code.
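
Concretely, that is a one-line change in the [composite:osapi_compute]
section (stanza names as in the shipped example paste.ini):

    [composite:osapi_compute]
    use = call:nova.api.openstack.urlmap:urlmap_factory
    /: oscomputeversions
    # roll /v2 back to the old implementation:
    /v2: openstack_compute_api_v2
    /v2.1: openstack_compute_api_v21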


My main concern about v2 / v2.1 compatibility in practice (rather than
just passing the same tempest and unit tests, which it does) is lack
of feedback. We probably shouldn't expect positive feedback in many
cases, but we're not really getting much negative feedback either. I
really would appreciate people actually trying it with more real world
apps so we get a better idea of the compatibility in areas of the code
that don't have good tempest coverage or have incomplete unit tests.

+1, v2.1 should be the future


Regards,

Chris






Re: [openstack-dev] [OpenStack-Dev] [Cinder] Ubuntu LVM hangs in gate testing

2015-03-17 Thread Clay Gerrard
Can the bits that make those devices invalid and udev out of date call
udevadm settle to just block till things are up to date, and hopefully
make the subsequent vg and pv scans quicker?

On Monday, March 16, 2015, John Griffith john.griffi...@gmail.com wrote:

 Hey Everyone,

 Thought I'd reach out to the ML to see if somebody might have some insight
 or suggestions for a problem I've been trying to solve.

 The short summary is:

 During a dsvm-full run in the gate there are times when /dev/sdX devices
 on the system may be created/deleted.  The trick here though is that on the
 Cinder side with LVM we're doing a lot of things that rely on vgs and lvs
 calls (including periodic tasks that are run).  Part of the scan routine,
 unfortunately, is for LVM to go through and open any block device that it
 sees on the system and read it to see if it's an LVM device.

 The problem here is that the timing in the gate tests, when everything is
 on a single node, can result in udev not being quite up to date, so the LVM
 scan process attempts to open a device that is no longer valid.  In this
 case (which we hit a few times on every single gate run), LVM blocks on
 the open until the device times out and gives:
 -1 ENXIO (No such device or address)

 The problem is this can take up to almost a full minute for the timeout,
 so we have a few tests that take upwards of 150 seconds that actually
 should complete in about 30 seconds.  In addition this causes things like
 concurrent lvcreate cmds to block as well.  Note this is kind of
 inefficient anyway (even if all of the devices are valid), so there is a
 case to be made for not doing it if possible.

 Nothing fails typically in this scenario, things are just slow.

 I thought this would be easy to fix a while back by adding a local
 lvm.conf with a device filter.  It turns out however that the device filter
 only filters out items AFTER the vgs or lvs, it doesn't filter out the
 opens.  For that you need either:
 1. global_filter
 2. lvmetad service enabled

 The problem with '#1' is that the global_filter parameter is only honored
 on a global level NOT just in a local lvm.conf like we have currently.  To
 use that though we would have to set things such that Cinder was the only
 thing using LVM (not sure if that's doable or not).

 The problem with '#2' is that Trusty doesn't seem to have lvmetad in its
 lvm2 packages until 2.02.111 (which isn't introduced until Vivid).

 I'm wondering if anybody knows of a backport or another method to get
 lvmetad capability in Ubuntu Trusty?

 OR

 What are some thoughts regarding the addition of a global_filter in
 /etc/lvm.conf?  We'd have to make a number of modifications to any services
 in devstack that set up LVM to make sure their PVs are added to the
 filter.  This might not be a big deal because most everyone uses loopback
 files, so we could just do a loop regex and hit everything in one shot (I
 think).  But this means that anybody using LVM in devstack for something
 else is going to need to understand what's going on and add their devices
 to the filter.
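
 For example, if everything is on loopback devices, the filter could be
 something like this in /etc/lvm/lvm.conf (an untested sketch):

     devices {
         # Accept loopback devices (the usual devstack PVs) and reject
         # everything else, so LVM never opens transient /dev/sdX
         # devices.
         global_filter = [ "a|^/dev/loop|", "r|.*|" ]
     }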

 Ok... so not really the short version after all, but I'm wondering if
 anybody has any ideas that maybe I'm missing here.  I'll likely proceed
 with the idea of a global filter later this week if I don't hear any strong
 objections, or even better maybe somebody knows how to get lvmetad on
 Trusty which I *think* would be ideal for a number of other reasons.

 Thanks,
 John



Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread joehuang
Hi, Adam,

Good to know the Fernet token is on the way to reducing the token size and 
token persistence issues.

It's not realistic to deploy the KeyStone service (including the backend 
store) in each site if the number of sites is, for example, more than 10. The 
reason is that the stored data, including data related to revocation, needs to 
be replicated to all sites synchronously. Otherwise, the API server might 
attempt to use the token before it can be validated in the target site.

When the Fernet token is used in a multisite scenario, each API request will 
ask for token validation from KeyStone. The cloud will be out of service if 
KeyStone stops working, therefore the KeyStone service needs to run in several 
sites.

For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a second KeyStone server (or 
even a third KeyStone server). If the primary KeyStone server is out of 
service, then the KeyStone client will try the second KeyStone server. 
Different KeyStone clients may be configured with different primary and 
second KeyStone servers.
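
A rough sketch of the client-side idea (hypothetical logic, not an 
existing keystoneclient feature):

    import requests

    def pick_keystone_endpoint(auth_urls, timeout=2):
        # Return the first reachable KeyStone endpoint from an ordered
        # list (primary first, then fallbacks).
        for url in auth_urls:
            try:
                # KeyStone answers version information on its root URL.
                requests.get(url, timeout=timeout).raise_for_status()
                return url
            except requests.RequestException:
                continue
        raise RuntimeError('No KeyStone endpoint reachable')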

Best Regards
Chaoyi Huang ( Joe Huang )

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, March 16, 2015 10:52 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/16/2015 05:33 AM, joehuang wrote:
[Topic]: Huge token size

Hello,

As you may or may not be aware of, a requirement project proposal Multisite[1] 
was started in OPNFV in order to identify gaps in implementing OpenStack across 
multiple sites.

Although the proposal has not been approved yet, we've started to run some 
experiments to try out different methods. One of the problems we identified in 
those experiments is that, when we want to use a shared KeyStone for 101 
regions (including ~500 endpoints), the token size is huge (the token format 
is PKI); please see details in the attachments:

token_catalog.txt, 162KB: catalog list included in the token
token_pki.txt, 536KB: non-compressed token size
token_pkiz.txt, 40KB: compressed token size

I understand that KeyStone has a mechanism like endpoint_filter to reduce the 
size of the token; however, this requires managing which of the many endpoints 
(the exact number is hard to pin down) can be seen by a project, and the size 
is not easy to control exactly.

Do you guys have any insights into how to reduce the token size if the PKI 
token is used? Is there any BP related to this issue? Or should we file one to 
tackle this?


Right now there is an effort for non-multisite to get a handle on the problem.  
The Fernet token format will make it possible for a token to be ephemeral.  The 
scheme is this:

Encode the minimal amount of data possible into the token.

Always validate the token on the Keystone server.

On the Keystone server, the token validation is performed by checking the 
message HMAC, and then expanding out the data.
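
As a toy illustration of that scheme using the cryptography library's 
Fernet implementation (the payload below is made up; Keystone's actual 
payload layout differs):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # shared among the Keystone servers
    f = Fernet(key)

    # "Encode the minimal amount of data possible into the token":
    token = f.encrypt(b'user:demo;project:demo;method:password')

    # Validation checks the HMAC and expands the data server side:
    payload = f.decrypt(token, ttl=3600)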

This concept is expandable to multi site in two ways.

For a completely trusted and symmetric multisite deployment, the keystone 
servers can share keys.  The Kite project 
(http://git.openstack.org/cgit/openstack/kite) was originally spun up to 
manage this sort of symmetric key sharing, and is a natural extension.

If two keystone servers need to sign for and validate separate sets of data 
(future work), the form of signing could be returned to asymmetric crypto.  
This would lead to a minimal token size of about 800 bytes (I haven't tested 
exactly).  It would mean that any service responsible for validating tokens 
would need to fetch and cache the responses for things like catalog and role 
assignments.

The ephemeral nature of the Fernet specification means that revocation data 
needs to be persisted separately from the token, so it is not 100% ephemeral, 
but the amount of stored data should be (I estimate) two orders of magnitude 
smaller, maybe three.  Password changes, project deactivations, and role 
revocations will still cause some traffic there.  These will need to be 
synchronized across token validation servers.

Great topic for discussion in Vancouver.







[1]https://wiki.opnfv.org/requirements_projects/multisite

Best Regards
Chaoyi Huang ( Joe Huang )









Re: [openstack-dev] [TripleO] QuintupleO Overview

2015-03-17 Thread Smigiel, Dariusz
 On 17 March 2015 at 09:30, Ben Nemec openst...@nemebean.com
 wrote:
  So I've successfully done a deployment to a QuintupleO environment. \o/
 
 \o/
 

Great news! Congrats!

@Ben, are you planning to keep it up-to-date or create a repo with a README?
I would also like to be involved. "TripleO wasn't confusing enough, let's add 
another layer" [1] ;) so maybe this is a good start.

[1] http://blog.nemebean.com/content/quintupleo-status-update





Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread David Chadwick
Encryption per se does not decrease token size; the best it can do is
keep the token the same size. So using Fernet tokens will not on
its own alter the token size. Reducing the size must come from putting
less information in the token. If the token recipient has to always go
back to Keystone to get the token validated, then all the token needs to
be is a large random number that Keystone can look up in its database to
retrieve the user's permissions. In this case no encryption is needed at
all.

regards

David


Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization Questions

2015-03-17 Thread Kekane, Abhishek
Hi John,

Thanks for your opinion.

Fundamentally we cannot assume infinite storage space.

To enhance the shelve/unshelve performance, I have proposed nova-specs [1], in 
which there are two challenges.

A. This design is libvirt specific; currently I am using the KVM hypervisor, 
but I am open to making changes for other hypervisors.
  I don't have the know-how about other hypervisors (how to configure them, 
etc.), and any help with this from the community is appreciated.

B. HostAggregateGroupFilter [2] (rescheduling the instance) - a filter to 
schedule the instance on a different node if shared storage is full or 
resources are not available.
 Please let me know your opinion about this HostAggregateGroupFilter.
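
For reviewers unfamiliar with scheduler filters, the proposed filter 
follows the standard nova filter shape, roughly like this (a sketch 
only; the pass condition below is made up, see [2] for the real patch):

    from nova.scheduler import filters

    class HostAggregateGroupFilter(filters.BaseHostFilter):
        """Pass only hosts that can actually take the instance."""

        def host_passes(self, host_state, filter_properties):
            # The real logic inspects aggregate metadata and shared
            # storage capacity; this only illustrates the interface.
            return host_state.free_disk_mb > 0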

I request community members to go through the nova-spec [1] and the patches 
submitted [3] for it and give us your feedback.

[1] https://review.openstack.org/135387
[2] https://review.openstack.org/150330
[3] https://review.openstack.org/150315, https://review.openstack.org/150337, 
https://review.openstack.org/150344

Thank You,

Abhishek Kekane

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: 12 March 2015 17:41
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization 
Questions

Hi,

On 11 March 2015 at 06:35, Kekane, Abhishek abhishek.kek...@nttdata.com wrote:
 In the case of the start/stop APIs, cpu/memory are not released/reassigned. 
 We can modify these APIs to release the cpu and memory while stopping 
 the instance and reassign the same while starting the instance. In 
 this case the rescheduling logic also needs to be modified to reschedule 
 the instance on a different host if the required resources are not 
 available while starting the instance. This is similar to what I have 
 implemented in [2] Improving the performance of unshelve API.

I am against starting to release the resources, as you can't guarantee start 
will work quickly. Similar to suspend, I suppose.

The idea of shelve/unshelve is to avoid that problem, by ensuring you can 
resume the VM anywhere, should someone else use the resources you have freed 
up. But the idea was to optimize for a quick unshelve, where possible. The 
feature is not really complete; we need a scheduling weigher to deal with 
avoiding that capacity till you need it, etc. When you have shared storage, it 
may make sense to add the option of skipping the snapshot (boot from volume 
clearly doesn't need a snapshot), if you are happy to assume there will always 
be space on some host that can see that shared storage.

 Please let me know your opinion on whether we can modify the start/stop 
 APIs as an alternative to the shelve/unshelve APIs.

I would rather we enhance shelve/unshelve, rather than fundamentally change the 
semantics of start/stop.

Thanks,
John


 From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
 Sent: 24 February 2015 12:47

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance 
 Optimization Questions



 Hi Duncan,



 Thank you for the inputs.



 @Community-Members

 I want to know if there are any other alternatives to improve the 
 performance of the unshelve api (booted from image only).

 Please give me your opinion on the same.



 Thank You,



 Abhishek Kekane







 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: 16 February 2015 16:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance 
 Optimization Questions



 There has been some talk in cinder meetings about making 
 cinder-glance interactions more efficient. They are already 
 optimised in some deployments, e.g. ceph glance and ceph cinder, and 
 some backends cache glance images so that many volumes created from the same 
 image become very efficient.
 (search the meeting logs or channel logs for 'public snapshot' to get 
 some entry points into the discussions)

 I'd like to see more work done on this, and perhaps re-examine a 
 cinder backend to glance. This would give some of what you're 
 suggesting (particularly fast, low traffic un-shelve), and there is 
 more that can be done to improve that performance, particularly if we 
 can find a better performing generic CoW technology than QCOW2.

 As suggested in the review, in the short term you might be better 
 experimenting with moving to boot-from-volume instances if you have a 
 suitable cinder deployed, since that gives you some of the performance 
 improvements already.



 On 16 February 2015 at 12:10, Kekane, Abhishek 
 abhishek.kek...@nttdata.com
 wrote:

 Hi Devs,



 Problem Statement: Performance and storage efficiency of 
 shelving/unshelving an instance booted from image is far worse than an 
 instance booted from volume.



 When you unshelve hundreds of instances at the same time, instance 
 spawning time varies and it mainly depends on the size of the 

[openstack-dev] Group Based Policy - Kilo-2 development milestone

2015-03-17 Thread Sumit Naiksatam
Hi All,

The second milestone release of the Kilo development cycle, "kilo-2",
is now available for the Group Based Policy project. It contains a
bunch of bug fixes and enhancements over the previous release. You can
find the full list of fixed bugs and features, as well as tarball
downloads, at:

https://launchpad.net/group-based-policy/kilo/kilo-gbp-2
https://launchpad.net/group-based-policy-automation/kilo/kilo-gbp-2
https://launchpad.net/group-based-policy-ui/kilo/kilo-gbp-2

Many thanks to those who contributed towards this milestone. The next
development milestone, kilo-3, is scheduled for April 15th.

Best,
~Sumit.



Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-17 Thread Koniszewski, Pawel
+1 for consistent time (I prefer 1400UTC)

 

From: Fei Long Wang [mailto:feil...@catalyst.net.nz] 
Sent: Sunday, March 15, 2015 9:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

 

+1 for consistent time

On 14/03/15 10:11, Nikhil Komawar wrote:

Here's what it looks like so far:-

 

1400UTC: 3 votes (all core reviewers)

1500UTC: 5 votes (one core reviewer)

Both: 4 votes (all core reviewers)

 

Let's wait another couple days to see if more people respond. 

 

I have a feeling that the weight is slightly tilted towards 1400UTC based on a 
few assumptions about the past activity of those participants, their cross 
project inputs, etc.

 

Thanks,
-Nikhil




From: Mikhail Fedosin mfedo...@mirantis.com
Sent: Friday, March 13, 2015 3:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time. 

 

Both options are good, it's a little better at 1500 UTC.



+1 consistent time.



 

On Fri, Mar 13, 2015 at 9:23 PM, Steve Lewis steve.le...@rackspace.com wrote:

+1 consistent time

+1 for 1500 UTC since that has come up.

On 09/03/15 09:07, Nikhil Komawar wrote:

So, the new proposal is:
Glance meetings [1] to be conducted
weekly on
Thursdays at 1400UTC [2] on
#openstack-meeting-4


 











-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 




Re: [openstack-dev] [fuel] fuel-utils package

2015-03-17 Thread Vladimir Kuklin
Andrew

Thank you for pointing this out. We are working on packaging all of the
fuel library components right now. We will make it happen in a couple of
weeks - the other folks and I are currently working on
https://blueprints.launchpad.net/fuel/+spec/package-fuel-components to
clean up our make system and switch to a packages-only model - this should
make our life easier in terms of providing updates.

Regarding fuel-utils - you are absolutely right, but I am not sure we need
it now - it is a leftover from old times when OVS could not handle
gratuitous ARPs correctly - we will need to recheck whether it is still
required.

On Tue, Mar 17, 2015 at 4:17 AM, Andrew Woodward xar...@gmail.com wrote:



 On Mon, Mar 16, 2015 at 4:35 PM, Andrew Woodward xar...@gmail.com wrote:

  While working to remove deps on /root/openrc [1][2], I found that we have
  a package for fuel-utils to provide a tool called fdb-cleaner. It looks
  like the source is [3] and it's built at [4].

 I see that there are no tests, or CI to test integrity of the code there.
 I propose that we move this to stackforge.

  Also, I think we would want to move code in modules/files and
  modules/templates (like q-agent-cleanup.py and OCF scripts) so that we
  have a chance of being able to upgrade them. It seems like this could be a
  proper location for these kinds of items so they don't live unversioned and
  hidden in our manifests.


 The missing links
 [1] https://bugs.launchpad.net/fuel/+bug/1396594
 [2] https://bugs.launchpad.net/fuel/+bug/1347542
 [3] https://github.com/xenolog/fuel-utils
 [4] https://review.fuel-infra.org/#/admin/projects/?filter=fuel-utils

 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community




 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-17 Thread Joe Gordon
On Thu, Jun 19, 2014 at 1:38 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
  I am concerned about how block migration functions when Cinder volumes
 are
  attached to an instance being migrated.  We noticed some unexpected
  behavior recently, whereby attached generic NFS-based volumes would
 become
  entirely unsparse over the course of a migration.  After spending some
 time
  reviewing the code paths in Nova, I'm more concerned that this was
 actually
  a minor symptom of a much more significant issue.
 
   For those unfamiliar, NFS-based volumes are simply RAW files residing
   on an NFS mount.  From Libvirt's perspective, these volumes look no
   different than root or ephemeral disks.  We are currently not filtering
   out volumes whatsoever when making the request into Libvirt to perform
   the migration.  Libvirt simply receives an additional flag
   (VIR_MIGRATE_NON_SHARED_INC) when a block migration is requested, which
   applies to the entire migration process and is not differentiated on a
   per-disk basis.  Numerous guards exist within Nova to prevent a block
   based migration from being allowed if the instance disks exist on the
   destination; yet volumes remain attached and within the defined XML
   during a block migration.
 
   Unless Libvirt has a lot more logic around this than I am led to
   believe, this seems like a recipe for corruption.  It seems as though
   this would also impact any type of volume attached to an instance
   (iSCSI, RBD, etc.); NFS just happens to be what we were testing.  If I
   am wrong and someone can correct my understanding, I would really
   appreciate it.  Otherwise, I'm surprised we haven't had more reports of
   issues when block migrations are used in conjunction with any attached
   volumes.

 Libvirt/QEMU has no special logic. When told to block-migrate, it will do
 so for *all* disks attached to the VM in read-write-exclusive mode. It will
 only skip those marked read-only or read-write-shared mode. Even that
 distinction is somewhat dubious and so not reliably what you would want.

 It seems like we should just disallow block migrate when any cinder volumes
 are attached to the VM, since there is never any valid use case for doing
 block migrate from a cinder volume to itself.



Digging up this old thread because I am working on getting multi node live
migration testing working (https://review.openstack.org/#/c/165182/), and
just ran into this issue (bug 1398999).

And I am not sure I agree with this statement. I think there is a valid
case for doing block migrate with a cinder volume attached to an instance:


* Cloud isn't using a shared filesystem for ephemeral storage
* Instance is booted from an image, and a volume is attached afterwards. An
admin wants to take the box the instance is running on offline for
maintenance with minimal impact to the instances running on it.

What is the recommended solution for that use case? If the admin
disconnects and reconnects the volume themselves, is there a risk of
impacting what's running on the instance? etc.
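
(For reference, the guard Daniel suggested amounts to something like
this sketch -- a hypothetical helper, not the actual nova check:

    def check_block_migrate_allowed(instance, block_device_info):
        # Refuse to block migrate when any cinder volumes are attached;
        # libvirt would otherwise block-copy each volume onto itself
        # over the shared backend.
        mapping = (block_device_info or {}).get('block_device_mapping')
        if mapping:
            raise Exception('Cannot block migrate instance %s with '
                            'mapped volumes' % instance['uuid'])

which is exactly what rules out the use case above.)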



 Regards,
 Daniel
 --
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|




Re: [openstack-dev] [fuel] fuel-utils package

2015-03-17 Thread Vladimir Kuklin
Andrew

We would very much appreciate it if you figured it out. It is really easy to
test - just migrate the l3 agent and see whether the OVS datapath to the l3
agent MAC address switched to a different port - it should digest gratuitous
ARPs correctly. Otherwise, we will still need to flush the database on
migration of agent IP addresses.

On Tue, Mar 17, 2015 at 9:35 PM, Andrew Woodward xar...@gmail.com wrote:



 On Tue, Mar 17, 2015 at 12:58 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Andrew

 Thank you for pointing this out. We are working on packaging all of the
 fuel library components right now. We will make it happen in a couple of
 weeks - me and other guys are currently working on
 https://blueprints.launchpad.net/fuel/+spec/package-fuel-components to
 clean up our make system and switch to packages-only model - this should
 make our life easier in terms of providing updates.

 Regarding fuel-utils - you are absolutely right, but I am not sure we
 need it now - it is a leftover from old times when OVS could not handle
 gratuitous ARPs correctly - we will need to recheck whether we need it now.


  Thoughts on who should test this? I can, if someone clarifies how to test
  it.

 If we are going to keep it around, then we will need to make the same
 openrc dep changes to it as are in q-agent-cleanup [5]

 [5] https://review.openstack.org/#/c/158996/


 On Tue, Mar 17, 2015 at 4:17 AM, Andrew Woodward xar...@gmail.com
 wrote:



 On Mon, Mar 16, 2015 at 4:35 PM, Andrew Woodward xar...@gmail.com
 wrote:

  While working to remove deps on /root/openrc [1][2], I found that we
  have a package for fuel-utils to provide a tool called fdb-cleaner. It
  looks like the source is [3] and it's built at [4].

 I see that there are no tests, or CI to test integrity of the code
 there. I propose that we move this to stackforge.

  Also, I think we would want to move code in modules/files and
  modules/templates (like q-agent-cleanup.py and OCF scripts) so that we
  have a chance of being able to upgrade them. It seems like this could be a
  proper location for these kinds of items so they don't live unversioned and
  hidden in our manifests.


 The missing links
 [1] https://bugs.launchpad.net/fuel/+bug/1396594
 [2] https://bugs.launchpad.net/fuel/+bug/1347542
 [3] https://github.com/xenolog/fuel-utils
 [4] https://review.fuel-infra.org/#/admin/projects/?filter=fuel-utils

 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community




 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community






 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com





 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-17 Thread Joe Gordon
On Mon, Mar 16, 2015 at 6:06 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Hi Sean,

 2015-03-16 23:15 GMT+09:00 Sean Dague s...@dague.net:
  Our current top level shipped example paste.ini for Nova includes the
  following set of endpoint definitions:
 
  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /: oscomputeversions
  /v1.1: openstack_compute_api_v2
  /v2: openstack_compute_api_v2
  /v2.1: openstack_compute_api_v21
  /v3: openstack_compute_api_v3
 
 
  The real question I have is what should this look like in the Kilo
  release. And this has a couple of axes.
 
   Who uses our paste.ini?
  =
 
  paste.ini is an etc file, so we assume that during upgrade you'll be
  using your existing config. Changes to the default paste.ini will
  really only be effective in new deploys. So this should not impact
  existing users, but instead only new users.

 Nice point, so the content of paste.ini seems to be our
 recommendation for API configuration.

  Cleaning up Cruft
  =
 
  Drop of /v3
  ---
 
  v3 is no longer a supported thing. I think v3 in the paste.ini causes
  confusion. It also causes us to keep around untested / unsupported
  code.
 
  This seems kind of a no brainer.

 +1, the patch has already been posted.


  Drop of /v1.1 ?
  ---
 
  Only new deploys are really going to base off of our in tree
  config. I'm not convinced that we want to encourage people setting up
  new /v1.1 endpoint in tree.
 
  I'm not convinced there is a deprecation path here because the ones I
  could imagine would involve people changing their paste.ini to include
  a deprecation checking piece of code.
 
  Honestly, I don't care strongly either way on this one. It's cruft,
  but not dangerous cruft (unlike v3).

 I'd like to propose /v1.1 removal.
 http://developer.openstack.org/api-ref.html also doesn't contain the /v1.1
 endpoint.
 So the endpoint seems confusing for new users now.


Maybe we should just comment out the /v1.1 stuff for now and remove it
completely early in Lemming. That way if any new deployments need it for
some strange reason it's easy to enable.
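
That is, ship something like (stanza names as in the examples above):

    [composite:osapi_compute]
    use = call:nova.api.openstack.urlmap:urlmap_factory
    /: oscomputeversions
    # deprecated; uncomment only if a legacy client still needs it:
    # /v1.1: openstack_compute_api_v2
    /v2: openstack_compute_api_v21
    /v2.1: openstack_compute_api_v21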


  Nova v2
  ===
 
  This is where things get interesting.
 
  v2.1 is supposed to be equivalent to v2. The difference is moving the
  validation for data structures from the database to the WSGI layer. The
  ways in which this doesn't react like the existing APIs should be
  basically not letting you create corrupt data models which will
  explode later in unexpected and hard to determine ways. The reality is
  object validation has been sneaking this in all along anyway.
 
  The full monty approach
  ---
 
  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /: oscomputeversions
  /v1.1: openstack_compute_api_v2
  /v2: openstack_compute_api_v21
  # starting in Kilo the v21 implementation replaces the v2
  # implementation and it is suggested that you use it as the default. If
  # this causes issues with your clients you can roll back to the
  # *frozen* v2 api by commenting out the above stanza and using the
  # following instead::
  # /v2: openstack_compute_api_v2
  # if rolling back to v2 fixes your issue please file a critical bug
  # at - https://bugs.launchpad.net/nova/+bugs
 
  This would make the v2 endpoint the v21 implementation for new
  deploys. It would also make it easy for people to flip back if they
  hit an edge condition we didn't notice.
 
  In functional testing we'd still test both v2 and v2.1
 
  Tempest would move to v2.1 by default, and I think we should put an
  old v2 job on nova stable/kilo - master to help us keep track of
  regressions.
 
  The slow roll approach
  --
 
  Ship the existing file (minus v3):
 
  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /: oscomputeversions
  /v1.1: openstack_compute_api_v2
  /v2: openstack_compute_api_v2
  /v2.1: openstack_compute_api_v21
 
  The advantage here is that out-of-the-box stuff keeps working. The
  dilemma is that it's not really clear we'll get people poking at
  v2.1, because it will be out of the main path. The point of
  microversioning was to get folks onto that train soon, because it falls
  back to the existing API. And once we are convinced we're good, we can
  deprecate the old implementation.
 
  Also, things like out-of-tree EC2 support require v2.1, which is
  going to make deploys that want EC2 start relying on a /v2.1 endpoint,
  so our options for grafting that back onto /v2 in the future are more
  limited.
 
  Decision Time
  =
 
  Anyway, this is a decision we should make before freeze. The 'no
  decision' case gives us the slow roll. I think from an upstream
  perspective the full monty will probably serve us a little better,
  especially with robust release notes that explain to people how to
  move their endpoints forward.

 +1 for The full monty approach.

Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-17 Thread Ramki Krishnan
I will also be glad to participate.

Thanks,
Ramki

From: ruby.krishnasw...@orange.com [mailto:ruby.krishnasw...@orange.com]
Sent: Tuesday, March 17, 2015 5:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

Hi
I'd like to participate.

By when will you fix the meeting date?

Ruby

De : Tim Hinrichs [mailto:thinri...@vmware.com]
Envoyé : lundi 16 mars 2015 19:05
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [Congress] [Delegation] Meeting scheduling

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim



Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-17 Thread Chris Friesen

On 03/17/2015 02:33 PM, Joe Gordon wrote:


Digging up this old thread because I am working on getting multi node live
migration testing working (https://review.openstack.org/#/c/165182/), and just
ran into this issue (bug 1398999).

And I am not sure I agree with this statement. I think there is a valid case for
doing block migrate with a cinder volume attached to an instance:


* Cloud isn't using a shared filesystem for ephemeral storage
* Instance is booted from an image, and a volume is attached afterwards. An
admin wants to take the box the instance is running on offline for maintenance
with minimal impact to the instances running on it.

What is the recommended solution for that use case? If the admin disconnects and
reconnects the volume themselves, is there a risk of impacting what's running on
the instance? etc.


Interesting bug.  I think I agree with you that there isn't a good solution 
currently for instances that have a mix of shared and not-shared storage.


I'm curious what Daniel meant by saying that marking the disk shareable is not 
as reliable as we would want.


I think there is definitely a risk if the admin disconnects the volume--whether 
or not that causes problems depends on whether the application can handle that 
cleanly.


I suspect the proper cloud-aware strategy would be to just kill it and have 
another instance take over.  But that's not very helpful for 
not-fully-cloud-aware applications.


Also, since you've been playing in this area...do you know if we currently 
properly support all variations on live/cold migration, resize, evacuate, etc. 
for the boot-from-volume case?


Chris



Re: [openstack-dev] [qa][neutron] Moving network api test development to Neutron repo

2015-03-17 Thread Salvatore Orlando
With the API tests now available in the neutron repository - and
being actively developed - I would also mandate in reviews that API tests
be provided in lieu of the usual unit tests, which at the end of the day do
what the API tests are supposed to do.

This will provide better validation, and perhaps might finally allow us to
tear down the unit-test nonsense we have had so far.
It is with great shame that I must admit I introduced it as a quick way to
test all plugins in Folsom, but I never expected that contributors would
start building on that.

Hopefully we can start having unit tests which do what unit tests are
supposed to do - white-box testing of methods to provide maximum coverage and
validate their behaviour with different input values.

Salvatore


On 17 March 2015 at 22:04, Maru Newby ma...@redhat.com wrote:

 tl;dr; As per a discussion in Paris [1], development of Neutron's
 API tests is moving from the Tempest repo to the Neutron repo.
 If you are developing API tests for Neutron in Tempest, please be
 advised that, effective immediately, your efforts should be
 directed towards the Neutron repo.

 The current set of Neutron API tests in Tempest has been
 copied (along with supporting infrastructure) to
 neutron/tests/tempest [2].  Tests in this path are run as part of
 the neutron-dsvm-api job, which will shortly be gating [3].  Test
 changes that previously targeted the tempest/api/network path in
 the Tempest repo should target neutron/tests/tempest/network/api
 in the Neutron repo until further notice.

 Automatic conversion from a Tempest change to a Neutron change is
 possible:

  - cd [path to neutron repo]
  - ./tools/copy_api_tests_from_tempest.sh [path to tempest working
 directory]

 As per the Tempest guidelines for test removal [4], the tests
 currently in Tempest will remain in Tempest and continue to run
 as part of the existing jobs until we can target tests in the
 Neutron repo against stable branches and enable the use of the
 in-repo tests by defcore/refstack.

 Finally, guidelines for API test development in the Neutron repo are
 in the works and will be proposed shortly.  The guidelines will
 define policy intended to protect against backwards-incompatible
 changes to our API.

 Thanks,


 Maru

 1:
 https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects
 2: https://github.com/openstack/neutron/tree/master/neutron/tests/tempest
 3: https://review.openstack.org/#/c/164886
 4: https://wiki.openstack.org/wiki/QA/Tempest-test-removal





[openstack-dev] [qa][neutron] Moving network api test development to Neutron repo

2015-03-17 Thread Maru Newby
tl;dr; As per a discussion in Paris [1], development of Neutron's
API tests is moving from the Tempest repo to the Neutron repo.
If you are developing API tests for Neutron in Tempest, please be
advised that, effective immediately, your efforts should be
directed towards the Neutron repo.

The current set of Neutron API tests in Tempest has been
copied (along with supporting infrastructure) to
neutron/tests/tempest [2].  Tests in this path are run as part of
the neutron-dsvm-api job, which will shortly be gating [3].  Test
changes that previously targeted the tempest/api/network path in
the Tempest repo should target neutron/tests/tempest/network/api
in the Neutron repo until further notice.

Automatic conversion from a Tempest change to a Neutron change is
possible:

 - cd [path to neutron repo]
 - ./tools/copy_api_tests_from_tempest.sh [path to tempest working directory]

As per the Tempest guidelines for test removal [4], the tests
currently in Tempest will remain in Tempest and continue to run
as part of the existing jobs until we can target tests in the
Neutron repo against stable branches and enable the use of the
in-repo tests by defcore/refstack.

Finally, guidelines for API test development in the Neutron repo are
in the works and will be proposed shortly.  The guidelines will
define policy intended to protect against backwards-incompatible
changes to our API.

Thanks,


Maru

1: 
https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects
2: https://github.com/openstack/neutron/tree/master/neutron/tests/tempest
3: https://review.openstack.org/#/c/164886
4: https://wiki.openstack.org/wiki/QA/Tempest-test-removal




[openstack-dev] Barbican : Unable to create the container with the POST request using the CURL command

2015-03-17 Thread Asha Seshagiri
Hi Douglas ,

Thanks a lot for your help. It worked.

-- 
*Thanks and Regards,*
*Asha Seshagiri*


Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-17 Thread Julien Danjou
On Tue, Mar 17 2015, Robert Collins wrote:

 I think we should deprecate and remove the requirements-pyN files and
 instead use environment markers directly in requirements.txt. That
 will then flow into wheels and things should just work (plus we can
 delete more pbr code).

 I haven't tested yet (and someone should) that it does all JUST WORK,
 but that's easy: put an environment marker in a requirements.txt file
 like so:

  argparse; python_version < '3'

That's great news; I had no idea this had been available for a while. :)

I've just tried it on Gnocchi https://review.openstack.org/#/c/164994/

Long story short, it *almost* works. The current problem is that pbr is
parsing the test-requirements.txt file and using it to feed the
tests_require variable of setuptools, and the format does not please it.

I'll try to write a patch on pbr to fix that.
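For reference, a requirements.txt using environment markers might look like
this (package names illustrative, not Gnocchi's actual requirements):

  argparse; python_version < '3'
  enum34; python_version < '3.4'
  lxml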

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




Re: [openstack-dev] release note items ??

2015-03-17 Thread Thierry Carrez
Manickam, Kanagaraj wrote:
 Is there any process we follow to create the release notes (ex:
 https://wiki.openstack.org/wiki/ReleaseNotes/Juno) for each release of
 OpenStack, by collecting details across different projects in OpenStack?

The release notes (see [1] for Kilo) are edited as a wiki page, and
every project is responsible for filling its section.

[1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo

In the last week(s) before release, I assess the empty entries and chase
down the corresponding team(s) so that they fill their part.

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-17 Thread Sean Dague
On 03/16/2015 10:56 PM, Robert Collins wrote:
 On 17 March 2015 at 14:27, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:
 
 I am worried about SDKs making requests that have additional JSON
 attributes that were previously ignored by v2, but will be considered
 invalid by the v2.1 validation code. If we were to just strip out the
 extra items, rather than error out the request (when you don't specify
 a microversion), I would be less worried about the transition. Maybe
  that's what we do?

 Nice point.
 That is a main difference in API behaviors between v2 and v2.1 APIs.
 If SDKs pass additional JSON attributes to Nova API now, developers
  need to fix/remove these attributes, because that is a bug on the SDK
  side.
  These attributes are unused and meaningless, so some SDK APIs
  would contain problems if they pass these kinds of attributes.

  Sometimes it was difficult to know which attributes were available before
  the v2.1 API, so the full monty approach will clarify problems in SDKs
  and improve SDK quality.

 Thanks
 Ken Ohmichi
 
 Better at the cost of forcing all existing users to upgrade just to
 keep using code of their own that already worked.
 
 Not really 'better' IMO. Different surely.
 
 We could (should) add Warning: headers to inform about this, but
 breaking isn't healthy IMO.

No, that's the point, *no* existing users are forced to upgrade. This is
going to require a manual change after your upgrade to get this new
default behavior, which we'll need to explain in the release notes.

This is not a code change, it's a sample config change.
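(For illustration, the v2 vs v2.1 difference discussed above comes down to
schema strictness. A minimal sketch using the jsonschema library - the
schema here is hypothetical, not Nova's actual one:)

  import jsonschema

  server_schema = {
      "type": "object",
      "properties": {"name": {"type": "string"}},
      # v2.1-style validation: unknown attributes are rejected
      "additionalProperties": False,
  }

  request = {"name": "vm1", "bogus": 1}
  # v2 silently ignored "bogus"; v2.1-style validation raises
  # jsonschema.exceptions.ValidationError here:
  jsonschema.validate(request, server_schema)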

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [fuel] Fuel meeting participation.

2015-03-17 Thread Tomasz Napierala

 On 17 Mar 2015, at 01:12, Andrew Woodward xar...@gmail.com wrote:
 
 The last couple of meetings have been visibly low on participation. Most 
 notably anyone not involved with the planned schedule is not participating. 
  Often I find that the discussion leads to wanting to talk with more of the 
 devs, but they are frequently not available. 
 
 Is there any reason for the low participation ( time / schedule )? Any one 
 have any thoughts as to how we can improve attendance?

I think in the last couple of months the quality of the meetings has degraded 
and conversations have switched to other channels. One thing I find particularly 
annoying is last-minute changes to the agenda, often unexpected and surprising. 
To fix that we need to plan in advance. I would suggest putting things on the 
agenda once they come up, e.g. during other discussions, 1-on-1s, etc. I also 
think it would be beneficial if we defined the purpose of this meeting.

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com









Re: [openstack-dev] [keystone][congress][group-policy] Fetching policy from a remote source

2015-03-17 Thread David Chadwick
Hi Adam

prior art is the publish-subscribe mechanism. I don't know whether Keystone
already has this implemented, or whether a Python implementation exists,
without doing some research.

regards

David


On 16/03/2015 18:08, Sumit Naiksatam wrote:
 On Mon, Mar 16, 2015 at 8:10 AM, Adam Young ayo...@redhat.com wrote:
 Oslo policy has been released as a stand-alone library.  This is great, in
 that the rules engine is relatively non-application-specific, and I assume
 that all of the policy-based projects are planning to migrate over to using
 the policy library instead of the incubated version.

 Part of the push toward a more dynamic policy mechanism is figuring out how
 to fetch and cache the policy files from Keystone.  I suspect that the other
 services have the same issue.

 1.  How long should a service hold on to the cached version of the policy
 file?
 2.  How can we avoid the stampeding herd if Keystone pushes out a
 notification change event?
 3.  How do we securely cache the file and still make it possible to debug.

 The PKI tokens have a lot of these same issues, and have a one-off mechanism
 for handling it.  We should probably look into commonizing this function.

 The general mechanism should be fetch-and-cache, but I think it should
 not be tied to keystone token validation so much as capable of using it if
 necessary.  I'm guessing that access to policy rules is typically
 controlled by services that validate auth tokens.  Is this correct?

 Maybe the right level of abstraction is a callback function for fetching the
 file to be cached, with the default being something that uses
 python-requests, and then an auth-plugin-based alternative for those that
 require Keystone tokens.

 Before I go off and write a spec, I'd like to know what the prior art is
 here.  I'd also like to know if the oslo policy library is part of the
 plans for the other groups that are doing policy based work?

 
 Thanks Adam for bringing this up. As regards the group-based-policy
 (GBP) project, we leverage the access control policy just like other
 projects do, so the questions you raise above are definitely relevant
 to GBP. We do not manage the lifecycle of this aspect of policy, so we
 hope to use whatever comes out of this discussion.
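
 (For illustration, the callback-based fetch-and-cache Adam describes above
 might look roughly like this - a sketch, all names hypothetical:)

   import time
   import requests

   _cache = {}  # url -> (fetched_at, body)

   def fetch_policy(url, fetcher=None, max_age=300):
       # Default fetcher uses python-requests; an auth-plugin-based
       # alternative could be passed in for Keystone-protected endpoints.
       fetcher = fetcher or (lambda u: requests.get(u).text)
       entry = _cache.get(url)
       if entry and time.time() - entry[0] < max_age:
           return entry[1]
       body = fetcher(url)
       _cache[url] = (time.time(), body)
       return body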
 


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Dolph Mathews
On Tuesday, March 17, 2015, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 Encryption per se does not decrease token size, the best it can do is
 keep the token size the same size.


Correct.


 So using Fernet tokens will not on
 its own alter the token size. Reducing the size must come from putting
 less information in the token.


Fernet tokens carry far less information than PKI tokens, and thus have a
smaller relative size.


 If the token recipient has to always go
 back to Keystone to get the token validated, then all the token needs to
 be is a large random number that Keystone can look up in its database to
 retrieve the user's permissions.


Correct, but then those large random numbers must be persisted and
distributed, as is the case with UUID tokens. However, Fernet tokens carry
just enough information to indicate which permissions apply, and keystone
can build a validation response from there, without persisting anything for
every token issued.


 In this case no encryption is needed at
 all.


Fernet tokens encrypt everything but the token's creation timestamp, but
that's just a perk that some deployers will find attractive, not a critical
design feature that we're utilizing today.
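
(For illustration, a token built with the cryptography library's Fernet
implementation - not keystone's actual code - shows how the token size
tracks the payload size:)

  from cryptography.fernet import Fernet

  f = Fernet(Fernet.generate_key())
  # A minimal payload: just enough to identify the user/project/scope.
  token = f.encrypt(b'{"user": "u1", "project": "p1"}')
  print(len(token))  # on the order of a couple hundred bytes, not 500KB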



 regards

 David

 On 17/03/2015 06:51, joehuang wrote:
  Hi, Adam,
 
 
 
  Good to know Fernet token is on the way to reduce the token size and
  token persistence issues.
 
 
 
   It's not realistic to deploy the KeyStone service (including the backend store)
   in each site if the number of sites, for example, is more than 10.  The reason is
   that the stored data, including data related to revocation, needs to be
   replicated to all sites in a synchronous manner. Otherwise, the API
  server might attempt to use the token before it's able to be validated
  in the target site.
 
 
 
   When Fernet tokens are used in a multisite scenario, each API request will
   ask for token validation from KeyStone. The cloud will be out of service
   if KeyStone stops working; therefore the KeyStone service needs to run in
   several sites.
 
 
 
  For reliability purpose, I suggest that the keystone client should
  provide a fail-safe design: primary KeyStone server, the second KeyStone
   server (or even a third KeyStone server). If the primary KeyStone
  server is out of service, then the KeyStone client will try the second
  KeyStone server. Different KeyStone client may be configured with
  different primary KeyStone server and the second KeyStone server.
 
 
 
  Best Regards
 
  Chaoyi Huang ( Joe Huang )
 
 
 
   *From:* Adam Young [mailto:ayo...@redhat.com]
   *Sent:* Monday, March 16, 2015 10:52 PM
   *To:* openstack-dev@lists.openstack.org
  *Subject:* Re: [openstack-dev] [opnfv-tech-discuss]
  [Keystone][Multisite] Huge token size
 
 
 
  On 03/16/2015 05:33 AM, joehuang wrote:
 
  [Topic]: Huge token size
 
 
 
  Hello,
 
 
 
  As you may or may not be aware of, a requirement project proposal
  Multisite[1] was started in OPNFV in order to identify gaps in
  implementing OpenStack across multiple sites.
 
 
 
  Although the proposal has not been approved yet, we’ve started to
  run some experiments to try out different methods. One of the
   problems we identified in those experiments is that, when we want to
   use a shared KeyStone for 101 regions (including ~500 endpoints),
   the token size is huge (the token format is PKI); please see details
  in the attachments:
 
 
 
  token_catalog.txt, 162KB: catalog list included in the token
 
  token_pki.txt, 536KB: non-compressed token size
 
  token_pkiz.txt, 40KB: compressed token size
 
 
 
   I understand that KeyStone has a mechanism like endpoint_filter to reduce
   the size of the token; however, this requires managing which of many (hard to
   identify the exact number) endpoints can be seen by a project, and the size
   is not easy to control exactly.
 
 
 
   Do you guys have any insights into how to reduce the token size if PKI
   tokens are used? Is there any BP related to this issue? Or should we file
   one to tackle this?
 
 
 
  Right now there is an effort for non-multisite to get a handle on the
  problem.  The Fernet token format will make it possible for a token to
  be ephemeral.  The scheme is this:
 
  Encode the minimal amount of Data into the token possible.
 
  Always validate the token on the Keystone server.
 
  On the Keystone server, the token validation is performed by checking
  the message HMAC, and then expanding out the data.
 
   This concept is expandable to multisite in two ways.
  
   For a completely trusted and symmetric multisite deployment, the
   keystone servers can share keys.  The Kite project
   http://git.openstack.org/cgit/openstack/kite was originally spun up to
   manage this sort of symmetric key sharing, and is a natural extension.
 
   If two keystone servers need to sign for and validate separate sets of
   data (future work), the form of signing could be returned to Asymmetric
  

Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-17 Thread Julien Danjou
On Tue, Mar 17 2015, Julien Danjou wrote:

 I'll try to write a patch on pbr to fix that.

https://review.openstack.org/#/c/165015/ should fix that; tested against
my Gnocchi patch and seems to do the trick.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Zhipeng Huang
Hi Adam,

The Fernet token and Project Kite look very interesting. I think it might
be helpful to work together to tackle the problem - shall we put this issue
on the keystone design summit agenda for further follow-up discussion?

On Tue, Mar 17, 2015 at 3:30 PM, David Chadwick d.w.chadw...@kent.ac.uk
wrote:

 Encryption per se does not decrease token size, the best it can do is
 keep the token size the same size. So using Fernet tokens will not on
 its own alter the token size. Reducing the size must come from putting
 less information in the token. If the token recipient has to always go
 back to Keystone to get the token validated, then all the token needs to
 be is a large random number that Keystone can look up in its database to
 retrieve the user's permissions. In this case no encryption is needed at
 all.

 regards

 David

 On 17/03/2015 06:51, joehuang wrote:
  Hi, Adam,
 
 
 
  Good to know Fernet token is on the way to reduce the token size and
  token persistence issues.
 
 
 
   It's not realistic to deploy the KeyStone service (including the backend store)
   in each site if the number of sites, for example, is more than 10.  The reason is
   that the stored data, including data related to revocation, needs to be
   replicated to all sites in a synchronous manner. Otherwise, the API
  server might attempt to use the token before it's able to be validated
  in the target site.
 
 
 
   When Fernet tokens are used in a multisite scenario, each API request will
   ask for token validation from KeyStone. The cloud will be out of service
   if KeyStone stops working; therefore the KeyStone service needs to run in
   several sites.
 
 
 
  For reliability purpose, I suggest that the keystone client should
  provide a fail-safe design: primary KeyStone server, the second KeyStone
   server (or even a third KeyStone server). If the primary KeyStone
  server is out of service, then the KeyStone client will try the second
  KeyStone server. Different KeyStone client may be configured with
  different primary KeyStone server and the second KeyStone server.
 
 
 
  Best Regards
 
  Chaoyi Huang ( Joe Huang )
 
 
 
  *From:*Adam Young [mailto:ayo...@redhat.com]
  *Sent:* Monday, March 16, 2015 10:52 PM
  *To:* openstack-dev@lists.openstack.org
  *Subject:* Re: [openstack-dev] [opnfv-tech-discuss]
  [Keystone][Multisite] Huge token size
 
 
 
  On 03/16/2015 05:33 AM, joehuang wrote:
 
  [Topic]: Huge token size
 
 
 
  Hello,
 
 
 
  As you may or may not be aware of, a requirement project proposal
  Multisite[1] was started in OPNFV in order to identify gaps in
  implementing OpenStack across multiple sites.
 
 
 
  Although the proposal has not been approved yet, we’ve started to
  run some experiments to try out different methods. One of the
   problems we identified in those experiments is that, when we want to
   use a shared KeyStone for 101 regions (including ~500 endpoints),
   the token size is huge (the token format is PKI); please see details
  in the attachments:
 
 
 
  token_catalog.txt, 162KB: catalog list included in the token
 
  token_pki.txt, 536KB: non-compressed token size
 
  token_pkiz.txt, 40KB: compressed token size
 
 
 
   I understand that KeyStone has a mechanism like endpoint_filter to reduce
   the size of the token; however, this requires managing which of many (hard to
   identify the exact number) endpoints can be seen by a project, and the size
   is not easy to control exactly.
 
 
 
   Do you guys have any insights into how to reduce the token size if PKI
   tokens are used? Is there any BP related to this issue? Or should we file
   one to tackle this?
 
 
 
  Right now there is an effort for non-multisite to get a handle on the
  problem.  The Fernet token format will make it possible for a token to
  be ephemeral.  The scheme is this:
 
  Encode the minimal amount of Data into the token possible.
 
  Always validate the token on the Keystone server.
 
  On the Keystone server, the token validation is performed by checking
  the message HMAC, and then expanding out the data.
 
   This concept is expandable to multisite in two ways.
  
   For a completely trusted and symmetric multisite deployment, the
   keystone servers can share keys.  The Kite project
   http://git.openstack.org/cgit/openstack/kite was originally spun up to
   manage this sort of symmetric key sharing, and is a natural extension.
 
   If two keystone servers need to sign for and validate separate sets of
   data (future work), the form of signing could be returned to Asymmetric
   Crypto.  This would lead to a minimal token size of about 800 bytes (I
  haven't tested exactly).  It would mean that any service responsible for
  validating tokens would need to fetch and cache the responses for things
  like catalog and role assignments.
 
   The ephemeral nature of the Fernet specification means that revocation
   data needs to be persisted separately from the token, so it is not 100%
  ephemeral, but the 

[openstack-dev] Cross-Project meeting, Tue March 17th, 21:00 UTC

2015-03-17 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a (likely short) cross-project meeting today at 21:00 UTC,
with the following agenda:

* Progress on Swift and Keystone developing next (incompatible) version
of client libs in openstack-sdk
* openstack-specs discussion
  * Add library stable release procedures/policy [1] -- final call for
reviews before approval
  * Managing stable branch requirements [2] -- review time
* Open discussion & announcements

[1] https://review.openstack.org/#/c/155072/
[2] https://review.openstack.org/#/c/161047/

See you there !

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Fuel] FFE python-fuelclient improvements

2015-03-17 Thread Evgeniy L
+1, because those patches are simple and don't look destructive.

On Mon, Mar 16, 2015 at 7:43 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks,

 due to some technical issues we were unable to merge Cliff integration
 patches to keep ISO build jobs alive.
 Since the problem is now fixed and we are unblocked, I'd like to ask for an
 FFE in order to merge it all.


 - romcheg






Re: [openstack-dev] [UX] contribution

2015-03-17 Thread Liz Blanchard
Hi Luke,

Thanks for your interest in OpenStack UX! 

Piet - would you be able to add Luke to the OpenStack InVision account? After 
you are added you will be able to see and comment on any designs we’ve posted 
so far for review. You can of course post any of your own designs for review as 
well.

With respect to poking around at Horizon, you might want to try out 
trystack.org. I’ve had a great experience with this once my Facebook account 
was hooked up for authentication.

We haven’t been running the weekly meetings on Mondays as there hasn’t been a 
lot of activity to cover, but I encourage you to attend the weekly Horizon 
meetings to become familiar with some of the development topics.[1]

Please feel free to join the #openstack-ux channel on freenode and ping me 
(lblanchard) if you have any specific questions!

Best,
Liz

[1] https://wiki.openstack.org/wiki/Meetings/Horizon

On Mar 17, 2015, at 12:44 AM, Łukasz B 01.lukaszblon...@gmail.com wrote:

 Hi.
 
 My name is Luke and I am a UX designer willing to contribute to OpenStack user 
 experience. I am currently digging through the wiki trying to find out how to 
 contribute.
 
 So far I kindly ask you to add me to the OpenStack InVision account. 
 I am also wondering if there is any test instance with Horizon installed that 
 I could get access to.
 
 BTW I tried to reach you today via freenode. Do the weekly meetings 
 still take place on Mondays?
 
 Looking forward to hearing from you.
 
 Luke.



[openstack-dev] [release] oslo.db 1.7.1

2015-03-17 Thread Doug Hellmann
The Oslo team is thrilled to announce the release of:

oslo.db 1.7.1: oslo.db library

This is a patch release intended for use with the Kilo series of projects.

For more details, please see the git log history below and:

http://launchpad.net/oslo.db/+milestone/1.7.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

Changes in oslo.db 1.7.0..1.7.1
---

3e6a30c Add process guards + invalidate to the connection pool

Diffstat (except docs and test files)
-

oslo_db/sqlalchemy/session.py   | 33 +
2 files changed, 61 insertions(+)


Re: [openstack-dev] [Fuel] FFE python-fuelclient improvements

2015-03-17 Thread Mike Scherbakov
Roman,
it would be great if you share the links. Thanks!

On Tue, Mar 17, 2015 at 5:52 AM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

 Yep, I think we can merge them. +1 from my side.

 On Tue, Mar 17, 2015 at 12:50 PM, Evgeniy L e...@mirantis.com wrote:
  +1, because those patches are simple and don't look destructive.
 
  On Mon, Mar 16, 2015 at 7:43 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:
 
  Hi folks,
 
  due to some technical issues we were unable to merge Cliff integration
  patches to keep ISO build jobs alive.
  Since the problem is now fixed and we are unblocked, I'd like to ask for an
  FFE in order to merge it all.
 
 
  - romcheg
 
 
 




-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [fuel] fuel-utils package

2015-03-17 Thread Andrew Woodward
On Tue, Mar 17, 2015 at 12:58 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Andrew

 Thank you for pointing this out. We are working on packaging all of the
 fuel library components right now. We will make it happen in a couple of
 weeks - me and other guys are currently working on
 https://blueprints.launchpad.net/fuel/+spec/package-fuel-components to
  clean up our make system and switch to a packages-only model - this should
 make our life easier in terms of providing updates.

 Regarding fuel-utils - you are absolutely right, but I am not sure we need
 it now - it is a leftover from old times when OVS could not handle
  gratuitous ARPs correctly - we will need to recheck whether we still need it.


Thoughts on who should test this? I can, if someone clarifies how to test
it.

If we are going to keep it around, then we will need to make the same
openrc dep changes to it as are in q-agent-cleanup [5]

[5] https://review.openstack.org/#/c/158996/


 On Tue, Mar 17, 2015 at 4:17 AM, Andrew Woodward xar...@gmail.com wrote:



 On Mon, Mar 16, 2015 at 4:35 PM, Andrew Woodward xar...@gmail.com
 wrote:

 While working to remove deps on /root/openrc [1][2], I found that we
  have a package for fuel-utils to provide a tool called fdb-cleaner. It
 looks like the source is [3] and its built at [4]

 I see that there are no tests, or CI to test integrity of the code
 there. I propose that we move this to stackforge.

  Also, I think we would want to move code in modules/files and
  modules/templates (like q-agent-cleanup.py and OCF scripts) so that we
 have a chance of being able to upgrade them. It seems like this could be a
 proper location for these kinds of items so they don't live unversioned and
 hidden in our manifests.


 The missing links
 [1] https://bugs.launchpad.net/fuel/+bug/1396594
 [2] https://bugs.launchpad.net/fuel/+bug/1347542
 [3] https://github.com/xenolog/fuel-utils
 [4] https://review.fuel-infra.org/#/admin/projects/?filter=fuel-utils

 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community




 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community





 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
  www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com





-- 
Andrew
Mirantis
Fuel community ambassador
Ceph community


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-17 Thread Daneyon Hansen (danehans)

Congratulations Steve!

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: Angus Salkeld asalk...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 17, 2015 at 5:05 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Kolla] PTL Candidacy

There have been no other candidates within the allowed time, so congratulations 
Steve on being the new Kolla PTL.

Regards
Angus Salkeld



On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld 
asalk...@mirantis.com wrote:
Candidacy confirmed.

-Angus

On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) 
std...@cisco.com wrote:
I am running for PTL for the Kolla project.  I have been executing in an 
unofficial PTL capacity for the project for the Kilo cycle, but I feel it is 
important for our community to have an elected PTL and have asked Angus 
Salkeld, who has no stake in the outcome of the election, to officiate it [1].

For the Kilo cycle our community went from zero LOC to a fully working 
implementation of most of the services based upon Kubernetes as the backend.  
Recently I led the effort to remove Kubernetes as a backend and provide 
container contents, building, and management on bare metal using docker-compose 
which is nearly finished.  At the conclusion of Kilo, it should be possible 
from one shell script to start an AIO full deployment of all of the current 
OpenStack git-namespaced services using containers built from RPM packaging.

For Liberty, I’d like to take our community and code to the next level.  Since 
our containers are fairly solid, I’d like to integrate with existing projects 
such as TripleO, os-ansible-deployment, or Fuel.  Alternatively the community 
has shown some interest in creating a multi-node HA-ified installation 
toolchain.

I am deeply committed to leading the community where the core developers want 
the project to go, wherever that may be.

I am strongly in favor of adding HA features to our container architecture.

I would like to add .deb package support and from-source support to our docker 
container build system.

I would like to implement a reference architecture where our containers can be 
used as a building block for deploying a reference platform of 3 controller 
nodes, ~100 compute nodes, and ~10 storage nodes.

I am open to expanding our scope to address full deployment, but would prefer 
to merge our work with one or more existing upstreams such as TripleO, 
os-ansible-deployment, and Fuel.

Finally I want to finish the job on functional testing, so all of our 
containers are functionally checked and gated per commit on Fedora, CentOS, and 
Ubuntu.

I am experienced as a PTL, leading the Heat Orchestration program from zero LOC 
through OpenStack integration for 3 development cycles.  I write code as a PTL 
and was instrumental in getting the Magnum Container Service code-base kicked 
off from zero LOC where Adrian Otto serves as PTL.  My past experiences include 
leading Corosync from zero LOC to a stable building block of High Availability 
in Linux.  Prior to that I was part of a team that implemented Carrier Grade 
Linux.  I have a deep and broad understanding of open source, software 
development, high performance team leadership, and distributed computing.

I would be pleased to serve as PTL for Kolla for the Liberty cycle and welcome 
your vote.

Regards
-steve

[1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015






Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new mechanisms (websso, kerberos, k2k etc)

2015-03-17 Thread Jamie Lennox


- Original Message -
 From: Douglas Fish drf...@us.ibm.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 18, 2015 2:07:56 AM
 Subject: Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new 
 mechanisms (websso, kerberos, k2k etc)
 
 
 Steve Martinelli steve...@ca.ibm.com wrote on 03/17/2015 12:52:33 AM:
 
  From: Steve Martinelli steve...@ca.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 03/17/2015 12:55 AM
  Subject: Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth
  for new mechanisms (websso, kerberos, k2k etc)
 
  I like proposal 1 better, but only because I am already familiar
  with how plugins interact with keystoneclient. The websso work is (i
  think) pretty close to getting merged, and could easily be tweaked
  to use a token plugin (when it's ready). I think the same can be
  said for our k2k issue, but I'm not sure.
 
  Thanks,
 
  Steve Martinelli
  OpenStack Keystone Core
 
  Jamie Lennox jamielen...@redhat.com wrote on 03/15/2015 10:52:31 PM:
 
   From: Jamie Lennox jamielen...@redhat.com
   To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
   Date: 03/15/2015 10:59 PM
   Subject: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for
   new mechanisms (websso, kerberos, k2k etc)
  
   Hi All,
  
   Please note when reading this that I have no real knowledge of django
 so
   it is very possible I'm overlooking something obvious.
  
   ### Issue
  
   Django OpenStack Auth (DOA) has always been tightly coupled to the
   notion of a username and password.
   As keystone progresses and new authentication mechanisms become
   available to the project we need a way to extend DOA to keep up with
 it.
   However the basic processes of DOA are going to be much the same, it
   still needs to fetch an unscoped token, list available projects and
   handle rescoping and this is too much for every extension mechanism to
   reimplement.
   There is also a fairly tight coupling between how DOA populates the
   request and sets up a User object that we don't really want to reuse.
  
   There are a couple of authentication mechanisms that are currently
 being
   proposed that are requiring this ability immediately.
  
   * websso: https://review.openstack.org/136178
   * kerberos: https://review.openstack.org/#/c/153910/ (patchset 2).
  
   and to a certain extent:
  
   * k2k: https://review.openstack.org/159910
  
   Enabling and using these different authentication mechanisms is going
 to
   need to be configured by an admin at deployment time.
  
   Given that we want to share the basic scoping/rescoping logic between
   these projects I can essentially see two ways to enable this.
  
   ### Proposal 1 - Add plugins to DOA
  
   The easiest way I can see of doing this is to add a plugin model to the
   existing DOA structure.
   The initial differentiating component for all these mechanisms is the
   retrieval of an unscoped token.
  
   We can take the existing DOA structure and simply make that part
   pluggable and add interfaces to that plugin as required in the future.
  
   Review: https://review.openstack.org/#/c/153910/
  
   Pros:
  
   * Fairly simple and extensible as required.
   * Small plugin interface.
  
   Cons:
  
   * Ignores that django already has an authentication plugin system.
   * Doesn't work well for adding views that run these backends.
  
   ### Proposal 2 - Make the existing DOA subclassable.
  
   The next method is to essentially re-use the existing Django
   authentication module architecture.
   We can extract into a base class all the current logic around token
   handling and develop new modules around that.
  
   Review: https://review.openstack.org/#/c/164071/
   An example of using it:
   https://github.com/jamielennox/django-openstack-auth-kerberos
  
   Pros:
  
   * Reusing Django concepts.
   * Seems easier to handle adding of views.
  
   Cons:
  
   * DOA has to start worrying about public interfaces.
  
   ### Required reviews:
  
   Either way I think these two reviews are going to be required to make
   this work:
  
   * Redirect to login page: https://review.openstack.org/#/c/153174/ - If
   we want apache modules to start handling parts of auth we need to mount
   those at dedicated paths, we can't put kerberos login at /
   * Additional auth urls: https://review.openstack.org/#/c/164068/ - We
   need to register additional views so that we can handle the output of
   these apache modules and call the correct authenticate() parameters.
  
   ### Conclusion
  
   Essentially either of these could work and both will require some
   tweaking and extending to be useful in all situations.
  
   However I am kind of passing through on DOA and Django and would like
   someone with more experience in the field to comment on what feels more
   

[openstack-dev] [Sahara][Horizon] Can't open Data Processing panel after update sahara horizon

2015-03-17 Thread Li, Chen
Hi all,

I'm working under Ubuntu 14.04 with devstack.

After the fresh devstack installation, I ran an integration test to check the 
environment.
After the test, the cluster and tested EDP jobs remained in my environment.

Then I updated sahara to the latest code.
To make the newest code work, I also did :

1.   manually downloaded python-novaclient and installed it by running 
python setup.py install

2.   ran sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade 
head

Then I restarted sahara.

I tried to delete the things remaining from the last test via the dashboard, but:

1.   The table for job_executions can't be opened anymore.

2.   When I try to delete a job, an error happens:

2015-03-18 10:34:33.031 ERROR oslo_db.sqlalchemy.exc_filters [-] DBAPIError 
exception wrapped from (IntegrityError) (1451, 'Cannot delete or update a 
parent row: a foreign key constraint fails (`sahara`.`job_executions`, 
CONSTRAINT `job_executions_ibfk_3` FOREIGN KEY (`job_id`) REFERENCES `jobs` 
(`id`))') 'DELETE FROM jobs WHERE jobs.id = %s' 
('10c36a9b-a855-44b6-af60-0effee31efc9',)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 951, 
in _execute_context
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters context)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
436, in do_execute
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in execute
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters IntegrityError: 
(1451, 'Cannot delete or update a parent row: a foreign key constraint fails 
(`sahara`.`job_executions`, CONSTRAINT `job_executions_ibfk_3` FOREIGN KEY 
(`job_id`) REFERENCES `jobs` (`id`))')
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
2015-03-18 10:34:33.073 DEBUG sahara.openstack.common.periodic_task [-] Running 
periodic task SaharaPeriodicTasks.terminate_unneeded_transient_clusters from 
(pid=8084) run_periodic_tasks 
/opt/stack/sahara/sahara/openstack/common/periodic_task.py:219
2015-03-18 10:34:33.073 DEBUG sahara.service.periodic [-] Terminating unneeded 
transient clusters from (pid=8084) terminate_unneeded_transient_clusters 
/opt/stack/sahara/sahara/service/periodic.py:131
2015-03-18 10:34:33.108 ERROR sahara.utils.api [-] Validation Error occurred: 
error_code=400, error_message=Job deletion failed on foreign key constraint
Error ID: e65b3fb1-b142-45a7-bc96-416efb14de84, error_name=DELETION_FAILED
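
For reference, the constraint in the trace means rows in job_executions that
reference the job must be removed before the job itself - roughly, using the
job id from the trace:

DELETE FROM job_executions WHERE job_id = '10c36a9b-a855-44b6-af60-0effee31efc9';
DELETE FROM jobs WHERE id = '10c36a9b-a855-44b6-af60-0effee31efc9';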

I assume this might be caused by an old horizon version, so I did:

1.   update horizon code.

2.   python manage.py compress

3.   sudo python setup.py install

4.   sudo service apache2 restart

But these only made things worse.
Now, when I click Data Processing on the dashboard, nothing happens 
anymore.

Can anyone help me here?
What did I do wrong?
How can I fix this?

I tested the sahara CLI; commands like sahara job-list and sahara job-delete 
still work.
So I guess sahara itself is working fine.

Thanks.
-chen


[openstack-dev] Gerrit downtime on 2015-03-21

2015-03-17 Thread James E. Blair
Hi,

Gerrit will be unavailable for a few hours starting at 1500 UTC on
Saturday, March 21.

Also, as Jeremy indicated in this email, its IP address will be changing:

  http://lists.openstack.org/pipermail/openstack-infra/2015-February/002425.html

If you use Gerrit from a network with egress filtering, you may need to
update your firewall.  Please see the above message for full details.

This outage is to move Gerrit to a new server running Ubuntu 14.04.
There should be no user-visible changes with this move.  This is a
preparatory step for upgrading to a new version of the Gerrit software
itself, which will happen later and we will announce separately.

Thanks,

Jim



Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-17 Thread Yathiraj Udupi (yudupi)
Hi Tim,

I posted this comment on the doc.  I am still pondering the possibility of 
having a policy-driven scheduler workflow via the Solver Scheduler placement 
engine, which is also LP-based, like the one you describe in your doc.
I know in your initial meeting you plan to go over your proposal of building a 
VM placement engine that subscribes to the Congress DSE; there I will probably 
understand the Congress workflows better and see how I could extend this 
proposal to talk to the Solver Scheduler to make the placement decisions.

The example you provide in the doc, is a very good scenario, where a VM 
placement engine should continuously monitor and trigger VM migrations.

I am also interested in the case of policy-driven scheduling for the initial 
creation of VMs. This is where, say, people will call Nova APIs and create a new 
set of VMs. Here the scheduler workflow should address the constraints 
imposed by the user's policies.

Say the simple policy is: Host's free RAM >= 0.25 * Memory_Capacity
I would like the scheduler to use this policy as defined from Congress, and 
apply it during the scheduling as part of the Nova boot call.

I am really interested in, and need help with, coming up with a solution 
integrating the Solver Scheduler. So, say I have an implementation of a 
MemoryCapacityConstraint, which takes a hint value free_memory_limit (0.25 
in this example),
could we have a policy in Datalog

placement_requirement(id) :-
nova:host(id),
solver_scheduler:applicable_constraints(id, [MemoryCapacityConstraint, ]),
applicable_metadata(id, {free_memory_limit: 0.25, })

This policy could be set and delegated by Congress to the Solver Scheduler via the 
set_policy API, or the Solver Scheduler could query Congress via a get_policy 
API to get this policy and incorporate it as part of the solver scheduler 
workflow?

Does this sound doable ?
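
To make that handshake concrete, here is a rough sketch; get_policy, 
set_policy, the payload shape, and the classes below are purely my 
assumptions, since neither API exists yet:

# Hypothetical sketch of the Congress -> Solver Scheduler delegation
# described above; none of these names are real APIs yet.
POLICY = {
    "rule": "placement_requirement(id) :- nova:host(id), ...",
    "constraints": ["MemoryCapacityConstraint"],
    "metadata": {"free_memory_limit": 0.25},
}

class FakeCongress(object):
    """Stands in for the Congress policy engine on the DSE."""
    def __init__(self, policies):
        self._policies = policies

    def get_policy(self, name):
        return self._policies[name]

class SolverScheduler(object):
    def __init__(self, congress):
        self.congress = congress

    def build_constraints(self, policy_name):
        policy = self.congress.get_policy(policy_name)
        # Turn policy metadata into per-constraint hints, e.g. for
        # MemoryCapacityConstraint's free_memory_limit.
        return [(name, policy["metadata"]) for name in policy["constraints"]]

scheduler = SolverScheduler(FakeCongress({"placement": POLICY}))
print(scheduler.build_constraints("placement"))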

Thanks,
Yathi.



On 3/16/15, 11:05 AM, Tim Hinrichs thinri...@vmware.com wrote:

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-17 Thread Nikhil Komawar

The voting here is now closed.

The final count looks like:
1400UTC: 5 votes (3 core reviewers)
1500UTC: 6 votes (one core reviewer)
Both: 5 votes (all core reviewers)

Factoring in some subtler aspects, like the ones mentioned in the previous 
email and the preference strength expressed in the votes, 1400UTC looks more 
appropriate.

So, the new Glance meeting schedule is:
Weekly on Thursdays at 1400UTC. IRC channel: #openstack-meeting-4

The next meeting will be on March 19th at 1400UTC (the new time). See you all 
then!

Thanks,
-Nikhil


From: Rykowski, Kamil kamil.rykow...@intel.com
Sent: Tuesday, March 17, 2015 11:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

+1 for a consistent time; 1400UTC is a bit more preferred.

-Original Message-
From: Cindy Pallares [mailto:cpalla...@redhat.com]
Sent: Tuesday, March 17, 2015 4:29 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

+1 for consistent time and a slightly higher preference for 1500 UTC

On 03/17/2015 02:15 AM, Koniszewski, Pawel wrote:
 +1 for consistent time (I prefer 1400UTC)

 *From:*Fei Long Wang [mailto:feil...@catalyst.net.nz]
 *Sent:* Sunday, March 15, 2015 9:00 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance
 meeting time.

 +1 for consistent time

 On 14/03/15 10:11, Nikhil Komawar wrote:

 Here's what it looks like so far:-

 1400UTC: 3 votes (all core reviewers)

 1500UTC: 5 votes (one core reviewer)

 Both: 4 votes (all core reviewers)

 Let's wait another couple days to see if more people respond.

 I have a feeling that the weight is slightly tilted towards 1400UTC
 based on a few assumptions about the past activity of those
 participants, their cross project inputs, etc.

 Thanks,
 -Nikhil


 --
 --

 *From:*Mikhail Fedosin mfedo...@mirantis.com
 mailto:mfedo...@mirantis.com
 *Sent:* Friday, March 13, 2015 3:07 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance
 meeting time.

 Both options are good; 1500 UTC is a little better.

 +1 consistent time.

 On Fri, Mar 13, 2015 at 9:23 PM, Steve Lewis
 steve.le...@rackspace.com mailto:steve.le...@rackspace.com wrote:

 +1 consistent time

 +1 for 1500 UTC since that has come up.

 On 09/03/15 09:07, Nikhil Komawar wrote:
 
 So, the new proposal is:
 Glance meetings [1] to be conducted
 weekly on
 Thursdays at 1400UTC [2] on
 #openstack-meeting-4

 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --

 Cheers & Best regards,

 Fei Long Wang (王飞龙)

 --
 

 Senior Cloud Software Engineer

 Tel: +64-48032246

 Email: flw...@catalyst.net.nz

 Catalyst IT Limited

 Level 6, Catalyst House, 150 Willis Street, Wellington

 --
 



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Keystone][FFE] - IdP ID (remote_id) registration and validation

2015-03-17 Thread Marek Denis

Hello,

One very important feature that we have been working on in the Kilo 
development cycle is management of remote_id attributes tied to Identity 
Providers in keystone.


This work is crucial for:

-  Secure OpenStack identity federation configuration. A user is required 
to specify which Identity Provider (IdP) issues an assertion as well as 
which protocol (s)he wishes to use (typically SAML2 or OpenID Connect). 
Based on that knowledge (arbitrarily specified by the user), keystone 
fetches the mapping rules configured for the {IdP, protocol} pair and 
applies them to the assertion. As a result, a set of groups is returned, 
and by membership of those dynamically assigned groups (and later 
roles), an ephemeral user is granted access to certain OpenStack 
resources. Without remote_id attributes, a user can arbitrarily choose 
the {Identity Provider, protocol} pair without regard to the issuing 
Identity Provider. This may lead to a situation where Identity Provider X 
issues an assertion, but the user chooses the mapping ruleset dedicated 
to Identity Provider Y, effectively being granted improper groups (and 
roles). As part of various federation protocols, every Identity Provider 
issues an identifier allowing trusting peers (Keystone servers in this 
case) to reliably identify the issuer of an assertion. That said, 
remote_id attributes allow cloud administrators to match assertions with 
the Identity Provider objects configured in keystone (i.e. in the 
situation depicted above, the keystone Identity Provider Y object would 
accept assertions issued by Identity Provider Y only).


- WebSSO implementation - a highly requested feature that allows 
federation to be used in OpenStack via web browsers, especially Horizon. 
Without remote_ids, the server (keystone) is not able to determine which 
mapping rule set should be used for transforming an assertion into a set 
of local attributes (groups, users etc.).



Status of the work:

So far we have implemented and merged a feature where each Identity 
Provider object can have one remote_id specified. However, there have 
been a few requests from stakeholders for the ability to configure 
multiple remote_id attributes per Identity Provider object. This is 
extremely useful in configuring federations where tens or hundreds of 
Identity Providers work within one federation and one mapping ruleset is 
shared among them.
This was discussed and widely accepted during the Keystone mid-cycle 
meetup in January 2015. The first version of the implementation was 
proposed on February 2nd. During the implementation process we discovered 
a bug (https://bugs.launchpad.net/keystone/+bug/1426334) that was 
blocking further work. Fixing it took considerable manpower and 
significantly delayed delivery of the main feature. Eventually the bug 
was fixed, and we are now ready for final reviews (mind, the patch has 
already been reviewed and all comments and issues have been continually 
addressed) and hopefully to land in the Kilo release.
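
To illustrate, a rough sketch of setting multiple remote_ids on an 
Identity Provider via the proposed API (the payload shape follows the 
spec under review and may still change; the URL and token are 
placeholders):

import json
import requests

KEYSTONE = "http://keystone.example.com:5000/v3"   # placeholder
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
           "Content-Type": "application/json"}

body = {"identity_provider": {"remote_ids": [
    "https://idp-1.example.org/idp/shibboleth",
    "https://idp-2.example.org/idp/shibboleth",
]}}

# PATCH updates the existing IdP object; payload per the in-review spec.
resp = requests.patch(KEYSTONE + "/OS-FEDERATION/identity_providers/myidp",
                      headers=HEADERS, data=json.dumps(body))
print(resp.status_code)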


Specification link: 
https://github.com/openstack/keystone-specs/blob/master/specs/kilo/idp-id-registration.rst

Implementation link: https://review.openstack.org/#/c/152156/

I hereby ask for an exception to accept this work in the Kilo release 
cycle.


With kind regards,

--
Marek Denis
Keystone Core member

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-17 Thread Sean Dague
On 03/17/2015 04:17 PM, Robert Collins wrote:
 On 17 March 2015 at 23:48, Sean Dague s...@dague.net wrote:
 On 03/16/2015 10:56 PM, Robert Collins wrote:
 ...
 Better at the cost of forcing all existing users to upgrade just to
 keep using code of their own that already worked.

 Not really 'better' IMO. Different surely.

 We could (should) add Warning: headers to inform about this, but
 breaking isn't healthy IMO.

 No, that's the point, *no* existing users are forced to upgrade. This is
 going to require a manual change after your upgrade to get this new
 default behavior, which we'll need to explain in the release notes.

 This is not a code change, it's a sample config change.
 
 I may be confused. Let me spell out what's in my head.
 
 Firstly, new clouds will default to an API that throws errors from
 [some] existing SDK's (and perhaps also custom apps that are adding
 unexpected fields via regular SDKs). Folk driving multiple clouds who
 try to talk to these new ones will get errors and be unable to use
 those clouds until those errors are fixed. Either by fixing the SDK,
 or by going to the [now deployed] cloud and complaining.

There is a theoretical bogeyman here that doesn't really have a data
point behind it. Kenichi surveyed a bunch of the SDKs when we were looking
at strict validation in Tempest a few months back, and this kind of edge
condition was not discovered. We can dig up that survey again.

 Secondly, you say that paste.ini is a config file, but I recall Dan
 Prince saying in TripleO that they aren't config files we should be
 editing, and we should instead be using the upstream one as-is, so we
 did that there. So there's some confusion at least in some circles
 about whether these are config-for-users or not :).

If we put it in /etc, it's definitely overwritable. We treat required
changes to it in upgrade testing as something that needs to be
specifically called out. We blocked cinder changes recently over that.

I agree we (OpenStack) put things in etc that we probably shouldn't,
especially if we want interop. But they are currently in etc. I'm not
boiling that ocean today. If we believe this is all really read-only,
let's get it out of flat files some time in the near future.

 I may be jumping at shadows, like the whole
 must-have-nova-bm-to-ironic upgrade discussion was, so I'm not going
 to argue very strongly here - if my scenarios are wrong, that's cool.
 OTOH if I've described something plausible or something we don't have
 but can get data on, perhaps it's worth considering.

It's worth another survey. Honestly, it would be really great if we were
regularly testing jclouds and fog against our upstream. And this other
100-item laundry list I've got. :)

However, when it comes to what makes sense for the Kilo release, I think
there is a very good case for v2.1 on the v2 endpoint. That's how we've
been Tempest testing for two months.

-Sean

-- 
Sean Dague
http://dague.net




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-17 Thread Angus Salkeld
There have been no other candidates within the allowed time, so
congratulations Steve on being the new Kolla PTL.

Regards
Angus Salkeld



On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld asalk...@mirantis.com
wrote:

 Candidacy confirmed.

 -Angus

 On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

  I am running for PTL for the Kolla project.  I have been executing in
 an unofficial PTL capacity for the project for the Kilo cycle, but I feel
 it is important for our community to have an elected PTL and have asked
 Angus Salkeld, who has no stake in the outcome of the election, to officiate the
 election [1].

  For the Kilo cycle our community went from zero LOC to a fully working
 implementation of most of the services based upon Kubernetes as the
 backend.  Recently I led the effort to remove Kubernetes as a backend and
 provide container contents, building, and management on bare metal using
 docker-compose which is nearly finished.  At the conclusion of Kilo, it
 should be possible from one shell script to start an AIO full deployment of
 all of the current OpenStack git-namespaced services using containers built
 from RPM packaging.

  For Liberty, I’d like to take our community and code to the next
 level.  Since our containers are fairly solid, I’d like to integrate with
 existing projects such as TripleO, os-ansible-deployment, or Fuel.
 Alternatively the community has shown some interest in creating a
 multi-node HA-ified installation toolchain.

  I am deeply committed to leading the community where the core
 developers want the project to go, wherever that may be.

  I am strongly in favor of adding HA features to our container
 architecture.

  I would like to add .deb package support and from-source support to our
 docker container build system.

  I would like to implement a reference architecture where our containers
 can be used as a building block for deploying a reference platform of 3
 controller nodes, ~100 compute nodes, and ~10 storage nodes.

  I am open to expanding our scope to address full deployment, but would
 prefer to merge our work with one or more existing upstreams such as
 TripleO, os-ansible-deployment, and Fuel.

  Finally I want to finish the job on functional testing, so all of our
 containers are functionally checked and gated per commit on Fedora, CentOS,
 and Ubuntu.

  I am experienced as a PTL, leading the Heat Orchestration program from
 zero LOC through OpenStack integration for 3 development cycles.  I write
 code as a PTL and was instrumental in getting the Magnum Container Service
 code-base kicked off from zero LOC where Adrian Otto serves as PTL.  My
 past experiences include leading Corosync from zero LOC to a stable
 building block of High Availability in Linux.  Prior to that I was part of
 a team that implemented Carrier Grade Linux.  I have a deep and broad
 understanding of open source, software development, high performance team
 leadership, and distributed computing.

  I would be pleased to serve as PTL for Kolla for the Liberty cycle and
 welcome your vote.

  Regards
 -steve

  [1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][FFE] - IdP ID (remote_id) registration and validation

2015-03-17 Thread Steve Martinelli
I'd also be happy to sponsor this work.

Thanks,

Steve Martinelli
OpenStack Keystone Core

Marek Denis marek.de...@cern.ch wrote on 03/17/2015 06:28:58 PM:

 From: Marek Denis marek.de...@cern.ch
 To: openstack-dev@lists.openstack.org
 Date: 03/17/2015 06:35 PM
 Subject: [openstack-dev] [Keystone][FFE] - IdP ID (remote_id) 
 registration and validation
 
 [...]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Capability Discovery API

2015-03-17 Thread Davis, Amos (PaaS-Core)
All,
The Application Ecosystem Working Group realized during the mid-cycle meetup in 
Philadelphia that there is no way to get the capabilities of an OpenStack cloud 
so that applications can measure their compatibility against that cloud.  In 
other words, if we create an OpenStack App Marketplace and have developers 
write apps for that marketplace, we'll have no way for those apps to verify 
that they can run on a given cloud.  We'd like to ask that a standard set of 
API calls be created that allows a cloud to list its capabilities.  The cloud 
features or capabilities list should return True/False API responses and could 
include, but is not limited to, the examples below (a purely hypothetical 
response sketch follows the list).  Also, 
https://review.openstack.org/#/c/162655/ may be a good starting point for this 
request.


Glance:
URL/upload
types (raw, qcow, etc)

Nova:
Suspend/Resume VM
Resize
Flavor sizes supported
Images Available
Quota Limits
VNC support

Neutron:
Types of Networking (neutron, neutron + ml2, nova-network aka linux bridge, 
other)
Types of SDN in use?
Shared tenant networks
Anything else?


Ceph/Cinder:
LVM or other?
SCSI-backed?
Any others?

Swift:
?
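
To make the True/False idea concrete, here is a purely hypothetical sketch 
of what such a capability response could look like - none of this API 
exists today, and all names are invented for illustration only:

# Hypothetical capability listing; names invented for illustration only.
capabilities = {
    "nova": {"suspend_resume": True, "resize": True, "vnc": False},
    "glance": {"image_types": {"raw": True, "qcow2": True, "vmdk": False}},
    "neutron": {"shared_tenant_networks": True},
}

def supports(path):
    """Walk a dotted capability path, e.g. 'nova.resize'."""
    node = capabilities
    for part in path.split("."):
        node = node[part]
    return node

assert supports("nova.resize") is True
assert supports("glance.image_types.vmdk") is False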

Best Regards,
Amos Davis
amos.da...@hp.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-17 Thread Robert Collins
On 17 March 2015 at 23:48, Sean Dague s...@dague.net wrote:
 On 03/16/2015 10:56 PM, Robert Collins wrote:
...
 Better at the cost of forcing all existing users to upgrade just to
 keep using code of their own that already worked.

 Not really 'better' IMO. Different surely.

 We could (should) add Warning: headers to inform about this, but
 breaking isn't healthy IMO.

 No, that's the point, *no* existing users are forced to upgrade. This is
 going to require a manual change after your upgrade to get this new
 default behavior, which we'll need to explain in the release notes.

 This is not a code change, it's a sample config change.

I may be confused. Let me spell out what's in my head.

Firstly, new clouds will default to an API that throws errors from
[some] existing SDK's (and perhaps also custom apps that are adding
unexpected fields via regular SDKs). Folk driving multiple clouds who
try to talk to these new ones will get errors and be unable to use
those clouds until those errors are fixed. Either by fixing the SDK,
or by going to the [now deployed] cloud and complaining.

Secondly, you say that paste.ini is a config file, but I recall Dan
Prince saying in TripleO that they aren't config files we should be
editing, and we should instead be using the upstream one as-is, so we
did that there. So there's some confusion at least in some circles
about whether these are config-for-users or not :).

I may be jumping at shadows, like the whole
must-have-nova-bm-to-ironic upgrade discussion was, so I'm not going
to argue very strongly here - if my scenarios are wrong, that's cool.
OTOH if I've described something plausible or something we don't have
but can get data on, perhaps it's worth considering.

HTH
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][FFE] - Reseller Implementation

2015-03-17 Thread Raildo Mascena
Hi Folks,

We discussed the Reseller use case a lot at the last Summit. OpenStack
needs to grow support for hierarchical ownership of objects. This enables
the management of subsets of users and projects in a way that is much more
comfortable for private clouds, besides giving public cloud providers the
option of reselling a piece of their cloud.

More detailed information can be found in the spec for this change at:
https://review.openstack.org/#/c/139824

The current code change for this is split into 8 patches (to make it easier
to review). We currently have 7 patches in code review and we are finishing
the last one.

Here is the workflow of our patches:

1- Adding a field that enables a project to also act as a domain (a rough
sketch of the resulting API shape follows this patch list):
https://review.openstack.org/#/c/157427/

2- Change some constraints and create some options to list projects (for
is_domain flag, for parent_id):
https://review.openstack.org/#/c/159944/
https://review.openstack.org/#/c/158398/
https://review.openstack.org/#/c/161378/
https://review.openstack.org/#/c/158372/

3- Reflect domain operations onto the project table, mapping domains to
projects that have the is_domain attribute set to True. In addition, this
changes the read operations to use only the project table. Then we will
drop the Domain table.
https://review.openstack.org/#/c/143763/
https://review.openstack.org/#/c/161854/ (the only patch still in progress)

4- Finally, the inherited role will not be applied to a subdomain and its
sub hierarchy. https://review.openstack.org/#/c/164180/
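
As referenced in item 1, here is a rough sketch of creating a project that 
behaves as a domain (field names follow the spec under review and may still 
change; the URL and token are placeholders):

import json
import requests

KEYSTONE = "http://keystone.example.com:5000/v3"   # placeholder
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
           "Content-Type": "application/json"}

body = {"project": {
    "name": "reseller-a",
    "is_domain": True,   # the project doubles as a domain
    "parent_id": None,   # top of this reseller's hierarchy
}}
resp = requests.post(KEYSTONE + "/projects",
                     headers=HEADERS, data=json.dumps(body))
print(resp.status_code, resp.json())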

Since the implementation is almost complete and waiting for code review,
I am requesting an FFE to finish this last patch and to work on having the
implementation merged in Kilo.

Raildo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-17 Thread John Dickinson

 On Mar 17, 2015, at 1:02 PM, Davis, Amos (PaaS-Core) 
 amos.steven.da...@hp.com wrote:
 
 [...]
 
 Swift:
 ?

Swift's capabilities are discoverable via an /info endpoint. The docs are at:

http://docs.openstack.org/developer/swift/api/discoverability.html

Example output from my dev environment and from Rackspace Cloud Files and from 
a SwiftStack lab cluster:

https://gist.github.com/notmyname/438392d57c2f3d3ee327


Clients use these to ensure a unified experience across clusters and that 
features are supported before trying to use them.
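
For example, a minimal client-side check (the cluster URL is illustrative):

import requests

# /info is unauthenticated and returns the cluster's capability map.
info = requests.get("http://swift.example.com:8080/info").json()

if "slo" in info:
    print("SLO supported, min segment size: %s"
          % info["slo"].get("min_segment_size"))
else:
    print("no static-large-object support on this cluster")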

 
 Best Regards,
 Amos Davis
 amos.da...@hp.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-17 Thread Colleen Murphy
Comments inline.

On Tue, Mar 17, 2015 at 12:22 PM, Emilien Macchi emil...@redhat.com wrote:

 Hi,

 I wanted to start the discussion here about our bug/release management
 system with Launchpad.

 A first question that comes to my mind is: should we continue to manage
 every Puppet module in a separate Launchpad project, or should we
 migrate all modules to a single project?

 So far this is what I think about both solutions, feel free to comment:

 Having one project per module
 Pros:
 * Really useful when having the right tools to manage Launchpad, and
 also to manage one module as a real project.
 * The solution scales to the number of modules we support.

 Cons:
 * I think some people don't go on Launchpad because there are so many
 projects (one per module), so they have not subscribed to emails or don't
 visit the page very often.
 * Each time we create a module (it's not every day, I would say each
 time a new OpenStack project is started), we have to repeat the process
 for a new launchpad project.

I don't think this is that big a hurdle, and it doesn't happen often.



 Having everything in a single project
 Pro:
 * Release management could be simpler

What would be simpler? We'd still need to track releases of each module, as
not all of them always get released at the same time.

 * A single view for all the bugs in Puppet modules

You can view all the bugs in the openstack-puppet-modules top-level project
https://bugs.launchpad.net/openstack-puppet-modules

 * Maybe a bad idea, but we could use tags to track puppet module issues
 (i.e. puppet-openstacklib would be tagged openstacklib)

 Con:
 * The solution does not scale well; it depends again on how we decide to
 do bug triage and release management;

 Also, feel free to add more concerns or feedback to this discussion.

I don't have strong feelings either way, but I'm not sure I see the current
way as broken enough to change. There is a top-level project for these
subprojects (https://launchpad.net/openstack-puppet-modules). You can
create a bug for one project and then add other projects to the bug, so
having one ticket that links to multiple modules is possible.

 Thanks,
 --
 Emilien Macchi

 --


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-17 Thread Yathiraj Udupi (yudupi)
I would like to participate in the discussions.

Thanks,
Yathi.


On 3/16/15, 11:05 AM, Tim Hinrichs thinri...@vmware.com wrote:

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Release management and bug triage

2015-03-17 Thread Emilien Macchi
Hi,

I wanted to start the discussion here about our bug/release management
system with Launchpad.

A first question that comes to my mind is: should we continue to manage
every Puppet module in a separate Launchpad project, or should we
migrate all modules to a single project?

So far this is what I think about both solutions, feel free to comment:

Having one project per module
Pros:
* Really useful when having the right tools to manage Launchpad, and
also to manage one module as a real project.
* The solution scales to the number of modules we support.

Cons:
* I think some people don't go on Launchpad because there are so many
projects (one per module), so they have not subscribed to emails or don't
visit the page very often.
* Each time we create a module (it's not every day, I would say each
time a new OpenStack project is started), we have to repeat the process
for a new launchpad project.


Having everything in a single project
Pro:
* Release management could be simpler
* A single view for all the bugs in Puppet modules
* Maybe a bad idea, but we could use tags to track puppet module issues
(i.e. puppet-openstacklib would be tagged openstacklib)

Con:
* The solution does not scale well; it depends again on how we decide to
do bug triage and release management;

Also, feel free to add more concerns or feedback to this discussion.
Thanks,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-17 Thread Henry Nash
Hi

Prior to Kilo, Keystone supported the ability for its Identity backends to be 
specified on a domain-by-domain basis - primarily so that different domains 
could be backed by different LDAP servers. In this previous support, you 
defined the domain-specific configuration options in a separate config file 
(one for each domain that was not using the default options). While functional, 
this can make onboarding new domains somewhat problematic, since you need to 
create the domains via REST and then create a config file and push it out to 
the keystone server (and restart the server). As part of the Keystone Kilo 
release we are supporting the ability to manage these domain-specific 
configuration options via REST (and allowing them to be stored in the Keystone 
SQL database). More detailed information can be found in the spec for this 
change at: https://review.openstack.org/#/c/123238/
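
As a taster, a rough sketch of uploading a domain-specific config via the 
new REST support (the payload follows the spec above but may still shift 
before RC; the URL, token, and domain ID are placeholders):

import json
import requests

KEYSTONE = "http://keystone.example.com:35357/v3"   # placeholder
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
           "Content-Type": "application/json"}

config = {"config": {
    "identity": {"driver": "keystone.identity.backends.ldap.Identity"},
    "ldap": {"url": "ldap://ldap.example.com",
             "user_tree_dn": "ou=Users,dc=example,dc=com"},
}}
resp = requests.put(KEYSTONE + "/domains/DOMAIN_ID/config",
                    headers=HEADERS, data=json.dumps(config))
print(resp.status_code)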

The actual code change for this is split into 11 patches (to make it easier to 
review), the majority of which have already merged - and the basic 
functionality described is already working.  There are some final patches 
that are in flight, a few of which are unlikely to meet the m3 deadline.  These 
relate to:

1) Migration assistance for those that want to move from the current file-based 
domain-specific configuration files to the SQL-based support (i.e. a one-off 
upload of their config files).  This is handled in the keystone-manage tool - 
see: https://review.openstack.org/160364
2) The notification between multiple keystone server processes that a domain 
has a new configuration (so that a restart of keystone is not required) - see: 
https://review.openstack.org/163322
3) Support for substitution of sensitive config options into whitelisted options 
(this might actually make the m3 deadline anyway) - see: 
https://review.openstack.org/159928

Given that we have the core support for this feature already merged, I am 
requesting an FFE to enable these final patches to be merged ahead of RC.

Henry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] contribution

2015-03-17 Thread Łukasz B
Hi Liz.

Thank you for your email. I will ping you in case I have a question.
In the meantime I will check trystack.org.

Regards,
Luke

On 17 March 2015 at 18:51, Liz Blanchard lsure...@redhat.com wrote:

 Hi Luke,

 Thanks for your interest in OpenStack UX!

 Piet - would you be able to add Luke to the OpenStack Invision account?
 After you are added you will be able to see and comment on any designs
 we’ve posted so far for review. You can of course post any of your own
 designs for review as well.

 With respect to poking around at Horizon, you might want to try out
 trystack.org. I’ve had a great experience with this once my Facebook
 account was hooked up for authentication.

 We haven’t been running the weekly meetings on Mondays as there hasn’t
 been a lot of activity to cover, but I encourage you to attend the weekly
 Horizon meetings to become familiar with some of the development topics.[1]

 Please feel free to join the #openstack-ux channel on freenode and ping me
 (lblanchard) if you have any specific questions!

 Best,
 Liz

 [1] https://wiki.openstack.org/wiki/Meetings/Horizon

 On Mar 17, 2015, at 12:44 AM, Łukasz B 01.lukaszblon...@gmail.com wrote:

 Hi.

 My name is Luke and I am a UX designer willing to contribute to OpenStack
 user experience. I am currently digging through the wiki trying to find out
 how to contribute.

 So far I kindly ask you to add me to OpenStack InVision account.
 I am also wondering if there is any test instance with Horizon installed
 that I could get access to.

 BTW, I tried to reach you today via freenode. Do the weekly meetings
 still take place on Mondays?

 Looking forward to hearing from you.

 Luke.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-17 Thread Rich Megginson

On 03/17/2015 01:26 PM, Henry Nash wrote:

Hi

[...]

Given that we have the core support for this feature already merged, I 
am requesting an FFE to enable these final patches to be merged ahead 
of RC.


This would be nice to use in puppet-keystone for domain configuration.  
Is there support planned for the openstack client?




Henry


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-17 Thread Robert Collins
On 18 March 2015 at 05:22, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
...
 That relatively short-lived issue already resulted in multiple
 backports to stable branches with new namespaces being used. E.g. see:

 https://bugs.launchpad.net/nova/+bug/1432685

 There is no safe way to communicate the issue to all parties involved,
 so if automation is good at catching those issues, it should be
 applied. It's wrong to rely on people when a hacking check is enough.

+1 to automation.
-1 to blocking forward progress on transient changes because our
automation hasn't been updated.
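
For reference, a minimal sketch of the kind of hacking check in question - 
a flake8-style local check that flags old oslo namespace imports (the check 
number is illustrative):

import re

# Matches "import oslo.foo", "from oslo.foo import x", "from oslo import x".
OSLO_NAMESPACE_RE = re.compile(
    r"(((from)|(import))\s+oslo\.)|(from\s+oslo\s+import\s+)")

def check_oslo_namespace_imports(logical_line):
    if OSLO_NAMESPACE_RE.match(logical_line):
        yield (0, "X123: use the new oslo_* namespace, e.g. "
                  "'import oslo_config' instead of 'import oslo.config'")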

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] FFE python-fuelclient improvements

2015-03-17 Thread Igor Kalnitsky
Yep, I think we can merge them. +1 from my side.

On Tue, Mar 17, 2015 at 12:50 PM, Evgeniy L e...@mirantis.com wrote:
 +1, because those patches are simple and don't look destructive.

 On Mon, Mar 16, 2015 at 7:43 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks,

 due to some technical issues we were unable to merge the Cliff integration
 patches and still keep the ISO build jobs alive.
 Now that the problem is fixed and we are unblocked, I'd like to ask for an
 FFE in order to merge it all.


 - romcheg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-17 Thread ruby.krishnaswamy
Hi
I'd like to participate.

By when will you fix the meeting date?

Ruby

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Monday, 16 March 2015 19:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] [Delegation] Meeting scheduling

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim

_

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-17 Thread Monty Taylor
On 03/17/2015 09:07 AM, Monty Taylor wrote:
 On 03/16/2015 08:32 PM, Robert Collins wrote:
 On 17 March 2015 at 13:22, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Robert Collins's message of 2015-03-17 12:54:00 +1300:
 I've raised this in reviews 157135 and 153966, but I think it deserves
 a thread of its own.

 I think universal wheels are useful - they are simple to build and
 publish - we don't need to do one wheel per Python version.

 However, right now I'm fairly sure that we're not exporting the
 requirements from requirements-py2 / requirements-py3 as environment
 markers (see PEP-426).

 That means that a wheel built on Python2 for a pbr project using
 requirements-pyN files, even if marked as a universal wheel, will only
 have the requirements for the Python2 deps.

 You're right that they only include the requirements for python 2. We
 try not to mark those packages as universal for that reason.


 This is broken - I've filed a bug about it (http://pad.lv/1431529).

 I think we should deprecate and remove the requirements-pyN files and
 instead use environment markers directly in requirements.txt. That
 will then flow into wheels and things should just work (plus we can
 delete more pbr code).

 I haven't tested yet (and someone should) that it does all JUST WORK,
 but that's easy: put an environment marker in a requirements.txt file
 like so:

  argparse; python_version < '3'

 I think the last time this came up the feature wasn't available in pip
 yet, and so using separate files was the work-around. Are environment
 markers fully supported by pip/setuptools/whatever now?

 Donald says yes, at least for pip (which is all we need, since we
 advise folk to use pip install -e . locally).
 
 Not just advise - setup.py install is _explicitly_ not supported since
 it is broken by design and insecure. I've spoken with Donald about
 trying to figure out a way to determine if we're being run via straight
 setup.py install rather than via pip so that we can error descriptively...
 
 In any case:
 
 a) woot
 b) I agree, pip support is all we need
 
 If so, an option would be to have pbr recognize the version-specific
 input files as implying a particular rule, and adding that environment
 marker to the dependencies list automatically until we can migrate to a
 single requirements.txt (for which no rules would be implied).

 We could, or we could just migrate - I don't think it's worth writing a
 compat shim.
 
 Also agree.

Actually - no, I just realized - we need to do a compat shim - because
pbr has no such thing as a stable release or ability to be versioned. We
have requirements-pyX in the wild, which means we must support them
basically until the end of time.

So I'm going to propose that we add a shim such as the one dhellmann
suggests above so that pbr will support our old releases; but moving
forward as a project, we should use markers and not requirements-pyX.
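
For the curious, markers are easy to poke at from Python - setuptools'
pkg_resources can evaluate them against the running interpreter (a small
sketch, assuming setuptools is installed, which it is practically
everywhere):

from pkg_resources import evaluate_marker

# True under Python 2, False under Python 3 - exactly what pip uses to
# decide whether "argparse; python_version < '3'" should be installed.
print(evaluate_marker("python_version < '3'"))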

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-17 Thread Monty Taylor
On 03/16/2015 08:32 PM, Robert Collins wrote:
 On 17 March 2015 at 13:22, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Robert Collins's message of 2015-03-17 12:54:00 +1300:
 I've raised this in reviews 157135 and 153966, but I think it deserves
 a thread of its own.

 I think universal wheels are useful - they are simple to build and
 publish - we don't need to do one wheel per Python version.

 However, right now I'm fairly sure that we're not exporting the
 requirements from requirements-py2 / requirements-py3 as environment
 markers (see PEP-426).

 That means that a wheel built on Python2 for a pbr project using
 requirements-pyN files, even if marked as a universal wheel, will only
 have the requirements for the Python2 deps.

 You're right that they only include the requirements for python 2. We
 try not to mark those packages as universal for that reason.


 This is broken - I've filed a bug about it (http://pad.lv/1431529).

 I think we should deprecate and remove the requirements-pyN files and
 instead use environment markers directly in requirements.txt. That
 will then flow into wheels and things should just work (plus we can
 delete more pbr code).

 I haven't tested yet (and someone should) that it does all JUST WORK,
  but that's easy: put an environment marker in a requirements.txt file
 like so:

  argparse; python_version < '3'

 I think the last time this came up the feature wasn't available in pip
 yet, and so using separate files was the work-around. Are environment
 markers fully supported by pip/setuptools/whatever now?
 
 Donald says yes, at least for pip (which is all we need, since we
 advise folk to use pip install -e . locally).

Not just advise - setup.py install is _explicitly_ not supported since
it is broken by design and insecure. I've spoken with Donald about
trying to figure out a way to determine if we're being run via straight
setup.py install rather than via pip so that we can error descriptively...

In any case:

a) woot
b) I agree, pip support is all we need

 If so, an option would be to have pbr recognize the version-specific
 input files as implying a particular rule, and adding that environment
 marker to the dependencies list automatically until we can migrate to a
 single requirements.txt (for which no rules would be implied).
 
  We could, or we could just migrate - I don't think it's worth writing a
 compat shim.

Also agree.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Block Device Mapping is Invalid error

2015-03-17 Thread aburluka

Hello once more!

It turned out that the proper body arg is generated if you add 
another --block-device param with source=image, like this:

nova boot qwe --flavor vm1 --image cent-os7-vm --block-device 
id=0af1f5a8-8172-4936-a958-90486759d598,source=volume,dest=volume,device=sdb,bootindex=1 
--block-device 
id=1b6fd7a7-16b6-4053-91d1-41a625d6b185,source=image,device=sda,bootindex=0


Can you please clarify whether the API changed since the Juno release? Is 
this second entry redundant, given that we already specify the --image param?
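
For reference, roughly the same request via python-novaclient (credentials 
and the flavor lookup are illustrative; note the explicit source=image 
entry mirroring the CLI workaround above):

from novaclient import client

# Illustrative credentials for the Juno-era "2" API.
nova = client.Client("2", "admin", "secret", "demo",
                     "http://keystone.example.com:5000/v2.0")
flavor = nova.flavors.find(name="vm1")

bdm_v2 = [
    # Boot disk built from the image, like the extra source=image entry.
    {"uuid": "1b6fd7a7-16b6-4053-91d1-41a625d6b185",
     "source_type": "image", "destination_type": "local",
     "boot_index": 0, "delete_on_termination": True},
    # Secondary persistent volume.
    {"uuid": "0af1f5a8-8172-4936-a958-90486759d598",
     "source_type": "volume", "destination_type": "volume",
     "boot_index": 1},
]

server = nova.servers.create(name="qwe", image=None, flavor=flavor,
                             block_device_mapping_v2=bdm_v2)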


On 03/16/2015 06:55 PM, aburluka wrote:

Hello Nova!

I'd like to ask the community to help me with some unclear things. I'm 
currently working on adding persistent storage support to the parallels 
driver.


I'm trying to start VM.

nova boot test-vm --flavor m1.medium --image centos-vm-32 --nic 
net-id=c3f40e33-d535-4217-916b-1450b8cd3987 --block-device 
id=26b7b917-2794-452a-95e5-2efb2ca6e32d,bus=sata,source=volume,bootindex=1


Got an error:
ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for 
the instance and image/block device mapping combination is not valid. 
(HTTP 400) (Request-ID: req-454a512c-c9c0-4f01-a4c8-dd0df0c2e052)



nova/api/openstack/compute/servers.py
def create(self, req, body)
Has such body arg:
{u'server':
    {u'name': u'test-vm',
     u'imageRef': u'b9349d54-6fd3-4c09-94f5-8d1d5c5ada5c',
     u'block_device_mapping_v2': [{u'disk_bus': u'sata',
                                   u'source_type': u'volume',
                                   u'boot_index': u'1',
                                   u'uuid': u'26b7b917-2794-452a-95e5-2efb2ca6e32d'}],
     u'flavorRef': u'3',
     u'max_count': 1,
     u'min_count': 1,
     u'networks': [{u'uuid': u'c3f40e33-d535-4217-916b-1450b8cd3987'}],
     'scheduler_hints': {}
    }
}

Such a block device mapping leads to a bad boot index list.
I watched this argument while executing a similar command with the kvm 
hypervisor on Juno RDO and got something like this in the body:

{u'server': {u'name': u'test-vm',
 u'imageRef': u'78ad3d84-a165-42bb-93c0-a4ad1f1ddefc',
 u'block_device_mapping_v2': [{u'source_type': u'image',
                               u'destination_type': u'local',
                               u'boot_index': 0,
                               u'delete_on_termination': True,
                               u'uuid': u'78ad3d84-a165-42bb-93c0-a4ad1f1ddefc'},
                              {u'disk_bus': u'sata',
                               u'source_type': u'volume',
                               u'boot_index': u'1',
                               u'uuid': u'57a27723-65a6-472d-a67d-a551d7dc8405'}],
 u'flavorRef': u'3',
 u'max_count': 1,
 u'min_count': 1,
 'scheduler_hints': {}}}

Can you answer the following questions, please:
1) Is the first version missing a 'source_type': 'image' entry?
2) Where should the image block_device be added to this arg? Does it 
come from novaclient, or is it added by some callback or decorator?


Looking forward to your help!



--
Regards,
Alexander Burluka


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Adam Young

On 03/17/2015 03:30 AM, David Chadwick wrote:

Encryption per se does not decrease token size; the best it can do is
keep the token the same size. So using Fernet tokens will not on
its own alter the token size.


Fernet is striking a balance.  It encrypts a subset of the data, not the 
whole payload of the PKI tokens.  Fernet tokens are under 500 bytes, with 
a target of getting them under 255 bytes.  Only Federation tokens should 
be larger than 255 bytes.



  Reducing the size must come from putting
less information in the token. If the token recipient has to always go
back to Keystone to get the token validated, then all the token needs to
be is a large random number that Keystone can look up in its database to
retrieve the user's permissions. In this case no encryption is needed at
all.
The Fernet goal is to remove that database.  Instead, the data 
associated with the token will be assembled at verification time from 
the small subset in the fernet token body and the data stored in the 
Keystone server.
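
As a rough size check, a sketch using the cryptography library's Fernet 
directly (the payload fields here are illustrative - Keystone's actual 
token format packs the data more tightly):

import json
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())

payload = json.dumps({
    "user_id": "u" * 32,       # 32-char UUID-like id
    "project_id": "p" * 32,
    "issued_at": 1426636800,
}).encode("utf-8")

token = f.encrypt(payload)
print(len(token))   # on the order of a couple hundred bytes, not KBs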




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSSG] Announcement: I'll be transitioning away from OpenStack

2015-03-17 Thread Clark, Robert Graham
This is a big loss to the community, it’s been a real pleasure working with you 
over the last three years and I wish you all the best in the future!

 

-Rob

 

From: Bryan D. Payne [mailto:bdpa...@acm.org] 
Sent: 16 March 2015 21:53
To: OpenStack Development Mailing List
Subject: [openstack-dev] [OSSG] Announcement: I'll be transitioning away from 
OpenStack

 

I have recently accepted a new position with a company that does not work with 
OpenStack.  As a result, I'll be transitioning away from this community.  As 
such, I wanted to offer a few quick notes:

 

* OpenStack Security Guide -- I have transitioned leadership of this security 
documentation effort to Nathaniel Dillon.

 

* #openstack-security IRC channel -- Travis McPeak now also has OP privilege in 
the channel.

 

Beyond that, I just wanted to say thanks to everyone.  The OpenStack community 
has been great to work with over the past several years and I wish you all the 
best in the time ahead!

 

I have about one more week working with OpenStack full time.  After that, I am 
still planning on coming to the summit in May, and would be happy to help with 
any final transition pieces at that time.  And I'll continue being available at 
this email address well into the future.

 

Cheers,

-bryan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPAM reference driver status and other stuff

2015-03-17 Thread Carl Baldwin
On Mar 15, 2015 6:42 PM, Salvatore Orlando wrote:
 * the ML2 plugin overrides several methods from the base db class. From
what I gather from unit test results, we have not yet refactored it. I
think to provide users something usable in Kilo we should ensure the ML2
plugin at least works with the IPAM driver.

Yes, agreed.

 * the current refactoring has ipam-driver-enabled and
non-ipam-driver-enabled versions of some API operations. While this is the
less ugly way to introduce the driver while keeping the old logic at the
same time, it adds quite a bit of code duplication. I wonder if there is
any effort we can make, without too much yak shaving, to reduce that code
duplication, because in these conditions I suspect it would be a hard sell
to the Neutron core team.

This is a good thing to bring up.  It is a difficult trade-off.  On one
hand, the way it has been done makes it easy to review and to see that the
existing implementation has not been disturbed, reducing the short-term
risk.  On the other hand, if left the way it is indefinitely, it will be a
maintenance burden.  Given the current timing, could we take a two-phased
approach?  First, merge it with the duplication, and immediately create a
follow-on patch that deduplicates the code, to merge when it is ready.

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Adam Young

On 03/17/2015 02:51 AM, joehuang wrote:


It’s not realistic to deploy the KeyStone service (including the backend 
store) in each site if the number of sites is, for example, more than 10. 
The reason is that the stored data, including data related to revocation, 
needs to be replicated to all sites in a synchronous manner. Otherwise, 
the API server might attempt to use the token before it can be validated 
in the target site.




Replicating revocation data across 10 sites will be tricky, but far 
better than replicating all of the token data.  Revocations should be 
relatively rare.


When Fernet tokens are used in a multisite scenario, each API request will 
ask KeyStone for token validation. The cloud will be out of service if 
KeyStone stops working; therefore the KeyStone service needs to run in 
several sites.




There will be multiple Keystone servers, so each service should talk to 
its local instance.


For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone 
server (or even a third). If the primary KeyStone server is out of 
service, then the KeyStone client will try the secondary. Different 
KeyStone clients may be configured with different primary and secondary 
KeyStone servers.




Makes sense, but that can be handled outside of Keystone using HA, 
Heartbeat, and a whole slew of technologies.  Each Keystone server can 
validate the others' tokens.
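
A client-side failover wrapper along the lines suggested above could look
roughly like this sketch (the endpoint URLs are hypothetical; in practice
this usually lives in HA tooling such as a load balancer rather than in
the Keystone client itself):

    import requests

    # Hypothetical per-site ordering: primary first, then fallbacks.
    KEYSTONE_ENDPOINTS = [
        'https://keystone-site1.example.com:5000',
        'https://keystone-site2.example.com:5000',
    ]

    def validate_token(token):
        for endpoint in KEYSTONE_ENDPOINTS:
            try:
                # Keystone v3 token validation call.
                resp = requests.get(endpoint + '/v3/auth/tokens',
                                    headers={'X-Auth-Token': token,
                                             'X-Subject-Token': token},
                                    timeout=5)
                if resp.status_code == 200:
                    return resp.json()
            except requests.RequestException:
                continue  # this server is unreachable; try the next one
        raise RuntimeError('no KeyStone endpoint reachable')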


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-17 Thread Doug Hellmann
Now that we have good processes in place for the other Oslo libraries, I
want to bring pbr into the fold so to speak and start putting it through
the same reviews and release procedures. We also have some open bugs
that I'd like to find someone to help fix, but because we can't actually
release from the master branch right now working on fixes is more
complicated than it needs to be. I don't want to focus on placing blame,
just understanding where things actually stand and then talking about
how to get them to a better state.

From what I can tell, the main problem we have in master right now
is that the new semver rules added as part of [1] don't like some
of the existing stable branch tags being used by projects. This
feels a bit like we overreached with the spec, and so I would like
to explore options for pulling back and changing directions. It is
quite likely I don't fully understand either the original intent
or the current state of things, but I want to start the discussion
with the hope that those with the details can correct my mistakes
and fill in any gaps.

It looks like [1] had several goals. It was meant to fix the
automatically generated version numbers for untagged revisions,
bring us up to standard with current pip rules for version numbers,
improve our support for distro version number generation, and enforce
semver automatically by looking at comments on commits when creating
a new tag.

Is it the last part that's broken? Can we make pbr just do what
I say with version numbers, and build a standalone tool (maybe
installed as part of pbr) for suggesting tags using whatever rules
we like? That would be like the proposed next-version command, but
not invoked through setup.py. I don't actually care what the UI is,
but I do think we want any rules about what versions are allowed
ignored when the *existing* tags are parsed, so moving them out of
the setuptools commands entirely seems like an easy way to ensure
that.
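
A standalone next-version helper of that sort could be as small as this
sketch (the commit-message markers here are hypothetical placeholders,
not pbr's actual rules):

    def next_version(current_tag, commit_messages):
        # Suggest the next semver tag from 'X.Y.Z' plus commit messages.
        major, minor, patch = (int(p) for p in current_tag.split('.'))
        text = '\n'.join(commit_messages).lower()
        if 'api-break' in text:   # incompatible change -> major bump
            return '%d.0.0' % (major + 1)
        if 'feature' in text:     # new feature -> minor bump
            return '%d.%d.0' % (major, minor + 1)
        return '%d.%d.%d' % (major, minor, patch + 1)  # default: bugfix

    print(next_version('1.2.3', ['Add feature X']))  # -> 1.3.0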

Some of the other special casing seems to be for TripleO's benefit
(especially the stuff that generates versions from untagged commits).
Is that working? If not, is it still necessary to have?
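
For reference, generating a version from untagged commits usually derives
a dev version from the most recent tag, along these lines (a rough sketch
only, not pbr's actual algorithm):

    import subprocess

    def dev_version():
        # 'git describe --tags' yields e.g. '1.2.3-5-gabc1234' when HEAD
        # is five commits past the 1.2.3 tag, or just '1.2.3' on the tag.
        desc = subprocess.check_output(
            ['git', 'describe', '--tags']).decode().strip()
        if '-' not in desc:
            return desc  # HEAD is exactly on a tag
        tag, commits, _sha = desc.rsplit('-', 2)
        major, minor, patch = (int(p) for p in tag.split('.'))
        return '%d.%d.%d.dev%s' % (major, minor, patch + 1, commits)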

The tag-release command isn't necessary for OpenStack as far as I
can tell. We have a whole separate repository of tools with
release-related scripts and tooling [2], and those tools automate
far more than just creating a tag for us. I don't expect any OpenStack
project to directly use a pbr command for creating a tag. Maybe we
missed the window of opportunity there? How much of that work is done?
Should we drop any remaining plans?

Did I miss anything that's currently broken, or needs to be done before
we can consider pbr releasable for liberty?

Doug


[1] http://specs.openstack.org/openstack/oslo-specs/specs/juno/pbr-semver.html
[2] http://git.openstack.org/cgit/openstack-infra/release-tools/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-17 Thread Zane Bitter

On 16/03/15 16:38, Ben Nemec wrote:

On 03/13/2015 05:53 AM, Jan Provaznik wrote:

On 03/10/2015 05:53 PM, James Slagle wrote:

On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník jprov...@redhat.com wrote:

Hi,
it would make sense to have a library for the code shared by Tuskar UI and
CLI (I mean TripleO CLI - whatever it will be, not tuskarclient, which is
just a thin wrapper for the Tuskar API). There are various actions which
consist of more than a single API call to an OpenStack service, to give
some examples:

- nodes registration - loading a list of nodes from a user-defined file;
this means parsing a CSV file and then feeding Ironic with this data (see
the sketch after this list)
- decommission a resource node - this might consist of disabling
monitoring/health checks on this node, then gracefully shutting down the node
- stack breakpoints - setting breakpoints will allow manual
inspection/validation of changes during stack-update; users can then update
nodes one-by-one and trigger a rollback if needed
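
A nodes-registration helper of the kind described above might look roughly
like this (the CSV column names are hypothetical, not an agreed file
format; each resulting dict would then be fed to Ironic):

    import csv

    def load_nodes(path):
        # Parse a user-supplied CSV file into node records for Ironic.
        nodes = []
        with open(path) as f:
            for row in csv.DictReader(f):
                nodes.append({
                    'driver': row['driver'],  # e.g. 'pxe_ipmitool'
                    'driver_info': {
                        'ipmi_address': row['ipmi_address'],
                        'ipmi_username': row['ipmi_username'],
                        'ipmi_password': row['ipmi_password'],
                    },
                    'properties': {
                        'cpus': row['cpus'],
                        'memory_mb': row['memory_mb'],
                    },
                })
        return nodes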


I agree something is needed. In addition to the items above, much of it is
the post-deployment steps from devtest_overcloud.sh. I'd like to see that be
consumable from the UI and CLI.

I think we should be aware though that where it makes sense to add things
to os-cloud-config directly, we should just do that.



Yes, actually I think most of the devtest_overcloud content fits
os-cloud-config (and IIRC os-cloud-config was created for this purpose).



It would be nice to have a place (library) where the code could live and
where it could be shared both by the web UI and the CLI. We already have the
os-cloud-config [1] library, which focuses only on configuring an OpenStack
cloud after first installation (setting endpoints, certificates, flavors...),
so not all shared code fits there. It would make sense to create a new
library where this code could live. This lib could be placed on Stackforge
for now, and it might have a very similar structure to os-cloud-config.

And most importantly... what is the best name? Some of the ideas were:
- tuskar-common


I agree with Dougal here, -1 on this.


- tripleo-common
- os-cloud-management - I like this one, it's consistent with the
os-cloud-config naming


I'm more or less happy with any of those.

However, if we wanted something to match the os-*-config pattern, we
could go with:
- os-management-config
- os-deployment-config



Well, the scope of this lib will be beyond configuration of a cloud, so
having -config in the name is not ideal. Based on feedback in this
thread I tend to go ahead with os-cloud-management, and unless someone
raises an objection here now, I'll ask the infra team what the process is
for adding the lib to Stackforge.


Any particular reason you want to start on stackforge?  If we're going
to be consuming this in TripleO (and it's basically going to be
functionality graduating from incubator) I'd rather just have it in the
openstack namespace.  The overhead of some day having to rename this
project seems unnecessary in this case.


I think the long-term hope for this code is for it to move behind the 
Tuskar API, so at this stage the library is mostly to bootstrap that 
development to the point where the API is more or less settled. In that 
sense stackforge seems like a natural fit, but if folks feel strongly 
that it should be part of TripleO (i.e. in the openstack namespace) from 
the beginning then there's probably nothing wrong with that either.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Plans for heat-containers

2015-03-17 Thread Adrian Otto
Team,

Steven Dake and I met with Lars Kellogg-Stedman about introducing a new 
Stackforge project named heat-containers that will use heat-kubernetes as 
the initial upstream repo. Magnum contributors currently working on the 
templates will be added to this project, and we will treat it as a library 
dependency, allowing it to be enhanced and used directly for other 
purposes. Lars agreed to create the repo, so I’ll update you about it once 
that’s done.

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly subteam status report

2015-03-17 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted. Note that the weekly
subteam status report wasn't discussed in yesterday's weekly Ironic
meeting. The meeting focus was on kilo-3.

Bugs (dtantsur)


(As of Tue Mar 17 14:10:22 UTC)
Open: 145 (+12)
10 new (+7), 33 in progress (+1), 0 critical, 16 high (-1) and 10
incomplete (+2)

Drivers
==

iLO (wanyen)
--
Got one +2 for
https://blueprints.launchpad.net/ironic/+spec/ilo-properties-capabilities-discovery.
Need one more +2 to merge.

Still very few reviews for
https://blueprints.launchpad.net/ironic/+spec/uefi-secure-boot.  Need more
core reviewers to review this feature.

Need more reviewers for
https://blueprints.launchpad.net/ironic/+spec/ilo-cleaning-support

Local boot option for UEFI merged.

iRMC (naohirot)
-
Started to port the driver part of the kilo-3 high-priority features to
iRMC, in order to contribute to kilo-3 review and testing as well as to
prepare for Liberty.

AMT (lintan)

AMT power driver has landed; it provides remote power control only.



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-17 Thread Ihar Hrachyshka

On 03/13/2015 04:36 PM, Doug Hellmann wrote:
 
 
 On Fri, Mar 13, 2015, at 07:25 AM, Ihar Hrachyshka wrote: On
 03/13/2015 01:37 AM, Nikhil Manchanda wrote:
 Looking back at the meeting eavesdrop, the primary reason
 why we deferred this work to Liberty was twofold:
 
 - It wasn't a set plan based on information available to us
 at the time. This being the case, we decided to wait until we
 had more information regarding the requirements around this
 from oslo.
 
 - We wanted to ensure that we had a corresponding hacking
 rule in place to prevent future patch-sets from using the
 deprecated module names.
 
 
 For the hacking check, I have a patch in review for the 'hacking' repo to
 add checks (both for stable branches, where the oslo.* namespace is
 used, and new branches, where oslo_* is expected):
 
 - https://review.openstack.org/157894
 
 Also, neutron has a (test covered) hacking check at:
 
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/hacking/checks.py#n119

  Feel free to adopt.
 
 I wish we, as a community, were less obsessed with creating so
 many hacking rules. These are really minor changes and it's going
 to be a relatively short-lived issue that could just be fixed
 once. If there's a regression, fixing *that* won't be hard or
 take long.

That relatively short-lived issue has already resulted in multiple
backports to stable branches with the new namespaces being used. E.g. see:

https://bugs.launchpad.net/nova/+bug/1432685

There is no safe way to communicate the issue to all parties involved,
so if automation is good at catching those issues, it should be
applied. It's wrong to rely on people when a hacking check is enough.
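
For illustration, a hacking-style check of the kind being discussed looks
roughly like this (the error code and message are made up; real checks are
registered through hacking's entry points):

    import re

    OSLO_NAMESPACE_IMPORT = re.compile(r"(from|import)\s+oslo\.")

    def check_oslo_namespace_imports(logical_line):
        # Flag imports that use the deprecated oslo.* namespace and
        # suggest the oslo_* form instead.
        if OSLO_NAMESPACE_IMPORT.match(logical_line):
            yield (0, "NXXX: use '%s' instead of '%s'" % (
                logical_line.replace('oslo.', 'oslo_'), logical_line))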

 
 As I said in the IRC snippet pasted into the meeting log linked 
 elsewhere in the thread, I want to drop the oslo package during
 the next cycle. It's not clear that all projects will be ready
 for us to do that, and that's why it's not a definite plan,
 yet. We're trying to be cognizant of the fact that you all have
 other things you're trying to accomplish too, and that this work
 appears like code churn even though it is solving a problem many
 developers have had in their development environments.
 
 In any case, you should plan for all Oslo libraries to drop the 
 namespace packages entirely *soon*. If not for Liberty then
 definitely for M. There's no sense at all in delaying the work
 needed in your projects beyond L-1, and landing the changes
 sooner is better than waiting.
 
 Doug
 
 
 We specifically didn't consider the impact of logging
 statements with deprecation warnings at the meeting.
 
 We now have a better picture of the actual status -- with the
 oslo decision that these namespace packages are definitely
 going away. I've added an agenda item to bring this up again
 at the next Trove weekly meeting [1] so that we can address
 this.
 
 [1] https://wiki.openstack.org/wiki/Meetings/TroveMeeting
 
 Thanks, Nikhil
 
 
 
 On Thu, Mar 12, 2015 at 4:05 PM, Robert Collins
 robe...@robertcollins.net wrote:
 
 On 13 March 2015 at 09:43, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA1
 
 On 03/12/2015 09:35 PM, Robert Collins wrote:
 On 13 March 2015 at 08:09, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA1
 
 On 03/12/2015 11:38 AM, Boris Bobrov wrote:
 On Thursday 12 March 2015 12:59:10 Duncan Thomas
 wrote:
 So, assuming that all of the oslo depreciations
 aren't going to be fixed before release
 
 What makes you think that?
 
 In my opinion it's just one component's problem.
 These particular deprecation warnings are a result of
 still on-going migration from oslo.package to
 oslo_package. Ironically, all components except
 oslo have already moved to the new naming scheme.
 
 It's actually wrong. For example, Trove decided to keep
 using the old namespace for Kilo.
 
 Why?
 
 -Rob
 
 
 
 http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2015-02-11.log
 starting from 2015-02-11T18:03:11. I guess the assumption was that
 there is no immediate benefit, and they can just wait. Though I don't
 think the fact that it means deprecation warnings in their logs was
 appreciated at the time of the decision.
 
 Thanks, reading that it looks like the actual status (oslo
 decided most definitely that namespace packages are going
 away; it's just a matter of when) wasn't understood in that
 meeting.
 
 Is it possible to put it back on the agenda for the next
 Trove meeting?
 
 Cheers, Rob
 
 -- Robert Collins rbtcoll...@hp.com
 mailto:rbtcoll...@hp.com Distinguished Technologist HP
 Converged Cloud
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] Hyper-V Meeting Minutes

2015-03-17 Thread Peter Pouliot
Hi All,

Minutes from this week's IRC meeting can be found here:

Meeting ended Tue Mar 17 16:23:45 2015 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2015/hyper_v.2015-03-17-16.01.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2015/hyper_v.2015-03-17-16.01.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2015/hyper_v.2015-03-17-16.01.log.html

Best,

p


Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-17 Thread Ihar Hrachyshka

On 03/17/2015 05:22 PM, Ihar Hrachyshka wrote:
 On 03/13/2015 04:36 PM, Doug Hellmann wrote:
 
 
 On Fri, Mar 13, 2015, at 07:25 AM, Ihar Hrachyshka wrote: On 
 03/13/2015 01:37 AM, Nikhil Manchanda wrote:
  Looking back at the meeting eavesdrop, the primary reason 
  why we deferred this work to Liberty was twofold:
 
 - It wasn't a set plan based on information available to
 us at the time. This being the case, we decided to wait
 until we had more information regarding the requirements
 around this from oslo.
 
 - We wanted to ensure that we had a corresponding hacking 
 rule in place to prevent future patch-sets from using the 
 deprecated module names.
 
 
  For the hacking check, I have a patch in review for the 'hacking' repo
  to add checks (both for stable branches, where the oslo.* namespace
  is used, and new branches, where oslo_* is expected):
 
 - https://review.openstack.org/157894
 
 Also, neutron has a (test covered) hacking check at:
 
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/hacking/checks.py#n119

  Feel free to adopt.
 
 I wish we, as a community, were less obsessed with creating so 
 many hacking rules. These are really minor changes and it's
 going to be a relatively short-lived issue that could just be
 fixed once. If there's a regression, fixing *that* won't be
 hard or take long.
 
  That relatively short-lived issue has already resulted in multiple
  backports to stable branches with the new namespaces being used. E.g.
  see:
 
 https://bugs.launchpad.net/nova/+bug/1432685
 
 There is no safe way to communicate the issue to all parties
 involved, so if automation is good at catching those issues, it
 should be applied. It's wrong to rely on people when a hacking
 check is enough.
 
 

OK, that was a wrong example. Though we have still had bugs where a patch
that used the oslo_* namespace was backported to Juno (which is wrong).

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-17 Thread Rykowski, Kamil
+1 for consistent time; 1400 UTC is a bit more preferred.

-Original Message-
From: Cindy Pallares [mailto:cpalla...@redhat.com] 
Sent: Tuesday, March 17, 2015 4:29 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

+1 for consistent time and a slightly higher preference for 1500 UTC

On 03/17/2015 02:15 AM, Koniszewski, Pawel wrote:
 +1 for consistent time (I prefer 1400UTC)

 *From:*Fei Long Wang [mailto:feil...@catalyst.net.nz]
 *Sent:* Sunday, March 15, 2015 9:00 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance 
 meeting time.

 +1 for consistent time

 On 14/03/15 10:11, Nikhil Komawar wrote:

 Here's what it looks like so far:-

 1400UTC: 3 votes (all core reviewers)

 1500UTC: 5 votes (one core reviewer)

 Both: 4 votes (all core reviewers)

 Let's wait another couple days to see if more people respond.

 I have a feeling that the weight is slightly tilted towards 1400UTC
 based on a few assumptions about the past activity of those
 participants, their cross project inputs, etc.

 Thanks,
 -Nikhil

 
 --
 --

  *From:* Mikhail Fedosin mfedo...@mirantis.com
 *Sent:* Friday, March 13, 2015 3:07 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Glance] Proposal to change Glance
 meeting time.

 Both options are good, it's a little better at 1500 UTC.

 +1 consistent time.

  On Fri, Mar 13, 2015 at 9:23 PM, Steve Lewis
  steve.le...@rackspace.com wrote:

 +1 consistent time

 +1 for 1500 UTC since that has come up.

 On 09/03/15 09:07, Nikhil Komawar wrote:
 
 So, the new proposal is:
 Glance meetings [1] to be conducted
 weekly on
 Thursdays at 1400UTC [2] on
 #openstack-meeting-4

 




 



 --

 Cheers  Best regards,

 Fei Long Wang (王飞龙)

 --
 

 Senior Cloud Software Engineer

 Tel: +64-48032246

  Email: flw...@catalyst.net.nz

 Catalyst IT Limited

 Level 6, Catalyst House, 150 Willis Street, Wellington

 --
 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feature Freeze request for DB2 support in Trove

2015-03-17 Thread Mariam John


Hello,

   I would like to request a feature freeze exception for Trove for the
following feature: Add DB2 support for Trove (
https://blueprints.launchpad.net/openstack/?searchtext=db2-plugin-for-trove
)

These are the patches related to this blueprint:
- https://review.openstack.org/#/c/164293/
- https://review.openstack.org/#/c/156802/

The changes include the following:
- disk image builder elements for DB2 to create DB2 images
- guest agent for DB2 which will enable users to create/delete DB2
instances, databases and users.

Unit tests have been added to cover all the APIs implemented by this guest
agent, and the patches have been linked to the blueprint. The risk of
regression is very minimal since the changes use the existing APIs to
provide support for a new datastore, and the code changes do not affect
base Trove code.

Thank you.

Regards,
Mariam John.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Can I change the username for review.openstack.org?

2015-03-17 Thread Lily.Sing
Hi all,

I followed the account setup steps here
http://docs.openstack.org/infra/manual/developers.html#account-setup and
it says the username for review.openstack.org should be the same as on
Launchpad, but I entered a mismatched one by mistake. Does it still work?
If not, how can I change it? Thanks!

Best regards,
Lily Xing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] PTL elections

2015-03-17 Thread Sergey Lukjanov
The PTL candidacy proposal time frame has ended, and we have only one candidate.

So, Serg Melikyan, my congratulations!

The results are documented at
https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty#PTL

On Wed, Mar 11, 2015 at 2:04 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 due to the requirement to have an officially elected PTL, we're running
 elections for the Murano PTL for the Kilo and Liberty cycles. The schedule
 and policies are fully aligned with the official OpenStack PTL elections.

 You can find more info in official elections wiki page [0] and the same
 page for Murano elections [1], additionally some more info in the past
 official nominations opening email [2].

 Timeline:

 till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
 March 17, 2015 - 1300 UTC March 24, 2015: PTL elections

 To announce your candidacy please start a new openstack-dev at
 lists.openstack.org mailing list thread with the following subject:
 [murano] PTL Candidacy.

 [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
 [1] https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

 Thank you.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev