Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
Hi Lance,

May we store the keys in Barbican? Can the key rotation be done via Barbican? If we use Barbican as the repository, key distribution and rotation become easier in a multi-Keystone deployment scenario, and the database replication (sync or async) capability could be leveraged.

Best Regards,
Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Tuesday, August 04, 2015 10:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov bbob...@mirantis.com wrote:

On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:

On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov bbob...@mirantis.com wrote:

On Monday 03 August 2015 21:05:00 David Stanek wrote:

On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov bbob...@mirantis.com wrote:

Also, come on, does http://paste.openstack.org/show/406674/ look overly complex? (It should be launched from the Fuel master node.)

I'm reading this on a small phone, so I may have it wrong, but the script appears to be broken. It will ssh to node-1 and rotate. In the simplest case this takes key 0 and moves it to the next highest key number. Then a new key 0 is generated. Later there is a loop that will again ssh into node-1 and run the rotation script. If there is a limit set on the number of keys and you are at that limit, a key will be deleted. This extra rotation on node-1 means that it's possible that it has a different set of keys than are on node-2 and node-3.

You are absolutely right. Node-1 should be excluded from the loop. ping also lacks -c 1. I am sure that other issues can be found. In my defense I want to say that I never ran the script and wrote it just to show how simple it should be. Thanks for the review though! I also hope that no one is going to use a script from a mailing list.

What's the issue with just a simple rsync of the directory?

None, I think. I just want to reuse the interface provided by keystone-manage.

You wanted to use the interface from keystone-manage to handle the actual promotion of the staged key, right? This is why there were two fernet_rotate commands issued?

Right. Here is the fixed version (please don't use it anyway): http://paste.openstack.org/show/406862/

Note, this doesn't take into account the initial key repository creation, does it? Here is a similar version that relies on rsync for the distribution after the initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua

--
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
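For readers unfamiliar with the rotation mechanics David describes above (key 0 is the staged key; rotation promotes it to the highest-numbered primary key, generates a fresh key 0, and prunes the oldest keys once the active-key limit is exceeded), here is a hedged, pure-Python sketch of that bookkeeping. It operates on an in-memory dict rather than keystone's on-disk repository, and the helper name `rotate` is invented for illustration:

```python
import os


def rotate(repo, max_active_keys=3):
    """Simulate the fernet_rotate bookkeeping on an {index: key} mapping.

    Index 0 is the staged key; the highest index is the current primary.
    """
    new_primary = max(repo) + 1
    repo[new_primary] = repo.pop(0)   # promote the staged key to primary
    repo[0] = os.urandom(32)          # stage a fresh key for the next rotation
    while len(repo) > max_active_keys:
        del repo[min(k for k in repo if k != 0)]  # drop the oldest keys
    return repo


# A freshly initialized repository: one staged key, one primary.
repo = {0: os.urandom(32), 1: os.urandom(32)}
staged = repo[0]
rotate(repo)
assert repo[2] == staged          # the old staged key is now the primary
assert sorted(repo) == [0, 1, 2]  # and a new key was staged at index 0
```

This toy model also shows why David's observation matters: running the rotation an extra time on node-1 but not elsewhere leaves node-1 with a key set the other nodes don't have, so tokens signed with node-1's newest key fail to validate on node-2 and node-3.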
Re: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions
Thanks Amir. This will be a step forward in that direction.

Extending my question to Henry and Kyle, as they are driving decomposition phase II. Hello Henry/Kyle, with the devref [1] for Neutron sub-projects getting in and sub-project owners working towards completing phase II, the Neutron tree decomposition is excellently described; however, the python-neutronclient side of the changes is not completely clear and seems optional. So what do you guys suggest for client implementations of vendor-specific extensions? Should they be out of the python-neutronclient tree or in tree? I would prefer the former, as it's much cleaner and aligns better with phase II. If the former is the case, can we please enhance the devref to capture how the process will work for python-neutronclient.

Thanks,
Fawad Khaliq

On Tue, Aug 4, 2015 at 5:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote:

I started down the path of making a python-neutronclient extension, but got stuck on the lack of support for child resource extensions as described here: https://bugs.launchpad.net/python-neutronclient/+bug/1446428. I submitted a bugfix here: https://review.openstack.org/#/c/202597/.

Amir

From: Fawad Khaliq fa...@plumgrid.com
Sent: Tuesday, August 4, 2015 6:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions

Folks,

In the networking-plumgrid project, we intend to implement the client side for some of the vendor-specific extensions. Looking at the current client-side implementation for some vendors, I see the code is part of the python-neutronclient tree [1]. I do see this change [2] talking about a way to load extensions through entry points; however, I could not find any example extension module. Has anyone gone down the route of implementing out-of-tree extensions for the Neutron client, which extend the python-neutronclient shell and load at run/install time?

With decomposition phase II, it makes sense to keep the client side in the respective projects as well.

[1] https://github.com/openstack/python-neutronclient/tree/master/neutronclient/neutron/v2_0
[2] https://review.openstack.org/#/c/148318/16

Thanks,
Fawad Khaliq
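As an illustration of the entry-point mechanism Fawad refers to in [2], an out-of-tree client extension is typically wired up through the sub-project's setup.cfg. The entry-point group name and the module path below are my assumptions based on that review, not something confirmed in this thread:

```ini
# setup.cfg of the sub-project (e.g. networking-plumgrid) -- hypothetical names
[entry_points]
neutronclient.extension =
    my_resource = networking_plumgrid.neutronclient.my_resource
```

At install time the entry point gets registered, and the neutron CLI can then discover and load the extension module at runtime without the code living in the python-neutronclient tree.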
[openstack-dev] [nova][FFE] Feature Freeze Exception Request for 'Adding support for InfiniBand SR-IOV vif type'
Hi,

I would like to request an FFE for the following BP: https://blueprints.launchpad.net/nova/+spec/vif-driver-ib-passthrough

The BP has one patch, https://review.openstack.org/#/c/187052/, which had a +2 but lost it in a rebase. The Neutron code is already merged (https://review.openstack.org/#/c/187054/), and not merging this will break the mlnx mechanism driver in the Neutron core. The code is self-contained and very minimal.

Thanks,
Moshe Levi.
Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients
I think this is happening because the last session created was based off of trustee_auth. Try creating 2 sessions, one for each user (trustor and trustee). Maybe Jamie will chime in.

Thanks,
Steve Martinelli
OpenStack Keystone Core

michael mccune m...@redhat.com wrote on 2015/08/03 07:11:34 PM:

From: michael mccune m...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 2015/08/03 07:12 PM
Subject: [openstack-dev] [keystone] policy issues when generating trusts with different clients

hi all,

i am doing some work to change sahara to make greater use of keystoneclient.session.Session objects, and i am running into a strange error when issuing the trusts. the crux of the issue is that when i create Client objects by passing all the parameters directly to the client, the trust is created as normal. but if i create a Password-based auth plugin object using the same parameters, and then instantiate a Client using that auth and a Session object, i fail to create the trust with an error about not having sufficient permission.

i have put together a few python repl samples to show what is happening; these are also available on github[1].

the following code shows how we've been doing this, using the generic Client object. we authenticate using the named parameters.
from keystoneclient.v3 import client

trustor = client.Client(
    auth_url='http://192.168.122.2:5000/v3',
    username='demo',
    password='openstack',
    project_name='demo',
    user_domain_name='Default',
    project_domain_name='Default')

trustee = client.Client(
    auth_url='http://192.168.122.2:5000/v3',
    username='admin',
    password='openstack',
    project_name='admin',
    user_domain_name='Default',
    project_domain_name='Default')

trustor.trusts.create(
    trustor_user=trustor.user_id,
    trustee_user=trustee.user_id,
    project=trustor.project_id,
    role_names=['Member'],
    impersonation=True,
    expires_at=None)

<Trust deleted_at=None, expires_at=None, id=ac0d8f3b9e7443c2bdb0f855c2a3b9b5, impersonation=True, links={u'self': u'http://192.168.122.2:35357/v3/OS-TRUST/trusts/ac0d8f3b9e7443c2bdb0f855c2a3b9b5'}, project_id=416290f342e04a34acccafe79bb399c7, redelegation_count=0, remaining_uses=None, roles=[{u'id': u'433c86b705ef4656b90514ea5401469e', u'links': {u'self': u'http://192.168.122.2:35357/v3/roles/433c86b705ef4656b90514ea5401469e'}, u'name': u'Member'}], roles_links={u'self': u'http://192.168.122.2:35357/v3/OS-TRUST/trusts/ac0d8f3b9e7443c2bdb0f855c2a3b9b5/roles', u'next': None, u'previous': None}, trustee_user_id=cf45da134c76460e89b5837e07cc4b82, trustor_user_id=863b972dbbfd44b7bbde1b988e2b5098>

the trust is created with no issues. next, i try to create a Client using a Session and a Password auth plugin object.
from keystoneclient.auth.identity import v3
from keystoneclient import session

sess = session.Session()

trustor_auth = v3.Password(
    auth_url='http://192.168.122.2:5000/v3',
    username='demo',
    password='openstack',
    project_name='demo',
    user_domain_name='Default',
    project_domain_name='Default')

trustee_auth = v3.Password(
    auth_url='http://192.168.122.2:5000/v3',
    username='admin',
    password='openstack',
    project_name='admin',
    user_domain_name='Default',
    project_domain_name='Default')

trustor = client.Client(session=sess, auth=trustor_auth)
trustee = client.Client(session=sess, auth=trustee_auth)

trustor.trusts.create(
    trustor_user=trustor.user_id,
    trustee_user=trustee.user_id,
    project=trustor.project_id,
    role_names=['Member'],
    impersonation=True,
    expires_at=None)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/mike/.venvs/openstack/lib/python2.7/site-packages/keystoneclient/v3/contrib/trusts.py", line 76, in create
    **kwargs)
  File "/home/mike/.venvs/openstack/lib/python2.7/site-packages/keystoneclient/base.py", line 73, in func
    return f(*args, **new_kwargs)
  File "/home/mike/.venvs/openstack/lib/python2.7/site-packages/keystoneclient/base.py", line 333, in create
    self.key)
  File "/home/mike/.venvs/openstack/lib/python2.7/site-packages/keystoneclient/base.py", line 151, in _create
    return self._post(url, body, response_key, return_raw, **kwargs)
  File "/home/mike/.venvs/openstack/lib/python2.7/site-packages/keystoneclient/base.py", line 165, in _post
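Steve's diagnosis above is that both clients share one Session object, so whichever auth state the shared session ends up carrying wins; the suggested fix is to give each client its own Session (e.g. session.Session(auth=trustor_auth)). As a hedged illustration of that failure mode, independent of keystoneclient, here is a toy model in which a shared mutable session makes the trustor's requests go out with the trustee's credentials; all class and attribute names are invented:

```python
class Session:
    """Toy stand-in for a client session that holds one auth state."""
    def __init__(self, auth=None):
        self.auth = auth


class Client:
    """Toy client: constructing it with an explicit auth overwrites the
    shared session's auth state (the suspected failure mode)."""
    def __init__(self, session, auth=None):
        self.session = session
        if auth is not None:
            session.auth = auth  # the last client constructed wins

    def whoami(self):
        return self.session.auth


shared = Session()
trustor = Client(shared, auth='demo')
trustee = Client(shared, auth='admin')
# The trustor's requests now carry 'admin' credentials -- not the 'demo'
# user needed to create a trust on demo's behalf, hence a permission error.
assert trustor.whoami() == 'admin'

# The fix Steve suggests: one session per user.
trustor = Client(Session(auth='demo'))
trustee = Client(Session(auth='admin'))
assert trustor.whoami() == 'demo'
```

This is only a model of the hypothesized behavior, not of keystoneclient internals; the practical takeaway is simply to construct a separate Session per identity.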
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On Tue, Aug 4, 2015 at 7:47 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

On Tue, Aug 4, 2015 at 1:43 AM, Gorka Eguileor gegui...@redhat.com wrote:

On Tue, Aug 04, 2015 at 05:47:44AM +1000, Morgan Fainberg wrote:

On Aug 4, 2015, at 01:42, Fox, Kevin M kevin@pnnl.gov wrote:

I'm usually for abstraction layers, but they don't always pay off very well due to catering to the lowest common denominator. Let's clearly define the problem space first. IFF the problem space can be fully implemented using Tooz, then let's do that. Then the operator can choose. If Tooz can't and won't handle the problem space, then we're trying to fit a square peg in a round hole.

+1, and specifically around Tooz: it is narrow in comparison to the feature sets of some of the DLMs (since it has to mostly implement to the lowest common denominator, as abstraction layers do). Defining the space we are trying to target will let us make an informed decision on what we use.

Again with this?

Yes, I was reiterating that we should not talk about a specific choice but continue with the other discussion.

Tooz, ZooKeeper, Consul, etc. are all irrelevant to the rest of the conversation we are having. The specific technology used can be discussed in a cross-project spec, but I really would rather see a very opinionated choice.

That can again be delayed until a later point.

We already know what we want to get out of Tooz, where we want it, and for how long we'll be using it in each of those places.

My response was also before the rest of the conversation that occurred post Flavio's summary.

To answer those questions, all that's needed is to read this thread and the links referred to in some conversations.

I am fine with using a DLM. I see a significant benefit (without putting too fine a point on it, Keystone *will* benefit from a choice for a DLM to be available in OpenStack, and I like the idea). I was hoping to continue to (and we did) identify where we had DLM-like/DLM uses in OpenStack so we knew where to focus.

Hey all,

This thread is a mess. I'm going to put together facts on what projects are doing and why. I will present my findings at the session that I will be moderating in the cross-project track of the summit [1], if accepted. Spec may follow.

[1] - https://etherpad.openstack.org/p/mitaka-cross-project-session-planning

--
Mike Perez
[openstack-dev] [Stable][Nova] VMware NSXv Support
Hi,

In the Kilo cycle a Neutron driver was added to support the VMware NSXv plugin. This required patches in Nova to enable the plugin to work with Nova. These patches finally landed yesterday. I have backported them to stable/kilo, as the Neutron driver is unable to work without them in stable/kilo. The patches can be found at:

1. VNIC support - https://review.openstack.org/209372
2. Metadata support - https://review.openstack.org/209374

I hope that the stable team can take this into consideration.

Thanks in advance,
Gary
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 04/08/15 23:39 -0700, Mike Perez wrote:

On Tue, Aug 4, 2015 at 7:47 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

On Tue, Aug 4, 2015 at 1:43 AM, Gorka Eguileor gegui...@redhat.com wrote:

On Tue, Aug 04, 2015 at 05:47:44AM +1000, Morgan Fainberg wrote:

On Aug 4, 2015, at 01:42, Fox, Kevin M kevin@pnnl.gov wrote:

I'm usually for abstraction layers, but they don't always pay off very well due to catering to the lowest common denominator. Let's clearly define the problem space first. IFF the problem space can be fully implemented using Tooz, then let's do that. Then the operator can choose. If Tooz can't and won't handle the problem space, then we're trying to fit a square peg in a round hole.

+1, and specifically around Tooz: it is narrow in comparison to the feature sets of some of the DLMs (since it has to mostly implement to the lowest common denominator, as abstraction layers do). Defining the space we are trying to target will let us make an informed decision on what we use.

Again with this?

Yes, I was reiterating that we should not talk about a specific choice but continue with the other discussion.

Tooz, ZooKeeper, Consul, etc. are all irrelevant to the rest of the conversation we are having. The specific technology used can be discussed in a cross-project spec, but I really would rather see a very opinionated choice.

That can again be delayed until a later point.

We already know what we want to get out of Tooz, where we want it, and for how long we'll be using it in each of those places.

My response was also before the rest of the conversation that occurred post Flavio's summary.

To answer those questions, all that's needed is to read this thread and the links referred to in some conversations.

I am fine with using a DLM. I see a significant benefit (without putting too fine a point on it, Keystone *will* benefit from a choice for a DLM to be available in OpenStack, and I like the idea). I was hoping to continue to (and we did) identify where we had DLM-like/DLM uses in OpenStack so we knew where to focus.

Hey all,

This thread is a mess. I'm going to put together facts on what projects are doing and why. I will present my findings at the session that I will be moderating in the cross-project track of the summit [1], if accepted. Spec may follow.

[1] - https://etherpad.openstack.org/p/mitaka-cross-project-session-planning

--
Mike Perez

FWIW, there are 2 threads now. This one that you just replied to is supposed to be related to Cinder and not to the cross-project discussion. It's a mess, I agree! :(

That said, you may want to sync with Joshua since he's going to work on a cross-project spec as well (as he mentioned in the other thread). [0]

Thanks for taking the time,
Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071400.html

--
@flaper87
Flavio Percoco
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
Well, is it already decided that Pacemaker would be chosen to provide HA in OpenStack? There's been a talk "Pacemaker: the PID 1 of OpenStack", IIRC. I know that Pacemaker has been pushed aside in an earlier ML post, but IMO there's already *so much* been done for HA in Pacemaker that OpenStack should just use it. All HA nodes need to participate in a Pacemaker cluster - and if one node loses connection, all services will get stopped automatically (by Pacemaker) - or the node gets fenced. No need to invent some sloppy scripts to do exactly the tasks (badly!) that the Linux HA Stack has been providing for quite a few years.

So just a piece of information, but Yahoo (the company I work for, with VMs in the tens of thousands, bare metal in much more than that...) hasn't used Pacemaker, and in all honesty this is the first project (OpenStack) that I have heard of that needs such a solution. I feel that we really should be building our services better so that they can be A-A vs. having to depend on another piece of software to get around our 'sloppiness' (for lack of a better word). Nothing against Pacemaker personally... IMHO it just doesn't feel like we are doing this right if we need such a product in the first place.

Well, Pacemaker is *the* Linux HA Stack. So, before trying to achieve similar goals via self-written scripts (and having to rediscover all the gotchas involved), it would be much better to learn from previous experiences - even if they are not one's own.

Pacemaker has e.g. the concept of clones[1] - these define services that run multiple instances within a cluster. And behold! the instances get some Pacemaker-internal unique id[2], which can be used to do sharding. Yes, that still means that upon service or node crash the failed instance has to be started on some other node; but as that node will typically be up and running already, the startup time should be in the range of seconds.

We'd instantly get:
* a supervisor to start/stop/restart/fence/monitor the service(s)
* node/service failure detection
* only small changes needed in the services
* and all that in a tested software that's available in all distributions, and that already has its own testsuite...

If we decide that this solution won't fulfill all our expectations, fine - let's use something else. But I don't think it makes *any* sense to try to redo some (existing) High-Availability code in some quickly written scripts, just because it looks easy - there are quite a few traps for the unwary.

Ad 1: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-clone.html
Ad 2: OCF_RESKEY_CRM_meta_clone; that's not guaranteed to be an unbroken sequence, though.
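To make the clone concept concrete, here is a hedged sketch of what a cloned service might look like in crm shell syntax; the resource name and agent are invented for illustration, while clone-max/clone-node-max are the standard clone options described in [1] above:

```
# crm configure -- hypothetical 'my-api' OCF resource cloned across 3 nodes
primitive my-api ocf:heartbeat:my-api op monitor interval=10s
clone my-api-clone my-api meta clone-max=3 clone-node-max=1
```

Each running instance can read OCF_RESKEY_CRM_meta_clone from its environment to learn its clone number, which is what footnote 2 proposes using as a sharding key (with the stated caveat that the sequence is not guaranteed to be unbroken).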
Re: [openstack-dev] [Neutron] Common Base class for agents
On Wed, Aug 05, 2015 at 01:42:20AM EDT, Sukhdev Kapur wrote:

We discussed this in the ML2 sub-team meeting last week and felt the best approach is to implement this agent in a separate repo. There is already an ongoing effort/plan for a modular L2 agent. This agent would be a perfect candidate to take on that effort and implement it for the macvtap agent. Once done, this could be moved under the Neutron tent and other agents could be moved over to utilize this framework. Either option 1 or 2 could be utilized to implement this agent. Keeping it in a separate repo keeps it from impacting any other agents. Once all ready and working, others could be converted over. You get the best of both worlds - i.e. quick implementation of this agent and a framework for others to use - and plenty of time to bake the framework. Thoughts?

I'm eyeballing the changes[0] that macvtap makes to the Linux Bridge agent. I think that the LB agent can be refactored a bit to make the changes less intrusive and allow macvtap to have classes the main Linux Bridge agent can load - that will encapsulate the functionality. For example, I like the idea of renaming LinuxBridgeManager[1] to LinuxNetworkManager, and the subsequent MacvtapManager class - if we can rework a LinuxBridgeManager class that contains all the bridge-handling functions, as a subclass of LinuxNetworkManager, that would maybe pave the way for macvtap and the existing LB code to coexist nicely. Eventually it could develop into a good API for handling all the different ways to do L2 agent plugging.

[0]: https://github.com/scheuran/networking-macvtap/commit/36a068cf3d3d6930ab9330efb099cd95a84ca785
[1]: https://github.com/scheuran/networking-macvtap/commit/36a068cf3d3d6930ab9330efb099cd95a84ca785#diff-0445dd61b516b3357ecf54d6e4609e0fR76

--
Sean M. Collins
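A minimal, hedged sketch of the class split Sean describes (the class names come from his suggestion; the method names are invented, and the real agents carry far more state and logic):

```python
class LinuxNetworkManager:
    """Base class holding behavior common to all Linux-based L2 plugging."""

    def ensure_port_admin_state(self, device, state_up):
        # Shared device handling would live here; stubbed for the sketch.
        return (device, state_up)

    def plug_interface(self, network_id, segment, device):
        raise NotImplementedError


class LinuxBridgeManager(LinuxNetworkManager):
    """Would keep all the existing bridge-handling functions."""

    def plug_interface(self, network_id, segment, device):
        return "bridge plug of %s on %s" % (device, network_id)


class MacvtapManager(LinuxNetworkManager):
    """Macvtap-specific plugging, reusing the common base behavior."""

    def plug_interface(self, network_id, segment, device):
        return "macvtap plug of %s on %s" % (device, network_id)
```

The agent's main loop could then be written once against LinuxNetworkManager, with the concrete manager chosen by configuration - which is the coexistence Sean is aiming for.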
[openstack-dev] [nova] Adding Project_id to the display list when using nova server-group-list
Hi All,

Currently, when using the command nova server-group-list, server groups' project IDs are not displayed. As the admin user can use the option --all-projects to list server groups in all projects, it is really difficult to identify which server group belongs to which project. It would be better if we could also display the project ID.

I have submitted a patch for the above-mentioned problem: https://review.openstack.org/#/c/209018/

Since it is a really small fix (only one line added in the code), is it necessary to submit a spec for it?

Thanks,
BR,
Zheng
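As a hedged illustration of why the extra column matters with --all-projects, here is a small stdlib sketch that renders server-group rows with and without a project_id column; the field names mirror the API response, but the helper and the sample data are made up:

```python
def format_rows(groups, fields):
    """Render a crude text table for the given fields
    (toy version of the client's table-printing helper)."""
    header = " | ".join(fields)
    lines = [" | ".join(str(g.get(f, "")) for f in fields) for g in groups]
    return "\n".join([header] + lines)


groups = [
    {"id": "sg-1", "name": "ha-group", "policies": "anti-affinity",
     "project_id": "proj-a"},
    {"id": "sg-2", "name": "ha-group", "policies": "anti-affinity",
     "project_id": "proj-b"},
]

# Without project_id, two groups from different tenants can look identical.
print(format_rows(groups, ["id", "name", "policies"]))
# With the proposed column, ownership is obvious.
print(format_rows(groups, ["id", "name", "policies", "project_id"]))
```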
Re: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions
Hi Fawad,

If I understood your question correctly, here are some ways to do it:

[1] If you want to extend your client-side packages, you can include an entry point in setup.cfg: https://review.openstack.org/#/c/200065/1/setup.cfg

[2] To extend the python-neutronclient base packages, you can add to requirements.txt: https://review.openstack.org/#/c/166564/3/requirements.txt and https://review.openstack.org/#/c/204963/

Thanks,
Mohankumar.N

From: Fawad Khaliq [mailto:fa...@plumgrid.com]
Sent: Wednesday, August 5, 2015 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions

Thanks Amir. This will be a step forward in that direction.

Extending my question to Henry and Kyle, as they are driving decomposition phase II. Hello Henry/Kyle, with the devref [1] for Neutron sub-projects getting in and sub-project owners working towards completing phase II, the Neutron tree decomposition is excellently described; however, the python-neutronclient side of the changes is not completely clear and seems optional. So what do you guys suggest for client implementations of vendor-specific extensions? Should they be out of the python-neutronclient tree or in tree? I would prefer the former, as it's much cleaner and aligns better with phase II. If the former is the case, can we please enhance the devref to capture how the process will work for python-neutronclient.

Thanks,
Fawad Khaliq

On Tue, Aug 4, 2015 at 5:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote:

I started down the path of making a python-neutronclient extension, but got stuck on the lack of support for child resource extensions as described here: https://bugs.launchpad.net/python-neutronclient/+bug/1446428. I submitted a bugfix here: https://review.openstack.org/#/c/202597/.

Amir

From: Fawad Khaliq fa...@plumgrid.com
Sent: Tuesday, August 4, 2015 6:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions

Folks,

In the networking-plumgrid project, we intend to implement the client side for some of the vendor-specific extensions. Looking at the current client-side implementation for some vendors, I see the code is part of the python-neutronclient tree [1]. I do see this change [2] talking about a way to load extensions through entry points; however, I could not find any example extension module. Has anyone gone down the route of implementing out-of-tree extensions for the Neutron client, which extend the python-neutronclient shell and load at run/install time?

With decomposition phase II, it makes sense to keep the client side in the respective projects as well.

[1] https://github.com/openstack/python-neutronclient/tree/master/neutronclient/neutron/v2_0
[2] https://review.openstack.org/#/c/148318/16

Thanks,
Fawad Khaliq
Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
Hi,

I believe that a Barbican keystore for the signing keys was discussed earlier. I'm not sure that's the best idea, since Barbican relies on Keystone authN/authZ. That's why this mechanism should be considered as out of band to the Keystone/OS API, and is rather a devops task.

regards,
Adam

On Wed, Aug 5, 2015 at 8:11 AM, joehuang joehu...@huawei.com wrote:

Hi Lance,

May we store the keys in Barbican? Can the key rotation be done via Barbican? If we use Barbican as the repository, key distribution and rotation become easier in a multi-Keystone deployment scenario, and the database replication (sync or async) capability could be leveraged.

Best Regards,
Chaoyi Huang ( Joe Huang )

*From:* Lance Bragstad [mailto:lbrags...@gmail.com]
*Sent:* Tuesday, August 04, 2015 10:56 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov bbob...@mirantis.com wrote:

On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:

On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov bbob...@mirantis.com wrote:

On Monday 03 August 2015 21:05:00 David Stanek wrote:

On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov bbob...@mirantis.com wrote:

Also, come on, does http://paste.openstack.org/show/406674/ look overly complex? (It should be launched from the Fuel master node.)

I'm reading this on a small phone, so I may have it wrong, but the script appears to be broken. It will ssh to node-1 and rotate. In the simplest case this takes key 0 and moves it to the next highest key number. Then a new key 0 is generated. Later there is a loop that will again ssh into node-1 and run the rotation script. If there is a limit set on the number of keys and you are at that limit, a key will be deleted. This extra rotation on node-1 means that it's possible that it has a different set of keys than are on node-2 and node-3.

You are absolutely right. Node-1 should be excluded from the loop. ping also lacks -c 1. I am sure that other issues can be found. In my defense I want to say that I never ran the script and wrote it just to show how simple it should be. Thanks for the review though! I also hope that no one is going to use a script from a mailing list.

What's the issue with just a simple rsync of the directory?

None, I think. I just want to reuse the interface provided by keystone-manage.

You wanted to use the interface from keystone-manage to handle the actual promotion of the staged key, right? This is why there were two fernet_rotate commands issued?

Right. Here is the fixed version (please don't use it anyway): http://paste.openstack.org/show/406862/

Note, this doesn't take into account the initial key repository creation, does it? Here is a similar version that relies on rsync for the distribution after the initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua

--
Best regards,
Boris Bobrov

--
Adam Heczko
Security Engineer @ Mirantis Inc.
Re: [openstack-dev] [Neutron] Common Base class for agents
Sukhdev, last week I spent some time to figure out the current state of modular l2 agent design and discussion. I got the impression it's not in a good shape! So I personally don't think that it makes any sense to start with a modular l2 agent prototype and in the worst case throw it all away, as we missed a single detail. I would prefer to get folks with knowledge cross all l2 agents together and work on a design first, that everyone can agree upon. So my initial mail basically was to start a effort for easily sharing code. Maybe this will end up in a single agent having multiple drivers but that's not the primary goal (which is sharing code). I'm more with Carl, to start a code sharing effort and the macvtap agent effort in parallel, independent from each other. I must admit I have less insights into ovsagent. But I know that it diverged a lot from the other agents. Sean Collins is currently evaluating an approach to bring linuxbrige closer to ovs [1]. Maybe that's the way to got. Do internal refactorings to bring things close to each other and then see what might be possible to get a common agent or at least common code. But any other suggestions are highly welcome! [1] https://review.openstack.org/#/c/208666/ Andreas (IRC: scheuran) On Di, 2015-08-04 at 22:42 -0700, Sukhdev Kapur wrote: We discussed this in ML2 sub-team meeting last week and felt the best approach is to implement this agent in a separate repo. There is already an on-going effort/plan for modular L2 agent. This agent would be a perfect candidate to take on that effort and implement it for macvtap agent. Once done, this could be moved over under neutron tent and other agents could be moved over to utilize this framework. Either of option 1 or 2 could be utilized to implement this agent. Keeping it in a seperate repo keeps the it from impacting any other agents. Once all ready and working, others could be converted over. You get the best of both words - i.e. 
quick implementation of this agent and a framework for others to use - and plenty of time to bake the framework. Thoughts? Sukhdev On Mon, Aug 3, 2015 at 3:53 PM, Carl Baldwin c...@ecbaldwin.net wrote: I see this as two tasks: 1) a refactoring to share common code and 2) the addition of another agent following the pattern of the others. I'd prefer that the two tasks not be mixed in the same review because it makes it more difficult to review, as I think Kevin alluded to. For me, either could be done first. I'm sure some reviewers would prefer that #1 be done first to avoid the proliferation of duplicated code. However, IMO, it is not necessary to be so strict. It can take some time to review common code to get it right. I'm afraid that holding up #2 until merging #1 will either motivate us to merge #1 too hastily and do a poor job, or hold up #2 longer than it should be. If this were me, I would post both as independent reviews, knowing that when one of the two merges, the other will have to be rebased to take the other into account. Sometimes, having the refactor in flight can help to allay fears about code proliferation. Actually, given Kevin's mention of the modular agent stuff, maybe it isn't worth putting much effort into the refactor patch at all. My $0.02. Carl On Mon, Aug 3, 2015 at 9:46 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, I'm planning to add a new ml2 driver and agent to neutron supporting macvtap attachments [1]. Kyle already decided that this code should land in the neutron tree [2]. The normal approach till now was to copy an existing agent's code and modify accordingly, which led to a lot of duplicated code. So my question is, how to proceed with the macvtap agent? I basically see these two options: 1) Do it like in the past: duplicate the code that is needed for the macvtap agent (main loop, mechanism for detecting new/changed/deleted devices) and just go for it. 2) Extract a new superclass that holds some of the common code.
This would work for the linuxbridge agent and the sriov nic agent - they could inherit from the new superclass and get rid of some code, but they would still exist on their own. The OVS agent diverged too far to get it done easily. (More details below) My personal opinion: If I had the power to decide, I would go along with
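For illustration, the shared superclass that option 2 describes could look roughly like the sketch below. The class and method names here are hypothetical, not actual neutron code, but the scan/diff/process loop is the part each polling L2 agent currently reimplements on its own:

```python
import abc
import time


class CommonAgentBase(abc.ABC):
    """Illustrative shared main loop for polling-style L2 agents."""

    def __init__(self, polling_interval=2):
        self.polling_interval = polling_interval
        self.known_devices = set()

    @abc.abstractmethod
    def scan_devices(self):
        """Return the set of device ids currently present on the host."""

    @abc.abstractmethod
    def process_added(self, devices):
        """Wire up devices that newly appeared."""

    @abc.abstractmethod
    def process_removed(self, devices):
        """Tear down devices that disappeared."""

    def run_once(self):
        # The common part: diff the current device set against the last
        # scan and dispatch to the technology-specific hooks.
        current = self.scan_devices()
        added = current - self.known_devices
        removed = self.known_devices - current
        if added:
            self.process_added(added)
        if removed:
            self.process_removed(removed)
        self.known_devices = current
        return added, removed

    def daemon_loop(self):
        while True:
            self.run_once()
            time.sleep(self.polling_interval)


class MacvtapAgent(CommonAgentBase):
    """A macvtap agent would only supply the device-specific parts."""

    def __init__(self, devices):
        super().__init__()
        self.host_devices = devices

    def scan_devices(self):
        return set(self.host_devices)

    def process_added(self, devices):
        pass  # plug macvtap interfaces, report device up to the plugin, ...

    def process_removed(self, devices):
        pass  # clean up, report device down, ...
```

A linuxbridge or sriov subclass would differ only in the three hooks, which is the code-sharing point being argued for above.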
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On Tue, Aug 04, 2015 at 08:30:17AM -0700, Joshua Harlow wrote: Duncan Thomas wrote: On 3 August 2015 at 20:53, Clint Byrum cl...@fewbar.com mailto:cl...@fewbar.com wrote: Excerpts from Devananda van der Veen's message of 2015-08-03 08:53:21 -0700: Also on a side note, I think Cinder's need for this is really subtle, and one could just accept that sometimes it's going to break when it does two things to one resource from two hosts. The error rate there might even be lower than the false-error rate that would be caused by a twitchy DLM with timeouts a little low. So there's a core cinder discussion that keeps losing to the shiny DLM discussion, and I'd like to see it played out fully: Could Cinder just not do anything, and let the few drivers that react _really_ badly, implement their own concurrency controls? So the problem here is data corruption. Lots of our races can cause data corruption. Not 'my instance didn't come up', not 'my network is screwed and I need to tear everything down and do it again', but 'My 1tb of customer database is now missing the second half'. This means that we *really* need some confidence and understanding in whatever we do. The idea of locks timing out and being stolen without fencing is frankly scary and begging for data corruption unless we're very careful. I'd rather use a persistent lock (e.g. a db record change) and manual recovery than a lock timeout that might cause corruption. So perhaps start off using persistent locks, gain confidence that we have all the right fixes in to prevent that data corruption, and then slowly remove persistent locks as needed. 
Sounds like an iterative solution to me, and one that will build confidence (hopefully that confidence building can be automated via a chaos-monkey like test-suite) as we go :) That was my suggestion as well. It is not that we cannot do without locks; it's that we have confidence in them and in the current code that uses them. So we can start with an initial solution with distributed locks, confirm that the rest of the code is running properly (as distributed locks are not the only change needed) and then, on a second iteration, proceed to remove locks in the Volume Manager, and lastly, on the next iteration, remove them in the drivers wherever possible; for those places where it isn't possible, maybe look for alternative solutions. This way we can get a solution faster and avoid potential delays that may arise if we try to do everything at once. But I can see the point of those who ask why put the ops through the DLM configuration process if we are probably going to remove the DLM in a couple of releases. But since we don't know how difficult it will get to remove all the other locks, I think that a bird in the hand is worth two in the bush and we should still go with the distributed locks and at least make sure we have a solution. Cheers, Gorka. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
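The persistent lock suggested above (a db record change rather than a timeout-based DLM lock) can be sketched with plain SQL. This is an illustrative toy using sqlite, not Cinder code, but the atomic UPDATE ... WHERE pattern carries over to any backing database; note that a stale lock stays visible for manual recovery instead of silently timing out:

```python
import sqlite3
import uuid


def make_lock_table(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS volume_locks "
                 "(volume_id TEXT PRIMARY KEY, owner TEXT)")
    conn.commit()


def acquire(conn, volume_id, owner):
    """Atomically claim the row; only one contender's UPDATE matches."""
    conn.execute("INSERT OR IGNORE INTO volume_locks VALUES (?, NULL)",
                 (volume_id,))
    cur = conn.execute(
        "UPDATE volume_locks SET owner = ? "
        "WHERE volume_id = ? AND owner IS NULL",
        (owner, volume_id))
    conn.commit()
    return cur.rowcount == 1


def release(conn, volume_id, owner):
    """Only the current owner may clear the lock; anything else is a
    no-op, so a crashed owner leaves an inspectable stale record."""
    cur = conn.execute(
        "UPDATE volume_locks SET owner = NULL "
        "WHERE volume_id = ? AND owner = ?",
        (volume_id, owner))
    conn.commit()
    return cur.rowcount == 1
```

The trade-off discussed in the thread is exactly this: the lock never disappears on its own, which prevents two hosts from touching one volume, at the cost of requiring operator intervention after a crash.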
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On Tue, Aug 04, 2015 at 08:40:13AM -0700, Joshua Harlow wrote: Clint Byrum wrote: Excerpts from Devananda van der Veen's message of 2015-08-03 08:53:21 -0700: On Mon, Aug 3, 2015 at 8:41 AM Joshua Harlow harlo...@outlook.com wrote: Clint Byrum wrote: Excerpts from Gorka Eguileor's message of 2015-08-02 15:49:46 -0700: On Fri, Jul 31, 2015 at 01:47:22AM -0700, Mike Perez wrote: On Mon, Jul 27, 2015 at 12:35 PM, Gorka Eguileor gegui...@redhat.com wrote: I know we've all been looking at the HA Active-Active problem in Cinder and trying our best to figure out possible solutions to the different issues, and since the current plan is going to take a while (because it requires that we first finish fixing Cinder-Nova interactions), I've been looking at alternatives that allow Active-Active configurations without needing to wait for those changes to take effect. And I think I have found a possible solution, but since the HA A-A problem has a lot of moving parts I ended up upgrading my initial Etherpad notes to a post [1]. Even if we decide that this is not the way to go, which we'll probably do, I still think that the post brings a little clarity on all the moving parts of the problem, even some that are not reflected on our Etherpad [2], and it can help us not miss anything when deciding on a different solution. Based on IRC conversations in the Cinder room and hearing people's opinions in the spec reviews, I'm not convinced the complexity that a distributed lock manager adds to Cinder, for both developers and the operators who ultimately are going to have to learn to maintain things like ZooKeeper as a result, is worth it. **Key point**: We're not scaling Cinder itself, it's about scaling to avoid build-up of operations from the storage backend solutions themselves. Whatever people think the ZooKeeper scaling level is going to accomplish is not even a question. We don't need it, because Cinder isn't as complex as people are making it.
I'd like to think the Cinder team is great at recognizing potential cross-project initiatives. Look at what Thang Pham has done with Nova's version object solution. He made a generic solution into an Oslo solution for all, and Cinder is using it. That was awesome, and people really appreciated that there was a focus on other projects getting better, not just Cinder. Have people considered Ironic's hash ring solution? The project Akanda is now adopting it [1], and I think it might have potential. I'd appreciate it if interested parties could have this evaluated before the Cinder midcycle sprint next week, to be ready for discussion. [1] - https://review.openstack.org/#/c/195366/ -- Mike Perez Hi all, Since my original proposal was more complex than it needed to be, I have a new proposal for a simpler solution, and I describe how we can do it with or without a DLM, since we don't seem to reach an agreement on that. The solution description was more rushed than the previous one, so I may have missed some things. http://gorka.eguileor.com/simpler-road-to-cinder-active-active/ I like the idea of keeping it simpler Gorka. :) Note that this is punting back to using the database for coordination, which is what most projects have done thus far, and has a number of advantages and disadvantages. Note that the stale-lock problem was solved in an interesting way in Heat: each engine process gets an instance-of-engine uuid that it adds to the topic queues it listens to. If it holds a lock, it records this UUID in the owner field. When somebody wants to steal the lock (due to timeout) they send to this queue, and if there's no response, the lock is stolen. Anyway, I think what might make more sense than copying that directly is implementing "use the database and oslo.messaging to build a DLM" as a tooz backend. This way, as the negative aspects of that approach impact an operator, they can pick a tooz driver that satisfies their needs, or even write one for their specific backend needs.
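For readers unfamiliar with the hash ring approach mentioned above: each host claims many points on a ring, a resource maps to the next point clockwise, and removing a host only remaps the resources that host owned. This is a toy sketch of the idea, not Ironic's actual implementation:

```python
import bisect
import hashlib


class HashRing:
    """Toy consistent hash ring mapping keys (e.g. volumes) to hosts."""

    def __init__(self, hosts, replicas=100):
        # Each host claims `replicas` pseudo-random positions on the ring.
        self._ring = sorted(
            (self._hash("%s-%d" % (host, i)), host)
            for host in hosts for i in range(replicas))
        self._positions = [pos for pos, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int.from_bytes(
            hashlib.md5(key.encode()).digest()[:8], "big")

    def get_host(self, key):
        # First ring position clockwise from the key's hash (with wraparound).
        idx = bisect.bisect(self._positions, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

The property that makes this attractive for the A-A debate is that when a host dies, only the keys it owned move to a new owner; every other key keeps its assignment, so no global rebalancing (and no lock manager) is involved in steady state.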
Oh jeez, using 'the database and oslo.messaging to build a DLM' scares me :-/ There are already N+1 DLM-like systems out there (and more every day if you consider the list at https://raftconsensus.github.io/#implementations), so I'd really rather use one that is proven to work by academia vs. make a frankenstein one. Joshua, As has been said on this thread, some projects (e.g., Ironic) are already using a consistent hash ring backed by a database to meet the requirements they have. Could those requirements also be met with some other tech? Yes. Would that provide additional functionality or some other benefits? Maybe. But that's not what this thread was about. Distributed hash rings are a well understood technique, as are databases. There's no need to be insulting by calling not-your-favorite-technology-of-the-day a frankenstein one. The topic here, which I've been eagerly following, is whether or not Cinder needs to
Re: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions
Thanks Mohankumar. Option #1 is exactly what I was looking for and that should work. Thanks a lot! Fawad Khaliq On Wed, Aug 5, 2015 at 12:36 PM, Mohankumar N mohankuma...@huawei.com wrote: Hi Fawad, If I understood your question correctly, here are some ways to do it: [1] If you want to extend your client-side packages, you can include an entry point in "setup.cfg": https://review.openstack.org/#/c/200065/1/setup.cfg [2] To extend the python-neutronclient base packages, you can add it in "requirements.txt": https://review.openstack.org/#/c/166564/3/requirements.txt https://review.openstack.org/#/c/204963/ Thanks, Mohankumar.N *From:* Fawad Khaliq [mailto:fa...@plumgrid.com] *Sent:* Wednesday, August 5, 2015 2:23 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [python-neutronclient][neutron] sub-project client extensions Thanks Amir. This will be a step forward in that direction. Extending my question to Henry and Kyle, as they are driving decomposition phase II. Hello Henry/Kyle, With the devref [1] for Neutron sub-projects getting in and sub-project owners working towards completing phase II, the Neutron tree decomposition is excellently described; however, the python-neutronclient side of the changes is not completely clear and seems optional. So what do you guys suggest for the client implementation of vendor-specific extensions? Should they be out of the python-neutronclient tree or in tree? I would prefer the former, as it's much cleaner and aligns better with phase II. If the former is the case, can we please enhance the devref to capture how the process will work for python-neutronclient. Thanks, Fawad Khaliq On Tue, Aug 4, 2015 at 5:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote: I started down the path of making a python-neutronclient extension, but got stuck on the lack of support for child resource extensions, as described here: https://bugs.launchpad.net/python-neutronclient/+bug/1446428.
I submitted a bugfix here: https://review.openstack.org/#/c/202597/. Amir -- *From:* Fawad Khaliq fa...@plumgrid.com *Sent:* Tuesday, August 4, 2015 6:10 AM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [python-neutronclient][neutron] sub-project client extensions Folks, In the networking-plumgrid project, we intend to implement the client side for some of the vendor-specific extensions. Looking at the current client-side implementation for some vendors, I see the code is part of the python-neutronclient tree [1]. I do see this change [2] talking about a way to load extensions through entry points; however, I could not find any example extension module. Has anyone gone the route of implementing out-of-tree extensions for the Neutron client, which extend the python-neutronclient shell and load at run/install time? With decomposition phase II, it makes sense to keep the client side in the respective projects as well. [1] https://github.com/openstack/python-neutronclient/tree/master/neutronclient/neutron/v2_0 [2] https://review.openstack.org/#/c/148318/16 Thanks, Fawad Khaliq
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
Hi Jamie On 05/08/2015 00:46, Jamie Lennox wrote: - Original Message - From: Steve Martinelli steve...@ca.ibm.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Wednesday, August 5, 2015 3:59:34 AM Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login Right, but that API is/should be protected. If we want to list IdPs *before* authenticating a user, we either need: 1) a new API for listing public IdPs or 2) a new policy that doesn't protect that API. Thanks, Is there a real requirement here for this to be a dynamic listing? Yes. As the size of federations increases, dynamic listing is the only sensible approach; otherwise you will be reconfiguring Horizon every day. In the worldwide academic community (eduGAIN) we already have hundreds of IdPs. as opposed to something that can be edited from the horizon local_settings? There are obvious use cases for both situations, where you want this to be dynamic or you very carefully want to protect which IdPs are available to log in with, and from that perspective it would be a very unusual API for keystone to have. We discussed this many months back, and two approaches were proposed then: a) alter the policy that currently controls the API that lists IdPs to allow 'public access' to be a policy option. The current policy engine does not support 'public access', only 'anyone who has been authenticated', and this is too restrictive for federated login, where the user has not yet been authenticated. In this way different sites can configure their policy to give public access to IdPs or not. b) edit the list of IdPs to say whether they are publicly accessible or not, and create a new publicly accessible API that lists only the public IdPs. Horizon can then be configured to call either the public list of IdPs or all IdPs, since Horizon is an authenticated user.
I thought that option b) had been chosen as the preferred approach, but I don't know whether it was implemented or not. If it has been, then I don't see what extra functionality is needed. Regards David My understanding of the current websso design, where we always logged in via /v3/OS-FEDERATION/auth/websso/{protocol}, was so that you would run a discovery page on that address that allowed you to customize which IdPs you exposed outside of keystone. Personally I don't like this, which is what I wrote this spec [1] for. However, my intention there would have been to manually specify in the local_settings what IdPs were available and reuse the current horizon WebSSO drop down box. Jamie [1] https://review.openstack.org/#/c/199339/ Steve Martinelli OpenStack Keystone Core Lance Bragstad ---2015/08/04 01:49:29 PM---On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, From: Lance Bragstad lbrags...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 2015/08/04 01:49 PM Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, This is a cool looking UI. I've made a minor comment on it in InVision. I'm curious if this is an implementable idea - does keystone support large numbers of 3rd party IdPs? Is there an API to retrieve the list of IdPs, or does this require carefully coordinated configuration between Horizon and Keystone so they both recognize the same list of IdPs?
There is an API call for getting a list of Identity Providers from Keystone: http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers Doug Fish David Chadwick d.w.chadw...@kent.ac.uk wrote on 08/01/2015 06:01:48 AM: From: David Chadwick d.w.chadw...@kent.ac.uk To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 08/01/2015 06:05 AM Subject: [openstack-dev] [Keystone] [Horizon] Federated Login Hi Everyone I have a student building a GUI for federated login with Horizon. The interface supports both a drop down list of configured IdPs, and also Type Ahead for massive federations with hundreds of IdPs. Screenshots are visible in InVision here https://invis.io/HQ3QN2123 All comments on the design are appreciated. You can make them directly to the screens via InVision Regards David
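The list-identity-providers call referenced above is a plain GET on /v3/OS-FEDERATION/identity_providers. A minimal sketch of building the request and parsing the result follows; the helper names are made up here, and the token parameter reflects exactly what this thread debates (today's default policy requires an authenticated caller):

```python
import json
import urllib.request


def list_idps_request(keystone_url, token=None):
    """Build the List Identity Providers request. The token is optional
    in this sketch only because the thread debates whether the call
    should be public; keystone's default policy currently requires one."""
    url = keystone_url.rstrip("/") + "/v3/OS-FEDERATION/identity_providers"
    headers = {"Accept": "application/json"}
    if token:
        headers["X-Auth-Token"] = token
    return urllib.request.Request(url, headers=headers)


def idp_ids(response_body):
    """Extract the IdP ids from the JSON body the API returns."""
    return [idp["id"]
            for idp in json.loads(response_body)["identity_providers"]]
```

Horizon (or a discovery page) would feed the resulting id list into its drop-down or type-ahead widget; the open question in the thread is only who is allowed to make this call, not its shape.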
Re: [openstack-dev] [Stable][Nova] VMware NSXv Support
Hi Gary, While I do understand the interest in getting this functionality included, I really fail to see how it would comply with the Stable Branch Policy: https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy Obviously the last say is with stable-maint-core, but normally new features are really a no-no on stable branches. My concerns are more on the metadata side of your changes. Even though the refactoring is fairly clean, it is a major part of the metadata handler. It also changes the API (in the case of X-Metadata-Provider being present), which tends to be sacred on stable branches. The changes here do not actually fix any bug but just implement new functionality that missed kilo, not even slightly but by months. Thus my -1 for merging these. - Erno From: Gary Kotton [mailto:gkot...@vmware.com] Sent: Wednesday, August 05, 2015 8:03 AM To: OpenStack List Subject: [openstack-dev] [Stable][Nova] VMware NSXv Support Hi, In the Kilo cycle a Neutron driver was added for supporting the VMware NSXv plugin. This required patches in Nova to enable the plugin to work with Nova. These patches finally landed yesterday. I have backported them to stable/kilo, as the Neutron driver is unable to work without these in stable/kilo. The patches can be found at: 1. VNIC support - https://review.openstack.org/209372 2. Metadata support - https://review.openstack.org/209374 I hope that the stable team can take this into consideration. Thanks in advance Gary
Re: [openstack-dev] [CI]How to set proxy for nodepool
Hi Ramy, Thanks for your patience. I have tried your suggestion, but it did not work for me. According to the log, this element has already run in the chroot before the pip commands are executed. So, in theory, the pip commands would run behind this proxy, but the connection errors are still raised. It's weird. Then I tried to hard-code the proxy with the --proxy option in the pip command, and it works. Anyway, this is merely a temporary solution for this issue until I figure it out. But, after that, I got a new error:
nodepool.image.build.dpc: + sudo env 'PATH=/opt/git/subunit2sql-env/bin:/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' /opt/git/subunit2sql-env/bin/python2 /opt/nodepool-scripts/prepare_tempest_testrepository.py /opt/git/openstack/tempest
nodepool.image.build.dpc: sudo: unable to resolve host fnst01
nodepool.image.build.dpc: No handlers could be found for logger oslo_db.sqlalchemy.session
nodepool.image.build.dpc: Traceback (most recent call last):
nodepool.image.build.dpc: File /opt/nodepool-scripts/prepare_tempest_testrepository.py, line 50, in module
nodepool.image.build.dpc: main()
nodepool.image.build.dpc: File /opt/nodepool-scripts/prepare_tempest_testrepository.py, line 39, in main
nodepool.image.build.dpc: session = api.get_session()
nodepool.image.build.dpc: File /opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py, line 47, in get_session
nodepool.image.build.dpc: facade = _create_facade_lazily()
nodepool.image.build.dpc: File /opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py, line 37, in _create_facade_lazily
nodepool.image.build.dpc: **dict(CONF.database.iteritems()))
nodepool.image.build.dpc: File /opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py, line 822, in __init__
nodepool.image.build.dpc: **engine_kwargs)
nodepool.image.build.dpc: File
/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py, line 417, in create_engine
nodepool.image.build.dpc: test_conn = _test_connection(engine, max_retries, retry_interval)
nodepool.image.build.dpc: File /opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py, line 596, in _test_connection
nodepool.image.build.dpc: six.reraise(type(de_ref), de_ref)
nodepool.image.build.dpc: File string, line 2, in reraise
nodepool.image.build.dpc: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, Can't connect to MySQL server on 'logstash.openstack.org' ([Errno -5] No address associated with hostname))
I think it is also a proxy problem, about remote access to the subunit2sql database. And it should work with simpleproxy, I think. But I couldn't find how to use simpleproxy to forward data for the subunit2sql db. Could you please give me more hints? Thanks again. Xiexs From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Wednesday, August 05, 2015 6:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool Hi Xiexs, You might need to configure pip to use your proxy. I added my own element here: cache-devstack/install.d/98-setup-pip Basically:
set -eux
mkdir -p /root/.pip/
cat <<EOF > /root/.pip/pip.conf
[global]
proxy = your proxy
EOF
cp -f /root/.pip/pip.conf /etc/
Ramy From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com] Sent: Tuesday, August 04, 2015 12:05 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool Hi Ramy, Thanks for your help.
I have already confirmed the proxy setting again, and it works fine (no matter whether the NODEPOOL_ variables are declared or not): 1) not only on the host machine on which DIB runs, 2) but also in the first half of DIB (before DIB enters the chroot). 3) Furthermore, I ran the commands manually in the host env, and they also work fine. $ sudo -H virtualenv /usr/zuul-swift-logs-env $ sudo -H /usr/zuul-swift-logs-env/bin/pip install python-magic argparse requests glob2 So, if I understood correctly, it seems obvious that the proxy setting is missing when DIB goes into the chroot env. Thus, when DIB attempts to connect to the internet to download/install/update some materials to prepare the image within the chroot env, the error is encountered. In this case, DIB will run "pip install" in the chroot env zuul-swift-logs-env to install python-magic, argparse and so forth. Actually, all NODEPOOL_ variables were already declared by install_master.sh previously, and the proxy setting was also derived from the host machine's proxy setting. Xiexs From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 04, 2015 12:54 PM To: OpenStack
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
On 04/08/2015 18:59, Steve Martinelli wrote: Right, but that API is/should be protected. If we want to list IdPs *before* authenticating a user, we either need: 1) a new API for listing public IdPs or 2) a new policy that doesn't protect that API. Hi Steve Yes, this was my understanding of the discussion that took place many months ago. I had assumed (wrongly) that something had been done about it, but I guess from your message that we are no further forward on this. Actually, 2) above might be better reworded as: a new policy/engine that allows public access to be a bona fide policy rule. regards David Thanks, Steve Martinelli OpenStack Keystone Core Lance Bragstad ---2015/08/04 01:49:29 PM---On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, From: Lance Bragstad lbrags...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 2015/08/04 01:49 PM Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, This is a cool looking UI. I've made a minor comment on it in InVision. I'm curious if this is an implementable idea - does keystone support large numbers of 3rd party IdPs? Is there an API to retrieve the list of IdPs, or does this require carefully coordinated configuration between Horizon and Keystone so they both recognize the same list of IdPs?
There is an API call for getting a list of Identity Providers from Keystone: http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers Doug Fish David Chadwick d.w.chadw...@kent.ac.uk wrote on 08/01/2015 06:01:48 AM: From: David Chadwick d.w.chadw...@kent.ac.uk To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 08/01/2015 06:05 AM Subject: [openstack-dev] [Keystone] [Horizon] Federated Login Hi Everyone I have a student building a GUI for federated login with Horizon. The interface supports both a drop down list of configured IdPs, and also Type Ahead for massive federations with hundreds of IdPs. Screenshots are visible in InVision here https://invis.io/HQ3QN2123 All comments on the design are appreciated. You can make them directly to the screens via InVision Regards David
Re: [openstack-dev] [Stable][Nova] VMware NSXv Support
Hi, Thanks for the comments. I agree with you that this does not comply with the policy. I wanted to raise the issue as whoever is going to use the Neutron driver with stable/kilo will need these patches. I will update the plugin wiki indicating that these two patches are required to get it working for stable/kilo. Thanks Gary From: Kuvaja, Erno kuv...@hp.commailto:kuv...@hp.com Reply-To: OpenStack List openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Date: Wednesday, August 5, 2015 at 1:37 PM To: OpenStack List openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Stable][Nova] VMware NSXv Support Hi Gary, While I do understand the interest to get this functionality included, I really fail to see how it would comply with the Stable Branch Policy: https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy Obviously the last say is on stable-maint-core, but normally new features are really no-no to stable branches. My concerns are more on the metadata side of your changes. Even the refactoring is fairly clean it is major part of the metadata handler. It also changes the API (In the case of X-Metadata-Provider being present) which tends to be sacred on stable branches. The changes here does not actually fix any bug but just implements new functionality that missed kilo not even slightly but by months. Thus my -1 for merging these. - Erno From: Gary Kotton [mailto:gkot...@vmware.com] Sent: Wednesday, August 05, 2015 8:03 AM To: OpenStack List Subject: [openstack-dev] [Stable][Nova] VMware NSXv Support Hi, In the Kilo cycle a Neutron driver was added for supporting the Vmware NSXv plugin. This required patches in Nova to enable the plugin to work with Nova. These patches finally landed yesterday. I have back ported them to stable/kilo as the Neutron driver is unable to work without these in stable/kilo. The patches can be found at: 1. VNIC support - https://review.openstack.org/209372 2. 
Metadata support - https://review.openstack.org/209374 I hope that the stable team can take this into consideration. Thanks in advance Gary
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
On 04/08/2015 17:51, Lin Hua Cheng wrote: Hi David, There was a similar effort in Kilo to design the flow in the login page for federated login [1]. The WebSSO feature [2] was implemented in Kilo; it allows the user to perform federated login by selecting an IdP protocol. This has been tested with Kerberos and SAML2. This is not a very user-friendly thing to do. Users typically have no idea what a federation protocol is, and won't know which one to select. They will, however, know which organisation (IdP) they are associated with and can use for federated login. We have been following the best practice guide available here: https://discovery.refeds.org/guide/ There is a proposal to extend that feature to show a listing per IdP/Protocol instead [3], because listing only by protocol is fairly limited. Our intention is to list by organisation/IdP only and not to mention the protocol to the user, since it is meaningless to him. Horizon can work the protocol out itself and use the correct one, without burdening the user with extra mental effort that will only confuse, frustrate and distress. I think the Type Ahead can fit in nicely when we implement the support for WebSSO by IdP/Protocol. Agreed; type ahead was introduced after many years of simple listing, since once a federation grew to any appreciable size, the listing became unusable. regards David thanks, Lin [1] https://openstack.invisionapp.com/d/main#/projects/2784587 [2] http://docs.openstack.org/developer/keystone/extensions/websso.html [3] https://review.openstack.org/#/c/199339/ On Sat, Aug 1, 2015 at 4:01 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote: Hi Everyone I have a student building a GUI for federated login with Horizon. The interface supports both a drop down list of configured IdPs, and also Type Ahead for massive federations with hundreds of IdPs.
Screenshots are visible in InVision here https://invis.io/HQ3QN2123 All comments on the design are appreciated. You can make them directly to the screens via InVision Regards David __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [mistral] BPMN support
Thanks Dmitri! From: Dmitri Zimine Reply-To: OpenStack Development Mailing List (not for usage questions) Date: Tuesday, August 4, 2015 at 22:14 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [mistral] BPMN support

Hi Noy, The short answer is no, BPMN is currently not supported, and no, we haven't heard requests to support it yet. It is architecturally feasible but would be a substantial effort, and it is not on the current roadmap.

The longer answer is: both the Mistral DSL and BPMN come from the common root of the workflow body of knowledge, the Workflow Reference Model. However, the specific choices of 1) supported workflow patterns, 2) syntax, 3) visual representation, and 4) target users are all different, given the differences in domain. Business processes are quite complex, and business users are much less technical; this forced more complexity into the workflow and the workflow definition language, to express more visually and abstract it away from "the code". Industry experience with applying workflow to IT automation has shown that only a few patterns are commonly used in the field, and technical IT users don't hesitate to deal with code (they rather hesitate to deal with graphical tools, or any tools that constrain them). The Mistral DSL tries to strike a balance between using workflow as a powerful abstraction and keeping it simple, with the minimal set of patterns needed to support typical IT operations.

Mistral as a workflow service is architecturally capable of supporting another workflow language; it's a matter of writing a workflow handler that supports BPMN syntax and implements the underlying workflow logic. Practically, however, BPMN may be too far away: it's XML, not YAML, and it refers to data in a different way, so an implementation may reveal more places where adjustments are required. Hope this helps. Cheers, Dmitri.
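For readers who have not seen the DSL being contrasted here, a minimal Mistral v2 workflow definition looks roughly like the following (an illustrative sketch; `std.echo` is one of Mistral's standard actions, and the workflow/task names are made up):

```yaml
---
version: '2.0'

hello_flow:
  type: direct
  tasks:
    say_hello:
      action: std.echo output="Hello"
      on-success:
        - say_bye
    say_bye:
      action: std.echo output="Bye"
```

An equivalent BPMN definition would be an XML document with explicit sequence-flow elements, which gives a feel for the syntactic gap Dmitri describes.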
There are no plans. On Jul 23, 2015, at 1:05 AM, ITZIKOWITZ, Noy (Noy) noy.itzikow...@alcatel-lucent.com wrote: Hi, We had a question from our OSS team about the level of support for BPMN in Mistral. Is there any plan to include that support? Did you hear from other people in the community about the need for that? Thanks, Noy __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova][FFE] Feature Freeze Exception Request
Hello, I would like to request a feature freeze exception for the implementation of the Nested Quota Driver for Nova, which does quota management for nested projects.

Blueprint: https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
Addressed by: https://review.openstack.org/#/c/160605/

The patches, in order of dependency, are as follows:
1. Create column allocated in Quota table: https://review.openstack.org/#/c/151327/
2. Set default values for sub-projects and users: https://review.openstack.org/#/c/151677/
3. Modification of settable quotas of nested projects: https://review.openstack.org/#/c/200342
4. Finding parent_id and immediate child list: https://review.openstack.org/#/c/200941/
5. Adding v2 and v3 support: https://review.openstack.org/#/c/149828/

Keystone already supports nested projects. Without the Nested Quota Driver, Nova will not be able to support nested projects, even if they exist in Keystone. The Nested Quota Driver is a superset of DbQuotaDriver and supports one to N levels of projects; it can handle nested as well as non-nested projects. The implementation of the Nested Quota Driver is complete and the code is under review. It is intended to become the default quota driver of Nova. To avoid any potential risks, it can be deployed as an optional driver in the current release and made the default in a subsequent release. Kindly grant a freeze exception for the change. Nested projects are very important for large organizations like CERN, who are waiting for this code to get merged in Liberty. Organizations like NASA, Yahoo, and the Federal University of Campina Grande (Brazil) have also expressed keen interest in this feature.
best regards sajeesh __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
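The hierarchical accounting such a driver introduces can be sketched in a few lines (a toy model under assumed names; the real driver tracks this in the quota table's new allocated column, and enforces it in SQL):

```python
def free_quota(limit, in_use, allocated):
    """Headroom left in a project: its hard limit minus its own usage
    and whatever it has already delegated to child projects."""
    return limit - in_use - allocated


def can_set_child_limit(parent, new_limit, old_limit=0):
    """A child's limit may only grow by what the parent has free;
    the delta would be charged to the parent's allocated counter."""
    return (new_limit - old_limit) <= free_quota(
        parent['limit'], parent['in_use'], parent['allocated'])
```

For example, a parent with a limit of 10, 2 in use, and 5 already allocated to children can grant at most 3 more to any child.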
[openstack-dev] [third-party-ci]Issue with running noop-check-communication
Hi, We're in the process of rebuilding the Jenkins CI for Cinder and I'm stuck at testing the noop job. I've set things up using the latest changes from os-ext-testing and os-ext-testing-data using project-config, jjb and dib, and I have a Jenkins running which has the 2 jobs defined and 3 slaves attached (from a devstack cloud provider). Now when I make changes to sandbox (to test it) I see jobs being triggered, but they hang in Jenkins: pending—There are no nodes with the label ‘’ http://10.100.128.3:8080/label/ I checked the job config and it shows Restrict where this project can be run / Label expression: master / Slaves in label: 1, and I can trigger it manually, but it shows Waiting for next available executor on master, although there are 4 executors in Idle. Any ideas? Thanks, -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.ma...@cloudfounders.com *CloudFounders, The Private Cloud Software Company* __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [puppet][keystone] To always use or not use domain name?
While working on a trust provider for the Keystone (v3) puppet module, a question about using domain names came up. Should we allow names without an explicit domain name in the resource call? I have a trust use case involving a trustor user, a trustee user and a project. For each user/project the domain can be explicit (mandatory): trustor_name::domain_name or implicit (optional): trustor_name[::domain_name] If a domain isn't specified, the domain name can be assumed (intuited) from either the default domain or from the domain of the corresponding object, if it is unique among all domains. Although allowing the domain to be omitted might seem easier at first, I believe it could lead to confusion and errors, the latter being harder for the user to detect. Therefore it might be better to always pass the domain information. I believe using the full domain name approach is better. But it's difficult to tell, because puppet-keystone and puppet-openstacklib now rely on python-openstackclient (OSC) to interface with Keystone. The fact that we can use OSC defaults (OS_DEFAULT_DOMAIN or equivalent) to set the default domain doesn't necessarily make it the best approach; for example, the hard-coded value [1] makes it flaky. [1] https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40 To help determine the approach to use, any feedback will be appreciated. Thanks, Gilles __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
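To make the two conventions concrete, here is a minimal sketch of how an implicit domain might be intuited (illustrative only; the helper name and the 'Default' fallback are assumptions, not the actual puppet-keystone or OSC behaviour):

```python
def split_title(title, default_domain='Default'):
    """Split a resource title of the form 'name::domain'.

    With the explicit convention the '::domain' suffix is mandatory;
    with the implicit one a missing suffix falls back to a default,
    which is exactly where the ambiguity discussed above creeps in.
    """
    name, sep, domain = title.rpartition('::')
    if not sep:  # no '::' present: the domain must be guessed
        return title, default_domain
    return name, domain
```

So split_title('trustor::ldap') yields ('trustor', 'ldap'), while split_title('trustor') silently becomes ('trustor', 'Default'), which may or may not be what the user meant.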
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 2015-08-05 09:10:30 +0200 (+0200), Philipp Marek wrote: [...] Pacemaker is *the* Linux HA Stack. [...] Can you expand on this assertion? It doesn't look to me like it's part of the Linux source tree and I see strong evidence to suggest it's released and distributed completely separately from the kernel. Statements like this one make the rest of your messages look even more like a marketing campaign, so I'd love to understand what you really mean (I seriously doubt you're campaigning for this specific piece of software, after all, but that's the way it comes across). -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
[...] Pacemaker is *the* Linux HA Stack. [...] Can you expand on this assertion? It doesn't look to me like it's part of the Linux source tree and I see strong evidence to suggest it's released and distributed completely separately from the kernel. If you read Linux as GNU/Linux or Linux platform, instead of Linux kernel, it's what I meant. Statements like this one make the rest of your messages look even more like a marketing campaign, so I'd love to understand what you really mean (I seriously doubt you're campaigning for this specific piece of software, after all, but that's the way it comes across). Sorry for not being entirely clear. I thought that my message was good enough, as the OpenStack documentation itself already talks about Pacemaker: http://docs.openstack.org/high-availability-guide/content/ch-pacemaker.html OpenStack infrastructure high availability relies on the Pacemaker cluster stack, the state-of-the-art high availability and load balancing stack for the Linux platform. Pacemaker is storage and application-agnostic, and is in no way specific to OpenStack. Expanding on what we have, what GNU/Linux already has, and what is being used for Linux (platform) HA, I wanted to point out that most of the parts for _one_ possible solution already exists. Whether we want to go *that* route is yet to be decided, of course. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Tricircle]Weekly Team Meeting 2015.08.05 Agenda
Hi Team, As usual we will have the weekly meeting today starting at UTC 1300. The agenda today is to address the AIs left from the last meeting: 1. update the doc for how to work with Keystone, joehuang 2. gampel: check what Mistral supports and which taskflow we want in the reference implementation 3. discuss the pluggable cascade service module on the bottom -- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Fuel][Plugins] Using DriverLog as the Fuel Plugins registry
Hi, If you are now developing a plugin for Fuel, please feel free to use DriverLog to add an entry for your plugin. You can find detailed instructions on how to do that here [1]. If something seems unclear to you, feel free to request more details. Thanks. [1] https://wiki.openstack.org/wiki/DriverLog#How_To:_Add_a_new_Fuel_Plugin_to_DriverLog -- Best regards, Irina __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Stable][Nova] VMware NSXv Support
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hi, I think Erno made a valid point here. If that touched only VMware code, it could be an option to consider. But it looks like both patches are very invasive, and they are not just enabling features that are already in the tree, but introduce new stuff that has not even been tested for long in master. I guess we'll need to wait for those till Liberty, unless nova-core-maint has a different opinion and good arguments to approve the merge. Ihar On 08/05/2015 12:37 PM, Kuvaja, Erno wrote: Hi Gary, While I do understand the interest in getting this functionality included, I really fail to see how it would comply with the Stable Branch Policy: https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy Obviously the last say is with stable-maint-core, but normally new features are a real no-no on stable branches. My concerns are more on the metadata side of your changes. Even though the refactoring is fairly clean, it touches a major part of the metadata handler. It also changes the API (in the case of X-Metadata-Provider being present), which tends to be sacred on stable branches. The changes here do not actually fix any bug; they implement new functionality that missed Kilo not slightly but by months. Thus my -1 for merging these. - Erno *From:* Gary Kotton [mailto:gkot...@vmware.com] *Sent:* Wednesday, August 05, 2015 8:03 AM *To:* OpenStack List *Subject:* [openstack-dev] [Stable][Nova] VMware NSXv Support Hi, In the Kilo cycle a Neutron driver was added to support the VMware NSXv plugin. This required patches in Nova to enable the plugin to work with Nova. These patches finally landed yesterday. I have backported them to stable/kilo, as the Neutron driver is unable to work without them in stable/kilo. The patches can be found at: 1. VNIC support - https://review.openstack.org/209372 2. Metadata support - https://review.openstack.org/209374 I hope that the stable team can take this into consideration.
Thanks in advance Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -BEGIN PGP SIGNATURE- Version: GnuPG v2 iQEcBAEBCAAGBQJVwgkjAAoJEC5aWaUY1u57NacIALsJ8oo6eJKqJIidBSFzwxvg zqJXHE56Lpg62/afRF94B2edfhm791Mz42LTFn0BHHRjV51TQX4k/Jf3Wr22CEvm zFZkU5eVMVOSL3GGnOZqSv/T06gBWmlMVodmSKQjGxrIL1s8G1m4aTwe6Pqs+lie N+cT0pZbcjL/P1wYTac6XMpF226gO1owUjhE4oj9VZzx7kEqNsv22SIzVN2fQcco YLs/LEcabMhuuV4Amde3RqUr0BkB+mlIX1TUv5/FTXT/F4ZwzYS/DBH9MaBJ5t8n hgCTJzCeg598+irgOt3VJ3Jn3Unljz6LNzKIM8RnBG0o51fp8vfE/mODQQaUKOg= =ZYP8 -END PGP SIGNATURE- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ironic] weekly subteam status report
Hi, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
As of Mon, Aug 3 (diff with July 27) - Open: 142 (-5). 8 new (+2), 48 in progress (-5), 0 critical, 11 high and 8 incomplete - Nova bugs with Ironic tag: 25 (+1). 1 new (+1), 0 critical, 0 high

Oslo (lintan)
Oslo proposes a Privilege Separation Daemon project (oslo.privsep) to replace the previous rootwrap mechanism - https://review.openstack.org/#/c/204073/5/specs/liberty/privsep.rst

Inspector (dtantsur)
gate-ironic-inspector-dsvm-nv is running on ironic patches now - it's pretty reliable, so please pay attention to it when submitting and reviewing patches

Bifrost (TheJulia)
- Cleaning up documentation and code. - Looking towards adding testing that leverages diskimage-builder and exercises that functionality to help identify if components being consumed are broken.

Drivers

iRMC (naohirot)
- https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z Status: Active (spec and generic ipmitool impl available for review) - Enhance Power Interface for Soft Reboot and NMI - bp/enhance-power-interface-for-soft-power-off-and-inject-nmi Status: Active (code review is ongoing) - iRMC out of band inspection - bp/ironic-node-properties-discovery Status: TODO - iRMC Virtual Media Deploy Driver - Add documentation for iRMC virtual media driver - follow up patch to fix nits

Until next week, --ruby [0] https://etherpad.openstack.org/p/IronicWhiteBoard __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko ahec...@mirantis.com wrote: Hi, I believe that a Barbican keystore for signing keys was discussed earlier. I'm not sure that's the best idea, since Barbican relies on Keystone authN/authZ. Correct. Once we find a solution for that problem it would be interesting to work towards a solution for storing keys in Barbican. I've talked to several people about this already and it seems to be the natural progression. Once we can do that, I think we can revisit the tooling for rotation. That's why this mechanism should rather be considered out of band to the Keystone/OS API, and is rather a devops task. regards, Adam On Wed, Aug 5, 2015 at 8:11 AM, joehuang joehu...@huawei.com wrote: Hi Lance, may we store the keys in Barbican, and can the key rotation be done through Barbican? If we use Barbican as the repository, then key distribution and rotation in a multiple-Keystone deployment scenario is easier, and the database replication (sync. or async.) capability could be leveraged. Best Regards Chaoyi Huang ( Joe Huang ) *From:* Lance Bragstad [mailto:lbrags...@gmail.com] *Sent:* Tuesday, August 04, 2015 10:56 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys On Tue, Aug 4, 2015 at 9:28 AM, Boris Bobrov bbob...@mirantis.com wrote: On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote: On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov bbob...@mirantis.com wrote: On Monday 03 August 2015 21:05:00 David Stanek wrote: On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov bbob...@mirantis.com wrote: Also, come on, does http://paste.openstack.org/show/406674/ look overly complex? (it should be launched from the Fuel master node). I'm reading this on a small phone, so I may have it wrong, but the script appears to be broken. It will ssh to node-1 and rotate. In the simplest case this takes key 0 and moves it to the next highest key number.
Then a new key 0 is generated. Later there is a loop that will again ssh into node-1 and run the rotation script. If there is a limit set on the number of keys and you are at that limit, a key will be deleted. This extra rotation on node-1 means that it's possible that it has a different set of keys than are on node-2 and node-3. You are absolutely right. Node-1 should be excluded from the loop. ping also lacks -c 1. I am sure that other issues can be found. In my defence, I want to say that I never ran the script and wrote it just to show how simple it should be. Thanks for the review though! I also hope that no one is going to use a script from a mailing list. What's the issue with just a simple rsync of the directory? None, I think. I just want to reuse the interface provided by keystone-manage. You wanted to use the interface from keystone-manage to handle the actual promotion of the staged key, right? This is why there were two fernet_rotate commands issued? Right. Here is the fixed version (please don't use it anyway): http://paste.openstack.org/show/406862/ Note, this doesn't take into account the initial key repository creation, does it? Here is a similar version that relies on rsync for the distribution after the initial key rotation [0]. [0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua -- Best regards, Boris Bobrov __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Adam Heczko Security Engineer @ Mirantis Inc.
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
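Since nobody should copy a rotation script from a mailing list anyway, the promotion semantics under discussion can at least be pinned down with a small simulation (an illustrative sketch only; the key material is random bytes and the function is made up, not keystone-manage internals):

```python
import os

def rotate(repo, max_active_keys):
    """Simulate one `keystone-manage fernet_rotate`: the staged key
    (index 0) is promoted to the next-highest index and becomes the
    new primary, a fresh staged key is written, and the oldest
    secondary keys are trimmed down to max_active_keys."""
    next_index = max(int(k) for k in repo) + 1
    repo[str(next_index)] = repo.pop('0')  # promote staged key to primary
    repo['0'] = os.urandom(32)             # create a new staged key
    while len(repo) > max_active_keys:     # drop oldest secondary keys
        oldest = min((k for k in repo if k != '0'), key=int)
        del repo[oldest]
    return repo
```

Running this independently on every node, as the original script's loop effectively did, is exactly what produces divergent repositories; the fix discussed above is to rotate on one node only and rsync the result to the others.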
Re: [openstack-dev] [all] Does OpenStack need a common solution for DLM?
On 21:14 Aug 04, Joshua Harlow wrote: I can start a cross-project spec tomorrow if people feel that is useful, it may be slightly opinionated (I am one of the cores that works on https://kazoo.readthedocs.org/ so I am going be slightly biased for obvious reasons). http://lists.openstack.org/pipermail/openstack-dev/2015-August/071412.html -- Mike Perez __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 17:03 Aug 05, Flavio Percoco wrote: snip That said, you may want to sync with Joshua since he's going to work on a cross-project spec as well (as he mentioned in the other thread).[0] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071441.html -- Mike Perez __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ironic] Was there a meeting yesterday (August 4, 2015 at 0500 UTC)
Hi, Was there an ironic meeting yesterday (August 4, 2015 at 0500 UTC)? I don't see any meeting logs from then. --ruby __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
(Merging thread from security ML) Bandit probably isn't the correct integration point for this - cve-check has its own analysis procedures while Bandit uses the Python AST. Also, I see the use workflows being different. For Bandit a developer/gate wants to check a specific code snippet, whereas for cve-check to be effective it really needs to examine the entire dependency chain. As Rob and I mentioned earlier, a gate process on "openstack-requirements" seems like an ideal target for this. The idea would be that anytime a requirement is added (for example to enable a newer version or an entirely new library to be used) we could run a cve-check job that ensures the new library (or version) doesn't have any known CVEs against it. This way we can be covered across OpenStack (since OpenStack projects can't use libraries that aren't in global requirements). The gate processing time is minimal since it doesn't have to run for each project. The only concern that I have is the requisite database. Downloading a 500 MB+ CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. I'd be curious what the infra wizards have to say. I'd also really like to see what the baseline results look like. If you run it against current global requirements, does it find legitimate issues? Does it find false positives? In any case it seems worth exploring, as vulnerabilities in upstream dependencies are a key weakness in our current system. Hi folks! Idea really looks good. I am attaching an example of a very simple Python wrapper for the tool. Looks like this wrapper is lightweight. But maybe try to integrate it with Bandit and not create a new tool? --
Victor Ryzhenkin freerunner on #freenode __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
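A gate job along those lines would first need the pinned requirement set to feed into the CVE lookup; here is a hedged sketch of just the parsing half (the actual cve-check-tool invocation is elided, the function name is made up, and the requirement-format handling is deliberately simplified):

```python
import re

def parse_requirements(lines):
    """Extract (package, version) pairs from pip-style requirement
    lines; only exact pins (==) and upper caps (<=) are kept, while
    comments and range-only entries are skipped."""
    pins = []
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop comments/blanks
        if not line:
            continue
        m = re.match(r'^([A-Za-z0-9._-]+)\s*(?:==|<=)\s*([0-9][^,;\s]*)', line)
        if m:
            pins.append((m.group(1), m.group(2)))
    return pins
```

Each resulting (name, version) pair would then be checked against the CVE database, which sidesteps running the job per project since only global-requirements changes trigger it.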
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 2015-08-05 14:36:37 +0200 (+0200), Philipp Marek wrote: [...] Pacemaker is *the* Linux HA Stack. [...] Can you expand on this assertion? It doesn't look to me like it's part of the Linux source tree and I see strong evidence to suggest it's released and distributed completely separately from the kernel. If you read Linux as GNU/Linux or Linux platform, instead of Linux kernel, it's what I meant. [...] Okay, that makes slightly more sense. So you're implying that Pacemaker is the only HA stack available for Linux-based platforms, or that it's the most popular, or... I guess I'm mostly thrown by your use of the definite article the (which you emphasized, so it seems like you must mean there are effectively no others?). -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients
On 08/05/2015 02:34 AM, Steve Martinelli wrote: I think this is happening because the last session created was based off of trustee_auth. Try creating 2 sessions, one for each user (trustor and trustee). Maybe Jamie will chime in. Thanks for the reply, Steve; I will give that a try. My understanding was that we could recycle the Session objects and just apply a new auth each time they are used. mike __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [trove][qa][stable] gate-trove-functional-dsvm-mysql needs some stable branch love
Trove changes on the stable branches are blocked on bug 1479358 [1] because a change was made to fix trove-integration on master for liberty but didn't take into account that those scripts are branchless and therefore need to work on stable/kilo and stable/juno as well, where we have capped versions of libraries it uses (like python-openstackclient). The gate-trove-functional-dsvm-mysql job is now running on stable branches for compat so we don't break stable again once it's fixed, but we're still blocked on just getting it to work at all. I tried a fix [2] that isn't working on stable for different reasons. I'm not actively pursuing getting this fixed and therefore really need people from the trove team to step up here and get their CI house in order. Otherwise the alternative is we don't run the gate-trove-functional-dsvm-mysql job on stable/juno and stable/kilo since it's not working and I haven't seen much impetus to get it working. [1] https://bugs.launchpad.net/trove-integration/+bug/1479358 [2] https://review.openstack.org/#/c/207193/ -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] Quobyte Cinder Driver revert?
Hi all! Thank you, Mike, for proposing the revert of the revert. Today I prepared a change that tackles the docstring issues. Being new to the OpenStack development process, I didn't manage to upload it to Gerrit today. Most likely I'll get it uploaded tomorrow. Best regards, Robert Doebbelin 2015-08-03 21:37 GMT+02:00 Matt Riedemann mrie...@linux.vnet.ibm.com: On 8/3/2015 10:23 AM, Mike Perez wrote: On Fri, Jul 3, 2015 at 9:11 AM, Duncan Thomas duncan.tho...@gmail.com wrote: It was discussed on the mailing list, and at the weekly meeting. Mike had had no response on the issue from the listed contact email, and the CI was reporting failure for every patch for two months. On 3 Jul 2015 17:33, Silvan Kaiser sil...@quobyte.com wrote: Hello! I just found the following commit in the cinder log: commit a3f4eed52efce50c2eb1176725bc578272949d7b Merge: 6939b4a e896ae2 Author: Jenkins jenk...@review.openstack.org Date: Thu Jul 2 23:14:39 2015 + Merge Revert First version of Cinder driver for Quobyte Is this part of some restructuring work, etc. that I missed? I could not find a gerrit review for this and had no prior information. I did not see any related information when I did my weekly checks of the cinder weekly meeting logs and am confused to find this commit. We're still working on the CI issues discussed on the CI mailing list and am fully aware that we have to get this stably reporting. This is not a removal because of the CI issues, is it? Best regards Silvan Kaiser As Duncan stated, this was a two-month issue. Deadline or not, that's unacceptable from Quobyte. I have proposed a revert of the revert since these are stable now. However, I expect responsiveness to issues in the future.
https://review.openstack.org/#/c/208528/ -- Mike Perez __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I guess there isn't a quobyte connector in os-brick yet, but just a reminder that there is a libvirt volume driver in nova for talking to quobyte [1]. It'd be good to get a heads up on the nova side when the cinder team is removing drivers so that we can do the same if it's a permanent removal. This is where I expect the liaison thing to come in handy. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/volume/quobyte.py#n99 -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- *Quobyte* GmbH Hardenbergplatz 2 - 10623 Berlin - Germany +49-30-814 591 800 - www.quobyte.com Amtsgericht Berlin-Charlottenburg, HRB 149012B management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
[...] Pacemaker is *the* Linux HA Stack. [...] Can you expand on this assertion? It doesn't look to me like it's part of the Linux source tree and I see strong evidence to suggest it's released and distributed completely separately from the kernel. If you read Linux as GNU/Linux or Linux platform, instead of Linux kernel, it's what I meant. [...] Okay, that makes slightly more sense. So you're implying that Pacemaker is the only HA stack available for Linux-based platforms, or that it's the most popular, or... I guess I'm mostly thrown by your use of the definite article the (which you emphasized, so it seems like you must mean there are effectively no others?). Well, SUSE and Red Hat (RHEL 7) use Pacemaker by default, and Debian/Ubuntu ship it (along with others)... That gives it quite some market share, wouldn't you think? Yes, I guess "the most popular" is a good match here. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [trove][qa][stable] gate-trove-functional-dsvm-mysql needs some stable branch love
Matt, Nikhil was working on it late into the night last night. I'll continue to work with him today and try and get this wrestled to the ground. -amrith -Original Message- From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] Sent: Wednesday, August 05, 2015 6:57 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [trove][qa][stable] gate-trove-functional-dsvm-mysql needs some stable branch love Trove changes on the stable branches are blocked on bug 1479358 [1] because a change was made to fix trove-integration on master for liberty but didn't take into account that those scripts are branchless and therefore need to work on stable/kilo and stable/juno as well, where we have capped versions of libraries it uses (like python-openstackclient). The gate-trove-functional-dsvm-mysql job is now running on stable branches for compat so we don't break stable again once it's fixed, but we're still blocked on just getting it to work at all. I tried a fix [2] that isn't working on stable for different reasons. I'm not actively pursuing getting this fixed and therefore really need people from the trove team to step up here and get their CI house in order. Otherwise the alternative is we don't run the gate-trove-functional-dsvm-mysql job on stable/juno and stable/kilo since it's not working and I haven't seen much impetus to get it working. [1] https://bugs.launchpad.net/trove-integration/+bug/1479358 [2] https://review.openstack.org/#/c/207193/ -- Thanks, Matt Riedemann
Re: [openstack-dev] [Ironic] Was there a meeting yesterday (August 4, 2015 at 0500 UTC)
Only a few people turned up (including me who was late) so no meeting was held. Hope this helps, Michael... On Wed, Aug 5, 2015 at 10:43 PM, Ruby Loo rlooya...@gmail.com wrote: Hi, Was there an ironic meeting yesterday (August 4, 2015 at 0500 UTC)? I don't see any meeting logs from then. --ruby -- Michael Davies mich...@the-davies.net Rackspace Cloud Builders Australia
Re: [openstack-dev] [Ironic] Was there a meeting yesterday (August 4, 2015 at 0500 UTC)
There wasn't one. Some of us waited in the meeting room to see if anyone would turn up, but there were very few (almost no) responses. On Wed, Aug 5, 2015 at 7:02 PM, Michael Davies mich...@the-davies.net wrote: Only a few people turned up (including me who was late) so no meeting was held. Hope this helps, Michael... On Wed, Aug 5, 2015 at 10:43 PM, Ruby Loo rlooya...@gmail.com wrote: Hi, Was there an ironic meeting yesterday (August 4, 2015 at 0500 UTC)? I don't see any meeting logs from then. --ruby -- Michael Davies mich...@the-davies.net Rackspace Cloud Builders Australia
Re: [openstack-dev] [Ironic] Was there a meeting yesterday (August 4, 2015 at 0500 UTC)
On Wed, Aug 05, 2015 at 09:13:18AM -0400, Ruby Loo wrote: Hi, Was there an ironic meeting yesterday (August 4, 2015 at 0500 UTC)? I don't see any meeting logs from then. There was not. http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-08-04.log.html#t2015-08-04T05:02:34 http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-08-04.log.html#t2015-08-04T05:05:54 // jim
Re: [openstack-dev] [heat][ec2tokens] Questions about ec2tokens under keystone v3 api.
As far as I can see, heat's ec2tokens can work only with the keystone v2 URL. This happens because keystone's v2 and v3 APIs return different responses for a token request made with EC2 credentials. I found the same problem in our ec2api project and in the keystonemiddleware project. For example: Patch for our ec2api project will be here - https://review.openstack.org/#/c/209085/2/ec2api/api/__init__.py Patch for keystonemiddleware is here - https://review.openstack.org/#/c/205440/ -- Kind regards, Andrey Pavlov.
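For illustration only (this helper is hypothetical, not taken from either patch): the two response shapes differ in that the v2 API carries the token ID inside the JSON body (`access.token.id`), while the v3 API returns it in the `X-Subject-Token` header, so a caller has to normalize:

```python
# Sketch of normalizing a Keystone ec2tokens response across API versions.
# Assumes the standard token formats: v3 puts the token ID in the
# X-Subject-Token header, v2 puts it in the JSON body.
def token_id_from_response(headers, body):
    if 'X-Subject-Token' in headers:          # v3-style response
        return headers['X-Subject-Token']
    return body['access']['token']['id']      # v2-style response
```

A middleware handling both keystone endpoints could call this instead of assuming one layout.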
Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients
On 08/05/2015 02:34 AM, Steve Martinelli wrote: I think this is happening because the last session created was based off of trustee_auth. Try creating 2 sessions, one for each user (trustor and trustee). Maybe Jamie will chime in. Just as a follow-up, I tried creating new Session objects for each client and I still get permission errors. I'm going to dig into the trust permission validation stuff a little. mike
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 2015-08-05 15:31:03 +0200 (+0200), Philipp Marek wrote: [...] Pacemaker is *the* Linux HA Stack. [...] Can you expand on this assertion? It doesn't look to me like it's part of the Linux source tree and I see strong evidence to suggest it's released and distributed completely separately from the kernel. If you read Linux as GNU/Linux or Linux platform, instead of Linux kernel, it's what I meant. [...] Okay, that makes slightly more sense. So you're implying that Pacemaker is the only HA stack available for Linux-based platforms, or that it's the most popular, or... I guess I'm mostly thrown by your use of the definite article the (which you emphasized, so it seems like you must mean there are effectively no others?). Well, SUSE and Redhat (7) use Pacemaker by default, Debian/Ubuntu have it (along with others)... That gives it quite some market share, wouldn't you think? Yes, I guess the most popular meaning is a good match here. I see, so in the same way that nano is *the* Linux text editor (Debian/Ubuntu configure it as the default, SUSE and Redhat have it packaged). Popularity alone doesn't seem like a great criterion for making these sorts of technology choices. -- Jeremy Stanley
Re: [openstack-dev] [Neutron] Common Base class for agents
I definitely don't think this work should start in a new repository. As Sean and Andreas have said, I think the changes should be done in-tree rather than creating another repository for this work. On Wed, Aug 5, 2015 at 2:42 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Sukhdev, last week I spent some time to figure out the current state of modular l2 agent design and discussion. I got the impression it's not in good shape! So I personally don't think that it makes any sense to start with a modular l2 agent prototype and in the worst case throw it all away because we missed a single detail. I would prefer to get folks with knowledge across all l2 agents together and work on a design first, that everyone can agree upon. So my initial mail basically was to start an effort for easily sharing code. Maybe this will end up in a single agent having multiple drivers but that's not the primary goal (which is sharing code). I'm more with Carl, to start a code sharing effort and the macvtap agent effort in parallel, independent from each other. I must admit I have less insight into the ovs agent. But I know that it diverged a lot from the other agents. Sean Collins is currently evaluating an approach to bring linuxbridge closer to ovs [1]. Maybe that's the way to go. Do internal refactorings to bring things close to each other and then see what might be possible to get a common agent or at least common code. But any other suggestions are highly welcome! [1] https://review.openstack.org/#/c/208666/ Andreas (IRC: scheuran) On Di, 2015-08-04 at 22:42 -0700, Sukhdev Kapur wrote: We discussed this in ML2 sub-team meeting last week and felt the best approach is to implement this agent in a separate repo. There is already an on-going effort/plan for a modular L2 agent. This agent would be a perfect candidate to take on that effort and implement it for the macvtap agent. 
Once done, this could be moved over under the neutron tent and other agents could be moved over to utilize this framework. Either of option 1 or 2 could be utilized to implement this agent. Keeping it in a separate repo keeps it from impacting any other agents. Once all ready and working, others could be converted over. You get the best of both worlds - i.e. quick implementation of this agent and a framework for others to use - and plenty of time to bake the framework. Thoughts? Sukhdev On Mon, Aug 3, 2015 at 3:53 PM, Carl Baldwin c...@ecbaldwin.net wrote: I see this as two tasks: 1) A refactoring to share common code and 2) the addition of another agent following the pattern of the others. I'd prefer that the two tasks not be mixed in the same review because it makes it more difficult to review, as I think Kevin alluded to. For me, either could be done first. I'm sure some reviewers would prefer that #1 be done first to avoid the proliferation of duplicated code. However, IMO, it is not necessary to be so strict. It can take some time to review common code to get it right. I'm afraid that holding up #2 until merging #1 will either motivate us to merge #1 too hastily and do a poor job or hold up #2 longer than it should be. If this were me, I would post both as independent reviews knowing that when one of the two merges, the other will have to be rebased to take the other into account. Sometimes, having the refactor in flight can help to allay fears about code proliferation. Actually, given Kevin's mention of the modular agent stuff, maybe it isn't worth putting much effort into the refactor patch at all. My $0.02. Carl On Mon, Aug 3, 2015 at 9:46 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, I'm planning to add a new ml2 driver and agent to neutron supporting macvtap attachments [1]. Kyle already decided that this code should land in the neutron tree [2]. 
The normal approach till now was to copy an existing agent's code and modify accordingly, which led to a lot of duplicated code. So my question is, how to proceed with the macvtap agent? I basically see the 2 options: 1) Do it like in the past, duplicate the code that is needed for the macvtap agent (main loop, mechanism for detecting new/changed/deleted devices) and just go for it. 2) Extract a new superclass that holds some of the common code. This would work for the linuxbridge agent and the sriovnic agent -
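As a rough sketch of the code-sharing idea in option 2 (all names here are made up for illustration, not taken from any existing neutron agent): the shared piece is the polling loop that diffs the current device set against the last scan and delegates the per-technology work to a driver object.

```python
import time

class CommonAgentLoop(object):
    """Illustrative shared L2 agent loop: detect added/removed devices
    and delegate the technology-specific handling to a driver.
    Names and structure are hypothetical, not from any real agent."""

    def __init__(self, driver, polling_interval=2):
        self.driver = driver
        self.known_devices = set()
        self.polling_interval = polling_interval

    def scan_once(self):
        # Diff the current device set against the previous scan.
        current = set(self.driver.get_all_devices())
        added = current - self.known_devices
        removed = self.known_devices - current
        for dev in added:
            self.driver.treat_device_added(dev)
        for dev in removed:
            self.driver.treat_device_removed(dev)
        self.known_devices = current
        return added, removed

    def daemon_loop(self):
        # The long-running agent entry point.
        while True:
            self.scan_once()
            time.sleep(self.polling_interval)
```

A macvtap, linuxbridge, or sriov driver would then only implement `get_all_devices`, `treat_device_added`, and `treat_device_removed`.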
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
Well, SUSE and Redhat (7) use Pacemaker by default, Debian/Ubuntu have it (along with others)... That gives it quite some market share, wouldn't you think? Yes, I guess the most popular meaning is a good match here. I see, so in the same way that nano is *the* Linux text editor (Debian/Ubuntu configure it as the default, SUSE and Redhat have it packaged). Along with quite a few alternatives. How many cluster stack alternatives can you see in SUSE? How many cluster stack alternatives are available in _every_ major distribution? Popularity alone doesn't seem like a great criterion for making these sorts of technology choices. Popularity _alone_ is not the sole criterion, right. But to write something new just because of NIH is the wrong approach, IMO. [[ I'm going to stop arguing now. ]]
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 13:14:40 +0000 (+0000), McPeak, Travis wrote: [...] The only concern that I have is the requisite database. Downloading a 500MB+ CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. [...] Oh, yep that's a whopper. Downloading that during the job is very likely to make it slow and unreliable. Baking it into our worker base images is also questionable since we need to be able to boot them in cloud providers who may give us as little as a 20 GiB root filesystem device. If it can be compressed or filtered to an order of magnitude smaller, then that seems more reasonable to work with. Otherwise we'd need some separate online query service to hold the database and handle the lookups (either hosted in our infrastructure or elsewhere). -- Jeremy Stanley
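One way the "filtered to an order of magnitude smaller" idea could work is to prune the feed down to just the packages that actually appear in global-requirements before distributing it to nodes. A minimal sketch (the record layout here is invented for illustration; a real NVD feed would need proper CPE matching):

```python
# Sketch: shrink a CVE feed to only the entries that mention packages
# we depend on. 'package' as a flat field is a simplifying assumption;
# real CVE data identifies affected software via CPE strings.
def filter_cve_feed(cve_records, required_packages):
    wanted = {p.lower() for p in required_packages}
    return [rec for rec in cve_records
            if rec.get('package', '').lower() in wanted]
```

The filtered output for a few hundred requirements would be a tiny fraction of the full database, which could then be cached on workers or fetched per job cheaply.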
Re: [openstack-dev] [third-party-ci]Issue with running noop-check-communication
Hi Eduard, There seems to be a bug regarding running jobs on master [1]. Try running it on a slave instead. Ramy [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md#running-jobs-on-jenkins-master From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com] Sent: Wednesday, August 05, 2015 5:03 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [third-party-ci]Issue with running noop-check-communication Hi, We're in the process of rebuilding the Jenkins CI for Cinder and I'm stuck at testing the noop job. I've set up using the latest changes from os-ext-testing and os-ext-testing-data using project-config, jjb and dib, and I have a Jenkins running which has the 2 jobs defined, and I have 3 slaves attached (from a devstack - cloud provider). Now when I make changes to sandbox (to test it) I see jobs being triggered but they hang in Jenkins: pending—There are no nodes with the label ‘’http://10.100.128.3:8080/label/ I checked the job config and it shows Restrict where this project can be run/Label expression master / Slaves in label: 1, and I can trigger it manually and it shows Waiting for next available executor on master - although there are 4 executors in Idle. Any ideas? Thanks, -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.ma...@cloudfounders.com CloudFounders, The Private Cloud Software Company
Re: [openstack-dev] [pbr] [stable] [infra] How to generate .Z version increments on stable/liberty commits
To give you an idea, if we enabled that for Kilo we'd be at Nova 11.0.80 (kilo) and Nova 10.0.218 (juno). I am not a fan of doing this second option at all. We would be polluting the ref space of our repos with redundant information, making the output of `git tag` unusable to humans. If this was not redundant info and a tag of 11.0.80 provided more information than a generated version of 11.0.0.dev80 / 11.0.80 I think we could live with that, but it does not. It actually does: an auto-tagged commit means it passed our CI, hence the project stands behind it. A PBR-generated Z-version could be just a local change which has never seen any CI yet. Using pbr to generate versions avoids that problem, but introduces the challenge of not being able to necessarily figure out which commit corresponds to a given version number from the outside. Say I want to check out version 11.0.80 for some reason (maybe .81 has a bug I don't want to deploy). How do I do that without a tag? Also, the PBR-generated version is not universally reproducible. So what about making auto-tagging on stable branches optional for projects: by default, if a project has stable branch(es) it will get auto-tagging, but a project could also opt out and push X.Y.Z tags itself, via the same release process as Oslo and the clients. Cheers, Alan
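To make the distinction concrete, pbr's dev versions are derived from the most recent tag plus the number of commits since it, as reported by `git describe`. A simplified, hypothetical illustration of that derivation (real pbr/semver handling is more involved, e.g. it targets the *next* release number):

```python
import re

def dev_version(describe_output):
    """Turn `git describe` output like '11.0.0-80-gabcdef1' into a
    pbr-style dev version '11.0.0.dev80'. Simplified sketch only."""
    m = re.match(r'^(?P<tag>.+?)-(?P<n>\d+)-g[0-9a-f]+$', describe_output)
    if not m:
        # HEAD is exactly on a tag: the tag is the version.
        return describe_output
    return '%s.dev%s' % (m.group('tag'), m.group('n'))
```

The point of the thread is visible here: `11.0.0.dev80` is computed locally and is not reproducible from the outside, whereas an actual `11.0.80` tag pins a specific CI-tested commit.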
Re: [openstack-dev] [CI]How to set proxy for nodepool
Hi Xiexs, "Also, I've found some of the infra project-config elements don't work in my environment and aren't needed as they're specific to infra. For those, simply comment out the portions that don't work. I didn't notice any negative side-effects." This one you need to skip because you don't have access to that server. In fact, here are the ones that I skip in my setup: etc/nodepool/elements/nodepool-base/install.d/90-venv-swift-logs:# skipped etc/nodepool/elements/nodepool-base/install.d/99-install-zuul:# Skipped etc/nodepool/elements/nodepool-base/finalise.d/99-unbound:# Skipped etc/nodepool/elements/cache-devstack/install.d/99-cache-testrepository-db:# Skipped The '99-unbound' may work in your setup. If not, you need to disable it here too: etc/nodepool/elements/puppet/bin/prepare-node: enable_unbound = false, Ramy From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com] Sent: Wednesday, August 05, 2015 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool Hi Ramy, Thanks for your patience. I have tried your suggestion, but it did not work for me. According to the log, this element has already run in the chroot before the pip commands are executed. So, in theory, the pip commands would run behind this proxy, but the connection errors are still raised. It's weird. Then I tried to hard-code the proxy with the --proxy option into the pip command, and it works. Anyway, this is merely a temporary solution for this issue until I figure it out. 
But, after that, I got a new error: - nodepool.image.build.dpc: + sudo env 'PATH=/opt/git/subunit2sql-env/bin:/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' /opt/git/subunit2sql-env/bin/python2 /opt/nodepool-scripts/prepare_tempest_testrepository.py /opt/git/openstack/tempest nodepool.image.build.dpc: sudo: unable to resolve host fnst01 nodepool.image.build.dpc: No handlers could be found for logger "oslo_db.sqlalchemy.session" nodepool.image.build.dpc: Traceback (most recent call last): nodepool.image.build.dpc: File "/opt/nodepool-scripts/prepare_tempest_testrepository.py", line 50, in <module> nodepool.image.build.dpc: main() nodepool.image.build.dpc: File "/opt/nodepool-scripts/prepare_tempest_testrepository.py", line 39, in main nodepool.image.build.dpc: session = api.get_session() nodepool.image.build.dpc: File "/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py", line 47, in get_session nodepool.image.build.dpc: facade = _create_facade_lazily() nodepool.image.build.dpc: File "/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py", line 37, in _create_facade_lazily nodepool.image.build.dpc: **dict(CONF.database.iteritems())) nodepool.image.build.dpc: File "/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py", line 822, in __init__ nodepool.image.build.dpc: **engine_kwargs) nodepool.image.build.dpc: File "/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py", line 417, in create_engine nodepool.image.build.dpc: test_conn = _test_connection(engine, max_retries, retry_interval) nodepool.image.build.dpc: File "/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py", line 596, in _test_connection nodepool.image.build.dpc: six.reraise(type(de_ref), de_ref) nodepool.image.build.dpc: File "<string>", line 2, in reraise nodepool.image.build.dpc: 
oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'logstash.openstack.org' ([Errno -5] No address associated with hostname)") - I think it is also a proxy problem about remote access to the subunit2sql database. And it should work with simpleproxy, I think. But I couldn't find how to use simpleproxy to forward data for the subunit2sql db. Could you please give me more hints? Thanks again. Xiexs From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Wednesday, August 05, 2015 6:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool Hi Xiexs, You might need to configure pip to use your proxy. I added my own element here: cache-devstack/install.d/98-setup-pip Basically: set -eux mkdir -p /root/.pip/ cat <<EOF >/root/.pip/pip.conf [global] proxy = <your proxy> EOF cp -f /root/.pip/pip.conf /etc/ Ramy From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com] Sent: Tuesday, August 04, 2015 12:05 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool Hi Ramy, Thanks for your help. I have already confirmed the proxy setting again, and it works fine (no matter whether the NODEPOOL_ variables are declared or
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
On 2015-08-05 15:48:52 +0200 (+0200), Philipp Marek wrote: [...] How many cluster stack alternatives can you see in SUSE? How many cluster stack alternatives are available in _every_ major distribution? I think it depends a lot on how you define cluster stack and whether the solution to the current dilemma needs one (for example, if the answer is simply a DLM then several examples have already surfaced elsewhere in this thread and the cross-project subthread). Popularity _alone_ is not the sole criteria, right. But to write something new just because of NIH is the wrong approach, IMO. I couldn't agree more. If the need is already met by an available solution which can be reused, that seems better for everyone. I just get concerned when I see messages which state there is only one technology choice and that choice is <insert product my employer makes money selling/supporting>. Acknowledging alternatives makes for a much less fanatical discussion. -- Jeremy Stanley
Re: [openstack-dev] [Tricircle]Weekly Team Meeting 2015.08.05 Agenda
Thanks to everyone attending the meeting; minutes can be found here: http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-08-05-13.00.html On Wed, Aug 5, 2015 at 8:37 PM, Zhipeng Huang zhipengh...@gmail.com wrote: Hi Team, As usual we will have our weekly meeting today starting UTC1300. The agenda today is to address the AIs left from the last meeting: 1. update the doc for how to work with KeyStone, joehuang 2. gampel check what mistral supports and which taskflow we want in the reference implementation 3. discuss pluggable cascade service module on bottom -- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
[openstack-dev] [app-catalog] IRC Meeting Thursday August 6th at 17:00UTC
Hello! Our next OpenStack App Catalog meeting will take place this Thursday August 6th at 17:00 UTC in #openstack-meeting-3 The agenda can be found here: https://wiki.openstack.org/wiki/Meetings/app-catalog Please add agenda items if there's anything specific you would like to discuss. Please join us if you can!
Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active
Excerpts from Philipp Marek's message of 2015-08-05 00:10:30 -0700: Well, is it already decided that Pacemaker would be chosen to provide HA in Openstack? There's been a talk "Pacemaker: the PID 1 of Openstack" IIRC. I know that Pacemaker's been pushed aside in an earlier ML post, but IMO there's already *so much* been done for HA in Pacemaker that Openstack should just use it. All HA nodes need to participate in a Pacemaker cluster - and if one node loses connection, all services will get stopped automatically (by Pacemaker) - or the node gets fenced. No need to invent some sloppy scripts to do exactly the tasks (badly!) that the Linux HA Stack has been providing for quite a few years. So just a piece of information, but Yahoo (the company I work for, with vms in the tens of thousands, baremetal in the much more than that...) hasn't used pacemaker, and in all honesty this is the first project (openstack) that I have heard that needs such a solution. I feel that we really should be building our services better so that they can be A-A vs having to depend on another piece of software to get around our 'sloppiness' (for lack of a better word). Nothing against pacemaker personally... IMHO it just doesn't feel like we are doing this right if we need such a product in the first place. Well, Pacemaker is *the* Linux HA Stack. I'm not sure it's wise to claim the definite article for anything in Open Source. :) That said, it's certainly the most mature, and widely accepted. So, before trying to achieve similar goals by self-written scripts (and having to re-discover all the gotchas involved), it would be much better to learn from previous experiences - even if they are not one's own. Pacemaker has e.g. the concept of clones[1] - these define services that run multiple instances within a cluster. And behold! the instances get some Pacemaker-internal unique id[2], which can be used to do sharding. 
Yes, that still means that upon a service or node crash the failed instance has to be started on some other node; but as that'll typically be up and running already, the startup time should be in the range of seconds. We'd instantly get * a supervisor to start/stop/restart/fence/monitor the service(s) * node/service failure detection * only small changes needed in the services * and all that in tested software that's available in all distributions, and that already has its own testsuite... If we decide that this solution won't fulfill all our expectations, fine - let's use something else. But I don't think it makes *any* sense to try to redo some (existing) High-Availability code in some quickly written scripts, just because it looks easy - there are quite a few traps for the unwary. I think Keystone's dev team agrees with you, and also doesn't want to get in the way of that with any half-baked solution. They give you all the CLI tools and filesystem layouts to make this work perfectly. It would be nice to even ship the pacemaker resources in a contrib directory and run tests in the gate on them. But if users have some reason not to use it, they shouldn't be forced to use it.
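The clone-id sharding idea mentioned above could look roughly like this (a sketch under the assumption that each service instance can learn its own clone id and the total clone count from Pacemaker, e.g. via the environment it passes to resource agents):

```python
import zlib

def owns(key, clone_id, clone_count):
    """Sketch of clone-id sharding: clone instance `clone_id` out of
    `clone_count` handles `key` iff the key hashes into its shard.
    Deterministic, so all instances agree without coordinating."""
    return zlib.crc32(key.encode('utf-8')) % clone_count == clone_id
```

On failover Pacemaker restarts the dead instance elsewhere with the same clone id, so ownership of its shard moves with it and no two live instances ever claim the same key.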
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 8/5/15, 08:14, "McPeak, Travis" travis.mcp...@hp.com wrote: (Merging thread from security ML) Bandit probably isn't the correct integration point for this - cve-check has its own analysis procedures while Bandit uses Python AST. Also I see the use workflows being different. For Bandit a developer/gate wants to check a specific code snippet, whereas for cve-check to be effective it really needs to examine the entire dependency chain. As Rob and I mentioned earlier, a gate process on "openstack-requirements" seems like an ideal target for this. The idea would be anytime a requirement is added (for example to enable a newer version or an entirely new library to be used) we could run a cve-check job that ensures the new library (or version) doesn't have any known CVEs against it. This way we can be covered across OpenStack (since OpenStack projects can't use libraries that aren't in global requirements). The gate processing time is minimal since it doesn't have to run for each project. One point of clarification. Not every project has to opt into global-requirements so this isn't necessarily true. Also with the merging of the stackforge and openstack namespaces, it'll be harder to distinguish when a project is or isn't using g-r since in the past it was fairly safe to assume that stackforge/ projects were more likely to not use g-r. The only concern that I have is the requisite database. Downloading a 500MB+ CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. I'd be curious what the infra wizards have to say. I'd also really like to see what the baseline results look like. If you run it against current global requirements does it find legitimate issues? Does it find false positives? In any case it seems worth exploring as vulnerabilities in upstream dependencies are a key weakness in our current system. Hi folks! Idea really looks good. 
I am attaching an example of a very simple Python wrapper for the tool. Looks like this wrapper is lightweight. But maybe try to integrate it with Bandit rather than creating a new tool? -- Victor Ryzhenkin freerunner on #freenode
Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?
On 08/05/2015 08:16 AM, Gilles Dubreuil wrote: While working on the trust provider for the Keystone (V3) puppet module, a question about using domain names came up. Shall we allow names without specifying the domain name in the resource call, or not? I have this trust case involving a trustor user, a trustee user and a project. For each user/project the domain can be explicit (mandatory): trustor_name::domain_name or implicit (optional): trustor_name[::domain_name] If a domain isn't specified, the domain name can be assumed (intuited) from either the default domain or the domain of the corresponding object, if unique among all domains. If you are specifying a project by name, you must specify the domain either via name or ID. If you specify a project by ID, you run the risk of a conflict if you also provide a domain specifier (ID or name). Although allowing the domain to be omitted might seem easier at first, I believe it could lead to confusion and errors, the latter being harder for the user to detect. Therefore it might be better to always pass the domain information. Probably a good idea, as it will catch cases where you are making some assumption. I.e., I say DomainX ProjectQ but I mean DomainQ ProjectQ. I believe using the full domain name approach is better. But it's difficult to tell, because puppet-keystone and puppet-openstacklib now rely on python-openstackclient (OSC) to interface with Keystone. The fact that we can use OSC defaults (OS_DEFAULT_DOMAIN or equivalent to set the default domain) doesn't necessarily make it the best approach. For example, the hard-coded value [1] makes it flaky. [1] https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40 To help determine the approach to use, any feedback will be appreciated. 
Thanks, Gilles
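The resolution rules debated above can be sketched in a few lines. This is an illustration only: none of the function or parameter names below come from puppet-keystone or python-openstackclient; it just encodes "a name needs a domain (or silently gets the default), an ID must stand alone".

```python
# Illustrative sketch of the name-resolution rules from the thread; all
# names here are invented, this is not puppet-keystone code.

def resolve_project(project_name=None, project_id=None,
                    domain=None, default_domain="Default"):
    """Resolve a project reference: by ID alone, or by name plus domain."""
    if project_id is not None:
        if domain is not None:
            # An ID is already globally unique; a domain alongside it can
            # only agree redundantly or conflict silently.
            raise ValueError("do not combine a project ID with a domain")
        return {"id": project_id}
    if project_name is None:
        raise ValueError("either a project name or a project ID is required")
    # The implicit behaviour the thread argues against: fall back to the
    # default domain when the caller omits one.
    return {"name": project_name, "domain": domain or default_domain}
```

Calling `resolve_project(project_name="ProjectQ")` quietly yields the default domain, which is exactly the kind of hidden assumption that always requiring the domain would surface.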
[openstack-dev] [sahara] update methods
hey all, the recent discussions[1] on updating resources through the rest api has got me thinking that it might be worthwhile to convert the few methods we have implemented to use PATCH instead of PUT. we are starting to create a bifurcation in the api regarding updates. the new suspend/resume job functionality is using PATCH and the object updating is proposing using PUT. i agree with Andrey's thinking that we have been using PUT as if it were a PATCH, but i think we should just fix this inaccuracy before we go further. i'm ok with leaving the scaling operation as a PUT for now as it uses slightly different logic in the body. but i think we should fix the node group template, cluster template, job binary, and data source update methods to be PATCH instead of PUT. is there any objection to me creating reviews to fix this inaccuracy? i'm guessing the extent of the work would be in sahara and saharaclient. should i propose a bug for this or a spec? thanks, mike [1]: https://review.openstack.org/#/c/208378/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
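The PUT-versus-PATCH distinction mike describes can be shown with a toy model (this is not Sahara code): PUT replaces the whole representation, so omitted fields are lost, while PATCH merges only the fields the client sends.

```python
# Toy illustration (not Sahara code) of the semantic difference between
# the two HTTP verbs when applied to a stored resource.

def put_update(resource, body):
    """Full replacement: the request body becomes the new resource."""
    return dict(body)

def patch_update(resource, body):
    """Partial update: only the supplied fields change."""
    updated = dict(resource)
    updated.update(body)
    return updated
```

Given `template = {"name": "ng-1", "flavor": "m1.small", "count": 3}` and a client sending only `{"count": 5}`, `patch_update` keeps name and flavor while `put_update` silently drops them; the merge behaviour Sahara's update methods actually provide is the PATCH one, which is the inaccuracy being discussed.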
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 15:04:15 +0000 (+0000), Ian Cordasco wrote: One point of clarification. Not every project has to opt into global-requirements so this isn't necessarily true. Also with the merging of the stackforge and openstack namespaces, it'll be harder to distinguish when a project is or isn't using g-r since in the past it was fairly safe to assume that stackforge/ projects were more likely to not use g-r. Agreed, this used to be a (perhaps not well-documented) necessity for repos which were in or dependencies of the integrated release. Now that we've dissolved more of those arbitrary distinctions, this seems like a great opportunity for tracking with a governance tag. I'll go ahead and propose one later today if I get a spare moment. -- Jeremy Stanley
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On Wed, Aug 5, 2015, at 08:22 AM, Jeremy Stanley wrote: On 2015-08-05 15:04:15 +0000 (+0000), Ian Cordasco wrote: One point of clarification. Not every project has to opt into global-requirements so this isn't necessarily true. Also with the merging of the stackforge and openstack namespaces, it'll be harder to distinguish when a project is or isn't using g-r since in the past it was fairly safe to assume that stackforge/ projects were more likely to not use g-r. Agreed, this used to be a (perhaps not well-documented) necessity for repos which were in or dependencies of the integrated release. Now that we've dissolved more of those arbitrary distinctions, this seems like a great opportunity for tracking with a governance tag. I'll go ahead and propose one later today if I get a spare moment. We already track it in the requirements repo itself [0]. Not sure if we need an additional tracking method. [0] https://git.openstack.org/cgit/openstack/requirements/tree/projects.txt Clark
Re: [openstack-dev] [Ironic] Was there a meeting yesterday (August 4, 2015 at 0500 UTC)
On 5 August 2015 at 09:35, Jim Rollenhagen j...@jimrollenhagen.com wrote: On Wed, Aug 05, 2015 at 09:13:18AM -0400, Ruby Loo wrote: Hi, Was there an ironic meeting yesterday (August 4, 2015 at 0500 UTC)? I don't see any meeting logs from then. There was not. http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-08-04.log.html#t2015-08-04T05:02:34 http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-08-04.log.html#t2015-08-04T05:05:54 // jim Hi all, thanks for confirming. --ruby
Re: [openstack-dev] [Ironic] weekly subteam status report
Hi, Oops, my bad. To be clear, there was no ironic meeting this week. But if there had been, this is what the subteams would have reported :) --ruby On 5 August 2015 at 09:05, Ruby Loo rlooya...@gmail.com wrote: Hi, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
- As of Mon, Aug 3 (diff with July 27)
- Open: 142 (-5). 8 new (+2), 48 in progress (-5), 0 critical, 11 high and 8 incomplete
- Nova bugs with Ironic tag: 25 (+1). 1 new (+1), 0 critical, 0 high

Oslo (lintan)
- Oslo proposes a Privilege Separation Daemon project (oslo.privsep) to replace the previous rootwrap mechanism - https://review.openstack.org/#/c/204073/5/specs/liberty/privsep.rst

Inspector (dtantsur)
- gate-ironic-inspector-dsvm-nv is running on ironic patches now - it's pretty reliable, so please pay attention to it when submitting and reviewing patches

Bifrost (TheJulia)
- Cleaning up documentation and code.
- Looking towards adding testing that leverages diskimage-builder and exercises that functionality to help identify if components being consumed are broken.

Drivers: iRMC (naohirot)
- https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z Status: Active (spec and generic ipmitool impl available for review)
- Enhance Power Interface for Soft Reboot and NMI - bp/enhance-power-interface-for-soft-power-off-and-inject-nmi Status: Active (code review is ongoing)
- iRMC out of band inspection - bp/ironic-node-properties-discovery Status: TODO
- iRMC Virtual Media Deploy Driver - Add documentation for iRMC virtual media driver - follow up patch to fix nits

Until next week, --ruby [0] https://etherpad.openstack.org/p/IronicWhiteBoard
Re: [openstack-dev] [neutron] VLAN aware VMs: Current status?
Hi Kyle, First of all, sorry for the late response. We are working on the design and implementation; the first patches are planned to be up by the end of this week. We could surely use more hands, as it is quite a large amount of work that this blueprint requires. If there are any Neutron experts who would and could help out, we would appreciate it. We prepared a wiki page [1], which contains information about the plans and design; it also includes a link to tasks/work items. If anyone is willing to join the implementation, please ping me via mail or IRC (ildikov) so we can sync up regarding the work items. Thanks and Best Regards, Ildikó [1] https://wiki.openstack.org/wiki/Neutron/TrunkPort -Original Message- From: Kyle Mestery [mailto:mest...@mestery.com] Sent: July 30, 2015 16:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron] VLAN aware VMs: Current status? I'm sending this email to solicit a response from the owners of the VLAN aware VMs spec [1] [2]. The spec was merged on June 24, and I haven't seen any code posted. Given I expect this to take some iterations in review, is the plan to push code for this sometime soon? Thanks! Kyle [1] https://review.openstack.org/#/c/94612/ [2] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
The only concern that I have is the requisite database. Downloading a 500 MB+ CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. I'd be curious what the infra wizards have to say. Actually the database is downloaded only once (the first time) and then only database diffs are downloaded, which is much faster. I don't know enough about your node setup (do you fully clean up each node between the builds?) etc., so the best way to test this would be if somebody can try it out and tell if it is a problem. If it is a problem, then we can discuss with the tool maintainer how to address it. I'd also really like to see what the baseline results look like. If you run it against current global requirements does it find legitimate issues? Does it find false positives? In any case it seems worth exploring as vulnerabilities in upstream dependencies are a key weakness in our current system. I have so far only run the tool against this file: http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt and it didn't find any issues in it, given that the version there is taken as the minimal supported version. I am myself not actually working in the OpenStack project (so please excuse my ignorance on some even basic matters), but I am actually a bit confused why this file is, for example, called upper-constraints? The name would indicate an upper border, but that doesn't make that much sense with packaging systems. If you can point me to a full set of files that should be analysed with cve-check-tool, I can do the runs and post results here. Best Regards, Elena. -Original Message- From: McPeak, Travis [mailto:travis.mcp...@hp.com] Sent: Wednesday, August 5, 2015 6:15 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? 
(Reshetova, Elena) (Merging thread from security ML) Bandit probably isn't the correct integration point for this - cve-check has its own analysis procedures while Bandit uses Python AST. Also I see the use workflows being different. For Bandit a developer/gate wants to check a specific code snippet whereas for cve-check to be effective it really needs to examine the entire dependency chain. As Rob and I mentioned earlier a gate process on "openstack-requirements" seems like an ideal target for this. The idea would be anytime a requirement is added (for example to enable a newer version or an entirely new library to be used) we could run a cve-check job that ensures the new library (or version) doesn't have any known CVEs against it. This way we can be covered across OpenStack (since OpenStack projects can't use libraries that aren't in global requirements). The gate processing time is minimal since it doesn't have to run for each project. The only concern that I have is the requisite database. Downloading a 500 MB+ CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. I'd be curious what the infra wizards have to say. I'd also really like to see what the baseline results look like. If you run it against current global requirements does it find legitimate issues? Does it find false positives? In any case it seems worth exploring as vulnerabilities in upstream dependencies are a key weakness in our current system. Hi folks! Idea really looks good. I am attaching an example of a very simple Python wrapper for the tool. Looks like this wrapper is lightweight. But maybe try to integrate it with Bandit and not to create a new tool? --
Victor Ryzhenkin freerunner on #freenode
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 08:28:27 -0700 (-0700), Clark Boylan wrote: We already track it in the requirements repo itself [0]. Not sure if we need an additional tracking method. [0] https://git.openstack.org/cgit/openstack/requirements/tree/projects.txt That tracks repos which get reqs sync proposals (and possibly also those for which hard requirements rewriting is performed in DevStack-based jobs), but does not reflect repos which have reqs enforcement jobs gating on them. -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] Quobyte Cinder Driver revert?
On 14:37 Aug 03, Matt Riedemann wrote: I guess there isn't a quobyte connector in os-brick yet, but just a reminder that there is a libvirt volume driver in nova for talking to quobyte [1]. It'd be good to get a heads up on the nova side when the cinder team is removing drivers so that we can do the same if it's a permanent removal. This is where I expect the liaison thing to come in handy. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/volume/quobyte.py#n99 Discussed with Matt about this at the Cinder midcycle sprint. In an attempt to not annoy Nova folks, for removals on the Cinder side, there will be a heads up to have the code removed at least in the next release. -- Mike Perez __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 16:08:16 +0000 (+0000), Reshetova, Elena wrote: [...] Actually the database is downloaded only once (the first time) and then only database diffs are downloaded, which is much faster. I don't know enough about your node setup (do you fully clean up each node between the builds?) etc., so the best way to test this would be if somebody can try it out and tell if it is a problem. If it is a problem, then we can discuss with the tool maintainer how to address it. [...] Yes, we actually don't reuse job workers. We run each job in a fresh virtual machine, delete it when complete and launch a new VM to take its place. Thus the database would need to be downloaded as part of the job, or pre-cached in the worker images we boot, or part of some remote query service so that we don't have to have the entire database local to the job runner. I am actually a bit confused why this file for example called upper-constraints? The name would indicate an upper border, but that doesn't make that much sense with packaging systems. [...] It is the maximum available version of each of our dependencies, calculated by the Python packaging tool pip when attempting to resolve mutually coinstallable versions of declared dependency version ranges within the transitive set (which is nontrivial to identify without recursively downloading and installing those packages, since they embed their dependency information, can sometimes only determine it at runtime, and vary dependencies by interpreter version as well). That is to say, the upper-constraints.txt file lists the maximum versions of dependencies you can possibly run our software with when installed with the normal Python packaging toolchain. I don't see where it would make sense to test for vulnerabilities in older versions of dependencies anyway, since it's not up to us to notify end users of bugs in software we don't distribute unless it actually prevents our software from running. 
-- Jeremy Stanley
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
Excerpts from Reshetova, Elena's message of 2015-08-05 09:08:16 -0700: The only concern that I have is the requisite database. Downloading a 500MB + CVE database for the jobs could become painful. We could either keep the CVE database on each node in the test pool or download it at the start of each cve-check job. I¹d be curious what the infra wizards have to say. Actually the database is downloaded only once ( thefirst time) and then only database diffs are downloaded, which is much faster. I don't know enough about your node setup (do you fully clean up each node between the builds?) and etc., so the best way to test this would be if somebody can try it out and tell if it is a problem. If it is a problem, then we can discuss with the tool maintainer on how to address it. Doesn't this feel like a job for AFS? Maintain the db there, and let the nodes access it as-needed? __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 15:22:29 +0000 (+0000), Jeremy Stanley wrote: [...] Now that we've dissolved more of those arbitrary distinctions, this seems like a great opportunity for tracking with a governance tag. I'll go ahead and propose one later today if I get a spare moment. Actually, I take that back. Now that the TC has decreed tags don't apply to individual source repositories (they apply to either project-teams or to deliverables which are higher-level collections of repos) I'm no longer sure how we would go about documenting repo-specific details like this anyway. To Clark's point, tracking requirements is an emergent property from the combination of requirements sync proposals and requirements enforcement jobs. If we can find a way to force one of those to depend on the other (perhaps the requirements sync can stop using a flat file and instead operate on parsed output from our zuul layout?) then that would be a cleaner means of identifying this repo-specific detail. -- Jeremy Stanley
Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)
On 2015-08-05 09:54:52 -0700 (-0700), Clint Byrum wrote: Doesn't this feel like a job for AFS? Maintain the db there, and let the nodes access it as-needed? I guess it depends on whether the tool needs to read the entire database to perform its queries (in which case using AFS would be basically the same as downloading). -- Jeremy Stanley
Re: [openstack-dev] [nova] Thoughts on things that don't make freeze cutoffs
On 4 August 2015 at 21:23, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 8/4/2015 8:47 AM, Sahid Orentino Ferdjaoui wrote: On Tue, Aug 04, 2015 at 12:54:34PM +0200, Thierry Carrez wrote: John Garbutt wrote: [...] Personally I find a mix of coding and reviewing good to keep a decent level of empathy and sanity. I don't have time for any coding this release (only a bit of documenting), and it's not something I can honestly recommend as a best practice. If people don't maintain a good level of reviews, we do tend to drop those folks from nova-core. I know ttx has been pushing for dedicated reviewers. It would be nice to find folks that can do that, but we just haven't found any of those people to date. Hell no! I'd hate dedicated reviewers. [...] This is why I advocate dividing code / reviewers / expertise along smaller areas within Nova, so that new people can focus and become a master again. What I'm pushing for is creating Nova subteams with their own core reviewers, which would be experts trusted to +2 on a defined subset of code. Yep, this makes a lot of sense, and unfortunately we bring this idea up every time but nothing seems to move in that direction. Contributors working in Nova feel more and more frustrated. So specs got approved June 23-25, leaving about 15 working days to push all of the code and 12 working days to get it merged. From my experience working on Nova, that's not possible. For instance, on the libvirt driver we have one core who does most of the reviews, but we have a problem finding the +2/+W, not to mention the problem when he is the author of the fix [1]. We delay good features (with code pushed and waiting for reviews) that we could bring to users. I guess users are happy to hear that we are working hard to improve our code base, but perhaps they also want features without waiting a year (3.1, 95, 98, me, xp...). And I know that because I have been working in Nova every day for more than 2 years - we have really skilled people who can help. 
[1] https://review.openstack.org/#/c/176360/ s. Yeah, so this thread is starting to devolve into scaling the core team again, which is what I didn't want to happen. The goal was to talk about how we can line up things in a backlog to still allow review and possible inclusion in a release after the spec freeze cutoff, but not necessarily target something for inclusion (like that rbd snapshot example), and not detract from what has been approved before the spec freeze cutoff or other priorities. I think I got the answer from John, which is that the path forward is starting to kick the tires on some new tooling like phabricator and then doing some things with runways / kanban boards so we don't have as rigid a process; we just queue up things as they come. So I'm satisfied for now. So this push towards a kanban/runways style is going to be a big effort. Please do reach out if you are happy to help make that happen, I really do need help to make this happen quickly! It would be great to move over to a runways-like system sometime in Mitaka. I had hoped to do it in kilo-3, but it's just not worked out that way :( Thanks, John
Re: [openstack-dev] [sahara] update methods
On Wednesday 05 of August 2015 11:14:13, michael mccune wrote: hey all, the recent discussions[1] on updating resources through the rest api has got me thinking that it might be worthwhile to convert the few methods we have implemented to use PATCH instead of PUT. we are starting to create a bifurcation in the api regarding updates. the new suspend/resume job functionality is using PATCH and the object updating is proposing using PUT. i agree with Andrey's thinking that we have been using PUT as if it were a PATCH, but i think we should just fix this inaccuracy before we go further. i'm ok with leaving the scaling operation as a PUT for now as it uses slightly different logic in the body. but i think we should fix the node group template, cluster template, job binary, and data source update methods to be PATCH instead of PUT. is there any objection to me creating reviews to fix this inaccuracy? Isn't this an API change, which would require an API version bump? One more reason to keep things working as they are in 1.x and move quickly to 2.0. Ciao -- Luigi
Re: [openstack-dev] [Neutron] Common Base class for agents
Sounds good. As long as proper due diligence is done and there is no duplication of effort, it makes sense. Thanks -Sukhdev On Wed, Aug 5, 2015 at 12:42 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Sukhdev, last week I spent some time figuring out the current state of the modular l2 agent design and discussion. I got the impression it's not in good shape! So I personally don't think that it makes any sense to start with a modular l2 agent prototype and in the worst case throw it all away because we missed a single detail. I would prefer to get folks with knowledge across all l2 agents together and work on a design first that everyone can agree upon. So my initial mail basically was to start an effort for easily sharing code. Maybe this will end up in a single agent having multiple drivers, but that's not the primary goal (which is sharing code). I'm more with Carl: start a code sharing effort and the macvtap agent effort in parallel, independent from each other. I must admit I have less insight into the ovs agent. But I know that it diverged a lot from the other agents. Sean Collins is currently evaluating an approach to bring linuxbridge closer to ovs [1]. Maybe that's the way to go. Do internal refactorings to bring things close to each other and then see what might be possible to get a common agent or at least common code. But any other suggestions are highly welcome! [1] https://review.openstack.org/#/c/208666/ Andreas (IRC: scheuran) On Di, 2015-08-04 at 22:42 -0700, Sukhdev Kapur wrote: We discussed this in the ML2 sub-team meeting last week and felt the best approach is to implement this agent in a separate repo. There is already an on-going effort/plan for a modular L2 agent. This agent would be a perfect candidate to take on that effort and implement it for the macvtap agent. Once done, this could be moved over under the neutron tent and other agents could be moved over to utilize this framework. Either of option 1 or 2 could be utilized to implement this agent. 
Keeping it in a separate repo keeps it from impacting any other agents. Once all ready and working, others could be converted over. You get the best of both worlds - i.e. quick implementation of this agent and a framework for others to use - and plenty of time to bake the framework. Thoughts? Sukhdev On Mon, Aug 3, 2015 at 3:53 PM, Carl Baldwin c...@ecbaldwin.net wrote: I see this as two tasks: 1) a refactoring to share common code and 2) the addition of another agent following the pattern of the others. I'd prefer that the two tasks not be mixed in the same review, because it makes it more difficult to review, as I think Kevin alluded to. For me, either could be done first. I'm sure some reviewers would prefer that #1 be done first to avoid the proliferation of duplicated code. However, IMO, it is not necessary to be so strict. It can take some time to review common code to get it right. I'm afraid that holding up #2 until merging #1 will either motivate us to merge #1 too hastily and do a poor job, or hold up #2 longer than it should be. If this were me, I would post both as independent reviews, knowing that when one of the two merges, the other will have to be rebased to take the other into account. Sometimes, having the refactor in flight can help to allay fears about code proliferation. Actually, given Kevin's mention of the modular agent stuff, maybe it isn't worth putting much effort into the refactor patch at all. My $0.02. Carl On Mon, Aug 3, 2015 at 9:46 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, I'm planning to add a new ml2 driver and agent to neutron supporting macvtap attachments [1]. Kyle already decided that this code should land in the neutron tree [2]. The normal approach till now was to copy an existing agent's code and modify it accordingly, which led to a lot of duplicated code. So my question is, how to proceed with the macvtap agent? 
I basically see the 2 options: 1) Do it like in the past: duplicate the code that is needed for the macvtap agent (main loop, mechanism for detecting new/changed/deleted devices) and just go for it. 2) Extract a new superclass that holds some of the common code. This would work for the linuxbridge agent and the sriov nic agent - they could inherit from the new superclass and
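Option 2's superclass extraction might look roughly like the sketch below. All class and method names are invented for illustration; this is not Neutron code. The base class owns the shared polling/device-detection loop, and each agent supplies only its driver-specific wiring.

```python
import abc

class CommonAgentBase(abc.ABC):
    """Hypothetical shared base holding the device-detection loop that is
    currently duplicated across the L2 agents (names invented here)."""

    @abc.abstractmethod
    def scan_devices(self):
        """Return the set of currently attached device identifiers."""

    @abc.abstractmethod
    def treat_devices_added(self, devices):
        """Wire up newly appeared devices (driver-specific)."""

    @abc.abstractmethod
    def treat_devices_removed(self, devices):
        """Clean up devices that disappeared (driver-specific)."""

    def process_iteration(self, previous):
        """One pass of the common new/deleted-device detection logic."""
        current = self.scan_devices()
        added, removed = current - previous, previous - current
        if added:
            self.treat_devices_added(added)
        if removed:
            self.treat_devices_removed(removed)
        return current

class MacvtapAgent(CommonAgentBase):
    """A macvtap agent would then implement only the driver-specific bits;
    here they are stubbed with in-memory sets for demonstration."""

    def __init__(self):
        self.attached = set()  # devices visible on the host
        self.wired = set()     # devices the agent has configured

    def scan_devices(self):
        return set(self.attached)

    def treat_devices_added(self, devices):
        self.wired |= devices

    def treat_devices_removed(self, devices):
        self.wired -= devices
```

The linuxbridge and sriov nic agents would subclass the same base, which is the code-sharing goal Andreas describes without forcing a single multi-driver agent.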
Re: [openstack-dev] [Neutron] Common Base class for agents
Hey Kyle, A concern was raised that this may create issues of breakage/instability in other agents at this late stage of the release cycle - hence I proposed a separate repo. But, if proper due diligence is done and the core team has a plan to deal with this, it sounds like a good plan to me. regards. -Sukhdev On Wed, Aug 5, 2015 at 6:43 AM, Kyle Mestery mest...@mestery.com wrote: I definitely don't think this work should start in a new repository. As Sean and Andreas have said, I think the changes should be done in-tree rather than creating another repository for this work. On Wed, Aug 5, 2015 at 2:42 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Sukhdev, last week I spent some time figuring out the current state of the modular l2 agent design and discussion. I got the impression it's not in good shape! So I personally don't think that it makes any sense to start with a modular l2 agent prototype and in the worst case throw it all away because we missed a single detail. I would prefer to get folks with knowledge across all l2 agents together and work on a design first that everyone can agree upon. So my initial mail basically was to start an effort for easily sharing code. Maybe this will end up in a single agent having multiple drivers, but that's not the primary goal (which is sharing code). I'm more with Carl: start a code sharing effort and the macvtap agent effort in parallel, independent from each other. I must admit I have less insight into the ovs agent. But I know that it diverged a lot from the other agents. Sean Collins is currently evaluating an approach to bring linuxbridge closer to ovs [1]. Maybe that's the way to go. Do internal refactorings to bring things close to each other and then see what might be possible to get a common agent or at least common code. But any other suggestions are highly welcome! 
[1] https://review.openstack.org/#/c/208666/ Andreas (IRC: scheuran) On Tue, 2015-08-04 at 22:42 -0700, Sukhdev Kapur wrote: We discussed this in the ML2 sub-team meeting last week and felt the best approach is to implement this agent in a separate repo. There is already an on-going effort/plan for a modular L2 agent. This agent would be a perfect candidate to take on that effort and implement it for the macvtap agent. Once done, this could be moved over under the neutron tent and other agents could be moved over to utilize this framework. Either of option 1 or 2 could be utilized to implement this agent. Keeping it in a separate repo keeps it from impacting any other agents. Once all ready and working, others could be converted over. You get the best of both worlds - i.e. quick implementation of this agent and a framework for others to use - and plenty of time to bake the framework. thoughts? Sukhdev On Mon, Aug 3, 2015 at 3:53 PM, Carl Baldwin c...@ecbaldwin.net wrote: I see this as two tasks: 1) A refactoring to share common code and 2) the addition of another agent following the pattern of the others. I'd prefer that the two tasks not be mixed in the same review because it makes it more difficult to review, as I think Kevin alluded to. For me, either could be done first. I'm sure some reviewers would prefer that #1 be done first to avoid the proliferation of duplicated code. However, IMO, it is not necessary to be so strict. It can take some time to review common code to get it right. I'm afraid that holding up #2 until merging #1 will either motivate us to merge #1 too hastily and do a poor job, or hold up #2 longer than it should be. If this were me, I would post both as independent reviews, knowing that when one of the two merges, the other will have to be rebased to take the other into account. Sometimes, having the refactor in flight can help to allay fears about code proliferation. 
Actually, given Kevin's mention of the modular agent stuff, maybe it isn't worth putting much effort into the refactor patch at all. My $0.02. Carl On Mon, Aug 3, 2015 at 9:46 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, I'm planning to add a new ml2 driver and agent to neutron supporting macvtap attachments [1]. Kyle already decided that this code should land in the neutron tree [2]. The normal approach till now was to copy an existing agent's code and modify accordingly, which led to a lot of duplicated code. So my question is, how to proceed with the macvtap agent? I basically see the 2 options:
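The superclass idea (option 2 in Andreas's mail) could be sketched roughly as below. This is a minimal illustration only; the class and method names (CommonAgentLoop, scan_devices, and so on) are invented for this example, not actual Neutron code:

```python
# Illustrative sketch of option 2: a common superclass owns the polling
# loop and new/removed-device detection; each L2 agent (linuxbridge,
# sriov-nic, macvtap) subclasses it and implements only the
# device-specific hooks. All names here are hypothetical.

class CommonAgentLoop(object):
    def __init__(self, polling_interval=2):
        self.polling_interval = polling_interval
        self._known_devices = set()

    def scan_devices(self):
        """Agent-specific: return the set of currently attached devices."""
        raise NotImplementedError

    def treat_devices_added(self, devices):
        """Agent-specific: wire up newly detected devices."""
        raise NotImplementedError

    def treat_devices_removed(self, devices):
        """Agent-specific: clean up devices that disappeared."""
        raise NotImplementedError

    def step(self):
        # Shared device-change detection, common to all agents.
        current = self.scan_devices()
        added = current - self._known_devices
        removed = self._known_devices - current
        if added:
            self.treat_devices_added(added)
        if removed:
            self.treat_devices_removed(removed)
        self._known_devices = current


class MacvtapAgent(CommonAgentLoop):
    """Toy stand-in for the proposed macvtap agent."""

    def __init__(self):
        super(MacvtapAgent, self).__init__()
        self.plugged = []

    def scan_devices(self):
        return {"macvtap0"}  # a real agent would scan the host here

    def treat_devices_added(self, devices):
        self.plugged.extend(sorted(devices))

    def treat_devices_removed(self, devices):
        pass
```

With this split, the main loop and change detection are written once, and a second call to step() with an unchanged device set correctly treats nothing.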
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote: On 04/08/2015 18:59, Steve Martinelli wrote: Right, but that API is/should be protected. If we want to list IdPs *before* authenticating a user, we either need: 1) a new API for listing public IdPs or 2) a new policy that doesn't protect that API. Hi Steve yes this was my understanding of the discussion that took place many months ago. I had assumed (wrongly) that something had been done about it, but I guess from your message that we are no further forward on this. Actually 2) above might be better reworded as - a new policy/engine that allows public access to be a bona fide policy rule. The existing policy simply seems wrong. Why protect the list of IdPs? regards David Thanks, Steve Martinelli OpenStack Keystone Core From: Lance Bragstad lbrags...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 2015/08/04 01:49 PM Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, This is a cool looking UI. I've made a minor comment on it in InVision. I'm curious if this is an implementable idea - does keystone support large numbers of 3rd party idps? Is there an API to retrieve the list of idps, or does this require carefully coordinated configuration between Horizon and Keystone so they both recognize the same list of idps? 
There is an API call for getting a list of Identity Providers from Keystone: http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers Doug Fish David Chadwick d.w.chadw...@kent.ac.uk wrote on 08/01/2015 06:01:48 AM: From: David Chadwick d.w.chadw...@kent.ac.uk To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 08/01/2015 06:05 AM Subject: [openstack-dev] [Keystone] [Horizon] Federated Login Hi Everyone I have a student building a GUI for federated login with Horizon. The interface supports both a drop down list of configured IdPs, and also Type Ahead for massive federations with hundreds of IdPs. Screenshots are visible in InVision here https://invis.io/HQ3QN2123 All comments on the design are appreciated. You can make them directly to the screens via InVision Regards David __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
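For reference, a minimal sketch of consuming the list-identity-providers response Doug points to. The endpoint path follows the OS-FEDERATION spec linked above; the sample payload below is invented to mirror the documented response shape:

```python
import json

# Invented sample payload mirroring the response body of
# GET /v3/OS-FEDERATION/identity_providers as documented in the
# identity-api-v3-os-federation spec.
SAMPLE_RESPONSE = json.loads("""
{
  "identity_providers": [
    {"id": "acme-idp", "enabled": true, "description": "ACME SAML IdP"},
    {"id": "test-idp", "enabled": false, "description": "Disabled test IdP"}
  ]
}
""")


def enabled_idp_ids(body):
    """Return the ids of enabled identity providers from the response body."""
    return [idp["id"] for idp in body["identity_providers"] if idp["enabled"]]


print(enabled_idp_ids(SAMPLE_RESPONSE))  # -> ['acme-idp']
```

A UI like the one discussed here could feed such a list into its drop-down or type-ahead widget, which is exactly why the thread debates whether this API should stay policy-protected.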
Re: [openstack-dev] [Nova] Non-priority Feature Freeze is Tomorrow (July 30th)
On 31 July 2015 at 11:05, John Garbutt j...@johngarbutt.com wrote: On 30 July 2015 at 09:56, John Garbutt j...@johngarbutt.com wrote: On 29 July 2015 at 19:20, John Garbutt j...@johngarbutt.com wrote: Hi, Tomorrow is: Non-priority Feature Freeze. What does this mean? Well... * bug fixes: no impact, still free to merge * priority features: no impact, still free to merge * clean ups: sure, we could merge those * non-priority features (i.e. blueprints with a low priority): you are no longer free to merge (we are free to re-approve previously approved things, due to gate issues / merge issues, if needed) Please note, the full Feature Freeze (and string freeze, etc.) is when we tag liberty-3, which is expected to be on or just after September 1. This is all about focusing on merging more Bug Fixes and more Priority Features. For more details, please see: https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Why_is_there_a_non-priority_Feature_Freeze_in_Nova.3F It's the same as we did last release: http://lists.openstack.org/pipermail/openstack-dev/2015-February/056208.html Exceptions, I hear you cry? Let's follow a similar process to the one we used last time... If you want an exception: * please add your request here: https://etherpad.openstack.org/p/liberty-nova-non-priority-feature-freeze * make sure you make your request before the end of Wednesday 6th August * nova-drivers will meet to decide what gets an exception (as before) * with the aim to merge the code for all exceptions by the end of Monday 10th August I have added this detail into the wiki, as we refine the details: https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Non-priority_Feature_Freeze There are unlikely to be many exceptions given; it's really just for exceptional reasons why something didn't get merged in time. The folks in the nova meeting tomorrow may need to refine this process, but if anything changes, we can send out an update to the ML. 
Thanks, johnthetubaguy PS Due to time constraints, it's likely that it will be on Monday 3rd August that I will -2 all non-priority blueprint patches and un-approve all low priority blueprints, unless someone gets to that first. Actually, I should be able to do this on Friday morning, as normal. Bad timing, but I am mostly away from my computer over the next few days, though I am watching email a bit. (Something that was booked before the release dates were announced.) Note, I don't plan on blocking things that are just pending a merge. I will only block things that don't have two +2 votes on them. This should help us keep productive through the gate congestion/issues. Thanks, John OK, so the gate and check queue are really not helping us here. Let's move the deadline to midnight (let's say PST) Sunday 2nd August. On Monday afternoon (in the UK sense), I will go through and defer low priority blueprints that don't have a +2. We can wait a bit longer to get things merged, if we have to, as it's the code review capacity we are trying to optimise for here. Hopefully that should help us get a few more things completed, in the face of the gate issues. Thanks, John PS Please do mark your blueprints as complete when the code merges, if possible. That helps stop me guessing whether they are complete or not. Hi, A quick update on the freeze process. Please note, bug fixes are a priority for liberty-3, so please let's keep pushing hard on bug fixes. I have reviewed the current state of all low priority blueprints, to see what is likely to merge or is already partially merged (or in some cases, what has already merged). 
As a nova-drivers group we discussed the current FFE requests, and added notes in the etherpad: https://etherpad.openstack.org/p/liberty-nova-non-priority-feature-freeze Basically speaking we have three exceptions, where we aim to get the code approved by Friday: https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue https://blueprints.launchpad.net/nova/+spec/inject-nmi https://blueprints.launchpad.net/nova/+spec/mark-host-down There are a few other patches we might squeeze in, but we can discuss those in the nova meeting tomorrow: https://wiki.openstack.org/wiki/Meetings/Nova On top of the exceptions, we have a few blueprints with all their patches approved, but still going through the gate; they are still free to merge: https://blueprints.launchpad.net/nova/+spec/libvirt-set-admin-password https://blueprints.launchpad.net/nova/+spec/emc-sdc-libvirt-volume-driver https://blueprints.launchpad.net/nova/+spec/add-os-brick-volume-driver-hgst-solutions https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers https://blueprints.launchpad.net/nova/+spec/vif-driver-ib-passthrough There are a good number of blueprints we have got merged since liberty-2 last Tuesday: https://launchpad.net/nova/+milestone/liberty-3 We did have a few non-priority blueprints merge just in time for
Re: [openstack-dev] [Neutron] Common Base class for agents
Well we can always develop the new framework and wait until the start of the next cycle to swap over the existing agents if it doesn't look stable enough.
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
Some folks said that they'd prefer not to list all the associated IdPs, which I can understand. Actually, I like Jamie's suggestion of just making Horizon a bit smarter and expecting the values in the Horizon settings (IdP + protocol). Thanks, Steve Martinelli OpenStack Keystone Core
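The settings-based suggestion might look something like the local_settings.py fragment below. WEBSSO_ENABLED and WEBSSO_CHOICES are existing Horizon settings; the WEBSSO_IDP_MAPPING name is a sketch of what the proposal could add, not something Horizon is confirmed to ship:

```python
# Hypothetical Horizon local_settings.py fragment. WEBSSO_ENABLED and
# WEBSSO_CHOICES exist in Horizon; WEBSSO_IDP_MAPPING below sketches the
# proposed "IdP + protocol in settings" idea and is an assumption here.
WEBSSO_ENABLED = True

WEBSSO_CHOICES = (
    ("credentials", "Keystone Credentials"),
    ("acme_saml2", "ACME Corp Employees"),
)

# Each login choice maps to the (identity_provider, protocol) pair that
# Keystone is configured with, so Horizon never has to call the protected
# list-identity-providers API.
WEBSSO_IDP_MAPPING = {
    "acme_saml2": ("acme-idp", "saml2"),
}
```

The trade-off raised in the thread is visible here: every new IdP means another entry in this file, i.e. a Horizon reconfiguration.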
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli steve...@ca.ibm.com wrote: Some folks said that they'd prefer not to list all the associated IdPs, which I can understand. Actually, I like Jamie's suggestion of just making Horizon a bit smarter and expecting the values in the Horizon settings (IdP + protocol). This *might* lead to a more complicated user experience, unless we deduce the protocol for the IdP selected (but that would defeat the point?). Also, wouldn't we have to make changes to Horizon every time we add an IdP? This might be case by case, but if you're consistently adding Identity Providers, then your ops team might not be too happy reconfiguring Horizon all the time. Thanks, Steve Martinelli OpenStack Keystone Core
[openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat
Hi, There is a lot of interest in getting support for container based deployment within TripleO and many different ideas and opinions on how to go about doing that. One idea on the table is to use Heat to help orchestrate the deployment of docker containers. This would work similar to our tripleo-heat-templates implementation except that when using docker you would swap in a nested stack template that would configure containers on baremetal. We've even got a nice example that shows what a containerized TripleO overcloud might look like here [1]. The approach outlines how you might use kolla docker containers alongside of the tripleo-heat-templates to do this sort of deployment. This is all cool stuff but one area of concern is how we do the actual configuration of the containers. The above implementation relies on passing environment variables into kolla-built docker containers which then self-configure all the required config files and start the service. This sounds like a start... but creating (and maintaining) another from-scratch OpenStack configuration tool isn't high on my list of things to spend time on. Sure there is already a kolla community helping to build and maintain this configuration tooling (mostly thinking config files here), but this sounds a bit like what tripleo-image-elements initially tried to do, and it turns out there are much more capable configuration tools out there. Since we are already using a good bit of Puppet in tripleo-heat-templates the idea came up that we would try to configure Docker containers using Puppet. Again, here there are several ideas in the Puppet community with regards to how docker might best be configured with Puppet. Keeping those in mind we've been throwing some ideas out on an etherpad here [2] that describes using Heat for orchestration, Puppet for configuration, and Kolla docker images for containers. 
A quick outline of the approach is:

- Extend the heat-container-agent [3] that runs os-collect-config and all the required hooks we require for deployment. This includes docker-compose, bash scripts, and Puppet. NOTE: As described in the etherpad I've taken to using DIB to build this container. I found this to be faster from a TripleO development baseline.
- To create config files, the heat-container-agent would run a puppet manifest for a given role and generate a directory tree of config files (/var/lib/etc-data for example).
- We then run a docker-compose software deployment that mounts those configuration file(s) into a read-only volume and uses them to start the containerized service. The approach could look something like this [4].

The nice thing about this is that it requires no modification to OpenStack Puppet modules. We can use those today, as-is. Additionally, although Puppet runs in the agent container we've created a mechanism to set all the resources to noop mode except for those that generate config files. And lastly, we can use exactly the same role manifest for docker that we do for baremetal. Lots of re-use here... and although we are disabling a lot of Puppet functionality in setting all the non-config resources to noop, the Kolla containers already do some of that stuff for us (starting services, etc.). All that said (and trying to keep this short) we've still got a bit of work to do around wiring up externally created config files to kolla-built docker containers. A couple of issues are:

- The external config file mechanism for Kolla containers only seems to support a single config file. Some services (Neutron) can have multiple files. Could we extend the external config support to use multiple files?
- If a service has multiple files, kolla may need to adjust its service startup script to use multiple files. Perhaps a conf.d approach would work here?
- We are missing published versions of some key kolla containers. 
Namely openvswitch and the neutron-openvswitch-agent for starters, but I'd also like to have a Ceilometer agent and SNMP agent container as well so we have feature parity with the non-docker compute role. Once we have solutions for the above I think we'll be very close to a fully dockerized compute role with TripleO heat templates. From there we can expand the idea to cover other roles within the tripleo-heat-templates too. I'll stop there for now. Any ideas and thoughts appreciated. Dan - [1] https://review.openstack.org/#/c/178840/ (Containerized TripleO Overcloud.) [2] https://etherpad.openstack.org/p/tripleo-docker-puppet [3] http://git.openstack.org/cgit/openstack/heat-templates/log/hot/software-config/heat-container-agent [4] https://review.openstack.org/#/c/209505/ (Docker compute role configured via Puppet)
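One way the conf.d idea for multiple config files could work, sketched here with an invented helper and paths: the container startup logic globs every *.conf under the service's mounted config directory and passes each one as a separate --config-file argument (the flag oslo.config-based services accept). This is an illustration of the proposal, not kolla's actual startup script:

```python
import glob
import os


def build_service_command(service, conf_dir):
    """Hypothetical helper: build a service command line with one
    --config-file argument per *.conf found under conf_dir (e.g. a
    read-only mount of /var/lib/etc-data/<service>). Files are passed
    in sorted order so later files can override earlier ones, conf.d-style.
    """
    args = [service]
    for path in sorted(glob.glob(os.path.join(conf_dir, "*.conf"))):
        args += ["--config-file", path]
    return args
```

For Neutron, this would let the server pick up neutron.conf plus its plugin configs from one mounted directory without kolla hardcoding a single file.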
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
Re: [openstack-dev] [TripleO] [Puppet] [kolla] Deploying OpenStack with Puppet modules on Docker with Heat
Tagging kolla so the kolla community also sees it. Pardon the top posting. -Ryan - Original Message - From: Dan Prince dpri...@redhat.com To: openstack-dev openstack-dev@lists.openstack.org Sent: Wednesday, August 5, 2015 2:29:13 PM Subject: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat Hi, There is a lot of interest in getting support for container based deployment within TripleO and many different ideas and opinions on how to go about doing that. One idea on the table is to use Heat to help orchestrate the deployment of docker containers. This would work similar to our tripleo-heat -templates implementation except that when using docker you would swap in a nested stack template that would configure containers on baremetal. We've even got a nice example that shows what a containerized TripleO overcloud might look like here [1]. The approach outlines how you might use kolla docker containers alongside of the tripleo-heat-templates to do this sort of deployment. This is all cool stuff but one area of concern is how we do the actual configuration of the containers. The above implementation relies on passing environment variables into kolla built docker containers which then self configure all the required config files and start the service. This sounds like a start... but creating (and maintaining) another from scratch OpenStack configuration tool isn't high on my list of things to spend time on. Sure there is already a kolla community helping to build and maintain this configuration tooling (mostly thinking config files here) but this sounds a bit like what tripleo -image-elements initially tried to do and it turns out there are much more capable configuration tools out there. Since we are already using a good bit of Puppet in tripleo-heat -templates the idea came up that we would try to configure Docker containers using Puppet. 
Again, here there are several ideas in the Puppet community with regards to how docker might best be configured with Puppet. Keeping those in mind we've been throwing some ideas out on an etherpad here [2] that describes using Heat for orchestration, Puppet for configuration, and Kolla docker images for containers. A quick outline of the approach is: -Extend the heat-container-agent [3] that runs os-collect-config and all the required hooks we require for deployment. This includes docker -compute, bash scripts, and Puppet. NOTE: As described in the etherpad I've taken to using DIB to build this container. I found this to be faster from a TripleO development baseline. -To create config files the heat-container-agent would run a puppet manifest for a given role and generate a directory tree of config files (/var/lib/etc-data for example). -We then run a docker-compose software deployment that mounts those configuration file(s) into a read only volume and uses them to start the containerized service. The approach could look something like this [4]. This nice thing about this is that it requires no modification to OpenStack Puppet modules. We can use those today, as-is. Additionally, although Puppet runs in the agent container we've created a mechanism to set all the resources to noop mode except for those that generate config files. And lastly, we can use exactly the same role manifest for docker that we do for baremetal. Lots of re-use here... and although we are disabling a lot of Puppet functionality in setting all the non-config resources to noop the Kolla containers already do some of that stuff for us (starting services, etc.). All that said (and trying to keep this short) we've still got a bit of work to do around wiring up externally created config files to kolla build docker containers. A couple of issues are: -The external config file mechanism for Kolla containers only seems to support a single config file. Some services (Neutron) can have multiple files. 
Could we extend the external config support to use multiple files?

- If a service has multiple files, kolla may need to adjust its service startup script to use multiple files. Perhaps a conf.d approach would work here?

- We are missing published versions of some key kolla containers. Namely openvswitch and the neutron-openvswitch-agent for starters, but I'd also like to have a Ceilometer agent and SNMP agent container as well so we have feature parity with the non-docker compute role.

Once we have solutions for the above I think we'll be very close to a fully dockerized compute role with TripleO heat templates. From there we can expand the idea to cover other roles within the tripleo-heat-templates too.

I'll stop there for now. Any ideas and thoughts appreciated.

Dan

[1] https://review.openstack.org/#/c/178840/ (Containerized TripleO Overcloud.)
[2] https://etherpad.openstack.org/p/tripleo-docker-puppet
[3] http://git.openstack.org/cgit/openstack/heat-templates/log/hot/software-config/heat-container-agent
[4]
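As a sketch of the conf.d idea above: a startup wrapper could assemble repeated --config-file arguments from whatever lands in a per-service conf.d directory, so the external-config mechanism never needs to know how many files a service has. The helper and paths below are illustrative, not Kolla's actual tooling.

```python
# Sketch: build a service command line from every file in a conf.d directory.
# oslo.config-based services accept --config-file more than once, so the
# wrapper just repeats the flag per file. Names/paths are illustrative.
import glob
import os

def build_command(binary, confd_dir):
    """Return argv for a service that accepts repeated --config-file flags."""
    args = [binary]
    # pick up both .conf files and plugin .ini files, in a stable order
    for pattern in ('*.conf', '*.ini'):
        for conf in sorted(glob.glob(os.path.join(confd_dir, pattern))):
            args.extend(['--config-file', conf])
    return args
```

A deploy tool would then only need to drop neutron.conf and ml2_conf.ini into the mounted conf.d directory and the startup script stays generic.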
[openstack-dev] Bandit 0.13.0 released
Today we released Bandit version 0.13.0, which includes the following features and enhancements:

- Plugins now registered as entry points
- Improved Bandit run speed
- Added a confidence filter option
- Added timestamp to JSON report
- New plugin to detect Try, Except, Pass
- Improved detection for hardcoded /tmp plugin
- Produce universal wheel
- Created an example profile which lists all current plugins
- Updated readme and formatting
- Fixed a bug where the correct error code was not sent when filtering results
- Fixed a bug in the SQL injection plugin and improved detection
- Bundled wordlist for hardcoded password plugin
- Other enhancements, bug fixes, and improvements

As always, you can find it on PyPI. Please direct any questions or concerns to the dev mailing list (with the '[Security]' tag) or join us in #openstack-security on Freenode.

Thanks,
-Travis

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
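For illustration, the kind of pattern the new Try, Except, Pass plugin is meant to flag looks like the snippet below. This is a hypothetical example (not from Bandit's docs or test suite): silently swallowing an exception can hide real failures, which is why Bandit treats it as a smell.

```python
# Illustrative code that a try/except/pass check would flag: the except
# clause discards the error with a bare 'pass', hiding the failure.
def read_optional(path):
    try:
        with open(path) as f:
            return f.read()
    except IOError:
        pass  # swallowed error -- the pattern the new Bandit plugin detects
    return None
```

A cleaner alternative Bandit would not flag is to log the exception or narrow the handler to the specific failure you expect.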
Re: [openstack-dev] [sahara] update methods
On 08/05/2015 01:31 PM, Luigi Toscano wrote: Isn't this an API change, which would require an API bump? One more reason to keep it working as it is with 1.x and go fast to 2.0. thanks Luigi, that's fair. i'll hold off on this until we can bump to 2.0. it also means i need to get a move on with that spec. also, i think we should not be adding new update methods which use PUT then. mike
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli steve...@ca.ibm.com wrote: Some folks said that they'd prefer not to list all associated idps, which i can understand. Why? Actually, I like jamie's suggestion of just making horizon a bit smarter, and expecting the values in the horizon settings (idp+protocol) But, it's already in keystone. Thanks, Steve Martinelli OpenStack Keystone Core

From: Dolph Mathews dolph.math...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 2015/08/05 01:38 PM
Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote: On 04/08/2015 18:59, Steve Martinelli wrote: Right, but that API is/should be protected. If we want to list IdPs *before* authenticating a user, we either need: 1) a new API for listing public IdPs or 2) a new policy that doesn't protect that API. Hi Steve yes this was my understanding of the discussion that took place many months ago. I had assumed (wrongly) that something had been done about it, but I guess from your message that we are no further forward on this Actually 2) above might be better reworded as - a new policy/engine that allows public access to be a bona fide policy rule The existing policy simply seems wrong. Why protect the list of IdPs?
regards David

Thanks, Steve Martinelli OpenStack Keystone Core

From: Lance Bragstad lbrags...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 2015/08/04 01:49 PM
Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote: Hi David, This is a cool looking UI. I've made a minor comment on it in InVision. I'm curious if this is an implementable idea - does keystone support large numbers of 3rd party idps? is there an API to retrieve the list of idps or does this require carefully coordinated configuration between Horizon and Keystone so they both recognize the same list of idps? There is an API call for getting a list of Identity Providers from Keystone: http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers Doug Fish

David Chadwick d.w.chadw...@kent.ac.uk wrote on 08/01/2015 06:01:48 AM:

From: David Chadwick d.w.chadw...@kent.ac.uk
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 08/01/2015 06:05 AM
Subject: [openstack-dev] [Keystone] [Horizon] Federated Login

Hi Everyone I have a student building a GUI for federated login with Horizon.
The interface supports both a drop-down list of configured IdPs, and also Type Ahead for massive federations with hundreds of IdPs. Screenshots are visible in InVision here: https://invis.io/HQ3QN2123 All comments on the design are appreciated. You can make them directly to the screens via InVision Regards David
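As a sketch of how a login page could consume that list-identity-providers call: the snippet below turns a response body into the entries a drop-down could render. The sample payload follows the response shape in the identity-api v3 federation spec linked above, but it is illustrative data, not a live Keystone response, and the helper name is invented.

```python
# Sketch: build dropdown entries for a federated login page from a
# GET /v3/OS-FEDERATION/identity_providers response body. Sample data
# is illustrative; a real deployment would fetch this from Keystone.
import json

SAMPLE_RESPONSE = '''
{"identity_providers": [
    {"id": "kent", "enabled": true},
    {"id": "acme", "enabled": false}
]}
'''

def dropdown_entries(body):
    """Return the ids of enabled IdPs, sorted for stable display."""
    idps = json.loads(body)['identity_providers']
    return sorted(idp['id'] for idp in idps if idp.get('enabled'))
```

Note this only works for Horizon if the API is reachable pre-authentication, which is exactly the policy question being debated in this thread.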
[openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.
Hi all, During testing of Neutron upgrades, I have found that restarting the L3 agent in DVR mode causes VM network downtime for a configured floating IP. The outage is visible when pinging the VM from the external network: 2-3 pings are lost. The responsible place in the code is:

DVR: destroy fip ns: fip-8223e12e-837b-49d4-9793-63603fccbc9f from (pid=156888) delete /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py:164

Can someone explain why the fip namespace is deleted? Can we work out the situation so that there is no downtime of VM access?

Artur Korzeniewski Intel Technology Poland sp. z o.o. KRS 101882 ul. Slowackiego 173, 80-298 Gdansk
Re: [openstack-dev] [Keystone] [Horizon] Federated Login
Forcing Horizon to duplicate Keystone settings just makes everything much harder to configure and much more fragile. Exposing whitelisted, or all, IdPs makes much more sense.

On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli steve...@ca.ibm.com wrote: Some folks said that they'd prefer not to list all associated idps, which i can understand. Why? Actually, I like jamie's suggestion of just making horizon a bit smarter, and expecting the values in the horizon settings (idp+protocol) But, it's already in keystone. Thanks, Steve Martinelli OpenStack Keystone Core

...snip...
Re: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.
That's troubling... We are considering using DVR soon, and we have to restart neutron-openvswitch-agent and openstack-nova-compute periodically to get them to talk to rabbit again.

Thanks, Kevin

From: Korzeniewski, Artur [artur.korzeniew...@intel.com]
Sent: Wednesday, August 05, 2015 12:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.

...snip...
Re: [openstack-dev] [all] Does OpenStack need a common solution for DLM?
Flavio Percoco wrote: On 04/08/15 21:14 -0700, Joshua Harlow wrote: Morgan Fainberg wrote: On Tue, Aug 4, 2015 at 8:44 AM, Joshua Harlow harlo...@outlook.com wrote: Flavio Percoco wrote: On 03/08/15 19:48 +0200, Gorka Eguileor wrote: On Mon, Aug 03, 2015 at 03:42:48PM +, Fox, Kevin M wrote: I'm usually for abstraction layers, but they don't always pay off very well due to catering to the lowest common denominator. Let's clearly define the problem space first. IFF the problem space can be fully implemented using Tooz, then let's do that. Then the operator can choose. If Tooz can't and won't handle the problem space, then we're trying to fit a square peg in a round hole. What do you mean by 'clearly define the problem space'? We know what we want, we just need to agree on the compromises we are willing to make: use a DLM and make admins' life a little harder (only for those that deploy A-A) but have an A-A solution earlier, or postpone A-A functionality but make their life easier. And we already know that Tooz is not the Holy Grail and will not perform the miracle of giving Cinder HA A-A. It is only a piece of the problem, so there's nothing to discuss there, and it's not a square peg in a round hole, because it fits perfectly for what it is intended to do. But once you have filled that square hole you need another peg, the round one for the round hole. If people are expecting to find one thing that fixes everything and gives us HA A-A on its own, then I believe they are a little bit lost. As confusing as it seems, we've now moved from talking about just Cinder to understanding whether this is a problem many projects have and whether we can find a solution that will work for most of them. Therefore, I've renamed this thread to make this more evident. Now, so far we have:

- Ironic has an internal distributed lock and it uses a hash-ring
- Ceilometer uses tooz
- Several projects use a file lock or some other fashion of distributed lock
- *Add yours here*

Each one of these projects has a specific use-case that doesn't necessarily overlap. I'd like to see those cases listed somewhere. We've done this in the past already and I believe we can do it now as well. As I've mentioned in another thread, Gorka has done this for Cinder already; now we need to do it for other services too. Even if your project has a DLM in place, it'd be good to know what problem you solved with it, as it may be a problem that other projects have as well. As a community, we've been able to do away with adding a new service for DLMs thus far. I'm not saying we don't need one but, as mentioned in other threads, let's give this some more thought before we add a new service that'll make deploying and maintaining OpenStack harder. On the contrary, I think it would make deploying and maintaining openstack easier... As each service implements its own DLM pieces, they all do it in a way that is different from each other, which actually makes the situation worse (now operators need to figure out the X different ways this was done, the X different ways to release a messed up/stale/other lock...). DLM(s) like zookeeper and others provide that 'single' way of doing it (they also provide introspection abilities, i.e. to see who is waiting on a lock, what connection has a lock...), so IMHO I feel the question of 'should we' has really already been passed (but others may disagree). I strongly agree that we are past the point of needing a DLM. We have mostly papered over the missing choice of a consistent DLM across projects with many different implementations. I'm all for picking a DLM that is consistent across all of OpenStack and helping our deployers and operators only need to know one of these technologies.
Is the next step something x-project outlining the choices/direction so we can start that phase of the conversation? I am sure that once we have a clear direction, more and more use-cases will come out of the woodwork... I can start a cross-project spec tomorrow if people feel that is useful, it may be slightly opinionated (I am one of the cores that works on https://kazoo.readthedocs.org/ so I am going to be slightly biased for obvious reasons). By all means, go crazy! Ok, craziness has commenced, https://review.openstack.org/#/c/209661/ (WIP...) Flavio
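For context on the "file lock of some fashion" pattern mentioned above, here is a minimal stdlib sketch of the kind of ad-hoc per-host lock projects roll today. A real DLM such as ZooKeeper (e.g. via tooz's `coordinator.get_lock()`) would replace this with a cluster-wide lock plus the introspection abilities the thread mentions. Class and path names here are illustrative, not any project's actual code.

```python
# Sketch of an ad-hoc interprocess file lock (Unix-only via fcntl). This only
# coordinates processes on ONE host, which is exactly its limitation compared
# to a DLM: there is no cross-node visibility, fencing, or introspection.
import fcntl
import os
import tempfile

class FileLock(object):
    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        # O_CREAT so the first locker creates the lock file
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until acquired
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

lock_path = os.path.join(tempfile.gettempdir(), 'demo.lock')
with FileLock(lock_path):
    pass  # critical section
```

The point of the thread is that each project reinventing a variant of this (with different paths, stale-lock semantics, and failure modes) is worse for operators than one opinionated, shared DLM.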
Re: [openstack-dev] [trove][qa][stable] gate-trove-functional-dsvm-mysql needs some stable branch love
Hi Matt: Yes, this is on my radar and something I'm actively looking at. Hope to have a solution here for this pretty soon. Appreciate the help with this and the review you put up to get the stable branch unblocked! Cheers, Nikhil On Wed, Aug 5, 2015 at 6:33 AM, Amrith Kumar amr...@tesora.com wrote: Matt, Nikhil was working on it late into the night last night. I'll continue to work with him today and try and get this wrestled to the ground. -amrith -Original Message- From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] Sent: Wednesday, August 05, 2015 6:57 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [trove][qa][stable] gate-trove-functional-dsvm-mysql needs some stable branch love Trove changes on the stable branches are blocked on bug 1479358 [1] because a change was made to fix trove-integration on master for liberty but didn't take into account that those scripts are branchless and therefore need to work on stable/kilo and stable/juno as well, where we have capped versions of libraries it uses (like python-openstackclient). The gate-trove-functional-dsvm-mysql job is now running on stable branches for compat so we don't break stable again once it's fixed, but we're still blocked on just getting it to work at all. I tried a fix [2] that isn't working on stable for different reasons. I'm not actively pursuing getting this fixed and therefore really need people from the trove team to step up here and get their CI house in order. Otherwise the alternative is we don't run the gate-trove-functional-dsvm-mysql job on stable/juno and stable/kilo since it's not working and I haven't seen much impetus to get it working. 
[1] https://bugs.launchpad.net/trove-integration/+bug/1479358
[2] https://review.openstack.org/#/c/207193/

-- Thanks, Matt Riedemann
Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients
Hey Mike, I think one of the hacks that are in place to try and keep compatibility between the old and new ways of using the client is returning the wrong thing. Compare the output of trustor.user_id and trustor_auth.get_user_id(sess). For me trustor.user_id is None, which would explain why you'd get permission errors. Whether this is a bug in keystoneclient is debatable, because we had to keep compatibility with the old options and just not update them for the new paths; the ambiguity is certainly bad. The command that works for me is:

    trustor.trusts.create(
        trustor_user=trustor_auth.get_user_id(sess),
        trustee_user=trustee_auth.get_user_id(sess),
        project=trustor_auth.get_project_id(sess),
        role_names=['Member'],
        impersonation=True,
        expires_at=None)

We're working on a keystoneclient 2.0 that will remove all that old code. Let me know if that fixes it for you.

jamie

- Original Message -
From: michael mccune m...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Sent: Wednesday, August 5, 2015 11:37:10 PM
Subject: Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients

On 08/05/2015 02:34 AM, Steve Martinelli wrote: I think this is happening because the last session created was based off of trustee_auth. Try creating 2 sessions, one for each user (trustor and trustee). Maybe Jamie will chime in. just as a followup, i tried creating new Session objects for each client and i still get permission errors. i'm going to dig into the trust permission validation stuff a little.
mike
Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?
On 06/08/15 10:16, Jamie Lennox wrote: - Original Message - From: Adam Young ayo...@redhat.com To: openstack-dev@lists.openstack.org Sent: Thursday, August 6, 2015 1:03:55 AM Subject: Re: [openstack-dev] [puppet][keystone] To always use or not use domain name? On 08/05/2015 08:16 AM, Gilles Dubreuil wrote: While working on the trust provider for the Keystone (V3) puppet module, a question about using domain names came up. Shall we allow names without specifying the domain name in the resource call, or not? I have a trust case involving a trustor user, a trustee user and a project. For each user/project the domain can be explicit (mandatory): trustor_name::domain_name or implicit (optional): trustor_name[::domain_name] If a domain isn't specified, the domain name can be assumed (intuited) from either the default domain or the domain of the corresponding object, if unique among all domains. If you are specifying a project by name, you must specify the domain either via name or id. If you specify a project by ID, you run the risk of a conflict if you also provide a domain specifier (ID or name). Although allowing the domain to be omitted might seem easier at first, I believe it could lead to confusion and errors, the latter being harder for the user to detect. Therefore it might be better to always pass the domain information. Probably a good idea, as it will catch if you are making some assumption. I.e., I say DomainX ProjectQ but I mean DomainQ ProjectQ. Agreed. Like it or not, domains are a major part of using the v3 api, and if you want to use project names and user names we should enforce that domains are provided. Particularly at the puppet level (dealing with users who should understand this stuff), anything that tries to guess what the user means is a bad idea and going to lead to confusion when it breaks. I totally agree. Thanks for participating. I believe using the full domain name approach is better.
But it's difficult to tell because puppet-keystone and puppet-openstacklib now rely on python-openstackclient (OSC) to interface with Keystone. The fact that we can use OSC defaults (OS_DEFAULT_DOMAIN or equivalent to set the default domain) doesn't necessarily make it the best approach. For example, the hard-coded value [1] makes it flaky.

[1] https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40

To help determine the approach to use, any feedback will be appreciated.

Thanks, Gilles
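The implicit-versus-explicit behaviour under discussion can be sketched with a hypothetical title parser (not puppet-keystone's actual code): a 'name[::domain]' title either carries its domain explicitly or silently falls back to a default, and that silent fallback is exactly the ambiguity the thread argues against.

```python
# Sketch: parse a 'name[::domain]' puppet resource title. The helper and the
# default-domain fallback are illustrative; the thread's conclusion is that
# the fallback branch should be rejected rather than guessed.
def split_title(title, default_domain='Default'):
    """Return (name, domain, was_explicit) for a 'name[::domain]' title."""
    if '::' in title:
        name, domain = title.split('::', 1)
        return name, domain, True
    # implicit case: the tool guesses -- 'DomainX ProjectQ' vs 'DomainQ
    # ProjectQ' mistakes go undetected here
    return title, default_domain, False
```

Enforcing `was_explicit` (i.e. erroring out on the fallback branch) implements the "always pass the domain" position taken above.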
Re: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat
On Wed, Aug 5, 2015 at 1:29 PM, Dan Prince dpri...@redhat.com wrote: ...snip... -The external config file mechanism for Kolla containers only seems to support a single config file. Some services (Neutron) can have multiple files. Could we extend the external config support to use multiple files? Yes! I would actually prefer a rework. We implemented that in a hurry, but if you look at the initial commit messages we knew it was a stop-gap until a better idea came along. We need to have some way to do this dynamically or at least in a more readable way. I had a thought of laying down a json file whose contents describe which files to copy/move/change permissions on, and reading that in. That way the config-external script never actually changes, and the deploy tool can determine which files get pulled in by the way it lays down that json file. I rejected using a '*' match for security reasons but also because some configs need to go to different places. In the case of neutron, neutron.conf will be in /etc/neutron and ml2_conf.ini will be in /etc/neutron/plugins/ml2, so I don't think '*' matching will work. We are open to ideas!
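The json-manifest idea above could look something like this minimal sketch. The manifest field names (source, dest, mode) are invented for illustration, not an agreed Kolla format; the point is that the config-external step itself stays static while the deploy tool controls the manifest.

```python
# Sketch: a static config-external step that reads a json manifest describing
# which files to copy where and with what permissions. Field names are
# hypothetical; the deploy tool writes the manifest, this code never changes.
import json
import os
import shutil

def apply_manifest(manifest_path):
    with open(manifest_path) as f:
        entries = json.load(f)
    for entry in entries:
        dest_dir = os.path.dirname(entry['dest'])
        if dest_dir and not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)
        shutil.copy(entry['source'], entry['dest'])
        # mode is an octal string like "0600"
        os.chmod(entry['dest'], int(entry['mode'], 8))
```

Because each entry names an explicit destination, neutron.conf can land in /etc/neutron while ml2_conf.ini lands in /etc/neutron/plugins/ml2, which a '*' match could not express.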
Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
Hi, Even if Barbican can store the key, it will add the overhead of RESTful API interaction between Keystone and Barbican. May we store the key in the Keystone DB backend (or another separate DB backend), for example MySQL?

Best Regards Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Wednesday, August 05, 2015 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko ahec...@mirantis.com wrote: Hi, I believe that Barbican keystore for signing keys was discussed earlier. I'm not sure if that's the best idea since Barbican relies on Keystone authN/authZ. Correct. Once we find a solution for that problem it would be interesting to work towards a solution for storing keys in Barbican. I've talked to several people about this already and it seems to be the natural progression. Once we can do that, I think we can revisit the tooling for rotation. That's why this mechanism should be considered rather as out of band to Keystone/OS API and is rather a devops task. regards, Adam

On Wed, Aug 5, 2015 at 8:11 AM, joehuang joehu...@huawei.com wrote: Hi, Lance, May we store the keys in Barbican, and can the key rotation be done upon Barbican? And if we use Barbican as the repository, then it's easier for key distribution and rotation in a multiple Keystone deployment scenario; the database replication (sync. or async.) capability could be leveraged.
Best Regards Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Tuesday, August 04, 2015 10:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

...snip...

What's the issue with just a simple rsync of the directory? None I think. I just want to reuse the interface provided by keystone-manage. You wanted to use the interface from keystone-manage to handle the actual promotion of the staged key, right?
This is why there were two fernet_rotate commands issued? Right. Here is the fixed version (please don't use it anyway): http://paste.openstack.org/show/406862/ Note, this doesn't take into account the initial key repository creation, does it? Here is a similar version that relies on rsync for the distribution after the initial key rotation [0].

[0] http://cdn.pasteraw.com/d6odnvtt1u9zsw5mg4xetzgufy1mjua

-- Best regards, Boris Bobrov

-- Adam Heczko Security Engineer @ Mirantis Inc.
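To make the rotation semantics discussed in this thread concrete, here is a minimal sketch of the index bookkeeping: the staged key (index 0) is promoted to the next-highest index, a fresh key 0 is staged, and the oldest primary key is pruned once the active-key limit is exceeded. It models indices only, with placeholder strings instead of real Fernet key material, and the function is illustrative rather than keystone-manage's implementation.

```python
# Sketch of one fernet rotation step (index bookkeeping only).
# keys: dict of index -> key material, where index 0 is the staged key.
def rotate(keys, max_active_keys):
    rotated = dict(keys)
    # promote the staged key (0) to become the newest primary key
    next_index = max(rotated) + 1
    rotated[next_index] = rotated.pop(0)
    # stage a fresh key at index 0 (placeholder instead of real key material)
    rotated[0] = 'staged-%d' % next_index
    # prune the oldest primary keys beyond the limit
    while len(rotated) > max_active_keys:
        oldest = min(k for k in rotated if k != 0)
        del rotated[oldest]
    return rotated
```

This also shows why the double-rotation bug discussed earlier in the thread matters: running this step twice on one node promotes and prunes again, leaving that node with a different key set than its peers.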
Re: [openstack-dev] [TripleO] [Puppet] [kolla] Deploying OpenStack with Puppet modules on Docker with Heat
Apologies for the top post, but I just wanted to point out that in the config-external examples I am aware of, /opt/kolla is the wrong directory to configure from (it should be /var/lib/kolla or something similar) and we will fix this during l3.

Regards
-steve

On 8/5/15, 6:53 PM, Steven Dake (stdake) <std...@cisco.com> wrote:

On 8/5/15, 11:33 AM, Ryan Hallisey <rhall...@redhat.com> wrote:

Tagging kolla so the kolla community also sees it. Pardon the top posting.
-Ryan

Ryan,

Super appreciated. I may have missed this in my deluge of mail - please keep doing that - or if it involves kolla please tag it as kolla, as I read that folder often :)

Dan,

Thanks for the detailed posting. Comments inline.

----- Original Message -----
From: Dan Prince <dpri...@redhat.com>
To: openstack-dev <openstack-dev@lists.openstack.org>
Sent: Wednesday, August 5, 2015 2:29:13 PM
Subject: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat

Hi,

There is a lot of interest in getting support for container-based deployment within TripleO, and many different ideas and opinions on how to go about doing that.

One idea on the table is to use Heat to help orchestrate the deployment of docker containers. This would work similar to our tripleo-heat-templates implementation, except that when using docker you would swap in a nested stack template that would configure containers on baremetal. We've even got a nice example that shows what a containerized TripleO overcloud might look like here [1]. The approach outlines how you might use kolla docker containers alongside the tripleo-heat-templates to do this sort of deployment.

This is all cool stuff, but one area of concern is how we do the actual configuration of the containers. The above implementation relies on passing environment variables into kolla-built docker containers, which then self-configure all the required config files and start the service.

This sounds like a start...
but creating (and maintaining) another from-scratch OpenStack configuration tool isn't high on my list

Agree, we came to the exact same conclusion, which is why we changed to the config-external config strategy in the ansible-multi spec at: https://review.openstack.org/#/c/189157/

of things to spend time on. Sure, there is already a kolla community helping to build and maintain this configuration tooling (mostly thinking config files here), but this sounds a bit like what tripleo-image-elements initially tried to do, and it turns out there are much more capable configuration tools out there.

We would prefer to drop config-internal if TripleO doesn't plan to use it and focus on config-external. The reason we left it intact is we didn't want to leave TripleO in the lurch.

Since we are already using a good bit of Puppet in tripleo-heat-templates, the idea came up that we would try to configure Docker containers using Puppet. Again, there are several ideas in the Puppet community with regards to how docker might best be configured with Puppet. Keeping those in mind, we've been throwing some ideas out on an etherpad here [2] that describes using Heat for orchestration, Puppet for configuration, and Kolla docker images for containers. A quick outline of the approach is:

- Extend the heat-container-agent [3] that runs os-collect-config and all the required hooks we require for deployment. This includes docker-compose, bash scripts, and Puppet. NOTE: As described in the etherpad, I've taken to using DIB to build this container. I found this to be faster from a TripleO development baseline.

- To create config files, the heat-container-agent would run a puppet manifest for a given role and generate a directory tree of config files (/var/lib/etc-data for example).

Would prefer per-file mounting for config-external.
(see more below)

- We then run a docker-compose software deployment that mounts those configuration file(s) into a read-only volume and uses them to start the containerized service.

Sounds good.

The approach could look something like this [4].

The nice thing about this is that it requires no modification to the OpenStack Puppet modules. We can use those today, as-is. Additionally, although Puppet runs in the agent container, we've created a mechanism to set all the resources to noop mode except for those that generate config files. And lastly, we can use exactly the same role manifest for docker that we do for baremetal. Lots of re-use here... and although we are disabling a lot of Puppet functionality in setting all the non-config resources to noop, the Kolla containers already do some of that stuff for us (starting services, etc.).

All that said (and trying to keep this short), we've still got a bit of work to do around wiring up externally created config files to kolla-built docker containers. A couple of issues are:

- The external config file mechanism for Kolla containers only seems to support a single config file. Some services (Neutron) can have multiple files.
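A docker-compose service entry for the read-only config mount described above might look roughly like the following. The image name, service, and paths are illustrative assumptions only, not taken from the templates referenced in [4]:

```yaml
# Hypothetical docker-compose fragment: mount the Puppet-generated config
# tree read-only into a kolla-built container. Names and paths are made up.
keystone:
  image: kollaglue/centos-rdo-keystone
  volumes:
    # Config files generated by the agent's Puppet run, mounted :ro so
    # the container cannot modify them.
    - /var/lib/etc-data/keystone:/etc/keystone:ro
  restart: always
```

The per-file mounting preference mentioned above would instead list individual files, e.g. `/var/lib/etc-data/keystone/keystone.conf:/etc/keystone/keystone.conf:ro`, one volume entry per config file.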
Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys
It's been said before and I'll reiterate: the fernet keys are not meant to be managed by keystone itself directly. This is viewed as a DevOps concern, as keystone itself doesn't dictate the HA method to be used (many deployers use different methodologies). Most CMS systems handle synchronization of files on disk very, very well, much the same as ssl certs, pki infrastructure, etc. Fernet keys are more in the realm of your CMS tool than something that belongs in keystone's DB.

If you look through the other messages in both this thread and the others, you'll find this has been the stance from the beginning.

--Morgan

Sent via mobile

On Aug 6, 2015, at 15:25, joehuang <joehu...@huawei.com> wrote:

Hi,

Even if Barbican can store the key, it will add overhead for restful API interaction between KeyStone and Barbican. May we store the key in the KeyStone DB backend (or another separate DB backend), for example MySQL?

Best Regards
Chaoyi Huang ( Joe Huang )

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Wednesday, August 05, 2015 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

On Wed, Aug 5, 2015 at 2:38 AM, Adam Heczko <ahec...@mirantis.com> wrote:

Hi, I believe that a Barbican keystore for signing keys was discussed earlier. I'm not sure if that's the best idea, since Barbican relies on Keystone authN/authZ.

Correct. Once we find a solution for that problem it would be interesting to work towards a solution for storing keys in Barbican. I've talked to several people about this already and it seems to be the natural progression. Once we can do that, I think we can revisit the tooling for rotation.

That's why this mechanism should be considered rather as out of band to Keystone/OS API and is rather a devops task.
regards,
Adam

On Wed, Aug 5, 2015 at 8:11 AM, joehuang <joehu...@huawei.com> wrote:

Hi, Lance,

May we store the keys in Barbican, and can the key rotation be done upon Barbican? If we use Barbican as the repository, then it's easier for key distribution and rotation in a multiple-KeyStone deployment scenario; the database replication (sync. or async.) capability could be leveraged.

Best Regards
Chaoyi Huang ( Joe Huang )