Re: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0
Dims,

There are some problems with the exception hierarchy which need to be fixed.

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Tuesday, June 09, 2015 7:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

Gary, Tracy, Vipin and other contributors,

Is the oslo.vmware API solid enough for us to bump it to 1.0.0? If not, what's left to be done?

thanks,
dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
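[Editor's note] The exception-hierarchy concern raised here is easy to illustrate. Below is a minimal sketch, in Python, of the kind of single-rooted hierarchy a library wants settled before a 1.0.0 API freeze: one library-level base class so callers can catch a single type. The class names are hypothetical illustrations, not the actual oslo.vmware API.

```python
class VMwareDriverException(Exception):
    """Hypothetical base class for all exceptions raised by the library.

    Callers catch this one type instead of a grab-bag of unrelated classes.
    """
    msg_fmt = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        # Build the message from the class-level format string if the
        # caller did not supply one explicitly.
        if message is None:
            message = self.msg_fmt % kwargs if kwargs else self.msg_fmt
        super().__init__(message)


class VimException(VMwareDriverException):
    """Failures coming back from the (hypothetical) VIM API layer."""
    msg_fmt = "VIM API call failed: %(reason)s"


class VimSessionOverLoadException(VimException):
    """A more specific failure still catchable via the base class."""
    msg_fmt = "The server is overloaded: %(reason)s"


# Callers only need to catch the base class:
try:
    raise VimException(reason="socket timeout")
except VMwareDriverException as e:
    print(e)  # VIM API call failed: socket timeout
```

The point of freezing such a hierarchy before 1.0.0 is that consumers (nova, cinder, ...) write `except` clauses against it; reshuffling parent classes afterwards silently breaks their error handling.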
[openstack-dev] Starting the M Release Name Poll
Hey everybody!

I've started the poll for the M release name. By now, you should have gotten an email with a link to the vote. If you did not and think that you should have, please let me know (directly, not to the mailing list please) and I can re-send and/or fix as necessary.

Please remember that there is an additional step after the close of this poll: the top choice will be vetted for legal risk by the Foundation staff and their legal resources. If it is found to carry undue risk, the next choice in the list may be selected, until a non-problematic name has been found. We'll send out an official announcement about the final name once it has cleared vetting.

I'm keeping the poll open until 23:59:59 UTC a week Sunday, June 21.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian
On 06/11/2015 10:08 AM, Duncan Thomas wrote:

On 11 June 2015 at 10:26, Thomas Goirand z...@debian.org wrote:

Hi,

The current maintainer of suds in Debian sent bug reports against all packages depending on it. We would like to get rid of suds completely. See:

https://bugs.debian.org/788080
https://bugs.debian.org/788081
https://bugs.debian.org/788083
https://bugs.debian.org/788085
https://bugs.debian.org/788088

Affected projects are: cinder, nova, trove, ironic, and finally oslo.vmware. So, are we moving to suds-jurko? Or anything else?

Cheers,
Thomas Goirand (zigo)

There's only one cinder driver using it (Nimble Storage), and it seems to be using only very basic features. There are half a dozen suds forks on PyPI, or there's pysimplesoap, which the Debian maintainer recommends.

None of the above are currently packaged for Ubuntu that I can see, so can anybody in-the-know make a reasoned recommendation as to what to move to? I can do the work of packaging the module that we choose.

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian
On 06/11/2015 11:31 PM, Nikhil Manchanda wrote:

Hi Thomas:

I just checked and I don't see suds as a requirement for trove. I don't think it should be a requirement for the trove Debian package, either.

Thanks,
Nikhil

Hi,

I fixed the package and removed the "Suggests: python-suds" in both Trove and Ironic. Now there's still the issue in:

- nova
- cinder
- oslo.vmware

It'd be nice to do something about them. As I mentioned, I'll do the packaging work for anything that will replace it, if needed.

FYI, I filed bugs:

https://bugs.launchpad.net/oslo.vmware/+bug/1465015
https://bugs.launchpad.net/nova/+bug/1465016
https://bugs.launchpad.net/cinder/+bug/1465017

Cheers,
Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
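[Editor's note] One reason the suds-jurko option discussed in this thread is usually low-effort: suds-jurko is a fork that installs under the same `suds` package name, so existing imports and call sites typically run unchanged. A hedged sketch of a guarded import that works with either fork (the helper and WSDL URL below are illustrative, not taken from any of the affected projects):

```python
# Illustrative only: suds-jurko installs under the same "suds" package
# name as the original suds, so code written against suds typically
# needs no changes. This guard degrades gracefully when neither fork
# is installed.
try:
    from suds.client import Client  # provided by suds or suds-jurko
except ImportError:
    Client = None


def make_soap_client(wsdl_url):
    """Return a suds SOAP client for wsdl_url, or None if no suds fork is installed."""
    if Client is None:
        return None
    # A typical suds call site then looks like:
    #   client.service.SomeOperation(arg1, arg2)
    return Client(wsdl_url)
```

A move to pysimplesoap, by contrast, would mean rewriting such call sites against a different client API, which is why a drop-in fork is attractive for the projects listed above.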
Re: [openstack-dev] Where are all the research papers?
-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: 14 June 2015 07:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Where are all the research papers?

Out of curiosity, is there any known listing of papers (ACM style or otherwise) on OpenStack, or evaluations of it, that are published on a wiki or elsewhere? Especially as the number of projects increases, I would expect more articles and papers to be published (all of them would be an interesting read...). I did find one, but I'm starting to wonder if there are more, and if there are not, what is stopping people from writing more? (Are we not doing enough outreach to people that would write papers?)

'On fault resilience of OpenStack' https://kabru.eecs.umich.edu/papers/publications/2013/socc2013_ju.pdf

It'd be neat to somehow get more published articles about OpenStack coming from universities (even if the articles are about bugs, like in 'What Bugs Live in the Cloud?' @ http://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf). Maybe we should also feature them on the OpenStack blog when/if they get published, as a showing of good faith to the article creator/researcher/other...

Anyone have any thoughts on this?

-Josh

Google Scholar is a good source of material, and you can subscribe to a feed according to keywords. There are around 30 publications per week mentioning OpenStack.

https://scholar.google.fr/scholar?hl=en&as_sdt=0,5&q=openstack&scisbd=1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [openstack][cinder]VMware: concurrent create two volumes, one is failed when vcenter session is timeout
Sorry for the long delay. This is fixed in oslo.vmware 0.9.0:

https://github.com/openstack/oslo.vmware/commit/a229faf8ba59724a4fda3f37d5a7473376f93d9c

From: hao wang [mailto:sxmatch1...@gmail.com]
Sent: Saturday, May 30, 2015 12:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack][cinder]VMware: concurrent create two volumes, one is failed when vcenter session is timeout

The default session timeout for the vCenter server (5.5 used here) is 30 minutes. When the session has timed out and two volumes are then created concurrently through the Cinder API, one of the volume creations fails. The error log is here: http://paste.openstack.org/show/246678/

It looks like, when there are concurrent requests, the VMware driver re-creates the session in both requests at the same time. I'm not sure whether that is the root cause. Need some help from VMware engineers, thanks.

--
Best Wishes For You!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
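[Editor's note] The race described above, where two concurrent requests both notice the expired session and both try to re-login, is the classic case for double-checked locking. The toy snippet below illustrates the general fix pattern only; it is not the actual oslo.vmware code (see the commit referenced in the message above for the real change), and the class and method names are invented for illustration.

```python
import threading


class Session:
    """Toy session holder demonstrating serialized session re-creation."""

    def __init__(self):
        self._lock = threading.Lock()
        self._session_id = None
        self._logins = 0  # counts how many real logins happened

    def _login(self):
        # Stands in for an expensive re-authentication round trip.
        self._logins += 1
        self._session_id = "session-%d" % self._logins

    def ensure_active(self):
        # Double-checked locking: only one thread re-creates the session;
        # the others re-check after acquiring the lock and reuse it.
        if self._session_id is None:
            with self._lock:
                if self._session_id is None:
                    self._login()
        return self._session_id


s = Session()
results = []
threads = [threading.Thread(target=lambda: results.append(s.ensure_active()))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both concurrent requests end up sharing a single freshly created session.
```

Without the lock and the second check, both threads can pass the outer `is None` test and each perform a login, which matches the failure mode reported in the error log above.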
Re: [openstack-dev] Gerrit maintenance concluded
Thanks for doing this!

On 6/13/15, 4:26 AM, Jeremy Stanley fu...@yuggoth.org wrote:

Our maintenance has concluded successfully without incident and the accompanying Gerrit outage was roughly an hour. We moved 57 repositories to new Git namespaces:

stackforge/cookbook-openstack-bare-metal -> openstack/cookbook-openstack-bare-metal
stackforge/cookbook-openstack-block-storage -> openstack/cookbook-openstack-block-storage
stackforge/cookbook-openstack-client -> openstack/cookbook-openstack-client
stackforge/cookbook-openstack-common -> openstack/cookbook-openstack-common
stackforge/cookbook-openstack-compute -> openstack/cookbook-openstack-compute
stackforge/cookbook-openstack-dashboard -> openstack/cookbook-openstack-dashboard
stackforge/cookbook-openstack-data-processing -> openstack/cookbook-openstack-data-processing
stackforge/cookbook-openstack-database -> openstack/cookbook-openstack-database
stackforge/cookbook-openstack-identity -> openstack/cookbook-openstack-identity
stackforge/cookbook-openstack-image -> openstack/cookbook-openstack-image
stackforge/cookbook-openstack-integration-test -> openstack/cookbook-openstack-integration-test
stackforge/cookbook-openstack-network -> openstack/cookbook-openstack-network
stackforge/cookbook-openstack-object-storage -> openstack/cookbook-openstack-object-storage
stackforge/cookbook-openstack-ops-database -> openstack/cookbook-openstack-ops-database
stackforge/cookbook-openstack-ops-messaging -> openstack/cookbook-openstack-ops-messaging
stackforge/cookbook-openstack-orchestration -> openstack/cookbook-openstack-orchestration
stackforge/cookbook-openstack-telemetry -> openstack/cookbook-openstack-telemetry
stackforge/dragonflow -> openstack/dragonflow
stackforge/mistral -> openstack/mistral
stackforge/mistral-dashboard -> openstack/mistral-dashboard
stackforge/mistral-extra -> openstack/mistral-extra
stackforge/networking-bgpvpn -> openstack/networking-bgpvpn
stackforge/networking-cisco -> openstack/networking-cisco
stackforge/networking-l2gw -> openstack/networking-l2gw
stackforge/networking-midonet -> openstack/networking-midonet
stackforge/networking-odl -> openstack/networking-odl
stackforge/networking-ofagent -> openstack/networking-ofagent
stackforge/networking-ovn -> openstack/networking-ovn
stackforge/octavia -> openstack/octavia
stackforge/openstack-chef-repo -> openstack/openstack-chef-repo
stackforge/openstack-chef-specs -> openstack/openstack-chef-specs
stackforge/puppet-ceilometer -> openstack/puppet-ceilometer
stackforge/puppet-cinder -> openstack/puppet-cinder
stackforge/puppet-designate -> openstack/puppet-designate
stackforge/puppet-glance -> openstack/puppet-glance
stackforge/puppet-gnocchi -> openstack/puppet-gnocchi
stackforge/puppet-heat -> openstack/puppet-heat
stackforge/puppet-horizon -> openstack/puppet-horizon
stackforge/puppet-ironic -> openstack/puppet-ironic
stackforge/puppet-keystone -> openstack/puppet-keystone
stackforge/puppet-manila -> openstack/puppet-manila
stackforge/puppet-modulesync-configs -> openstack/puppet-modulesync-configs
stackforge/puppet-monasca -> openstack/puppet-monasca
stackforge/puppet-neutron -> openstack/puppet-neutron
stackforge/puppet-nova -> openstack/puppet-nova
stackforge/puppet-openstack-specs -> openstack/puppet-openstack-specs
stackforge/puppet-openstack_extras -> openstack/puppet-openstack_extras
stackforge/puppet-openstacklib -> openstack/puppet-openstacklib
stackforge/puppet-sahara -> openstack/puppet-sahara
stackforge/puppet-swift -> openstack/puppet-swift
stackforge/puppet-tempest -> openstack/puppet-tempest
stackforge/puppet-tripleo -> openstack/puppet-tripleo
stackforge/puppet-trove -> openstack/puppet-trove
stackforge/puppet-tuskar -> openstack/puppet-tuskar
stackforge/puppet-vswitch -> openstack/puppet-vswitch
stackforge/python-mistralclient -> openstack/python-mistralclient
stackforge/vmware-nsx -> openstack/vmware-nsx

We moved and also renamed 1 repository:

stackforge/ironic-discoverd -> openstack/ironic-inspector

We retired 3 unmaintained/abandoned repositories:

stackforge/fuel-plugin-external-nfs -> stackforge-attic/fuel-plugin-external-nfs
stackforge/fuel-plugin-group-based-policy -> stackforge-attic/fuel-plugin-group-based-policy
stackforge/zvm-driver -> stackforge-attic/zvm-driver

I've uploaded these .gitreview
[openstack-dev] [TripleO] a bad week for CI
Last week was a really bad week for TripleO CI. Several breaking changes (two of which were Fedora package related) kept at least a subset of the jobs down for over half the week. Here is a brief summary of the issues we hit, along with links to some of the workarounds.

Fedora 21: python-address package breaks python-ipaddr
https://review.openstack.org/#/c/189745/

Puppet keystone: PKI setup was broken (this revert was accepted and a proper fix is in the works)
https://review.openstack.org/#/c/189892/

Nova: Ironic ephemeral partitions aren't present (we may need to look for a proper roll-forward fix for this, as the breaking change fixed a BDM bug with virt Nova drivers but breaks the Ironic ephemeral partition entirely)
https://review.openstack.org/#/c/190622/

Fedora 21: keepalived package update breaks some VIPs (still need to file a Fedora bugzilla with more details to get this package fixed)
https://review.openstack.org/#/c/191536/

If you are trying to spin up your own TripleO dev environments, you will likely need to cherry-pick at least one of these in order to have a working setup, depending on whether you use Fedora and are using Puppet (which may alleviate the need for the Ironic ephemeral partition fix).

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack][Neutron][OVN] OVN-Neutron - TODO List
Hi Salvatore,

Thanks for the comments, added some more information.

Gal

On Sun, Jun 14, 2015 at 7:58 PM, Salvatore Orlando sorla...@nicira.com wrote:

Gal, thanks for this summary. Some additional info inline.

Salvatore

On 12 June 2015 at 19:38, Gal Sagie gal.sa...@gmail.com wrote:

Hello All,

I wanted to share some of our next working items and hopefully get more people on board with the project. I personally would mentor any newcomer that wants to get familiar with the project and help with any of these items. You can also feel free to approach Russell Bryant (rbry...@redhat.com), who is heading the OVN-OpenStack integration. We are both usually active on IRC at #openstack-neutron-ovn (freenode); you can drop us a visit if you have any questions.

The Neutron sprint in Fort Collins [1] has a work item for OVN; hopefully some work can start there on some of these items (or others). Russell Bryant and myself unfortunately won't be there, but feel free to contact us online or by email.

*1. Security Group Implementation*

Currently security groups are not being configured in OVN. There is a document written about how to model security groups as OVN northbound ACLs. [2] I suspect getting this right is not going to be trivial; hopefully I might be able to start tackling this item next week.

From what I recall Miguel was very interested in helping out on this front. Have you already reached out to him?

[Gal] I will talk with him about it, thanks for letting me know.

*2. Provider Network support*

Russell sent a design proposal to the ovs-dev mailing list [3]; we need to follow up on that and implement it in OVN.

I think I have replied to that proposal with a few comments, perhaps you might have a look at those.

[Gal] Haven't read it yet, will probably get to it the day after tomorrow.

*3. Tempest configuration*

Russell has a patch for that [4] which needs additional help to make it work.

That patch has merged now. So perhaps Russell does not need help anymore!

[Gal] I synced with Russell before sending this list. The patch is not merged yet (it depends on another patch) even though it has been reviewed, and I think the tempest job is still failing, so more tweaking needs to be done. I am having a bit of a crazy week, but will take it on myself next week if nothing magical happens :)

*4. Unit Tests / Functional Tests*

We want to start adding more testing to the project on all fronts.

*5. Integration with OVS-DPDK*

OVS-DPDK has an ML2 mechanism driver [5] to enable a userspace DPDK dataplane for OVS; we want to try and see how this can combine with the OVN mechanism driver (one idea is to use hierarchical port binding for that). We need to design and test it and provide additional working items for this integration.

I think this is a rare case where the OVN integration might leverage additional mechanism drivers, as AFAICT the DPDK driver mainly interacts with VIF plugging (operating at the Neutron port bindings level), and does not interfere with logical network resource processing.

[Gal] I agree. The reason I proposed it here is that DPDK has many benefits and requires tweaking not only in the VIF plugging but in the whole OVS-DPDK data plane bring-up, huge pages allocation, installation and so on, so I thought we could get this for free by reusing that mechanism driver. Of course someone needs to check that this integration actually works and, if not, what needs to be added to make it work.

*6. L2 Gateway Integration*

OVN supports L2 gateway translation between virtual and physical networks. We want to leverage the current L2-Gateway sub-project in stackforge [6] and use it to enable configuration of L2 gateways in OVN. I have looked briefly at the project and it seems the APIs are good, but currently the implementation relies on an RPC and agent implementation (whereas we would like to configure it using OVSDB), so this needs to be sorted out and tested.

Last time I checked the progress of this project, they were focusing on ToR VXLAN offload as a first use case. And, as far as I recall, this is what networking-l2gw provides nowadays (Armando and Sukdev might have more info). Nevertheless, the API is generic enough that in my opinion it might be possible for OVN to leverage it. We shall implement a distinct l2gw service plugin like [1]; as I have some familiarity with this kind of API, let me know if I can be of any help.

[1] http://git.openstack.org/cgit/openstack/networking-l2gw/tree/networking_l2gw/services/l2gateway/plugin.py

[Gal] Would love it if you could help with this. From what I briefly saw the APIs seem generic enough, but as stated below we first need to see what is the correct way to configure it in OVN, and that's something that first needs to be resolved on the OVN side, I believe.

Another issue is related to OVN itself, which doesn't have L2
Re: [openstack-dev] [Neutron] Proposing Ann Kamyshnikova for the API DB core reviewer team
+1

From: Akihiro MOTOKI amot...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Friday, June 12, 2015 at 8:20 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Proposing Ann Kamyshnikova for the API DB core reviewer team

+1

2015-06-11 23:34 GMT+09:00 Henry Gessau ges...@cisco.com:

As one of the Lieutenants [1] for the API and DB areas under the PTL, I would like to propose Ann Kamyshnikova as a member of the Neutron API and DB core reviewer team.

Ann has been a long-time contributor in Neutron, showing expertise particularly in database matters. She has also worked with and contributed code to the oslo.db and sqlalchemy/alembic projects. Ann was a critical contributor to the Neutron database healing effort that was completed in the Juno cycle. Her deep knowledge of databases and backends, and her expertise with oslo.db, sqlalchemy and alembic, will be very important in this area.

Ann is a trusted member of our community and her review stats [2][3][4] place her comfortably with other Neutron core reviewers. She consistently catches database issues early when patches are submitted for review, and shows dedication to helping developers understand and perfect their database-related changes.

Existing Neutron core reviewers from the API and DB area of focus, please vote +1/-1 for the addition of Ann to the core reviewer team. Specifically, I'm looking for votes from Akihiro (co-Lieutenant), Mark, Maru, Armando and Carl.
[1] http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#adding-or-removing-core-reviewers
[2] https://review.openstack.org/#/q/reviewer:%22Ann+Kamyshnikova+%253Cakamyshnikova%2540mirantis.com%253E%22,n,z
[3] http://stackalytics.com/report/contribution/neutron-group/90
[4] http://stackalytics.com/?user_id=akamyshnikova&metric=marks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Where are all the research papers?
Tim Bell wrote:

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: 14 June 2015 07:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Where are all the research papers?

Out of curiosity, is there any known listing of papers (ACM style or otherwise) on OpenStack, or evaluations of it, that are published on a wiki or elsewhere? Especially as the number of projects increases, I would expect more articles and papers to be published (all of them would be an interesting read...). I did find one, but I'm starting to wonder if there are more, and if there are not, what is stopping people from writing more? (Are we not doing enough outreach to people that would write papers?)

'On fault resilience of OpenStack' https://kabru.eecs.umich.edu/papers/publications/2013/socc2013_ju.pdf

It'd be neat to somehow get more published articles about OpenStack coming from universities (even if the articles are about bugs, like in 'What Bugs Live in the Cloud?' @ http://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf). Maybe we should also feature them on the OpenStack blog when/if they get published, as a showing of good faith to the article creator/researcher/other...

Anyone have any thoughts on this?

Google Scholar is a good source of material, and you can subscribe to a feed according to keywords. There are around 30 publications per week mentioning OpenStack.

https://scholar.google.fr/scholar?hl=en&as_sdt=0,5&q=openstack&scisbd=1

Awesome! Thanks very much. I still wonder if we should put these authors/researchers more in the spotlight via https://www.openstack.org/blog/ (the weekly newsletter). The more papers the better IMHO (and the more attention we give, the more papers we may help make possible...)

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [os-ansible-deployment] Core team nomination
+1 most definitely!

On Saturday, June 13, 2015, Kevin Carter kevin.car...@rackspace.com wrote:

Hello,

I would like to nominate Ian Cordasco (sigmavirus24 on IRC) for the os-ansible-deployment-core team. Ian has been contributing to the OSAD project for some time now and has always had quality reviews[0], he's landing great patches[1], he's almost always in the meetings, and is simply an amazing person to work with. His open-source-first attitude, security mindset, and willingness to work cross-project are invaluable and will only stand to better the project and the deployers who consume it.

Please respond with +1/-1s and/or any other concerns. As a reminder, we are using the voting process outlined at [ https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add members to our core team.

Thank you.

--
Kevin Carter

[0] https://review.openstack.org/#/q/status:closed+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z
[1] https://review.openstack.org/#/q/status:merged+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius
Re: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0
Hi,

I agree with Vipin. Can we please address the exception handling? We already have the patches.

Thanks
Gary

On 6/14/15, 12:41 PM, Vipin Balachandran vbalachand...@vmware.com wrote:

Dims,

There are some problems with the exception hierarchy which need to be fixed.

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Tuesday, June 09, 2015 7:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

Gary, Tracy, Vipin and other contributors,

Is the oslo.vmware API solid enough for us to bump it to 1.0.0? If not, what's left to be done?

thanks,
dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team
+1

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, June 11, 2015 at 9:15 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team

Hello all!

As the Lieutenant of the built-in control plane[1], I would like YAMAMOTO Takashi to be a member of the control plane core reviewer team. He has been extensively reviewing the entire codebase[2] and his feedback on patches related to the reference implementation has been very useful. This includes everything ranging from the AMQP API to OVS flows.

Existing cores that have spent time working on the reference implementation (agents and AMQP code), please vote +1/-1 for his addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing things in these areas recently, so I would like to hear from you specifically.

1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-group/90

Cheers
--
Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites
Hey folks,

I am proposing Harm Waites for the Kolla core team. He did a fantastic job implementing Designate in a container[1], which I’m sure was incredibly difficult, and he never gave up even though there were 13 separate patch reviews :) Beyond Harm’s code contributions, he is responsible for 32% of the “independent” reviews[2], where independents compose 20% of our total reviewer output. I think we should judge core reviewers on more than output, and I knew Harm was core reviewer material from his fantastic review of the cinder container, where he picked out 26 specific things that could be broken that other core reviewers may have missed ;) [3]. His other reviews are as thorough as this particular review was. Harm is active in IRC and in our meetings for which his TZ fits. Finally, Harm has agreed to contribute to the ansible-multi implementation that we will finish in the liberty-2 cycle.

Consider my proposal to count as one +1 vote. Any Kolla core is free to vote +1, abstain, or vote –1. A –1 vote is a veto for the candidate, so if you are on the fence, best to abstain :) Since our core team has grown a bit, I’d like 3 core reviewer +1 votes this time around (vs Sam’s 2 core reviewer votes). I will leave the voting open until June 21 UTC. If the vote is unanimous prior to that time, or a veto vote is received, I’ll close voting and make appropriate adjustments to the gerrit groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2] http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team
+1

On 6/13/15, 1:41 AM, Carl Baldwin c...@ecbaldwin.net wrote:

+1!

On Fri, Jun 12, 2015 at 1:44 PM, Kevin Benton blak...@gmail.com wrote:

Hello!

As the Lieutenant of the built-in control plane[1], I would like Rossella Sblendido to be a member of the control plane core reviewer team. Her review stats are in line with other cores[2] and her feedback on patches related to the agents has been great. Additionally, she has been working quite a bit on the blueprint to restructure the L2 agent code, so she is very familiar with the agent code and the APIs it leverages.

Existing cores that have spent time working on the reference implementation (agents and AMQP code), please vote +1/-1 for her addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing things in these areas recently, so I would like to hear from you specifically.

1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-group/30

Cheers
--
Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack][Neutron][OVN] OVN-Neutron - TODO List
Gal, thanks for this summary. Some additional info inline. Salvatore

On 12 June 2015 at 19:38, Gal Sagie gal.sa...@gmail.com wrote:

Hello All,

I wanted to share some of our next working items and hopefully get more people on board with the project. I personally would mentor any newcomer who wants to get familiar with the project and help with any of these items. You can also feel free to approach Russell Bryant (rbry...@redhat.com), who is heading the OVN-OpenStack integration. We are both usually active on IRC at #openstack-neutron-ovn (freenode); you can drop us a visit if you have any questions.

The Neutron sprint in Fort Collins [1] has a work item for OVN; hopefully some work can start there on some of these items (or others). Russell Bryant and myself unfortunately won't be there, but feel free to contact us online or by email.

*1. Security Group Implementation*
Currently security groups are not being configured in OVN. There is a document written about how to model security groups as OVN northbound ACLs [2]. I suspect getting this right is not going to be trivial; hopefully I might be able to start tackling this item next week.

From what I recall, Miguel was very interested in helping out on this front. Have you already reached out to him?

*2. Provider Network Support*
Russell sent a design proposal to the ovs-dev mailing list [3]; we need to follow up on that and implement it in OVN.

I think I have replied to that proposal with a few comments; perhaps you might have a look at those.

*3. Tempest Configuration*
Russell has a patch for that [4] which needs additional help to make it work.

That patch has merged now. So perhaps Russell does not need help anymore!

*4. Unit Tests / Functional Tests*
We want to start adding more testing to the project on all fronts.

*5. Integration with OVS-DPDK*
OVS-DPDK has an ML2 mechanism driver [5] to enable a userspace DPDK dataplane for OVS; we want to see how this can be combined with the OVN mechanism driver (one idea is to use hierarchical port binding for that). We need to design and test it and derive additional work items for this integration.

I think this is a rare case where the OVN integration might leverage additional mechanism drivers, as AFAICT the DPDK driver mainly interacts with VIF plugging (operating at the Neutron port bindings level) and does not interfere with logical network resource processing.

*6. L2 Gateway Integration*
OVN supports L2 gateway translation between virtual and physical networks. We want to leverage the current L2-Gateway sub-project in stackforge [6] and use it to enable configuration of L2 gateways in OVN. I have looked briefly at the project and it seems the APIs are good, but currently the implementation relies on RPC and an agent implementation (whereas we would like to configure it using OVSDB), so this needs to be sorted out and tested.

Last time I checked the progress of this project, they were focusing on ToR VXLAN offload as a first use case. And, as far as I recall, this is what networking-l2gw provides nowadays (Armando and Sukdev might have more info). Nevertheless, the API is generic enough that in my opinion it might be possible for OVN to leverage it. We shall implement a distinct l2gw service plugin like [1]; as I have some familiarity with this kind of API, let me know if I can be of any help.

[1] http://git.openstack.org/cgit/openstack/networking-l2gw/tree/networking_l2gw/services/l2gateway/plugin.py

Another issue is related to OVN itself, which doesn't have L2 gateway awareness in the Northbound DB (which is the DB that Neutron configures) but only has the API in the Southbound DB.

Yeah, this could be some sort of a problem... I don't think Neutron should interact with the SB DB. The OVN architecture has not been conceived to work this way.

*7. QoS Support*
We want to be able to support the new QoS API that is being implemented in Liberty [7]. We need to see how we can leverage the work that will implement this for OVS in the reference implementation, and what additions need to be made for the OVN case.

Do you already have anything in mind?

*8. L3 Implementation*
L3 is not yet implemented in OVN; we need to follow up on the design and add the L3 service plugin and implementation.

If I can be of any help on this front, I'd be glad to offer my assistance (which may be of no use to you, but that's another story!) By the way, is there a reason why native OVN DHCP/metadata access support is not in this todo list?

*9. VLAN Aware VMs*
This is not directly related to OVN, but we need to make sure that OVN's use case of configuring parent ports (for the use case of containers inside a VM) is being addressed, and, if the implementation is finished, to align the API for OVN as well.

I reckon the proposed API (master/child ports) or the alternative concept of trunk ports both map fairly well on the data structures so far
Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting
Hi Joshua,

An example of some specs already doing this (they are built using the Cheetah template engine/style): https://github.com/stackforge/anvil/tree/master/conf/templates/packaging/specs

They are turned into 'normal' spec files (the compilation part) at build time.

Right, I think that's a good idea; this allows us to work on reducing the differences over time and start with basically what we have today.

Greetings,
Dirk
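The template-to-spec "compilation" step described above can be sketched with a minimal stand-in. Note this is purely illustrative: anvil actually uses the Cheetah template engine, and real spec templates carry far more fields; the stdlib string.Template and the field values below are assumptions for the sketch.

```python
from string import Template

# Hypothetical minimal spec template; anvil's real templates use
# Cheetah syntax and many more RPM spec sections than shown here.
SPEC_TEMPLATE = """\
Name:           $name
Version:        $version
Release:        1%{?dist}
Summary:        $summary
"""

def render_spec(name, version, summary):
    """Render the template into a plain RPM .spec snippet at build time."""
    return Template(SPEC_TEMPLATE).substitute(
        name=name, version=version, summary=summary)

spec = render_spec("python-example", "1.0.0", "Example library")
print(spec)
```

The point of the approach discussed in the thread is exactly this split: the templates capture the distro-independent parts, and the rendered output is what rpmbuild actually consumes.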
Re: [openstack-dev] [neutron] Microversioning work questions and kick-start
On 12 June 2015 at 16:58, Henry Gessau ges...@cisco.com wrote:

On Thu, Jun 11, 2015, Salvatore Orlando sorla...@nicira.com wrote:

Finally, I received queries from several community members who would be keen on helping support this microversioning effort. I wonder if the PTL and the API lieutenants would be ok with agreeing to have a team of developers meeting regularly, working towards implementing this feature, and reporting progress and/or issues to the general Neutron meeting.

Yes, I am ok with agreeing to form such a team. ;) With an effort this complex it makes sense to have tl;dr type summaries in the general meeting. This has worked well for large-effort features before, and when the work winds down the topic can fold back into the main meeting.

Thanks Henry!

So I would say that perhaps we could gather interest from developers - either using this thread or another one - and once we have a critical mass of, for instance, 5 developers, we will kick off the activities, book a weekly slot for more or less regular meetings, set expectations, review the design, discuss the implementation, and hopefully get this thing done.

Salvatore
Re: [openstack-dev] [murano] shellcheck all .sh scripts in murano-deployment
Since there were no objections, and as a follow-up, I’ve created a BP for that in murano: https://blueprints.launchpad.net/murano/+spec/add-shellcheck-jobs

--
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 10 Jun 2015 at 18:07:19, Filip Blaha (filip.bl...@hp.com) wrote:

Thanks for the comment and suggestion! There is also the shunit2 framework for unit testing of shell scripts. We shall consider whether it could bring us value for the effort. I personally have no strong opinion about that. A little contradiction to my previous mail :-)

Regards
Filip

On 06/10/2015 03:34 PM, Jeremy Stanley wrote:

On 2015-06-10 13:48:26 +0200 (+0200), Filip Blaha wrote:

+1, nice idea. Shell scripts are not easy to review - large files, not covered by unit tests. Any automatic tool could be beneficial.

It's worth noting that just because your shell scripts don't have their own validation tests doesn't mean they can't. For example see the test-features.sh and test-functions.sh scripts in the https://git.openstack.org/cgit/openstack-infra/devstack-gate/ repo, making sure we maintain a contract on things like branch fallback logic which is easy to subtly break if not tested.
[openstack-dev] [Magnum] Add periodic task threading for conductor server
hi magnum team,

I am planning to add a periodic task to the magnum conductor service; it will be good to sync task status with heat and the container service, and I already have a WIP patch [1]. I'd like to start a discussion on the implementation.

Currently, the conductor service is an rpc server, and it has several handlers:

endpoints = [
    docker_conductor.Handler(),
    k8s_conductor.Handler(),
    bay_conductor.Handler(),
    conductor_listener.Handler(),
]

All handlers run in the rpc server.

1. My patch [1] adds periodic task functions to each handler (if it requires such tasks) and sets these functions up when starting the rpc server, adding them to a thread group. So, for example, if we have tasks in bay_conductor.Handler() and docker_conductor.Handler(), we add 2 threads to the current service's tg, and each thread runs its own periodic tasks. The advantage is that we separate each handler's task job into its own thread, but hongbin's concern is whether this will have some impact on horizontal scalability.

2. Another implementation is to put all tasks in a single thread; this thread runs all tasks (for bay, k8s, docker etc), just like sahara does - see [2].

3. The last one is to start a new service in a separate process to run the tasks. (I think this would be too heavy/wasteful.)

I'd like to get your suggestions; thanks in advance.

[1] https://review.openstack.org/#/c/187090/4
[2] https://github.com/openstack/sahara/blob/master/sahara/service/periodic.py#L118

--
BR, Eli(Li Yong)Qiao
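Option 1 above (one thread per handler's periodic tasks) can be sketched roughly as follows. This is a toy stand-in using plain stdlib threading, not the actual patch: Magnum's conductor would use oslo.service thread groups and decorated periodic-task methods, and the Handler class and intervals here are invented for illustration.

```python
import threading
import time

class Handler:
    """Toy stand-in for a conductor handler (e.g. bay_conductor.Handler);
    the real handlers would expose oslo.service-style periodic tasks."""
    def __init__(self, name, interval=0.05):
        self.name = name
        self.interval = interval
        self.runs = 0

    def periodic_sync(self):
        # e.g. sync bay/container status from Heat into the Magnum DB
        self.runs += 1

def spawn_periodic(handler, stop_event):
    """One thread per handler, as in option 1 from the mail."""
    def loop():
        while not stop_event.is_set():
            handler.periodic_sync()
            stop_event.wait(handler.interval)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

stop = threading.Event()
handlers = [Handler("bay_conductor"), Handler("docker_conductor")]
threads = [spawn_periodic(h, stop) for h in handlers]
time.sleep(0.3)          # let the periodic tasks tick a few times
stop.set()
for t in threads:
    t.join()
print([(h.name, h.runs > 0) for h in handlers])
```

Option 2 would instead register both handlers' sync functions with a single such loop; option 3 would run the loop in a separate process entirely.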
Re: [openstack-dev] [neutron] Microversioning work questions and kick-start
On 12 June 2015 at 12:22, Sean Dague s...@dague.net wrote:

On 06/11/2015 06:03 PM, Salvatore Orlando wrote:

As most of you already know, work is beginning to move forward on the micro-versioned Neutron API, for which a specification is available at [1].

From a practical perspective there is one non-negligible preliminary issue that needs attention: the Neutron API URI prefix includes the full version number - currently 2.0. For instance:

http://neutron_server:9696/v2.0/networks.json

This clearly makes a microversioned approach a bit weird - if you have to use, for instance, 2.0 as a URI prefix for API version 2.12. On the one hand it might make sense to start the micro-versioned API as a sort of clean slate, possibly using a version-agnostic URI prefix or no prefix at all; as pointed out by some community members, it would also give us a chance to validate this versioned API approach. This has, however, the drawback that the unversioned, extension-based, so-called 2.0 API will keep living and evolving side-by-side with the versioned API, and switching to the versioned API will then not be transparent to clients. It would be good to receive some opinions from the developer and user community on the topic.

It will definitely be challenging to have both evolving at once. The Nova team had a lot of pain with that happening in the 18 months of v3.0 work. Once we got the microversion mechanism in place we hard froze v2.0. That being said, we actually had 2 internal code bases, so our situation was a bit gorpier than yours.

Well, we'd have both extensions and revisions evolving at the same time. The situation won't be nearly as difficult to handle as nova v3.0, but it will still be a hassle. While I understand where the people advocating for not freezing the current extension-based mechanisms until microversioning is proved and tested are coming from, on the other hand my concern is that this effort is already on the failure road if it's pretty much an experimental alternative to the current way of evolving the API for the foreseeable future. Therefore I would say that the community might accept that if microversioning is functionally complete and reliably working in version X (where X is an OpenStack release), then the old API is automatically frozen in the same version, deprecated in X+1 and killed in X+2.

On the version in the url: My expectation is that at some point in the future we'll pivot out of having a version string in our URL entirely, but it's one of those things that can come later. The url root is mostly important from a service catalog perspective, in that what's there matches what the code returns in all its internal links. Honestly, long term, it would be great if 1) we actually developed standards for naming in the service catalog and 2) the API services stopped having their API url in code, and instead reflected it back from the catalog. That would fix one of the gorpiest parts of putting your API servers behind haproxy or ssl termination. I.e. I wouldn't be too concerned on that front; it looks a little funny, but won't really get in anyone's way.

Furthermore, another topic that has been brought up is whether plugins should be allowed to control the version of the API server, e.g. by specifying minimum and maximum versions. My short answer is no, because the plugin should implement the API, not control it. Also, the spec provides a facility for plugins to disable features if they are unable to support them.

Finally, I received queries from several community members who would be keen on helping support this microversioning effort. I wonder if the PTL and the API lieutenants would be ok with agreeing to have a team of developers meeting regularly, working towards implementing this feature, and reporting progress and/or issues to the general Neutron meeting.

Salvatore

[1] https://review.openstack.org/#/c/136760/

--
Sean Dague
http://dague.net
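For readers unfamiliar with how microversion negotiation works in practice, here is a minimal sketch of the server side. The version range, parsing and error handling below are illustrative only - they loosely follow Nova's microversion scheme, not a settled Neutron design (the spec in [1] was still under review at the time of this thread), and MAX_VERSION is an invented value.

```python
MIN_VERSION = (2, 0)
MAX_VERSION = (2, 12)  # hypothetical maximum this server supports

def parse_version(value):
    """Parse a client-requested version string like '2.7' into a tuple.

    Tuples of ints compare element-wise numerically, which is why
    (2, 9) < (2, 12) holds - a plain string compare would get it wrong.
    """
    major, minor = value.strip().split(".")
    return int(major), int(minor)

def negotiate(requested):
    """Return the accepted version, or raise if outside the supported range
    (a real API would answer 406 Not Acceptable here)."""
    version = parse_version(requested)
    if not (MIN_VERSION <= version <= MAX_VERSION):
        raise ValueError("unsupported version %d.%d" % version)
    return version

print(negotiate("2.7"))
```

Behavior then branches on the negotiated tuple, which is what lets a single URI prefix (or none at all, as discussed above) serve every microversion.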
Re: [openstack-dev] [puppet] [Swift] Multiple proxy recipes will create out of sync rings
With respect to using a seed - the facility to supply one to the rebalance operation has recently been added to the puppet-swift master branch (commit b8b4434); however, the seed parameter is not available to any of the usual calling methods (this looks to be deliberate from the commit message), so it is not immediately useful without surgery :-)

Regards
Mark

On 13/06/15 18:05, Mark Kirkwood wrote:

From what I can see, the ring gets created and rebalanced in puppet-swift/manifest/ringbuilder.pp, i.e. calling:

class { '::swift::ringbuilder':
  # the part power should be determined by assuming 100 partitions per drive
  part_power     => '18',
  replicas       => '3',
  min_part_hours => 1,
  require        => Class['swift'],
}

*not* when each device is added.

Yeah, using a seed is probably a good solution too. For the moment I'm using the idea of one proxy being a 'ring server/master', which achieves the same thing (identical rings everywhere). However, I'll have a look at using a seed, as this may simplify the code and also the operational procedure needed to replace said 'master' if it fails (i.e. to avoid accidentally creating a new ring when you really don't need to...)

Regards,
Mark

On 12/06/15 23:10, McCabe, Donagh wrote:

I skimmed the code, but since I'm not familiar with the environment, I could not find where swift-ring-builder rebalance is invoked. I'm guessing that each time you add a device to a ring, a rebalance is also done. Leaving aside how inefficient that is, the key thing is that the rebalance command has an optional seed parameter. Unless you explicitly set the seed (to the same value on all nodes, obviously), you won't get the same ring on all nodes. You also need to make sure you add the same set of drives, and in the same order.

Regards,
Donagh

-----Original Message-----
From: Mark Kirkwood [mailto:mark.kirkw...@catalyst.net.nz]
Sent: 12 June 2015 06:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [puppet] [Swift] Multiple proxy recipes will create out of sync rings

I've been looking at using puppet-swift to deploy a swift cluster. Firstly - without http://git.openstack.org/cgit/stackforge/puppet-swift/tree/tests/site.pp I would have struggled a great deal more to get up and running, so a big thank you for a nice worked example of how to do multiple nodes!

However I have stumbled upon a problem with respect to creating multiple proxy nodes. There are some recipes around that follow on from the site.pp above and explicitly build 1 proxy (e.g. https://github.com/CiscoSystems/puppet-openstack-ha/blob/folsom_ha/examples/swift-nodes.pp). Now the problem is that each proxy node does a ring builder create, so each ends up with *different* builder (and therefore ring) files. This is not good, as the end result is a cluster with all storage nodes and *one* proxy sharing the same set of ring files, and *all* other proxies with *different* ring (and builder) files.

I have used logic similar to the attached to work around this, i.e. only create rings if we are the 'ring server', otherwise get 'em via rsync.

Thoughts?

Regards
Mark
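Why the same seed on every node yields identical rings can be shown with a toy analogy. To be clear, this is NOT swift-ring-builder's placement algorithm - just a demonstration that seeding the pseudo-random generator makes independently performed "rebalances" reproducible, which is exactly the property Donagh describes.

```python
import random

def toy_rebalance(devices, partitions, seed):
    """Toy partition placement: pick a device for each partition
    pseudo-randomly. With a fixed seed the sequence of picks is
    deterministic, so two nodes running this independently agree."""
    rng = random.Random(seed)
    return [rng.choice(devices) for _ in range(partitions)]

devs = ["sdb1@node1", "sdb1@node2", "sdb1@node3"]
ring_a = toy_rebalance(devs, 64, seed=42)  # "built" on proxy A
ring_b = toy_rebalance(devs, 64, seed=42)  # "built" on proxy B, same seed
print(ring_a == ring_b)
```

This also illustrates Donagh's other caveat: the devices must be added as the same set and in the same order, since reordering `devs` changes what each pick maps to even with an identical seed.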
[openstack-dev] [Magnum] TLS Support in Magnum
Hi All,

This is to bring the blueprint secure-kubernetes (https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into discussion. I have been trying to figure out what the possible change areas could be to support this feature in Magnum. Below is just a rough idea of how to proceed further on it. This task can be further broken into smaller pieces.

*1. Add support for TLS in python-k8sclient.*
The current auto-generated code doesn't support TLS, so this work will be to add TLS support to the kubernetes python APIs.

*2. Add support for Barbican in Magnum.*
Barbican will be used to store all the keys and certificates.

*3. Add support for TLS in Magnum.*
This work mainly involves supporting the use of keys and certificates in magnum to enable TLS. The user generates the keys and certificates and stores them in Barbican. There are then two ways to access these keys while creating a bay:

1. Heat accesses Barbican directly. While creating a bay, the user provides the key, and the heat templates fetch it from Barbican.
2. Magnum-conductor accesses Barbican. While creating a bay, the user provides the key; Magnum-conductor fetches it from Barbican and hands it to heat. Heat then copies these files onto the kubernetes master node, and the bay uses them to start Kubernetes services signed with these keys.

After discussion, when we all come to the same point, I will create separate blueprints for each task. I am currently working on configuring Kubernetes services with TLS keys.

Please provide your suggestions if any.

Regards,
Madhuri
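Either flow in step 3 means some component ends up handling user-supplied PEM material (keys plus certificates). As a purely illustrative sketch, splitting a combined PEM bundle into its individual blocks can be done with stdlib string handling - note this helper is hypothetical, and the real Magnum work would presumably lean on Barbican secret containers and a proper crypto library rather than hand-rolled parsing:

```python
def split_pem_bundle(bundle):
    """Split a PEM bundle into its individual blocks (hypothetical
    helper for illustration; not Magnum code)."""
    blocks, current = [], []
    for line in bundle.splitlines():
        if line.startswith("-----BEGIN "):
            current = [line]          # start of a new PEM block
        elif line.startswith("-----END "):
            current.append(line)      # close and collect the block
            blocks.append("\n".join(current))
            current = []
        elif current:
            current.append(line)      # base64 body lines
    return blocks

bundle = """-----BEGIN CERTIFICATE-----
MIIB...placeholder...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIE...placeholder...
-----END RSA PRIVATE KEY-----"""
parts = split_pem_bundle(bundle)
print(len(parts))
```

In option 2 above, the conductor would do something like this before handing the certificate (but ideally never the private key in plain text) on to heat.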
Re: [openstack-dev] [Magnum] Docker Native Networking
Just putting this here for more comments, thanks! IMHO, there are three different options worth discussing before the design:

#Opt 1: Magnum as the API. The entire driver layer still leverages third-party solutions like swarm or kubernetes, so we have to stay compatible with their limitations in networking. This will be quick to implement, but inflexible in networking capabilities. This is the traditional way - easy for now, but hard for the future.

#Opt 2: Magnum as the Engine. We only leverage third-party drivers to manage the lifecycle of containers, but utilize OpenStack mechanisms (i.e., Neutron) to handle the networking. This needs investigation into how the two can be combined in different environments. This can aim to combine the best of Docker and OpenStack.

#Opt 3: Magnum as the Solution. Besides the third-party drivers, we design our own container provisioning mechanism (not Nova-Docker); then it will be more fluent to integrate with Neutron. However, this requires designing a new native driver layer (Docker+Flannel is a good reference, as the design idea is similar to, and simpler than, Neutron). This is the cleanest option.

Finally, after investigating most existing open source container networking mechanisms, overlay is the common idea, where Neutron seems a good candidate.

On Sat, Jun 13, 2015 at 2:22 AM, Adrian Otto adrian.o...@rackspace.com wrote:

Team,

OpenStack Networking support for Magnum Bays was an important topic for us in Vancouver at the design summit. Here is one blueprint that requires discussion that’s beyond the scope of what we can easily fit in the BP whiteboard:

https://blueprints.launchpad.net/magnum/+spec/native-docker-network

Before we dive into implementation planning, I'll offer these as guardrails to use as a starting point:

1) Users of the Swarm bay type have the ability to create containers. Those containers may reside on different hosts (Nova instances). We want those containers to be able to communicate with each other over a network, similar to the way that they can over the Flannel network used with Kubernetes.

2) We should leverage community work as much as possible, combining the best of Docker and OpenStack to produce an integrated solution that is easy to use, and exhibits performance that's suitable for common use cases.

3) Recognize that our Docker community is working on libnetwork [1], which will allow for the creation of logical networks similar to links that allow containers to communicate with each other across host boundaries. The implementation is pluggable, and we decided in Vancouver that working on a Neutron plugin for libnetwork could potentially make the user experience consistent whether you are using Docker within Magnum or not.

4) We would like to plug Neutron into Flannel as a modular option for Kubernetes Bays, so both solutions leverage OpenStack networking, and users can use familiar, native tools.

References:
[1] https://github.com/docker/libnetwork

Please let me know what you think of this approach. I’d like to re-state the Blueprint description, clear the whiteboard, and put up a spec that will accommodate in-line comments so we can work on the implementation specifics better in context.

Adrian

--
Best wishes!
Baohua
Re: [openstack-dev] [keystone][puppet] Federation using ipsilon
On 06/13/2015 01:37 PM, Rich Megginson wrote:

On 06/12/2015 07:30 PM, Adam Young wrote:

On 06/12/2015 04:53 PM, Rich Megginson wrote:

I've done a first pass of setting up a puppet module to configure Keystone to use ipsilon for federation, using https://github.com/richm/puppet-apache-auth-mods, and a version of ipsilon-client-install with patches https://fedorahosted.org/ipsilon/ticket/141 and https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified version of the ipa/rdo federation setup scripts - https://github.com/richm/rdo-vm-factory. I would like some feedback from the Keystone and puppet folks about this approach.

I take it this is not WebSSO yet, but only Federation. Around here... https://github.com/richm/puppet-apache-auth-mods/blob/master/manifests/keystone_ipsilon.pp#L64 you would need to have the trusted dashboard, etc.

Right. In order to do websso, there is some additional setup that needs to be done in the apache conf for the keystone wsgi virtual hosts (which is in the rdo-federation-setup script). There is also some additional configuration to do to Horizon to enable federated auth and/or websso. But I think that is what you intend.

Right. What I've done so far is only the first step.

It looks good at first blush. I'm trying to get to the point where I can recreate RDO factory, but on a machine I launch in the Cloud Lab. I've gotten it as far as allocating a floating IP address: https://github.com/admiyo/ossipee/

Once I can get through the RDO Factory steps, I'll give it a live test. However, without an ECP setup, we really have no way to test it.
Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting
On Fri, Jun 12, 2015 at 8:51 AM, Ildikó Váncsa ildiko.van...@ericsson.com wrote:

Hi,

Since we reopened the review for this blueprint we’ve got a large number of comments. It can be clearly seen that the original proposal has to be changed, although it still requires some discussion to define a reasonable design that provides the desired feature and is aligned with the architecture and guidelines of Neutron.

In order to speed up the process to fit into the Liberty timeframe, we would like to have a discussion about this. The goal is to discuss the alternatives we have, decide which to go on with, and sort out the possible issues. After this discussion the blueprint will be updated with the desired solution.

I would like to propose a time slot for *next Tuesday (June 16), 17:00 UTC - 18:00 UTC*. I would like to have the discussion on the #openstack-neutron channel, which gives a chance to attend to people who might be interested but missed this mail. I tried to check the slot, but please let me know if it collides with any Neutron-related meeting.

This looks to be fine. I would suggest that it may make more sense to have it in an #openstack-meeting channel, though we can certainly do a free-form chat in #openstack-neutron as well. I think the desired end-goal here should be to figure out any remaining nits that are being discussed on the spec so we can move forward in Liberty.

Thanks,
Kyle

Thanks and Best Regards,
Ildikó
Re: [openstack-dev] [Magnum] Add periodic task threading for conductor server
I think option #3 is the most desired choice from a performance point of view, because magnum is going to support multiple conductors and all conductors share the same DB. However, if each conductor runs its own thread for periodic tasks, we will end up having multiple instances of tasks doing the same job (syncing heat’s state to magnum’s DB). I think magnum should have only one instance of the periodic task, since replicated instances of tasks will stress computing and networking resources.

Best regards,
Hongbin

From: Qiao,Liyong [mailto:liyong.q...@intel.com]
Sent: June-14-15 9:38 PM
To: openstack-dev@lists.openstack.org
Cc: qiaoliy...@gmail.com
Subject: [openstack-dev] [Magnum] Add periodic task threading for conductor server

hi magnum team,

I am planning to add a periodic task to the magnum conductor service; it will be good to sync task status with heat and the container service, and I already have a WIP patch [1]. I'd like to start a discussion on the implementation.

Currently, the conductor service is an rpc server, and it has several handlers:

endpoints = [
    docker_conductor.Handler(),
    k8s_conductor.Handler(),
    bay_conductor.Handler(),
    conductor_listener.Handler(),
]

All handlers run in the rpc server.

1. My patch [1] adds periodic task functions to each handler (if it requires such tasks) and sets these functions up when starting the rpc server, adding them to a thread group. So, for example, if we have tasks in bay_conductor.Handler() and docker_conductor.Handler(), we add 2 threads to the current service's tg, and each thread runs its own periodic tasks. The advantage is that we separate each handler's task job into its own thread, but hongbin's concern is whether this will have some impact on horizontal scalability.

2. Another implementation is to put all tasks in a single thread; this thread runs all tasks (for bay, k8s, docker etc), just like sahara does - see [2].

3. The last one is to start a new service in a separate process to run the tasks. (I think this would be too heavy/wasteful.)

I'd like to get your suggestions; thanks in advance.

[1] https://review.openstack.org/#/c/187090/4
[2] https://github.com/openstack/sahara/blob/master/sahara/service/periodic.py#L118

--
BR, Eli(Li Yong)Qiao
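Hongbin's "only one instance of the periodic task" requirement is essentially leader election among conductors. A rough single-host sketch of the winner-takes-the-task shape, using an advisory file lock - note that fcntl.flock() only coordinates processes on ONE machine (and is Unix-only), so a real multi-host deployment would need a *distributed* lock (e.g. a tooz coordinator or a DB-backed lock); the lock path and function names below are invented for the sketch:

```python
import fcntl
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "magnum-periodic.lock")

def try_become_task_runner(path=LOCK_PATH):
    """Return a held lock file if this contender won and should run the
    periodic tasks, or None if the lock is already held elsewhere."""
    lock_file = open(path, "w")
    try:
        # Non-blocking exclusive lock: exactly one holder at a time.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return lock_file
    except OSError:
        lock_file.close()
        return None

winner = try_become_task_runner()  # first conductor acquires the lock
loser = try_become_task_runner()   # second contender is refused
print(winner is not None, loser is None)
```

The loser would simply skip scheduling its periodic-task thread and retry later, so a surviving conductor takes over if the current runner dies and its lock is released.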
Re: [openstack-dev] Looking for help getting git-review to work over https
On 12 June 2015 at 10:49, ZZelle zze...@gmail.com wrote:

Indeed, the doc [1] is unclear: git-review can be installed using "python setup.py install" or "pip install .".

Of those two things we only support "pip install ." - in part because you have much less control over mirrors, proxies and inferior SSL support with 'python setup.py install'.

http://docs.openstack.org/developer/pbr/

Note that we don’t support the easy_install aspects of setuptools: while we depend on setup_requires, for any install_requires we recommend that they be installed prior to running setup.py install - either by hand, or by using an install tool such as pip.

As pip doesn't yet support setup_requires itself, this will currently need a manual 'sudo pip install pbr' beforehand :/

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud