Re: [openstack-dev] [Trove] Trove Blueprint Meeting on 13 October canceled
On Mon, Oct 13, 2014 at 8:05 AM, Nikhil Manchanda nik...@manchanda.me wrote: Hey folks: We have an empty agenda for the Trove Blueprint meeting tomorrow, so I'm going to go ahead and cancel it. We do have a few blueprints in flight that need review comments, so please take this time to review these blueprints and provide feedback: https://review.openstack.org/#/c/123571/ https://review.openstack.org/#/c/124717/ https://review.openstack.org/#/c/122736/ https://review.openstack.org/#/c/122767/ To all Trove contributors and active community members: we _must_ put enough effort into reviewing specs; they have been pending long enough that spending one hour (the BP meeting time frame) per week to look at them is warranted. FYI: oslo.concurrency - starting Sept. 23 (reviewed only by: Robert Myers) Cassandra clustering - starting Sept. 24 (reviewed only by: Amrith Kumar) Added datastore log operation spec - starting Sept. 29 (no reviews) Oracle 12c support - starting Sept. 19 (no reviews) Let's stay productive, and let's get enough features into the next release. See you guys at the regular Trove meeting on Wednesday! Thanks, Nikhil Best regards, Denis Makogon ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] Can Neutron VPNaaS work with strongswan? (Openswan removed from Debian)
Hi Thomas, I worked out a patch https://review.openstack.org/#/c/100791/ based on the latest strongSwan configuration; it works, but the neutron-spec is still under review, see https://review.openstack.org/#/c/101457/ Could someone help review and approve that spec? Thanks. On Mon, Oct 13, 2014 at 12:50 PM, trinath.soman...@freescale.com wrote: Hi - Yes, VPNaaS works with StrongSwan too. I have tried it and was successful. Cherry-pick patchset 67 from https://review.openstack.org/#/c/33148, resolve the conflicts, and run neutron. It works perfectly. Hope this helps. -- Trinath Somanchi - B39208 trinath.soman...@freescale.com | extn: 4048 -----Original Message----- From: Thomas Goirand [mailto:z...@debian.org] Sent: Sunday, October 12, 2014 9:54 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron] Can Neutron VPNaaS work with strongswan? (Openswan removed from Debian) Hi, As you may know, OpenSwan has been largely unmaintained in Debian, and was removed from Testing, and then Sid, last summer. OpenSwan had some unaddressed security issues, and removing it from Debian was IMO the correct thing to do. Ubuntu followed, and Utopic doesn't have OpenSwan anymore either. There is StrongSwan, though, which is apparently an alternative. But can Neutron work with it? If not, how much work would it be to make Neutron use StrongSwan instead of OpenSwan, and could the VPNaaS maintainers work on this for Kilo? BTW, why not use something as popular as OpenVPN, which has a better chance of being well maintained?
Cheers, Thomas Goirand (zigo) -- Best Regards, Zhang Hua (张华) Software Engineer | Canonical IRC: zhhuabj ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
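For readers wondering what a StrongSwan-based VPNaaS driver amounts to: the device driver essentially renders an IPsec configuration file for the daemon. A hypothetical minimal strongSwan ipsec.conf site-to-site section (the connection name, subnets, and peer address below are all invented for illustration, not taken from any Neutron template) might look like:

```
# Hypothetical ipsec.conf fragment a strongSwan-based driver might render;
# subnets and peer address are invented for illustration.
conn tenant-vpn-connection
    keyexchange=ikev1
    authby=psk
    left=%defaultroute
    leftsubnet=10.0.0.0/24
    right=203.0.113.10
    rightsubnet=10.1.0.0/24
    auto=route
```

The pre-shared key would live in a corresponding ipsec.secrets file, which is why porting the OpenSwan driver is largely a templating exercise rather than a protocol change.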
Re: [openstack-dev] [Fuel] [Rally] Using fuelclient as a library - battle report
Ilya, would it be possible to contribute your changes back to our upstream client? We would like it to evolve in exactly this direction, but we have never had enough resources for it. We will be happy to review and accept your requests. It should be easier for you too - instead of maintaining a fork, you will simply use the upstream version. Thanks, On Fri, Oct 10, 2014 at 5:52 PM, Ilya Kharin ikha...@mirantis.com wrote: Hi guys, I agree with some of the issues Lukasz mentioned. All of them require workarounds to make it possible to use the client as a library. I can summarize some of the problems I encountered while working with fuelclient: - Distribution of the package is absent. The client cannot be specified as a dependency because it is not present on PyPI and cannot be installed by pip (this is true for the current releases, but not for the development branch). - The client cannot be initialized properly because it is designed as a singleton which is initialized on module import. Initialization parameters can be specified either in a configuration file with a hardcoded filename (which can potentially be absent on the client-side host because of its location at /etc/fuel/client/config.yaml) or in environment variables. These limitations force us to set the environment variables and then use inline imports. As a solution, my team is considering its own implementation of the client. Best regards, Ilya On Mon, Oct 6, 2014 at 6:09 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Hi Lukasz, I have the same thoughts - we have to design a good Python library for dealing with Nailgun, and this library has to be used by: * Fuel CLI * System Tests * Fuel Upgrade * OSTF * other scripts But it's a big deal and we definitely should have a separate blueprint for this task. Moreover, we have to carefully consider its architecture so that it is convenient not only for CLI usage.
Thanks, Igor On Mon, Oct 6, 2014 at 4:49 PM, Lukasz Oles lo...@mirantis.com wrote: Hello all, I'm researching whether we can use the Rally project for some Fuel testing. It's part of the 100-nodes blueprint [1]. To write some Rally scenarios I used our fuelclient library. In its current state it's really painful to use, and it's not usable as a production tool. Here is the list of the biggest issues: 1. If the API returns a code other than 20x, it exits. Literally, it calls sys.exit(). It should just raise an exception. 2. The API client is used as a singleton. In theory we can have more than one connection, but all new objects will use the default connection. 3. A keystone token cannot be used; it requires a user and password. The server address and all credentials can only be given via a config file or environment variables; there is no way to set them during client initialization. All these issues show that the library was designed only with the CLI in mind, especially issue 1. Now I know why OSTF doesn't use fuelclient and why Rally wrote their own client. And I can bet that the MOX team is also using their own version. I'm aware of the fuelclient refactoring blueprint [1]. I reviewed it and gave +1 to most of the reviews. Unfortunately, it focuses on CLI usage. Moving to Cliff is a very good idea, but for the library part it actually makes things worse [2], e.g. moving data validation to the CLI, or initializing objects using a single dictionary instead of normal arguments. I think instead of focusing on CLI usage we should focus on the library part, to make it easier to use by other programs. After that we can focus on the CLI. This is very important now that we are planning to support 100 nodes (and more in the future), because more and more users will start using Fuel via the API instead of the UI. What do you think about this?
Regards, [1] https://blueprints.launchpad.net/fuel/+spec/refactoring-for-fuelclient [2] https://review.openstack.org/#/c/117294/ -- Łukasz Oleś -- Mike Scherbakov #mihgen ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
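To make the complaints in this thread concrete, here is a minimal sketch of the library-shaped design being asked for: explicit constructor parameters instead of a module-level singleton, and exceptions instead of sys.exit() on non-2xx responses. All class and method names below are invented for illustration; this is not the real fuelclient API.

```python
# Hypothetical library-friendly client sketch (names are illustrative,
# not the actual fuelclient interfaces).

class APIError(Exception):
    """Raised instead of calling sys.exit(), so library users can recover."""
    def __init__(self, status, body):
        super(APIError, self).__init__("HTTP %d: %s" % (status, body))
        self.status = status
        self.body = body


class NailgunClient(object):
    def __init__(self, base_url, token=None, user=None, password=None):
        # Credentials are constructor arguments, not module-level state,
        # so several differently-configured clients can coexist (issue 2)
        # and a keystone token can be passed directly (issue 3).
        self.base_url = base_url
        self.token = token
        self.user = user
        self.password = password

    def _check(self, status, body):
        # Issue 1: raise a recoverable exception instead of exiting.
        if not 200 <= status < 300:
            raise APIError(status, body)
        return body


# Usage sketch: a caller can catch the failure instead of having its
# process terminated.
client = NailgunClient("http://10.20.0.2:8000", token="abc123")
try:
    client._check(404, "cluster not found")
except APIError as exc:
    print("recoverable: %s" % exc)
```

With this shape, the CLI becomes a thin layer that catches APIError and maps it to an exit code, while Rally, OSTF, and other consumers handle it however they like.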
[openstack-dev] [Python-novaclient] Python-novaclient tests fail
Hi All, I am trying to test *python-novaclient* using *python setup.py test* as reported in http://docs.openstack.org/developer/python-novaclient/. In order to figure out the test logic I ran the tests, but an error occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1431, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove
    renames(path, new_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/util.py", line 295, in renames
    shutil.move(old, new)
  File "/usr/lib/python2.7/shutil.py", line 300, in move
    rmtree(src)
  File "/usr/lib/python2.7/shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "/usr/lib/python2.7/shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 250, in rmtree
    os.remove(fullname)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/hacking/tests/test_doctest.pyc'

Storing debug log for failure in /home/devstack/.pip/pip.log. I am working on the *master branch* and *no source code modifications* were made. Do you know how to fix it? Thanks, Daniele. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
A simple way to run the tests is using tox: $ tox -epy27 For more details, look at the nova guide: http://docs.openstack.org/developer/nova/devref/unit_tests.html PS: Why does the novaclient guide recommend using python setup.py test? A bit strange to me. On Mon, Oct 13, 2014 at 1:19 PM, Daniele Casini daniele.cas...@dektech.com.au wrote: Hi All, I am trying to test *python-novaclient* using *python setup.py test* as reported in http://docs.openstack.org/developer/python-novaclient/. In order to figure out the test logic I ran the tests, but an error occurred (see the traceback in the original message above). I am working on the *master branch* and *no source code modifications* were made. Do you know how to fix it? Thanks, Daniele. -- Best regards, Andrey Kurilin. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
Zhipeng Huang wrote: Hi all, will we have a discussion on this issue at the Paris Summit? I expect the API WG to propose API discussions within the Cross-project workshops track and get space granted to them. https://etherpad.openstack.org/p/kilo-crossproject-summit-topics Cheers, -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova]Instance console.log
Hi, Is there a way to disable console.log or redirect it to a custom location? Use case: we have a custom distributed storage solution, and to enable faster migration we symlinked /opt/stack/data/nova/instances to a directory on our mountpoint (which is backed by our shared storage). The problem is that console.log is placed there, and during boot it writes more data than our storage solution can send to the backend, so throttling occurs, which slows down the instance. So, any ideas how to disable console.log or redirect it? Thanks, Eduard -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.ma...@cloudfounders.com *CloudFounders, The Private Cloud Software Company* ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
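Nova at the time had no config option to relocate console.log, so one community-style workaround - a sketch only, under the assumption that losing console output (and therefore `nova console-log`) is acceptable - is to replace each instance's console.log on the shared mount with a symlink to /dev/null so console writes never reach the shared-storage backend:

```python
# Hypothetical workaround sketch, NOT a supported Nova feature: swap each
# instance's console.log for a symlink to /dev/null. Paths are illustrative;
# libvirt recreates console.log on reboot, so this would need re-running
# (e.g. from cron or an inotify watcher) after every instance restart.
import os

def redirect_console_logs(instances_dir, target="/dev/null"):
    """Replace every instances_dir/<instance>/console.log with a symlink."""
    redirected = []
    for name in os.listdir(instances_dir):
        log = os.path.join(instances_dir, name, "console.log")
        if os.path.isfile(log) and not os.path.islink(log):
            os.remove(log)
            os.symlink(target, log)
            redirected.append(log)
    return redirected
```

The trade-off is losing the console history entirely; an alternative along the same lines would be symlinking to a file on fast local disk instead of /dev/null.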
Re: [openstack-dev] [horizon] Packaging Sinon.JS as xstatic
Hello folks! Discussing the proposed Sinon.JS dependency for Horizon at the last meeting raised quite an expected question: why should we add it when there is already such a wonderful testing framework as Jasmine? And if you need some testing feature present in Jasmine, why not rewrite your QUnit tests in Jasmine? I was not ready to answer the question at that moment, so I took a pause to learn more about Jasmine's capabilities compared to Sinon.JS. First of all, I googled to see if someone had done this investigation before. Unfortunately, I haven't found much: judging from the links [1], [2], both Jasmine and Sinon.JS provide the same functionality, while Sinon.JS is a bit more flexible and could be more convenient in some cases (I guess those cases are specific to the project being tested). Then I studied the Jasmine/Sinon.JS docs and repos myself and found that: * both project repos have lots of contributors and fresh commits * indeed, they provide roughly the same functionality: matchers/testing spies/stubs/mocks/faking timers/AJAX mocking, but * to use AJAX mocking in Jasmine, you need a separate library [5], which I guess means another xstatic dependency besides xstatic-jasmine if you want to mock AJAX calls via Jasmine * Sinon.JS has much more comprehensive documentation [6] than Jasmine [3], [4] So, while Horizon doesn't have too many QUnit tests, meaning that they could be rewritten in Jasmine in a relatively short time, it seems that in order to mock AJAX requests (the reason I looked at Sinon.JS) in Jasmine, another xstatic dependency would have to be added (Radomir Dopieralski could correct me here if I'm wrong). Also, I've found quite an interesting feature in Sinon.JS's AJAX mocks: it is possible to mock only a filtered set of server calls and let others pass through [7] - I didn't find such a feature in the Jasmine ajax.js docs.
On the other hand, reducing all JS unit tests to one framework is also a good thing, and given that Jasmine is officially used for Angular.js testing, I'd rather see Jasmine as the 'only Horizon JS unit-testing framework' than QUnit. But then, again: to get AJAX mocks we would have to add the 'jasmine-ajax' dependency on top of the already existing 'jasmine' (so why not add Sinon.JS instead?). Summarizing all the things I've written so far, I would: * replace QUnit with Jasmine (i.e., remove the QUnit dependency) * add Sinon.JS just for its AJAX-mocking features. [1] http://stackoverflow.com/questions/15002541/does-jasmine-need-sinon-js [2] http://stackoverflow.com/questions/12216053/whats-the-advantage-of-using-sinon-js-over-jasmines-built-in-spys [3] http://jasmine.github.io/1.3/introduction.html [4] http://jasmine.github.io/2.0/ajax.html [5] https://github.com/pivotal/jasmine-ajax [6] http://sinonjs.org/docs/ [7] http://sinonjs.org/docs/#server search for 'Filtered requests' On Tue, Oct 7, 2014 at 1:19 PM, Timur Sufiev tsuf...@mirantis.com wrote: Hello all! Recently I stumbled upon the wonderful Sinon.JS library [1] for stubs and mocks in JS unit tests and found that it can be used to simplify and speed up a unit test I made in [2]. Just before wrapping it as an xstatic package, I'd like to clarify 2 questions regarding Sinon.JS: * Are Horizon folks fine with adding this dependency? Right now it would be used for just one test, but it would be useful for anybody who wants to mock AJAX requests in tests or emulate timeout events being fired (again, see the very brief and concise examples at [1]). * Is it okay to include the QUnit and Jasmine adapters for Sinon.JS in the same xstatic package? Personally, I'd vote for including the QUnit adapter [3], since it has very little code and allows seamless integration of Sinon.JS with the QUnit testing framework. If someone is interested in using the Jasmine matchers for Sinon [4], please let me know.
[1] http://sinonjs.org/ [2] https://review.openstack.org/#/c/113855/2/horizon/static/horizon/tests/tables.js [3] http://sinonjs.org/qunit/ [4] https://github.com/froots/jasmine-sinon -- Timur Sufiev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
I have already used sudo but it still fails: ImportError: cannot import name exceptions Ran 63 tests in 0.146s (+0.014s) FAILED (id=3, failures=63) error: testr failed (1) So, it is quite strange, because I did not modify the source code. Let me know if you have any suggestions. Thanks, Daniele. On 10/13/2014 01:19 PM, Murugan, Visnusaran wrote: Just a permission issue. Use "sudo". You could alternatively install novaclient in a virtualenv and run the same "python setup.py test" without sudo. -Vishnu *From:* Daniele Casini [mailto:daniele.cas...@dektech.com.au] *Sent:* Monday, October 13, 2014 3:50 PM *To:* openstack-dev@lists.openstack.org *Subject:* [openstack-dev] [Python-novaclient] Python-novaclient tests fail Hi All, I am trying to test *python-novaclient* using *python setup.py test* as reported in http://docs.openstack.org/developer/python-novaclient/. In order to figure out the test logic I ran the tests, but an error occurred (see the traceback in the original message above). I am working on the *master branch* and *no source code modifications* were made. Do you know how to fix it? Thanks, Daniele. -- Daniele Casini DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) E-mail: daniele.cas...@dektech.com.au WEB: www.dektech.com.au ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all][oslo] projects still using obsolete oslo modules
I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Most of the porting work involves adding dependencies and updating import statements, but check the documentation for each library for any special guidance. Also, because the incubator is updated to use our released libraries, you may end up having to port to several libraries *and* sync a copy of any remaining incubator dependencies that have not graduated all in a single patch in order to have a working copy. I suggest giving your review teams a heads-up about what to expect to avoid -2 for the scope of the patch. 
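The mechanical part of a port is mostly rewriting imports from the incubator copy to the graduated library, e.g. `from myproject.openstack.common import jsonutils` becoming `from oslo.serialization import jsonutils` (the `myproject` path is illustrative; check each library's docs for the real module home of each graduated module). A tolerant shim shows the shape of the change - the stdlib fallback below exists only so this sketch runs without oslo.serialization installed; a real port would simply add the library to requirements and switch the import:

```python
# Sketch of a transitional import for one graduated module. Prefer the
# library home; the json fallback is an illustrative stand-in only, so
# this example is self-contained.
try:
    from oslo.serialization import jsonutils
except ImportError:
    import json as jsonutils  # stand-in for illustration, not a real port

payload = jsonutils.dumps({"ported": True})
data = jsonutils.loads(payload)
```

The same pattern applies per module (e.g. importutils, timeutils, strutils moved into oslo.utils); the per-library documentation is the authority on any behavioral differences from the incubator copies.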
Doug [1] https://review.openstack.org/#/c/127039/ openstack-dev/heat-cfnclient: exception openstack-dev/heat-cfnclient: gettextutils openstack-dev/heat-cfnclient: importutils openstack-dev/heat-cfnclient: jsonutils openstack-dev/heat-cfnclient: timeutils openstack/ceilometer: gettextutils openstack/ceilometer: log_handler openstack/python-troveclient: strutils openstack/melange: exception openstack/melange: extensions openstack/melange: utils openstack/melange: wsgi openstack/melange: setup openstack/tuskar: config.generator openstack/tuskar: db openstack/tuskar: db.sqlalchemy openstack/tuskar: excutils openstack/tuskar: gettextutils openstack/tuskar: importutils openstack/tuskar: jsonutils openstack/tuskar: strutils openstack/tuskar: timeutils openstack/sahara-dashboard: importutils openstack/barbican: gettextutils openstack/barbican: jsonutils openstack/barbican: timeutils openstack/barbican: importutils openstack/kite: db openstack/kite: db.sqlalchemy openstack/kite: jsonutils openstack/kite: timeutils openstack/python-ironicclient: gettextutils openstack/python-ironicclient: importutils openstack/python-ironicclient: strutils openstack/python-melangeclient: setup openstack/neutron: excutils openstack/neutron: gettextutils openstack/neutron: importutils openstack/neutron: jsonutils openstack/neutron: middleware.base openstack/neutron: middleware.catch_errors openstack/neutron: middleware.correlation_id openstack/neutron: middleware.debug openstack/neutron: middleware.request_id openstack/neutron: middleware.sizelimit openstack/neutron: network_utils openstack/neutron: strutils openstack/neutron: timeutils openstack/tempest: importlib openstack/manila: excutils openstack/manila: gettextutils openstack/manila: importutils openstack/manila: jsonutils openstack/manila: network_utils openstack/manila: strutils openstack/manila: timeutils openstack/keystone: gettextutils openstack/python-glanceclient: importutils openstack/python-glanceclient: network_utils 
openstack/python-glanceclient: strutils openstack/python-keystoneclient: jsonutils openstack/python-keystoneclient: strutils openstack/python-keystoneclient: timeutils openstack/zaqar: config.generator openstack/zaqar: excutils openstack/zaqar: gettextutils openstack/zaqar: importutils openstack/zaqar: jsonutils openstack/zaqar: setup openstack/zaqar: strutils openstack/zaqar: timeutils openstack/zaqar: version openstack/python-novaclient: gettextutils openstack/ironic: config.generator openstack/ironic: gettextutils openstack/cinder: config.generator openstack/cinder: excutils openstack/cinder: gettextutils openstack/cinder: importutils openstack/cinder: jsonutils openstack/cinder: log_handler openstack/cinder: network_utils openstack/cinder: strutils openstack/cinder: timeutils openstack/cinder: units openstack/python-manilaclient: gettextutils openstack/python-manilaclient: importutils openstack/python-manilaclient: jsonutils openstack/python-manilaclient: strutils openstack/python-manilaclient: timeutils openstack/trove: exception openstack/trove: excutils openstack/trove: gettextutils openstack/trove: importutils openstack/trove: iniparser openstack/trove: jsonutils openstack/trove: network_utils openstack/trove: notifier openstack/trove: pastedeploy openstack/trove: rpc openstack/trove: strutils openstack/trove: testutils openstack/trove: timeutils openstack/trove: utils openstack/trove: wsgi openstack/sahara: config.generator openstack/sahara: excutils openstack/sahara: importutils openstack/sahara: middleware.base openstack/sahara: strutils openstack/sahara:
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
Doug, Thank you for the script. This really simplifies life! Best regards, Boris Pavlovic On Mon, Oct 13, 2014 at 5:20 PM, Doug Hellmann d...@doughellmann.com wrote: (full message and per-project module list quoted above)
[openstack-dev] [all] Resolving a possible meeting conflict
Hi all: I've set up a weekly meeting for the neutron-drivers team on IRC at 1500 UTC [1], but I noticed it conflicts with the PHP SDK IRC meeting at 1530 UTC [2]. However, I see the PHP SDK IRC meeting hasn't happened since August 6 [3]. Can I assume this meeting is no longer going on? If so, can I clean it up from the meetings page? Thanks! Kyle [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers [2] https://wiki.openstack.org/wiki/Meetings#PHP_SDK_Team_Meeting [3] http://eavesdrop.openstack.org/meetings/openstack_sdk_php/2014/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
I have already used tox instead of python setup.py test, and the tests pass successfully. However, I do not understand why they do not pass using the method reported in the official document. Thus, two questions occur to me: Can I use tox in order to test python-novaclient? Should the official document (i.e. http://docs.openstack.org/developer/python-novaclient/) be changed to mention tox as well? Thanks, Daniele. On 10/13/2014 01:29 PM, Andrey Kurilin wrote: Simple way to run tests is using tox: $ tox -epy27 For more details, look at the nova guide: http://docs.openstack.org/developer/nova/devref/unit_tests.html PS: Why does the novaclient guide recommend using python setup.py test? A bit strange to me. On Mon, Oct 13, 2014 at 1:19 PM, Daniele Casini daniele.cas...@dektech.com.au wrote: Hi All, I am trying to test python-novaclient using python setup.py test as reported in http://docs.openstack.org/developer/python-novaclient/.
In order to figure out the test logic I ran the tests, but an error occurred:

Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1431, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove
    renames(path, new_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/util.py", line 295, in renames
    shutil.move(old, new)
  File "/usr/lib/python2.7/shutil.py", line 300, in move
    rmtree(src)
  File "/usr/lib/python2.7/shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "/usr/lib/python2.7/shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 250, in rmtree
    os.remove(fullname)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/hacking/tests/test_doctest.pyc'

Storing debug log for failure in /home/devstack/.pip/pip.log

I am working on the master branch and no source code modifications have been made. Do you know how to fix it? Thanks, Daniele. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Andrey Kurilin.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Daniele Casini DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) E-mail: daniele.cas...@dektech.com.au WEB: www.dektech.com.au ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
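The advice in the thread above (use tox, never sudo) generalizes to any OpenStack project: keep every install inside a user-owned virtualenv so no step ever writes to the system site-packages, which is what produced the "Permission denied" above. A minimal sketch of that workflow, assuming a python-novaclient checkout is the current directory (the `.venv` path is just a conventional choice; in the 2014 timeframe you would create the environment with the virtualenv package rather than the venv module):

```shell
# Create an isolated, user-owned environment -- no step needs root, so the
# "Permission denied" under /usr/local/lib/... cannot occur here.
python3 -m venv .venv
. .venv/bin/activate
# From a python-novaclient checkout, the gate-equivalent invocation is:
#   pip install tox
#   tox -e py27
# Everything tox installs lands under user-owned .tox/ directories.
python -c 'import sys; print(sys.prefix)'   # now points inside .venv
```

The same isolation is exactly what tox automates for each test environment, which is why it works where a bare `python setup.py test` against the system Python does not.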
[openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on openstack patching, nailgun, and fuel upgrade, and has provided a lot of good reviews [1]. In addition, he's also very active on IRC and the mailing list. Can the other core team members please reply with your votes on whether you agree or disagree? Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
+1. I'm not core, but he has done the most thorough reviews lately and shows great initiative in maintaining quality in Fuel. On Mon, Oct 13, 2014 at 5:53 PM, Evgeniy L e...@mirantis.com wrote: Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on openstack patching, nailgun, fuel upgrade and provided a lot of good reviews [1]. In addition he's also very active in IRC and mailing list. Can the other core team members please reply with your votes if you agree or disagree. Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all][oslo] reviewing oslo specs
The openstack/oslo-specs repository is open for submissions for Kilo, and the Oslo team would appreciate your help in reviewing the proposed changes. Changes to Oslo libraries affect all projects, so we want to collect as much input as we can before committing to a direction. There are several specs already up for review, and I expect more between now and the summit. https://review.openstack.org/#/q/project:openstack%2Foslo-specs+is:open,n,z Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
+1 On Mon, Oct 13, 2014 at 1:53 PM, Evgeniy L e...@mirantis.com wrote: Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on openstack patching, nailgun, fuel upgrade and provided a lot of good reviews [1]. In addition he's also very active in IRC and mailing list. Can the other core team members please reply with your votes if you agree or disagree. Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
+1 2014-10-13 20:53 GMT+07:00 Evgeniy L e...@mirantis.com: Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on openstack patching, nailgun, fuel upgrade and provided a lot of good reviews [1]. In addition he's also very active in IRC and mailing list. Can the other core team members please reply with your votes if you agree or disagree. Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
On 10/10/2014 02:05 AM, Christopher Yeoh wrote: I agree with what you've written on the wiki page. I think our priority needs to be to flesh out https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines so we have something to reference when reviewing specs. At the moment I see that document as a place where anyone can document a project's API conventions, even if they conflict with another project's for the moment. Once we've got a fair amount of content we can start as a group resolving any conflicts. Agreed that we should be fleshing out the above wiki page. How would you like us to do that? Should we have an etherpad to discuss individual topics? Having multiple people editing the wiki page and offering commentary seems a bit chaotic, and I think we would do well to have the Gerrit review process in place to handle proposed guidelines and rules for APIs. See below for specifics on this... Speaking of the wiki page, I wrote it very matter-of-factly. As if this is the way things are. They’re not. The wiki page is just a starting point. If something was missed, add it. If something can be improved, improve it. Let’s try to keep it simple though. One problem with API WG members reviewing spec proposals that affect the API is finding the specs in the first place across many different projects' repositories. I've said for a while now that I would love to have separate repositories -- ala the Keystone API in the openstack/identity-api repository -- that contain specifications for APIs in a single format (APIBlueprint was suggested at one point, but Swagger 2.0 seems to me to have more upside). I also think it would be ideal to have an openstack/openstack-api repo that would house guidelines and rules that this working group came up with, along with examples of appropriate usage. This repo would function very similarly to the openstack/governance [1] repo that the TC uses to flesh out proposals on community, release management, and governance changes. 
If people are OK with this idea, I will go ahead and create the repo and add the wiki page content as the initial commit, then everyone can simply submit patches to the document(s) using the normal Gerrit process, and we can iterate on these things using the same tools as other repositories. Best, -jay [1] https://review.openstack.org/#/q/status:open+project:openstack/governance,n,z I invite everyone who chimed in on the original thread [1] that kicked this off to add themselves as a member committed to making the OpenStack APIs better. I’ve Cc’d everyone who asked to be kept in the loop. I already see some cross project summit topics [2] on APIs. But frankly, with the number of people committed to this topic, I’d expect there to be more. I encourage everyone to submit more API related sessions with better descriptions and goals about what you want to achieve in those sessions. Yea if there is enough content in the API guidelines then perhaps some time can be spent on working on resolving any conflicts in the document so projects know what direction to head in. Regards, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] neutron-specs is open for Kilo submissions
This was mentioned in the neutron meeting [1], but I wanted to send an email to the list so everyone knows. We've opened up Kilo specs for Neutron now. Please note the template has changed, so if you are resubmitting an old spec, make sure to change it to follow the new template [2]. I also wanted to highlight we've tweaked the approval process a bit with the formation of the neutron-drivers team [3], but reviews by cores and non-cores are still very much appreciated and required. Thanks! Kyle [1] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo [2] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/template.rst [3] https://wiki.openstack.org/wiki/Neutron-drivers ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
On Mon, Oct 13, 2014 at 09:20:38AM -0400, Doug Hellmann wrote: I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Thanks for the heads-up Doug, I've raised this bug to track the heat fixes: https://bugs.launchpad.net/heat/+bug/1380629 I'll try to get patches posted this week addressing the issues identified. Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
On 10/10/2014 12:09 PM, Jay Pipes wrote: Thanks for getting this going, Everett! Comments inline... On 10/08/2014 07:05 PM, Everett Toews wrote: https://wiki.openstack.org/wiki/API_Working_Group This is the start of the API Working Group (API WG). yay! :) To avoid bike shedding over the name of the working group, I decided to title the wiki page API Working Group. Simple, to the point, and avoids loaded terms like standards, best practices, guidelines, conventions, etc. Yup, ++ The point isn’t what we name it. The point is what action we take about it. I propose the deliverables in the API WG wiki page. Speaking of the wiki page, I wrote it very matter-of-factly. As if this is the way things are. They’re not. The wiki page is just a starting point. If something was missed, add it. If something can be improved, improve it. Let’s try to keep it simple though. The wiki content looks fine, with the exception that I really do feel the working group needs to have some ability to review and enforce consistency within proposed REST APIs. The wording right now is: The API WG is focused on creating guidelines for the APIs which of course is fine, but I think that the Technical Committee should essentially grant the working group the power to enforce guidelines and consistency for proposed new REST APIs -- whether it's a new REST API version in an existing project or a REST API for a newly-proposed OpenStack server project. I think that's a great goal. I'd like to see the group earn this level of influence based on its own merit rather than have the TC just grant it up front. I'd say let's give the group a cycle to build up, generate guidelines, and participate in API design reviews. I'd rather see it as the TC making something official that was effectively already happening in practice. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
On Oct 13, 2014, at 11:06 AM, Steven Hardy sha...@redhat.com wrote: On Mon, Oct 13, 2014 at 09:20:38AM -0400, Doug Hellmann wrote: I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Thanks for the heads-up Doug, I've raised this bug to track the heat fixes: https://bugs.launchpad.net/heat/+bug/1380629 I'll try to get patches posted this week addressing the issues identified. Thanks, Steve! Let us know if you run into any issues or need help with code reviews. Doug Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
On 10/13/2014 11:10 AM, Russell Bryant wrote: On 10/10/2014 12:09 PM, Jay Pipes wrote: Thanks for getting this going, Everett! Comments inline... On 10/08/2014 07:05 PM, Everett Toews wrote: https://wiki.openstack.org/wiki/API_Working_Group This is the start of the API Working Group (API WG). yay! :) To avoid bike shedding over the name of the working group, I decided to title the wiki page API Working Group. Simple, to the point, and avoids loaded terms like standards, best practices, guidelines, conventions, etc. Yup, ++ The point isn’t what we name it. The point is what action we take about it. I propose the deliverables in the API WG wiki page. Speaking of the wiki page, I wrote it very matter-of-factly. As if this is the way things are. They’re not. The wiki page is just a starting point. If something was missed, add it. If something can be improved, improve it. Let’s try to keep it simple though. The wiki content looks fine, with the exception that I really do feel the working group needs to have some ability to review and enforce consistency within proposed REST APIs. The wording right now is: The API WG is focused on creating guidelines for the APIs which of course is fine, but I think that the Technical Committee should essentially grant the working group the power to enforce guidelines and consistency for proposed new REST APIs -- whether it's a new REST API version in an existing project or a REST API for a newly-proposed OpenStack server project. I think that's a great goal. I'd like to see the group earn this level of influence based on its own merit rather than have the TC just grant it up front. I'd say let's give the group a cycle to build up, generate guidelines, and participate in API design reviews. I'd rather see it as the TC making something official that was effectively already happening in practice. Sure, totally fair point. 
Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Cinder/Neutron plugins on UI
Hi, We've discussed what we will and will not be able to implement for the current release. We face not only technical problems but also a limited amount of time for implementation, so we were trying to find a solution that works well enough within all of these constraints. For the current release we want to implement the approach suggested by Mike. We are going to generate a checkbox for the UI which defines whether the plugin is enabled for deployment. In nailgun we'll be able to parse the generated checkboxes and add or remove the relation between the plugin and cluster models. With this relation we'll be able to identify whether a plugin is used, which will allow us to remove the plugin if it's unused (in the future), or to decide if we need to pass its tasks to the orchestrator. Also, in the POC we are going to implement something really simple, like updating plugin attributes directly via the API. Thanks, On Thu, Oct 9, 2014 at 8:13 PM, Dmitry Borodaenko dborodae...@mirantis.com wrote: Notes from the architecture review meeting on plugins UX:
- separate page for plugins management
- user installs the plugin on the master
- global master node configuration across all environments:
- user can see a list of plugins on the Plugins tab (plugins description)
- Enable/Disable plugin - should we enable/disable plugins globally, or only per environment?
- yes, we need a global plugins management page; it will later be extended to upload or remove plugins
- if a plugin is used in a deployed environment, options to globally disable or remove that plugin are blocked
- show which environments (or a number of environments) have a specific plugin enabled
- global plugins page is a Should in 6.0 (but easy to add)
- future: a plugin like ostf should have a deployable flag set to false, so that it doesn't show up as an option per env
- user creates a new environment
- in the setup wizard on the releases page (1st step), a list of checkboxes for all plugins is offered (same page as releases?) 
- all globally enabled plugins are checked (enabled) by default
- changes in the selection of plugins will trigger regeneration of subsequent setup wizard steps
- a plugin may include a yaml mixin for settings page options in openstack.yaml format
- in future releases, it will support describing setup wizard options (disk configuration, network settings, etc.) in the same way
- what is the simplest case? does the plugin writer have to define the plugin enable/disable checkbox, or is it autogenerated?
- if a plugin does not define any configuration options: a checkbox is automatically added into the Additional Services section of the settings page (disabled by default)
- problem: if a plugin is enabled by default, but the option to deploy it is disabled by default, such an environment would count against the plugin (and won't allow removing this plugin globally) even though it actually wasn't deployed
- manifest of plugins enabled/used for an environment?

We ended the discussion on the problem highlighted above: what's the best way to detect which plugins are actually used in an environment? On Thu, Oct 9, 2014 at 6:42 AM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Evgeniy, Yes, the plugin management page should be a separate page. As for the dependency on releases, I meant that some plugins can work only on Ubuntu, for example, so different plugins could be available for different releases. And please confirm that you also agree with the flow: the user installs a plugin, then he enables it on the plugin management page, and then he creates an environment, and on the first step he can uncheck any plugins which he doesn't want to use in that particular environment. 2014-10-09 20:11 GMT+07:00 Evgeniy L e...@mirantis.com: Hi, Vitaly, I like the idea of having a separate page, but I'm not sure if it should be on the releases page. Usually a plugin is not release specific; usually it's environment specific, and you can have different sets of plugins for different environments. 
Also, I don't think that we should enable plugins by default; the user should enable a plugin if he wants it to be installed. Thanks, On Thu, Oct 9, 2014 at 3:34 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Let me propose another approach. I agree with most of Dmitry's statements, and it seems that in the MVP we need a plugin management UI where we can enable installed plugins. It should be a separate page. If you want to create an environment with a plugin, enable the plugin on this page and create a new environment. You can also disable and uninstall plugins using that page (if there are no environments with the plugin enabled). The main reason why I'm
Re: [openstack-dev] [api] Forming the API Working Group
Zhipeng Huang wrote: THX for the link! So we will have the workshop on Nov 3rd at the Meridien Etoile Hotel? The workshops happen on the 4th (Tuesday). -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
Awesome! Thanks Doug for this, I will start working on moving the ironic* stuff to use the oslo libraries. Lucas On Mon, Oct 13, 2014 at 2:20 PM, Doug Hellmann d...@doughellmann.com wrote: I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Most of the porting work involves adding dependencies and updating import statements, but check the documentation for each library for any special guidance. Also, because the incubator is updated to use our released libraries, you may end up having to port to several libraries *and* sync a copy of any remaining incubator dependencies that have not graduated all in a single patch in order to have a working copy. I suggest giving your review teams a heads-up about what to expect to avoid -2 for the scope of the patch. 
Doug [1] https://review.openstack.org/#/c/127039/ openstack-dev/heat-cfnclient: exception openstack-dev/heat-cfnclient: gettextutils openstack-dev/heat-cfnclient: importutils openstack-dev/heat-cfnclient: jsonutils openstack-dev/heat-cfnclient: timeutils openstack/ceilometer: gettextutils openstack/ceilometer: log_handler openstack/python-troveclient: strutils openstack/melange: exception openstack/melange: extensions openstack/melange: utils openstack/melange: wsgi openstack/melange: setup openstack/tuskar: config.generator openstack/tuskar: db openstack/tuskar: db.sqlalchemy openstack/tuskar: excutils openstack/tuskar: gettextutils openstack/tuskar: importutils openstack/tuskar: jsonutils openstack/tuskar: strutils openstack/tuskar: timeutils openstack/sahara-dashboard: importutils openstack/barbican: gettextutils openstack/barbican: jsonutils openstack/barbican: timeutils openstack/barbican: importutils openstack/kite: db openstack/kite: db.sqlalchemy openstack/kite: jsonutils openstack/kite: timeutils openstack/python-ironicclient: gettextutils openstack/python-ironicclient: importutils openstack/python-ironicclient: strutils openstack/python-melangeclient: setup openstack/neutron: excutils openstack/neutron: gettextutils openstack/neutron: importutils openstack/neutron: jsonutils openstack/neutron: middleware.base openstack/neutron: middleware.catch_errors openstack/neutron: middleware.correlation_id openstack/neutron: middleware.debug openstack/neutron: middleware.request_id openstack/neutron: middleware.sizelimit openstack/neutron: network_utils openstack/neutron: strutils openstack/neutron: timeutils openstack/tempest: importlib openstack/manila: excutils openstack/manila: gettextutils openstack/manila: importutils openstack/manila: jsonutils openstack/manila: network_utils openstack/manila: strutils openstack/manila: timeutils openstack/keystone: gettextutils openstack/python-glanceclient: importutils openstack/python-glanceclient: network_utils 
openstack/python-glanceclient: strutils openstack/python-keystoneclient: jsonutils openstack/python-keystoneclient: strutils openstack/python-keystoneclient: timeutils openstack/zaqar: config.generator openstack/zaqar: excutils openstack/zaqar: gettextutils openstack/zaqar: importutils openstack/zaqar: jsonutils openstack/zaqar: setup openstack/zaqar: strutils openstack/zaqar: timeutils openstack/zaqar: version openstack/python-novaclient: gettextutils openstack/ironic: config.generator openstack/ironic: gettextutils openstack/cinder: config.generator openstack/cinder: excutils openstack/cinder: gettextutils openstack/cinder: importutils openstack/cinder: jsonutils openstack/cinder: log_handler openstack/cinder: network_utils openstack/cinder: strutils openstack/cinder: timeutils openstack/cinder: units openstack/python-manilaclient: gettextutils openstack/python-manilaclient: importutils openstack/python-manilaclient: jsonutils openstack/python-manilaclient: strutils openstack/python-manilaclient: timeutils openstack/trove: exception openstack/trove: excutils openstack/trove: gettextutils openstack/trove: importutils openstack/trove: iniparser openstack/trove: jsonutils openstack/trove: network_utils openstack/trove: notifier openstack/trove: pastedeploy
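As Doug notes above, most of the porting work is mechanical: add the graduated library to the project's requirements and rewrite the import lines. Purely as an illustrative sketch (the old/new module pairs below are assumptions following the incubator graduation pattern -- check each graduated library's own docs for the authoritative import paths, and `myproject` is a placeholder project name), a helper that rewrites a single incubator import might look like this:

```python
# Sketch of a mechanical import-rewriting helper for the incubator-to-library
# migration described above. The mapping is illustrative, not authoritative:
# consult each graduated oslo library's documentation for real module paths.
GRADUATED = {
    "jsonutils": "oslo.serialization",
    "timeutils": "oslo.utils",
    "importutils": "oslo.utils",
    "strutils": "oslo.utils",
    "excutils": "oslo.utils",
}


def rewrite_line(line, project="myproject"):
    """Rewrite one 'from <project>.openstack.common import X' statement."""
    prefix = "from %s.openstack.common import " % project
    if line.startswith(prefix):
        module = line[len(prefix):].strip()
        if module in GRADUATED:
            return "from %s import %s" % (GRADUATED[module], module)
    return line  # leave non-incubator imports untouched


if __name__ == "__main__":
    print(rewrite_line("from myproject.openstack.common import jsonutils"))
    # -> from oslo.serialization import jsonutils
```

A real port would of course also remove the now-unused copies under `<project>/openstack/common/` and sync any remaining incubator dependencies in the same patch, per Doug's note about keeping the tree working.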
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
On 10/13/2014 06:19 AM, Murugan, Visnusaran wrote: Just a permission issue. Use “sudo”. You could alternatively install novaclient under a virtualenv and run the same “python setup.py test” without sudo. To my knowledge you should never have to run our unit tests with sudo, and I wouldn't recommend doing it. Handing root permissions to test code that was written to run as a regular user just doesn't seem like a good idea to me. -Vishnu From: Daniele Casini [mailto:daniele.cas...@dektech.com.au] Sent: Monday, October 13, 2014 3:50 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Python-novaclient] Python-novaclient tests fail Hi All, I am trying to test python-novaclient using python setup.py test as reported in http://docs.openstack.org/developer/python-novaclient/. In order to figure out the test logic I ran the tests, but an error occurred: Exception: Traceback (most recent call last): File /usr/local/lib/python2.7/dist-packages/pip/basecommand.py, line 122, in main status = self.run(options, args) File /usr/local/lib/python2.7/dist-packages/pip/commands/install.py, line 283, in run requirement_set.install(install_options, global_options, root=options.root_path) File /usr/local/lib/python2.7/dist-packages/pip/req.py, line 1431, in install requirement.uninstall(auto_confirm=True) File /usr/local/lib/python2.7/dist-packages/pip/req.py, line 598, in uninstall paths_to_remove.remove(auto_confirm) File /usr/local/lib/python2.7/dist-packages/pip/req.py, line 1836, in remove renames(path, new_path) File /usr/local/lib/python2.7/dist-packages/pip/util.py, line 295, in renames shutil.move(old, new) File /usr/lib/python2.7/shutil.py, line 300, in move rmtree(src) File /usr/lib/python2.7/shutil.py, line 247, in rmtree rmtree(fullname, ignore_errors, onerror) File /usr/lib/python2.7/shutil.py, line 252, in rmtree onerror(os.remove, fullname, sys.exc_info()) File /usr/lib/python2.7/shutil.py, line 250, in rmtree 
os.remove(fullname) OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/hacking/tests/test_doctest.pyc' Storing debug log for failure in /home/devstack/.pip/pip.log I am working on master branch and no source code modification are performed. Do you know how to fix it? Thanks, Daniele. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Let's remove fuelweb from repo paths
Hi folks, I want to bring this topic up again. We had a blocker in the Fuel-Web project - Evgeniy L found a bug affecting old releases, so I had to add a data migration. Today I built a new ISO and it successfully passed the BVT tests. So I would ask you to merge these patches if there are no objections. Thanks, Igor On Fri, Oct 10, 2014 at 2:55 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Folks, the BVT tests passed successfully. What about merging? Thanks, Igor On Fri, Oct 10, 2014 at 12:40 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: As I mentioned earlier, I already have an ISO with the patches and it works fine in my own deployment. However, I ran the BVT tests on centos [1] and ubuntu [2]. [1]: http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom.centos.bvt_1/198/ [2]: http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom.ubuntu.bvt_2/170/ On Fri, Oct 10, 2014 at 12:25 PM, Mike Scherbakov mscherba...@mirantis.com wrote: I have no objections, and essentially I'm for such initiatives at the beginning of a development cycle, when risks are lower. If we ensure test coverage and do it carefully (for instance, building a custom ISO with the changes and making sure it passes BVTs), then let's do it. On Wed, Oct 8, 2014 at 9:08 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Hi fuelers, I'm going to propose that we remove the fuelweb word from repos' paths. What am I talking about? Let me show you. Currently we have the following paths to repos: /var/www/nailgun/2014.2-6.0/centos/fuelweb/x86_64/ /var/www/nailgun/2014.2-6.0/ubuntu/fuelweb/x86_64/ Obviously, the word fuelweb is redundant here and doesn't reflect reality, because our repos contain not only Fuel packages but also OpenStack ones. Moreover, the fuel-upgrade script installs repos without that word (fuelweb, I mean), so we have an inconsistent file structure for repos, which may lead to problems in the future. So I propose to do it now, while we can do it safely and without risk. 
I prepared a set of patches https://review.openstack.org/#/c/126885/ https://review.openstack.org/#/c/126886/ https://review.openstack.org/#/c/126887/ and built ISO #508 [1] - both the master node and a CentOS cluster were deployed successfully. Folks, please take a look at the patches above and let's merge them. Thanks, Igor [1]: http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/508/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mike Scherbakov #mihgen
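For illustration, the proposed layout change amounts to collapsing one directory level. A minimal sketch under assumed paths (this is not the actual patch, which lives in the reviews above):

```python
import os
import tempfile

# Illustrative sketch only: collapse .../<os>/fuelweb/x86_64 into
# .../<os>/x86_64, mirroring the proposed repo layout change.
# Paths here are made up for the demonstration.
root = tempfile.mkdtemp()
old = os.path.join(root, "nailgun/2014.2-6.0/centos/fuelweb/x86_64")
os.makedirs(old)

fuelweb_dir = os.path.dirname(old)                           # .../centos/fuelweb
new = os.path.join(os.path.dirname(fuelweb_dir), "x86_64")   # .../centos/x86_64
os.rename(old, new)       # move the repo dir up one level
os.rmdir(fuelweb_dir)     # remove the now-empty "fuelweb" component

print(os.path.exists(new), os.path.exists(fuelweb_dir))  # True False
```

The same idea applies per OS directory (centos, ubuntu); an upgrade path would also need to handle repos already installed without the extra component.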
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
On 10/13/2014 08:08 AM, Daniele Casini wrote: I have already used sudo but it still fails: ImportError: cannot import name exceptions Ran 63 tests in 0.146s (+0.014s) FAILED (id=3, failures=63) error: testr failed (1) So, it is quite strange because I did not modify the source code. Let me know if you have any suggestions. This is probably because of missing dependencies. tox takes care of that by building a virtual environment with all of the dependencies installed, but if you're running setup.py directly you'd have to take care of that yourself. As Andrey noted, tox is used in the gate, so the novaclient docs should really be updated. Thanks, Daniele. On 10/13/2014 01:19 PM, Murugan, Visnusaran wrote: Just a permission issue. Use “sudo”. You could alternatively install novaclient under a virtualenv and run the same “python setup.py test” without sudo. -Vishnu *From:* Daniele Casini [mailto:daniele.cas...@dektech.com.au] *Sent:* Monday, October 13, 2014 3:50 PM *To:* openstack-dev@lists.openstack.org *Subject:* [openstack-dev] [Python-novaclient] Python-novaclient tests fail Hi All, I am trying to test *python-novaclient* using *python setup.py test* as reported in http://docs.openstack.org/developer/python-novaclient/. 
In order to figure out the test logic I ran the tests, but an error occurred:

Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1431, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove
    renames(path, new_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/util.py", line 295, in renames
    shutil.move(old, new)
  File "/usr/lib/python2.7/shutil.py", line 300, in move
    rmtree(src)
  File "/usr/lib/python2.7/shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "/usr/lib/python2.7/shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 250, in rmtree
    os.remove(fullname)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/hacking/tests/test_doctest.pyc'

Storing debug log for failure in /home/devstack/.pip/pip.log

I am working on the *master branch* and *no source code modifications* were made. Do you know how to fix it? Thanks, Daniele.
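The “cannot import name exceptions” failure mode can be reproduced in isolation. A minimal sketch (using a made-up “fakeclient” package, not actual novaclient code): Python raises this exact ImportError when the package object is importable but the requested name is not, which is what a partially installed dependency tree looks like to the test runner and why every test fails at import time.

```python
import sys
import types

# Register an empty stand-in package, simulating a half-installed
# dependency whose submodules are missing. "fakeclient" is purely
# illustrative; it is not part of python-novaclient.
sys.modules["fakeclient"] = types.ModuleType("fakeclient")

try:
    from fakeclient import exceptions  # the name does not exist
except ImportError as error:
    message = str(error)
    print(message)
```

This is why the thread's advice is to run the tests via tox, which builds a virtualenv with all dependencies installed before invoking the test runner.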
Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
+1 On Mon, Oct 13, 2014 at 6:49 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: +1 2014-10-13 20:53 GMT+07:00 Evgeniy L e...@mirantis.com: Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on OpenStack patching, nailgun, and fuel upgrade, and has provided a lot of good reviews [1]. In addition, he's also very active on IRC and the mailing list. Can the other core team members please reply with your votes on whether you agree or disagree. Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -- Mike Scherbakov #mihgen
Re: [openstack-dev] [Python-novaclient] Python-novaclient tests fail
the novaclient docs should really be updated. simple fix: https://review.openstack.org/#/c/127971/ but, imo, the docs should contain more information (I will fix it a little bit later, if no one takes this) On Mon, Oct 13, 2014 at 7:49 PM, Ben Nemec openst...@nemebean.com wrote: On 10/13/2014 08:08 AM, Daniele Casini wrote: I have already used sudo but it still fails: ImportError: cannot import name exceptions Ran 63 tests in 0.146s (+0.014s) FAILED (id=3, failures=63) error: testr failed (1) So, it is quite strange because I did not modify the source code. Let me know if you have any suggestions. This is probably because of missing dependencies. tox takes care of that by building a virtual environment with all of the dependencies installed, but if you're running setup.py directly you'd have to take care of that yourself. As Andrey noted, tox is used in the gate, so the novaclient docs should really be updated. Thanks, Daniele. On 10/13/2014 01:19 PM, Murugan, Visnusaran wrote: Just a permission issue. Use “sudo”. You could alternatively install novaclient under a virtualenv and run the same “python setup.py test” without sudo. -Vishnu *From:* Daniele Casini [mailto:daniele.cas...@dektech.com.au] *Sent:* Monday, October 13, 2014 3:50 PM *To:* openstack-dev@lists.openstack.org *Subject:* [openstack-dev] [Python-novaclient] Python-novaclient tests fail Hi All, I am trying to test *python-novaclient* using *python setup.py test* as reported in http://docs.openstack.org/developer/python-novaclient/. 
In order to figure out the test logic I ran the tests, but an error occurred:

Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1431, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/local/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove
    renames(path, new_path)
  File "/usr/local/lib/python2.7/dist-packages/pip/util.py", line 295, in renames
    shutil.move(old, new)
  File "/usr/lib/python2.7/shutil.py", line 300, in move
    rmtree(src)
  File "/usr/lib/python2.7/shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "/usr/lib/python2.7/shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 250, in rmtree
    os.remove(fullname)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/hacking/tests/test_doctest.pyc'

Storing debug log for failure in /home/devstack/.pip/pip.log

I am working on the *master branch* and *no source code modifications* were made. Do you know how to fix it? Thanks, Daniele. -- Best regards, Andrey Kurilin. 
[openstack-dev] [neutron] Updates to the weekly Neutron meeting
As part of streamlining how we work as a team in Neutron, and to make better use of our weekly meeting, I'm changing the format of the weekly meeting. This involves moving the meeting from being a sort of status report tool for our plethora of sub-teams into more of an On-Demand agenda [1]. We'll leave bugs and docs as standing items for now, but the rest of the agenda will be generated dynamically. I want to encourage anyone to put agenda items down [2] with your name that we can use to discuss during our meeting time each week. Also, keep in mind we're still rotating meetings weekly to accommodate timezones. Today's meeting is at 2100 UTC, as a reminder. Thank you! Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings [2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?
Hi, If the plan is to move ALL existing vendor specific plugins/drivers out-of-tree, then having a place-holder within the OpenStack domain would suffice, where the vendors can list their plugins/drivers along with their documentation on how to install and use them, etc. The main OpenStack Neutron documentation page can explain the plugin framework (ml2 type drivers, mechanism drivers, service plugins and so on) and its purpose/usage etc., then provide a link referring to the currently supported vendor specific plugins/drivers for more details. That way the documentation will be accurate to what is in-tree and will limit the documentation of external plugins/drivers to just a reference link. So it's now the vendor's responsibility to keep their drivers up-to-date and their documentation accurate. The OpenStack dev and docs teams don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. The built-in drivers such as LinuxBridge or OpenVSwitch etc. can continue to be in-tree and their documentation will be part of the main Neutron docs. So Neutron is guaranteed to work with built-in plugins/drivers as per the documentation, and the user is informed to refer to the external vendor plug-in page for additional/specific plugins/drivers. Thanks, Vad -- On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com wrote: I think you will probably have to wait until after the summit so we can see the direction that will be taken with the rest of the in-tree drivers/plugins. It seems like we are moving towards removing all of them, so we would definitely need a solution for documenting out-of-tree drivers as you suggested. However, I think the minimum requirement for having a driver documented should be third-party testing of Neutron patches. 
Otherwise the docs will become littered with a bunch of links to drivers/plugins with no indication of what actually works, which ultimately makes Neutron look bad. This is my line of thinking as well, expanded to “ultimately makes OpenStack docs look bad” -- a perception I want to avoid. Keep the viewpoints coming. We have a crucial balancing act ahead: users need to trust docs and trust the drivers. Ultimately the responsibility for the docs is in the hands of the driver contributors, so it seems those should be on a domain name where drivers control publishing and OpenStack docs are not a gatekeeper, quality checker, reviewer, or publisher. We have documented the status of hypervisor drivers on an OpenStack wiki page. [1] To me, that type of list could be maintained better on the wiki page than in the docs themselves. Thoughts? Feelings? More discussion, please. And thank you for the responses so far. Anne [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi Anne, Thanks for your immediate response!... Just to clarify... I have developed and am maintaining a Neutron plug-in (an ML2 mechanism_driver) since Grizzly and it is now up-to-date with Icehouse. But it was never listed as, nor part of, the main OpenStack releases. Now I would like to have my plugin mentioned as a supported plugin/mechanism_driver for certain vendor equipment on docs.openstack.org, but without having the actual plugin code posted in the main OpenStack Git repository. The reason is that I don't have the plans/bandwidth to go through the entire process of new plugin blueprint/development/review/testing etc. as required by the OpenStack development community, because this is already developed, tested, and released to some customers directly. Now I just want to get it into the official OpenStack documentation, so that more people can find it and use it. 
The plugin package is made available to the public from the Ubuntu repository along with the necessary documentation. So people can get it directly from the Ubuntu repository and use it. All I need is to get it listed on docs.openstack.org so that people know that it exists and can be used with any OpenStack deployment. Please confirm whether this is possible?... Thanks again!.. Vad -- On Fri, Oct 10, 2014 at 12:18 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 2:11 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi, How do I include a new vendor plug-in (aka a mechanism_driver in the ML2 framework) in the OpenStack documentation?.. In other words, is it possible to include a new plug-in in the OpenStack documentation without having the actual plug-in code as part of the OpenStack Neutron repository?... The actual plug-in is posted and available for the public to download as an Ubuntu package. But I need to mention somewhere in the OpenStack documentation that this new plugin is available for the public to use, along with its documentation. We definitely want you to include pointers to
[openstack-dev] [Keystone] keystone-specs is open for Kilo submissions
This has actually been the case since just after the cut of the RC1 for Juno, but I wanted to make sure that it was explicitly called out on the mailing list. Keystone has opened up for Kilo specifications, please submit your specifications to the keystone-specs[1] repository. Ideally we will approve all specs for Kilo as early as possible so that new code can land in the first part of the Kilo development cycle. Any reviews of the current specifications by cores or non-cores alike, of course, are appreciated and valued. Cheers, Morgan Fainberg [1] http://git.openstack.org/cgit/openstack/keystone-specs ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] Updates to the weekly Neutron meeting
Kyle, This works for me. My only comment is that linking sub team pages from the Neutron meeting page served a dual purpose. It attached it to the agenda -- which is now deprecated -- and it served as sort of an anchor for the sub team into the Neutron team on the wiki. At least for the L3 sub team, that has been the case. Maybe we should freshen up the old Neutron teams page [1], which looks to be long out of date. I think it probably got stale because of the dual purpose of the meeting page. Carl [1] https://wiki.openstack.org/wiki/Neutron/Teams On Mon, Oct 13, 2014 at 11:49 AM, Kyle Mestery mest...@mestery.com wrote: As part of streamlining how we work as a team in Neutron, and to make better use of our weekly meeting, I'm changing the format of the weekly meeting. This involves moving the meeting from being a sort of status report tool for our plethora of sub-teams into more of an On-Demand agenda [1]. We'll leave bugs and docs as standing items for now, but the rest of the agenda will be generated dynamically. I want to encourage anyone to put agenda items down [2] with your name that we can use to discuss during our meeting time each week. Also, keep in mind we're still rotating meetings weekly to accommodate timezones. Today's meeting is at 2100 UTC, as a reminder. Thank you! Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings [2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] [neutron] Updates to the weekly Neutron meeting
Thanks Carl. I agree, we need to collapse the pages so we can get a status readout on the wiki. Keep in mind we also have this page [1], which is much fresher than the Teams page. I'd prefer to see the Teams page either marked as deprecated or updated to reflect the content in [1], with even more changes made. If you want to take this on, that would be great! Otherwise I'll look to update this tomorrow. Thanks, Kyle [1] https://wiki.openstack.org/wiki/NeutronSubTeams On Mon, Oct 13, 2014 at 2:41 PM, Carl Baldwin c...@ecbaldwin.net wrote: Kyle, This works for me. My only comment is that linking sub team pages from the Neutron meeting page served a dual purpose. It attached it to the agenda -- which is now deprecated -- and it served as sort of an anchor for the sub team into the Neutron team on the wiki. At least for the L3 sub team, that has been the case. Maybe we should freshen up the old Neutron teams page [1], which looks to be long out of date. I think it probably got stale because of the dual purpose of the meeting page. Carl [1] https://wiki.openstack.org/wiki/Neutron/Teams On Mon, Oct 13, 2014 at 11:49 AM, Kyle Mestery mest...@mestery.com wrote: As part of streamlining how we work as a team in Neutron, and to make better use of our weekly meeting, I'm changing the format of the weekly meeting. This involves moving the meeting from being a sort of status report tool for our plethora of sub-teams into more of an On-Demand agenda [1]. We'll leave bugs and docs as standing items for now, but the rest of the agenda will be generated dynamically. I want to encourage anyone to put agenda items down [2] with your name that we can use to discuss during our meeting time each week. Also, keep in mind we're still rotating meetings weekly to accommodate timezones. Today's meeting is at 2100 UTC, as a reminder. Thank you! 
Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings [2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] [neutron] Updates to the weekly Neutron meeting
Kyle, I missed that one! Thanks for the pointer. I'll get this started today. Carl On Mon, Oct 13, 2014 at 1:47 PM, Kyle Mestery mest...@mestery.com wrote: Thanks Carl. I agree, we need to collapse the pages so we can get a status readout on the wiki. Keep in mind we also have this page [1], which is much fresher than the Teams page. I'd prefer to see the Teams page either marked as deprecated or updated to reflect the content in [1], with even more changes made. If you want to take this on, that would be great! Otherwise I'll look to update this tomorrow. Thanks, Kyle [1] https://wiki.openstack.org/wiki/NeutronSubTeams On Mon, Oct 13, 2014 at 2:41 PM, Carl Baldwin c...@ecbaldwin.net wrote: Kyle, This works for me. My only comment is that linking sub team pages from the Neutron meeting page served a dual purpose. It attached it to the agenda -- which is now deprecated -- and it served as sort of an anchor for the sub team into the Neutron team on the wiki. At least for the L3 sub team, that has been the case. Maybe we should freshen up the old Neutron teams page [1], which looks to be long out of date. I think it probably got stale because of the dual purpose of the meeting page. Carl [1] https://wiki.openstack.org/wiki/Neutron/Teams On Mon, Oct 13, 2014 at 11:49 AM, Kyle Mestery mest...@mestery.com wrote: As part of streamlining how we work as a team in Neutron, and to make better use of our weekly meeting, I'm changing the format of the weekly meeting. This involves moving the meeting from being a sort of status report tool for our plethora of sub-teams into more of an On-Demand agenda [1]. We'll leave bugs and docs as standing items for now, but the rest of the agenda will be generated dynamically. I want to encourage anyone to put agenda items down [2] with your name that we can use to discuss during our meeting time each week. Also, keep in mind we're still rotating meetings weekly to accommodate timezones. Today's meeting is at 2100 UTC, as a reminder. Thank you! 
Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings [2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
Filed https://bugs.launchpad.net/sahara/+bug/1380725 for sahara stuff. Andrew. On Mon, Oct 13, 2014 at 6:20 AM, Doug Hellmann d...@doughellmann.com wrote: I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Most of the porting work involves adding dependencies and updating import statements, but check the documentation for each library for any special guidance. Also, because the incubator is updated to use our released libraries, you may end up having to port to several libraries *and* sync a copy of any remaining incubator dependencies that have not graduated all in a single patch in order to have a working copy. I suggest giving your review teams a heads-up about what to expect to avoid -2 for the scope of the patch. 
Doug [1] https://review.openstack.org/#/c/127039/

openstack-dev/heat-cfnclient: exception, gettextutils, importutils, jsonutils, timeutils
openstack/ceilometer: gettextutils, log_handler
openstack/python-troveclient: strutils
openstack/melange: exception, extensions, utils, wsgi, setup
openstack/tuskar: config.generator, db, db.sqlalchemy, excutils, gettextutils, importutils, jsonutils, strutils, timeutils
openstack/sahara-dashboard: importutils
openstack/barbican: gettextutils, jsonutils, timeutils, importutils
openstack/kite: db, db.sqlalchemy, jsonutils, timeutils
openstack/python-ironicclient: gettextutils, importutils, strutils
openstack/python-melangeclient: setup
openstack/neutron: excutils, gettextutils, importutils, jsonutils, middleware.base, middleware.catch_errors, middleware.correlation_id, middleware.debug, middleware.request_id, middleware.sizelimit, network_utils, strutils, timeutils
openstack/tempest: importlib
openstack/manila: excutils, gettextutils, importutils, jsonutils, network_utils, strutils, timeutils
openstack/keystone: gettextutils
openstack/python-glanceclient: importutils, network_utils, strutils
openstack/python-keystoneclient: jsonutils, strutils, timeutils
openstack/zaqar: config.generator, excutils, gettextutils, importutils, jsonutils, setup, strutils, timeutils, version
openstack/python-novaclient: gettextutils
openstack/ironic: config.generator, gettextutils
openstack/cinder: config.generator, excutils, gettextutils, importutils, jsonutils, log_handler, network_utils, strutils, timeutils, units
openstack/python-manilaclient: gettextutils, importutils, jsonutils, strutils, timeutils
openstack/trove: exception, excutils, gettextutils, importutils, iniparser, jsonutils, network_utils, notifier, pastedeploy, rpc, strutils
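Most of the porting is mechanical import rewriting. As a rough illustration only (the old-to-new mapping below is an assumption for a couple of common modules; check each graduated library's own documentation for the authoritative home of a module before porting), a helper like this captures the shape of the change:

```python
import re

# Hypothetical helper sketching the mechanical part of a port: rewriting
# incubator imports to the graduated libraries. The mapping is illustrative,
# not an exhaustive or guaranteed-correct table.
REPLACEMENTS = [
    (r"from \w+\.openstack\.common import jsonutils",
     "from oslo.serialization import jsonutils"),
    (r"from \w+\.openstack\.common import (timeutils|strutils|importutils|excutils)",
     r"from oslo.utils import \1"),
]

def port_imports(source):
    """Apply each incubator-to-library rewrite to a source string."""
    for pattern, replacement in REPLACEMENTS:
        source = re.sub(pattern, replacement, source)
    return source

print(port_imports("from cinder.openstack.common import timeutils"))
# from oslo.utils import timeutils
```

As Doug notes, the real work is more than this: dependencies must be added to requirements, and remaining incubator modules may need a sync in the same patch.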
[openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites
Description of the problem: Without attempting an action on an endpoint with a current scoped token, it is impossible to know what actions are available to a user. Horizon makes some attempts to solve this issue by sourcing all of the policy files from all of the services to determine what a user can accomplish with a given role. This is highly inefficient, as it requires processing the various policy.json files for each request in multiple places, and presents a mechanism that is not really scalable for understanding what a user can do with the current authorization. Horizon may not be the only service that (in the long term) would want to know what actions a token can take. I would like to start a discussion on how we should improve our policy implementation (OpenStack wide) to help make it easier to know what is possible with a current authorization context (Keystone token). The key feature should be that whatever the implementation is, it doesn’t require another round-trip to a third party service to “enforce” the policy, which avoids another scaling point like UUID Keystone token validation. Here are a couple of ideas that we’ve discussed over the last few development cycles (and none of this changes the requirements to manage scope of authorization, e.g. project, domain, trust, ...): 1. Keystone is the holder of all policy files. Each service gets its policy file from Keystone, and it is possible to validate the policy (by any other service) against a token, provided they get the relevant policy file from the authoritative source (Keystone). Pros: This is nearly completely compatible with the current policy system. The biggest change is that policy files are published to Keystone instead of to a local file on disk. This also could open the door to having Keystone build “stacked” policies (user/project/domain/endpoint/service specific) where the deployer could layer policy definitions (layering would allow for stricter enforcement at more specific levels, e.g. 
users from project X can’t terminate any VMs). Cons: This doesn’t ease up the processing requirement or the need to hold (potentially) a significant number of policy files for each service that wants to evaluate what actions a token can do. 2. Each enforcement point in a service is turned into an attribute/role, and the token contains all of the information on what a user can do (effectively shipping the entire policy information with the token). Pros: It is trivial to know what a token provides access to: the token would contain something like `{“nova”: [“terminate”, “boot”], “keystone”: [“create_user”, “update_user”], ...}`. It would easily be possible to grant the glance “get image” and nova “boot” capabilities directly, instead of needing to know which roles in both glance's and nova's policy.json are required for booting a new VM. Cons: This would likely require a central registry of all the actions that could be taken (something akin to an IANA port list). Without a grouping to apply these authorizations to a user (e.g. keystone_admin would convey “create_project, delete_project, update_project, create_user, delete_user, update_user, ...”) this becomes unwieldy. The “roles” or “attributes” that convey capabilities are also relatively static instead of highly dynamic as they are today. This could also contribute to token-bloat. I’m sure there are more ways to approach this problem, so please don’t hesitate to add to the conversation and expand on the options. The above options are by no means exhaustive nor fully explored. This change may not even be something to be expected within the current development cycle (Kilo) or even the next, but this is a conversation that needs to be started, as it will help make OpenStack better. Thanks, Morgan — Morgan Fainberg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
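Option 2's capability check is cheap to evaluate locally, with no round-trip to Keystone. A minimal sketch of the idea, using the example token structure from the text (the `can()` helper and the "capabilities" field name are illustrative, not an existing Keystone API):

```python
# Sketch of option 2: a token carrying per-service capability lists.
# The structure mirrors the example in the discussion above.
token = {
    "capabilities": {
        "nova": ["terminate", "boot"],
        "keystone": ["create_user", "update_user"],
    }
}

def can(token, service, action):
    """Local, O(1)-ish capability check -- no third-party round-trip."""
    return action in token["capabilities"].get(service, [])

print(can(token, "nova", "boot"))         # True
print(can(token, "glance", "get_image"))  # False
```

The cons listed above show up directly in this sketch: every enforceable action needs a globally agreed name, and the token grows with the number of capabilities granted.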
Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects
+1 Best Regards, Sergii Golovatiuk On 13 Oct 2014, at 18:55, Mike Scherbakov mscherba...@mirantis.com wrote: +1 On Mon, Oct 13, 2014 at 6:49 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: +1 2014-10-13 20:53 GMT+07:00 Evgeniy L e...@mirantis.com: Hi everyone! I would like to propose Igor Kalnitsky as a core reviewer on the Fuel-web team. Igor has been working on OpenStack patching, nailgun, and fuel upgrade, and has provided a lot of good reviews [1]. In addition, he's also very active on IRC and the mailing list. Can the other core team members please reply with your votes on whether you agree or disagree. Thanks! [1] http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -- Mike Scherbakov #mihgen
[openstack-dev] [Nova] Automatic evacuate
[switching to openstack-dev] Has anyone automated nova evacuate so that VMs on a failed compute host using shared storage are automatically moved onto a new host or is manually entering *nova compute instance host* required in all cases? If it's manual only or requires custom Heat/Ceilometer templates, how hard would it be to enable automatic evacuation within Nova? i.e. (within /etc/nova/nova.conf) auto_evac = true Or is this possible now and I've simply not run across it? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum cl...@fewbar.com wrote: So, what you're looking for is basically the same old IT, but with an API. I get that. For me, the point of this cloud thing is so that server operators can make _reasonable_ guarantees, and application operators can make use of them in an automated fashion. If you start guaranteeing 4 and 5 nines for single VMs, you're right back in the boat of spending a lot on server infrastructure even if your users could live without it sometimes. Compute hosts are going to go down. Networks are going to partition. It is not actually expensive to deal with that at the application layer. In fact when you know your business rules, you'll do a better job at doing this efficiently than some blanket replicate all the things layer might. I know, some clouds are just new ways to chop up these fancy 40 core megaservers that everyone is shipping. I'm sure OpenStack can do it, but I'm saying, I don't think OpenStack _should_ do it. Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700: Generally speaking that's true when you have full control over how you deploy applications as a consumer. As a provider however, cloud resiliency is king and it's generally frowned upon to associate instances directly to the underlying physical hardware for any reason. 
It's good when instances can come and go as needed, but in a production context, a failed compute host shouldn't take down every instance hosted on it. Otherwise there is no real abstraction going on and the cloud loses immense value. On Sep 26, 2014 4:15 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700: Hello fellow stackers. I'm looking for discussions/plans re VM continuity. I.e. Protection for instances using ephemeral storage against host failures or auto-failover capability for instances on hosts where the host suffers from an attitude problem? I know fail-overs are supported and I'm quite certain auto-fail-overs are possible in the event of a host failure (hosting instances not using shared storage). I just can't find where this has been addressed/discussed. Someone help a brother out? ; ) I'm sure some of that is possible, but it's a cloud, so why not do things the cloud way? Spin up redundant bits in disparate availability zones. Replicate only what must be replicated. Use volumes for DR only when replication would be too expensive. Instances are cattle, not pets. Keep them alive just long enough to make your profit. ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openst...@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] allow-mac-to-be-updated
Hi, Is anyone working on this blueprint[1]? I have an implementation [2] and would like to write up a spec. Thanks, Chuck [1] https://blueprints.launchpad.net/neutron/+spec/allow-mac-to-be-updated [2] https://review.openstack.org/#/c/112129/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
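The blueprint's core change is making `mac_address` mutable on an existing port via the standard port-update call. A minimal sketch of what that request might look like against the Neutron v2.0 API, assuming the attribute simply becomes updatable; the port ID and MAC below are made up for illustration:

```python
# Hedged sketch: building the PUT request a MAC update would use once the
# blueprint lands. Only the request construction is shown; sending it would
# go through python-neutronclient or plain HTTP against the Neutron endpoint.
import json


def build_port_mac_update(port_id, new_mac):
    """Build the PUT path and JSON body for updating a port's MAC address."""
    path = "/v2.0/ports/%s" % port_id
    body = json.dumps({"port": {"mac_address": new_mac}})
    return path, body


# Hypothetical port ID and MAC, not taken from the thread.
path, body = build_port_mac_update("PORT_ID", "fa:16:3e:aa:bb:cc")
```

The interesting review questions are around semantics rather than plumbing, e.g. whether an update is allowed while the port is bound to a running instance.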
Re: [openstack-dev] [Nova] Automatic evacuate
Looks like this was proposed and denied to be part of Nova for some reason last year. Thoughts on why and is the reasoning (whatever it was) still applicable? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Mon, Oct 13, 2014 at 1:26 PM, Adam Lawson alaw...@aqorn.com wrote: [switching to openstack-dev] Has anyone automated nova evacuate so that VMs on a failed compute host using shared storage are automatically moved onto a new host or is manually entering *nova compute instance host* required in all cases? If it's manual only or requires custom Heat/Ceilometer templates, how hard would it be to enable automatic evacuation within Nova? i.e. (within /etc/nova/nova.conf) auto_evac = true Or is this possible now and I've simply not run across it? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum cl...@fewbar.com wrote: So, what you're looking for is basically the same old IT, but with an API. I get that. For me, the point of this cloud thing is so that server operators can make _reasonable_ guarantees, and application operators can make use of them in an automated fashion. If you start guaranteeing 4 and 5 nines for single VMs, you're right back in the boat of spending a lot on server infrastructure even if your users could live without it sometimes. Compute hosts are going to go down. Networks are going to partition. It is not actually expensive to deal with that at the application layer. In fact when you know your business rules, you'll do a better job at doing this efficiently than some blanket replicate all the things layer might. I know, some clouds are just new ways to chop up these fancy 40 core megaservers that everyone is shipping. 
I'm sure OpenStack can do it, but I'm saying, I don't think OpenStack _should_ do it. Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700: Generally speaking that's true when you have full control over how you deploy applications as a consumer. As a provider however, cloud resiliency is king and it's generally frowned upon to associate instances directly to the underlying physical hardware for any reason. It's good when instances can come and go as needed, but in a production context, a failed compute host shouldn't take down every instance hosted on it. Otherwise there is no real abstraction going on and the cloud loses immense value. On Sep 26, 2014 4:15 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700: Hello fellow stackers. I'm looking for discussions/plans re VM continuity. I.e. Protection for instances using ephemeral storage against host failures or auto-failover capability for instances on hosts where the host suffers from an attitude problem? I know fail-overs are supported and I'm quite certain auto-fail-overs are possible in the event of a host failure (hosting instances not using shared storage). I just can't find where this has been addressed/discussed. Someone help a brother out? ; ) I'm sure some of that is possible, but it's a cloud, so why not do things the cloud way? Spin up redundant bits in disparate availability zones. Replicate only what must be replicated. Use volumes for DR only when replication would be too expensive. Instances are cattle, not pets. Keep them alive just long enough to make your profit. ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openst...@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites
This is a hot topic for some brainstorms here, since I started to hack a bit with OpenStack =) Regarding the given options, the second one looks better IMO, and we could avoid some of the token bloating issues by having a parameter where the service specifies what the set of important actions is (the parameter could be the service name), although we have some services with a huge set of possible operations, like Nova. But there are also some points that seem important to keep in mind, given that we have several cases for each action, not just the action itself. For example: update_project. A project_admin can update its own project but not another project. And I don't see any other option to check this than having two different rules: update_own_project, update_any_project, and the rules would be checked against the project_id in the token scope. On Mon, Oct 13, 2014 at 5:17 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: Description of the problem: Without attempting an action on an endpoint with a current scoped token, it is impossible to know what actions are available to a user. Horizon makes some attempts to solve this issue by sourcing all of the policy files from all of the services to determine what a user can accomplish with a given role. This is highly inefficient as it requires processing the various policy.json files for each request in multiple places and presents a mechanism that is not really scalable to understand what a user can do with the current authorization. Horizon may not be the only service that (in the long term) would want to know what actions a token can take. I would like to start a discussion on how we should improve our policy implementation (OpenStack wide) to help make it easier to know what is possible with a current authorization context (Keystone token). 
The key feature should be that whatever the implementation is, it doesn’t require another round-trip to a third party service to “enforce” the policy, which avoids another scaling point like UUID Keystone token validation. Here are a couple of ideas that we’ve discussed over the last few development cycles (and none of this changes the requirements to manage scope of authorization, e.g. project, domain, trust, ...): 1. Keystone is the holder of all policy files. Each service gets its policy file from Keystone and it is possible to validate the policy (by any other service) against a token provided they get the relevant policy file from the authoritative source (Keystone). Pros: This is nearly completely compatible with the current policy system. The biggest change is that policy files are published to Keystone instead of to a local file on disk. This also could open the door to having keystone build “stacked” policies (user/project/domain/endpoint/service specific) where the deployer could layer policy definitions (layering would allow for stricter enforcement at more specific levels, e.g. users from project X can’t terminate any VMs). Cons: This doesn’t ease up the processing requirement or the need to hold (potentially) a significant number of policy files for each service that wants to evaluate what actions a token can do. 2. Each enforcement point in a service is turned into an attribute/role, and the token contains all of the information on what a user can do (effectively shipping the entire policy information with the token). Pros: It is trivial to know what a token provides access to: the token would contain something like `{“nova”: [“terminate”, “boot”], “keystone”: [“create_user”, “update_user”], ...}`. It would be easy to grant a user glance “get image” and nova “boot” capabilities directly, instead of needing to know which roles in the policy.json files of both glance and nova are needed to boot a new VM. 
Cons: This would likely require a central registry of all the actions that could be taken (something akin to an IANA port list). Without a grouping to apply these authorizations to a user (e.g. keystone_admin would convey “create_project, delete_project, update_project, create_user, delete_user, update_user, ...”) this becomes unwieldy. The “roles” or “attributes” that convey capabilities are also relatively static instead of highly dynamic as they are today. This could also contribute to token-bloat. I’m sure there are more ways to approach this problem, so please don’t hesitate to add to the conversation and expand on the options. The above options are by no means exhaustive nor fully explored. This change may not even be something to be expected within the current development cycle (Kilo) or even the next, but this is a conversation that needs to be started as it will help make OpenStack better. Thanks, Morgan — Morgan Fainberg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Rodrigo Duarte Sousa Software Engineer
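Option 2's capability-carrying token can be sketched in a few lines. The token layout and helper below are illustrative assumptions, not an agreed design; the point is that an endpoint can answer "can this token do X?" locally, without another round-trip to Keystone:

```python
# Minimal sketch of the enforcement check option 2 implies: the token itself
# carries a per-service capability map, so enforcement is a local lookup.
# Field names and the function name are invented for illustration.

def token_allows(token, service, action):
    """Return True if the token's capability map grants `action` on `service`."""
    return action in token.get("capabilities", {}).get(service, [])


token = {
    "user_id": "1234",
    "project_id": "abcd",
    "capabilities": {
        "nova": ["terminate", "boot"],
        "keystone": ["create_user", "update_user"],
    },
}

assert token_allows(token, "nova", "boot")
assert not token_allows(token, "glance", "get_image")
```

The token-bloat concern is visible even here: every enforcement point in every service a user can touch would add an entry to that map unless some grouping scheme collapses them.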
Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder
The HOT Builder code is available now at https://github.com/rackerlabs/hotbuilder although at the moment it is non-functional because it has not been ported over to Horizon. Drago From: Angus Salkeld asalk...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, September 30, 2014 at 2:42 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder On Fri, Sep 26, 2014 at 7:04 AM, Steve Baker sba...@redhat.com wrote: On 26/09/14 05:36, Timur Sufiev wrote: Hello, folks! Following Drago Rosson's introduction of Barricade.js and our discussion in the ML about the possibility of using it in Merlin [1], I've decided to change the plans for PoC: now the goal for Merlin's PoC is to implement a Mistral Workbook builder on top of Barricade.js. The reasons for that are: * To better understand Barricade.js potential as a data abstraction layer in Merlin, I need to learn much more about its possibilities and limitations than simple examining/reviewing of its source code allows. The best way to do this is by building upon it. * It's becoming too crowded in the HOT builder's sandbox - doing the same work as Drago currently does [2] seems like a waste of resources to me (especially in case he'll opensource his HOT builder someday just as he did with Barricade.js). Drago, it would be to everyone's benefit if your HOT builder efforts were developed on a public git repository, no matter how functional it is currently. Is there any chance you can publish what you're working on to https://github.com/dragorosson or rackerlabs for a start? Drago, any news of this? 
This would prevent a lot of duplication of work and later merging of code. The sooner this is done the better. -Angus * Why Mistral and not Murano or Solum? Because Mistral's YAML templates have a simpler structure than Murano's and are better defined at the moment than Solum's. There are already some commits in https://github.com/stackforge/merlin and since the client-side app doesn't talk to the Mistral server yet, it is pretty easy to run it (just follow the instructions in README.md) and then see it in a browser at http://localhost:8080. The UI is not great yet, as the current focus is data abstraction layer exploration, i.e. how to exploit Barricade.js capabilities to reflect all relations between Mistral's entities. I hope to finish the minimal set of features in a few weeks - and will certainly announce it in the ML. [1] http://lists.openstack.org/pipermail/openstack-dev/2014-September/044591.html [2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044186.html ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules
Thanks for putting this together Doug! I've opened https://bugs.launchpad.net/trove/+bug/1380789 to track the changes that are needed here for Trove. Cheers, Nikhil Doug Hellmann writes: I’ve put together a little script to generate a report of the projects using modules that used to be in the oslo-incubator but that have moved to libraries [1]. These modules have been deleted, and now only exist in the stable/juno branch of the incubator. We do not anticipate back-porting fixes except for serious security concerns, so it is important to update all projects to use the libraries where the modules now live. Liaisons, please look through the list below and file bugs against your project for any changes needed to move to the new libraries and start working on the updates. We need to prioritize this work for early in Kilo to ensure that your projects do not fall further out of step. K-1 is the ideal target, with K-2 as an absolute latest date. I anticipate having several more libraries by the time the K-2 milestone arrives. Most of the porting work involves adding dependencies and updating import statements, but check the documentation for each library for any special guidance. Also, because the incubator is updated to use our released libraries, you may end up having to port to several libraries *and* sync a copy of any remaining incubator dependencies that have not graduated all in a single patch in order to have a working copy. I suggest giving your review teams a heads-up about what to expect to avoid -2 for the scope of the patch. 
Doug [1] https://review.openstack.org/#/c/127039/ openstack-dev/heat-cfnclient: exception openstack-dev/heat-cfnclient: gettextutils openstack-dev/heat-cfnclient: importutils openstack-dev/heat-cfnclient: jsonutils openstack-dev/heat-cfnclient: timeutils openstack/ceilometer: gettextutils openstack/ceilometer: log_handler openstack/python-troveclient: strutils openstack/melange: exception openstack/melange: extensions openstack/melange: utils openstack/melange: wsgi openstack/melange: setup openstack/tuskar: config.generator openstack/tuskar: db openstack/tuskar: db.sqlalchemy openstack/tuskar: excutils openstack/tuskar: gettextutils openstack/tuskar: importutils openstack/tuskar: jsonutils openstack/tuskar: strutils openstack/tuskar: timeutils openstack/sahara-dashboard: importutils openstack/barbican: gettextutils openstack/barbican: jsonutils openstack/barbican: timeutils openstack/barbican: importutils openstack/kite: db openstack/kite: db.sqlalchemy openstack/kite: jsonutils openstack/kite: timeutils openstack/python-ironicclient: gettextutils openstack/python-ironicclient: importutils openstack/python-ironicclient: strutils openstack/python-melangeclient: setup openstack/neutron: excutils openstack/neutron: gettextutils openstack/neutron: importutils openstack/neutron: jsonutils openstack/neutron: middleware.base openstack/neutron: middleware.catch_errors openstack/neutron: middleware.correlation_id openstack/neutron: middleware.debug openstack/neutron: middleware.request_id openstack/neutron: middleware.sizelimit openstack/neutron: network_utils openstack/neutron: strutils openstack/neutron: timeutils openstack/tempest: importlib openstack/manila: excutils openstack/manila: gettextutils openstack/manila: importutils openstack/manila: jsonutils openstack/manila: network_utils openstack/manila: strutils openstack/manila: timeutils openstack/keystone: gettextutils openstack/python-glanceclient: importutils openstack/python-glanceclient: network_utils 
openstack/python-glanceclient: strutils openstack/python-keystoneclient: jsonutils openstack/python-keystoneclient: strutils openstack/python-keystoneclient: timeutils openstack/zaqar: config.generator openstack/zaqar: excutils openstack/zaqar: gettextutils openstack/zaqar: importutils openstack/zaqar: jsonutils openstack/zaqar: setup openstack/zaqar: strutils openstack/zaqar: timeutils openstack/zaqar: version openstack/python-novaclient: gettextutils openstack/ironic: config.generator openstack/ironic: gettextutils openstack/cinder: config.generator openstack/cinder: excutils openstack/cinder: gettextutils openstack/cinder: importutils openstack/cinder: jsonutils openstack/cinder: log_handler openstack/cinder: network_utils openstack/cinder: strutils openstack/cinder: timeutils openstack/cinder: units openstack/python-manilaclient: gettextutils openstack/python-manilaclient: importutils openstack/python-manilaclient: jsonutils openstack/python-manilaclient: strutils openstack/python-manilaclient: timeutils openstack/trove: exception openstack/trove: excutils openstack/trove: gettextutils openstack/trove: importutils openstack/trove: iniparser openstack/trove: jsonutils openstack/trove: network_utils openstack/trove: notifier openstack/trove: pastedeploy
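For most projects the bulk of this porting work is mechanical import rewriting. A rough sketch of the substitution involved, using a small mapping that reflects my reading of where these modules graduated around Juno (treat the mapping as an assumption and confirm against each library's documentation before relying on it):

```python
# Illustrative sketch of the import rewrite: incubator copies living under
# <project>.openstack.common move to the graduated oslo libraries.
# This mapping is a subset, based on the Juno-era library layout.
GRADUATED = {
    "jsonutils": "oslo.serialization",
    "timeutils": "oslo.utils",
    "strutils": "oslo.utils",
    "importutils": "oslo.utils",
    "excutils": "oslo.utils",
}


def port_import(line, project="trove"):
    """Rewrite one incubator import line to its graduated-library form."""
    prefix = "from %s.openstack.common import " % project
    if line.startswith(prefix):
        module = line[len(prefix):].strip()
        if module in GRADUATED:
            return "from %s import %s" % (GRADUATED[module], module)
    return line  # not an incubator import, or not graduated yet: leave as-is


print(port_import("from trove.openstack.common import jsonutils"))
# -> from oslo.serialization import jsonutils
```

As Doug notes, the rewrite alone is rarely the whole patch: new dependencies must be added and any remaining incubator modules re-synced in the same change to keep the tree working.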
Re: [openstack-dev] [Nova] Automatic evacuate
On Mon, Oct 13, 2014 at 1:32 PM, Adam Lawson alaw...@aqorn.com wrote: Looks like this was proposed and denied to be part of Nova for some reason last year. Thoughts on why and is the reasoning (whatever it was) still applicable? Link? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Mon, Oct 13, 2014 at 1:26 PM, Adam Lawson alaw...@aqorn.com wrote: [switching to openstack-dev] Has anyone automated nova evacuate so that VM's on a failed compute host using shared storage are automatically moved onto a new host or is manually entering *nova compute instance host* required in all cases? If it's manual only or require custom Heat/Ceilometer templates, how hard would it be to enable automatic evacuation within Novs? i.e. (within /etc/nova/nova.conf) auto_evac = true Or is this possible now and I've simply not run across it? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum cl...@fewbar.com wrote: So, what you're looking for is basically the same old IT, but with an API. I get that. For me, the point of this cloud thing is so that server operators can make _reasonable_ guarantees, and application operators can make use of them in an automated fashion. If you start guaranteeing 4 and 5 nines for single VM's, you're right back in the boat of spending a lot on server infrastructure even if your users could live without it sometimes. Compute hosts are going to go down. Networks are going to partition. It is not actually expensive to deal with that at the application layer. In fact when you know your business rules, you'll do a better job at doing this efficiently than some blanket replicate all the things layer might. 
I know, some clouds are just new ways to chop up these fancy 40 core megaservers that everyone is shipping. I'm sure OpenStack can do it, but I'm saying, I don't think OpenStack _should_ do it. Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700: Generally speaking that's true when you have full control over how you deploy applications as a consumer. As a provider however, cloud resiliency is king and it's generally frowned upon to associate instances directly to the underlying physical hardware for any reason. It's good when instances can come and go as needed, but in a production context, a failed compute host shouldn't take down every instance hosted on it. Otherwise there is no real abstraction going on and the cloud loses immense value. On Sep 26, 2014 4:15 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700: Hello fellow stackers. I'm looking for discussions/plans re VM continuity. I.e. Protection for instances using ephemeral storage against host failures or auto-failover capability for instances on hosts where the host suffers from an attitude problem? I know fail-overs are supported and I'm quite certain auto-fail-overs are possible in the event of a host failure (hosting instances not using shared storage). I just can't find where this has been addressed/discussed. Someone help a brother out? ; ) I'm sure some of that is possible, but it's a cloud, so why not do things the cloud way? Spin up redundant bits in disparate availability zones. Replicate only what must be replicated. Use volumes for DR only when replication would be too expensive. Instances are cattle, not pets. Keep them alive just long enough to make your profit. 
___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openst...@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Automatic evacuate
I think Adam is talking about this bp: https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically For now, we're using Nagios probe/event to trigger the Nova evacuate command, but I think it's possible to do that in Nova if we can find a good way to define the trigger policy. On 14/10/14 10:15, Joe Gordon wrote: On Mon, Oct 13, 2014 at 1:32 PM, Adam Lawson alaw...@aqorn.com wrote: Looks like this was proposed and denied to be part of Nova for some reason last year. Thoughts on why and is the reasoning (whatever it was) still applicable? Link? */ Adam Lawson/* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Mon, Oct 13, 2014 at 1:26 PM, Adam Lawson alaw...@aqorn.com wrote: [switching to openstack-dev] Has anyone automated nova evacuate so that VMs on a failed compute host using shared storage are automatically moved onto a new host or is manually entering /nova compute instance host/ required in all cases? If it's manual only or requires custom Heat/Ceilometer templates, how hard would it be to enable automatic evacuation within Nova? i.e. (within /etc/nova/nova.conf) auto_evac = true Or is this possible now and I've simply not run across it? */ Adam Lawson/* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum cl...@fewbar.com wrote: So, what you're looking for is basically the same old IT, but with an API. I get that. For me, the point of this cloud thing is so that server operators can make _reasonable_ guarantees, and application operators can make use of them in an automated fashion. 
If you start guaranteeing 4 and 5 nines for single VM's, you're right back in the boat of spending a lot on server infrastructure even if your users could live without it sometimes. Compute hosts are going to go down. Networks are going to partition. It is not actually expensive to deal with that at the application layer. In fact when you know your business rules, you'll do a better job at doing this efficiently than some blanket replicate all the things layer might. I know, some clouds are just new ways to chop up these fancy 40 core megaservers that everyone is shipping. I'm sure OpenStack can do it, but I'm saying, I don't think OpenStack _should_ do it. Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700: Generally speaking that's true when you have full control over how you deploy applications as a consumer. As a provider however, cloud resiliency is king and it's generally frowned upon to associate instances directly to the underlying physical hardware for any reason. It's good when instances can come and go as needed, but in a production context, a failed compute host shouldn't take down every instance hosted on it. Otherwise there is no real abstraction going on and the cloud loses immense value. On Sep 26, 2014 4:15 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700: Hello fellow stackers. I'm looking for discussions/plans re VM continuity. I.e. Protection for instances using ephemeral storage against host failures or auto-failover capability for instances on hosts where the host suffers from an attitude problem? I know fail-overs are supported and I'm quite certain auto-fail-overs are possible in the event of a host failure (hosting instances not using shared storage). I just can't find where this has been
Re: [openstack-dev] Treating notifications as a contract
On Tue, 7 Oct 2014, Sandy Walsh wrote: Haven't had any time to get anything written down (pressing deadlines with StackTach.v3) but open to suggestions. Perhaps we should just add something to the oslo.messaging etherpad to find time at the summit to talk about it? Have you got a link for that? Another topic that I think is at least somewhat related to the standardizing/contractualizing notifications topic is deprecating polling (to get metrics/samples). In the ceilometer side of the telemetry universe, if samples can't be gathered via notifications then somebody writes a polling plugin or agent and sticks it in the ceilometer tree where it is run as either an independent agent (c.f. the new ipmi-agent) or a plugin under the compute-agent or a plugin under the central-agent. This is problematic in a few ways (at least to me): * Those plugins distract from the potential leanness of a core ceilometer system. * The meters created by those plugins are produced for ceilometer rather than for telemetry. Yes, of course you can re-publish the samples in all sorts of ways. * The services aren't owning the form and publication of information about themselves. There are solid arguments against each of these problems individually but as a set I find them saying services should make more notifications pretty loud and clear and obviously to make that work we need tidy notifications with good clean semantics. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
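Treating notifications as a contract implies, at minimum, versioned event payloads whose required fields can be checked before publishing. A toy sketch of that idea follows; the event type, field names, and schema shape are invented for illustration, not a proposed standard:

```python
# Toy sketch of "notifications as a contract": each (event_type, version)
# pair has a declared set of required payload fields, and a notification is
# rejected if its payload doesn't honor the contract.
SCHEMAS = {
    ("compute.instance.create.end", "1.0"): {"instance_id", "state", "tenant_id"},
}


def validate_notification(event_type, version, payload):
    """Raise ValueError if the payload breaks the declared contract."""
    required = SCHEMAS.get((event_type, version))
    if required is None:
        raise ValueError("no contract for %s v%s" % (event_type, version))
    missing = required - set(payload)
    if missing:
        raise ValueError("missing fields: %s" % sorted(missing))
    return True


validate_notification(
    "compute.instance.create.end", "1.0",
    {"instance_id": "i-1", "state": "active", "tenant_id": "t-1"},
)
```

The versioning is what lets consumers like StackTach or ceilometer evolve independently of producers: a service bumps the version when the contract changes rather than silently reshaping the payload.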
Re: [openstack-dev] [Nova] Automatic evacuate
Nice timing. I was working on a blog post on this topic. On 10/13/2014 05:40 PM, Fei Long Wang wrote: I think Adam is talking about this bp: https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically For now, we're using Nagios probe/event to trigger the Nova evacuate command, but I think it's possible to do that in Nova if we can find a good way to define the trigger policy. I actually think that's the right way to do it. There are a couple of other things to consider: 1) An ideal solution also includes fencing. When you evacuate, you want to make sure you've fenced the original compute node. You need to make absolutely sure that the same VM can't be running more than once, especially when the disks are backed by shared storage. Because of the fencing requirement, another option would be to use Pacemaker to orchestrate this whole thing. Historically Pacemaker hasn't been suitable to scale to the number of compute nodes an OpenStack deployment might have, but Pacemaker has a new feature called pacemaker_remote [1] that may be suitable. 2) Looking forward, there is a lot of demand for doing this on a per instance basis. We should decide on a best practice for allowing end users to indicate whether they would like their VMs automatically rescued by the infrastructure, or just left down in the case of a failure. It could be as simple as a special tag set on an instance [2]. [1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Remote/ [2] https://review.openstack.org/#/c/127281/ -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
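The fence-before-evacuate ordering Russell describes can be captured in a few lines. The `fence` and `evacuate` callables below are stand-ins for whatever tooling (Pacemaker STONITH, `nova evacuate`, a Nagios event handler, ...) a deployment actually uses; the point is only the ordering guarantee:

```python
# Sketch of the fence-then-evacuate invariant: never start rebuilding
# instances elsewhere until the failed host is confirmed fenced, so the
# same VM can't end up running twice against shared storage.

def recover_host(host, instances, fence, evacuate):
    """Fence `host`, then evacuate its instances; returns the evacuated IDs."""
    if not fence(host):
        raise RuntimeError("refusing to evacuate %s: fencing failed" % host)
    return [evacuate(vm) for vm in instances]


moved = recover_host(
    "compute-1",
    ["vm-a", "vm-b"],
    fence=lambda host: True,   # pretend STONITH confirmed the host is down
    evacuate=lambda vm: vm,    # pretend the rebuild was scheduled
)
assert moved == ["vm-a", "vm-b"]
```

Note the failure path: if fencing cannot be confirmed, the correct behavior is to do nothing, which is exactly why a naive `auto_evac = true` flag without fencing would be dangerous.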
[openstack-dev] [Infra] Meeting Tuesday October 14th at 19:00 UTC
Hi everyone, The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday October 14th, at 19:00 UTC in #openstack-meeting Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items) Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend. -- Elizabeth Krumbach Joseph || Lyz || pleia2 http://www.princessleia.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Automatic evacuate
This is also a use case for Congress, please check use case 3 in the following link. https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit# 2014-10-14 5:59 GMT+08:00 Russell Bryant rbry...@redhat.com: Nice timing. I was working on a blog post on this topic. On 10/13/2014 05:40 PM, Fei Long Wang wrote: I think Adam is talking about this bp: https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically For now, we're using Nagios probe/event to trigger the Nova evacuate command, but I think it's possible to do that in Nova if we can find a good way to define the trigger policy. I actually think that's the right way to do it. There are a couple of other things to consider: 1) An ideal solution also includes fencing. When you evacuate, you want to make sure you've fenced the original compute node. You need to make absolutely sure that the same VM can't be running more than once, especially when the disks are backed by shared storage. Because of the fencing requirement, another option would be to use Pacemaker to orchestrate this whole thing. Historically Pacemaker hasn't been suitable to scale to the number of compute nodes an OpenStack deployment might have, but Pacemaker has a new feature called pacemaker_remote [1] that may be suitable. 2) Looking forward, there is a lot of demand for doing this on a per instance basis. We should decide on a best practice for allowing end users to indicate whether they would like their VMs automatically rescued by the infrastructure, or just left down in the case of a failure. It could be as simple as a special tag set on an instance [2]. 
[1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Remote/ [2] https://review.openstack.org/#/c/127281/ -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] 2 Minute tokens
Too-short token expiration times are one of my concerns, in my current exercise. Working on a replacement for Nova backup. Basically creating backup jobs, writing the jobs into a queue, with a background worker that reads jobs from the queue. Tokens could expire while the jobs are in the queue (not too likely). Tokens could expire during the execution of a backup (which can be very long-running, in some cases). Had not run into mention of trusts before. Is the intent to cover this sort of use-case? (Pulled up what I could find on trusts. Need to chew on this a bit, as it is not immediately clear if this fits.) On Wed, Oct 1, 2014 at 6:53 AM, Adam Young ayo...@redhat.com wrote: On 10/01/2014 04:14 AM, Steven Hardy wrote: On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote: What is keeping us from dropping the (scoped) token duration to 5 minutes? If we could keep their lifetime as short as network skew lets us, we would be able to: Get rid of revocation checking. Get rid of persisted tokens. OK, so that assumes we can move back to PKI tokens, but we're working on that. What are the uses that require long-lived tokens? Can they be replaced with a better mechanism for long-term delegation (OAuth or Keystone trusts) as Heat has done? FWIW I think you're misrepresenting Heat's usage of Trusts here - 2 minute tokens will break Heat just as much as any other service: https://bugs.launchpad.net/heat/+bug/1306294 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045585.html Summary: - Heat uses the request token to process requests (e.g. stack create), which may take an arbitrary amount of time (default timeout one hour). - Some use-cases demand a timeout of more than one hour (specifically big TripleO deployments); heat breaks in these situations atm, and folks are working around it by using long (several hour) token expiry times.
- Trusts are only used for asynchronous signalling, e.g. Ceilometer signals Heat, we switch to a trust scoped token to process the response to the alarm (e.g. launch more instances on behalf of the user for autoscaling) My understanding, ref notes in that bug, is that using Trusts while servicing a request to effectively circumvent token expiry was not legit (or at least yukky and to be avoided). If you think otherwise then please let me know, as that would be the simplest way to fix the bug above (switch to a trust token while doing the long-running create operation). Using trusts to circumvent timeout is OK. There are two issues in tension here: 1. A user needs to be able to maintain control of their own data. 2. We want to limit the attack surface provided by tokens. Since tokens are currently blanket access to the user's data, there really is no lessening of control by using trusts in a wider context. I'd argue that using trusts would actually reduce the capability for abuse, if coupled with short-lived tokens. With long-lived tokens, anyone can reuse the token. With a trust, only the trustee would be able to create a new token. Could we start by identifying the set of operations that are currently timing out due to the one hour token duration and add an optional trustid on those operations? Trusts are not really ideal for this use-case anyway, as it requires the service to have knowledge of the roles to delegate (or that the user provides a pre-created trust), ref bug #1366133. I suppose we could just delegate all the roles we find in the request scope and be done with it, given that bug has been wontfixed.
Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
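The pattern being discussed above (don't stretch token lifetimes to match the job; instead refresh a short-lived token from a delegation when the job actually needs one) can be sketched independently of Keystone. In this sketch, `fetch_trust_token` stands in for a real trust-scoped authentication call (e.g. via python-keystoneclient); the class name, lifetime, and 30-second skew margin are all illustrative assumptions:

```python
import time


class TrustTokenSource:
    """Hands out a currently-valid token, refreshing from a trust
    (or any other delegation mechanism) shortly before expiry."""

    def __init__(self, fetch_trust_token, lifetime=300, clock=time.time):
        self._fetch = fetch_trust_token  # assumed trust-scoped auth call
        self._lifetime = lifetime        # seconds a fetched token lives
        self._clock = clock
        self._token = None
        self._expires = 0.0

    def token(self):
        # Refresh 30s early to tolerate clock skew between hosts.
        if self._token is None or self._clock() > self._expires - 30:
            self._token = self._fetch()
            self._expires = self._clock() + self._lifetime
        return self._token


def run_backup_job(steps, source):
    # Each step gets a token that is valid *now*, no matter how long
    # the job sat in the queue or how long earlier steps took.
    return [step(source.token()) for step in steps]
```

The point of the sketch: the worker never holds a long-lived token at all, which is exactly the trade-off Adam describes (short-lived tokens limit the attack surface; the trust limits who can mint new ones).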
Re: [openstack-dev] [Nova] Automatic evacuate
On 10/13/2014 06:18 PM, Jay Lau wrote: This is also a use case for Congress, please check use case 3 in the following link. https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit# Wow, really? That honestly makes me very worried about the scope of Congress being far too big (so early, and maybe period). -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
On Mon, 13 Oct 2014 10:52:26 -0400 Jay Pipes jaypi...@gmail.com wrote: On 10/10/2014 02:05 AM, Christopher Yeoh wrote: I agree with what you've written on the wiki page. I think our priority needs to be to flesh out https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines so we have something to reference when reviewing specs. At the moment I see that document as something anyone should be able to document a project's API convention even if they conflict with another project for the moment. Once we've got a fair amount of content we can start as a group resolving any conflicts. Agreed that we should be fleshing out the above wiki page. How would you like us to do that? Should we have an etherpad to discuss individual topics? Having multiple people editing the wiki page offering commentary seems a bit chaotic, and I think we would do well to have the Gerrit review process in place to handle proposed guidelines and rules for APIs. See below for specifics on this... Honestly I don't think we have enough content yet to have much of a discussion. I started the wiki page https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines in the hope that people from other projects would start adding conventions that they use in their projects. I think its fine for the moment if its contradictory, we just need to gather what projects currently do (or want to do) in one place so we can start discussing any contradictions. So I'd again encourage anyone interested in APIs from the various projects to just start dumping their project viewpoint in there. Speaking of the wiki page, I wrote it very matter-of-factly. As if this is the way things are. They’re not. The wiki page is just a starting point. If something was missed, add it. If something can be improved, improve it. Let’s try to keep it simple though. One problem with API WG members reviewing spec proposals that affect the API is finding the specs in the first place across many different projects repositories. 
I've said for a while now that I would love to have separate repositories -- ala the Keystone API in the openstack/identity-api repository -- that contains specifications for APIs in a single format (APIBlueprint was suggested at one point, but Swagger 2.0 seems to me to have more upside). I also think it would be ideal to have an openstack/openstack-api repo that would house guidelines and rules that this working group came up with, along with examples of appropriate usage. This repo would function very similar to the openstack/governance [1] repo that the TC uses to flesh out proposals on community, release management, and governance changes. If people are OK with this idea, I will go ahead and create the repo and add the wiki page content as the initial commit, then everyone can simply submit patches to the document(s) using the normal Gerrit process, and we can iterate on these things using the same tools as other repositories. I like the idea of a repo and using Gerrit for discussions to resolve issues. I don't think it works so well when people are wanting to dump lots of information in initially. Unless we agree to just merge anything vaguely reasonable and then resolve the conflicts later when we have a reasonable amount of content. Otherwise stuff will get lost in gerrit history comments and people's updates to the document will overwrite each other. I guess we could also start fleshing out in the repo how we'll work in practice too (eg once the document is stable what process do we have for making changes - two +2's is probably not adequate for something like this). Regards, Chris Best, -jay [1] https://review.openstack.org/#/q/status:open+project:openstack/governance,n,z I invite everyone who chimed in on the original thread [1] that kicked this off to add themselves as a member committed to making the OpenStack APIs better. I’ve Cc’d everyone who asked to be kept in the loop. I already see some cross project summit topics [2] on APIs. 
But frankly, with the number of people committed to this topic, I’d expect there to be more. I encourage everyone to submit more API related sessions with better descriptions and goals about what you want to achieve in those sessions. Yea if there is enough content in the API guidelines then perhaps some time can be spent on working on resolving any conflicts in the document so projects know what direction to head in. Regards, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?
The OpenStack dev and docs team don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. I disagree about the gating part. If a vendor wants to have a link that shows they are compatible with openstack, they should be reporting test results on all patches. A link to a vendor driver in the docs should signify some form of testing that the community is comfortable with. On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi, If the plan is to move ALL existing vendor specific plugins/drivers out-of-tree, then having a place-holder within the OpenStack domain would suffice, where the vendors can list their plugins/drivers along with their documentation as how to install and use etc. The main Openstack Neutron documentation page can explain the plugin framework (ml2 type drivers, mechanism drivers, service plugin and so on) and its purpose/usage etc, then provide a link to refer the currently supported vendor specific plugins/drivers for more details. That way the documentation will be accurate to what is in-tree and limit the documentation of external plugins/drivers to have just a reference link. So it's now the vendor's responsibility to keep their drivers up-to-date and their documentation accurate. The OpenStack dev and docs team don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. The built-in drivers such as LinuxBridge or OpenVSwitch etc can continue to be in-tree and their documentation will be part of main Neutron's docs. So the Neutron is guaranteed to work with built-in plugins/drivers as per the documentation and the user is informed to refer the external vendor plug-in page for additional/specific plugins/drivers.
Thanks, Vad -- On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com wrote: I think you will probably have to wait until after the summit so we can see the direction that will be taken with the rest of the in-tree drivers/plugins. It seems like we are moving towards removing all of them so we would definitely need a solution to documenting out-of-tree drivers as you suggested. However, I think the minimum requirements for having a driver being documented should be third-party testing of Neutron patches. Otherwise the docs will become littered with a bunch of links to drivers/plugins with no indication of what actually works, which ultimately makes Neutron look bad. This is my line of thinking as well, expanded to ultimately makes OpenStack docs look bad -- a perception I want to avoid. Keep the viewpoints coming. We have a crucial balancing act ahead: users need to trust docs and trust the drivers. Ultimately the responsibility for the docs is in the hands of the driver contributors so it seems those should be on a domain name where drivers control publishing and OpenStack docs are not a gatekeeper, quality checker, reviewer, or publisher. We have documented the status of hypervisor drivers on an OpenStack wiki page. [1] To me, that type of list could be maintained on the wiki page better than in the docs themselves. Thoughts? Feelings? More discussion, please. And thank you for the responses so far. Anne [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi Anne, Thanks for your immediate response!... Just to clarify... I have developed and maintaining a Neutron plug-in (ML2 mechanism_driver) since Grizzly and now it is up-to-date with Icehouse. But it was never listed nor part of the main Openstack releases. 
Now I would like to have my plugin mentioned as a supported plugin/mechanism_driver for so and so vendor equipment in docs.openstack.org, but without having the actual plugin code posted in the main Openstack GIT repository. The reason is that I don't have the plan/bandwidth to go through the entire process of new plugin blueprint/development/review/testing etc as required by the Openstack development community. Because this is already developed, tested and released to some customers directly. Now I just want to get it into the official Openstack documentation, so that more people can get this and use it. The plugin package is made available to the public from the Ubuntu repository along with the necessary documentation. So people can directly get it from the Ubuntu repository and use it. All I need is to get listed in docs.openstack.org so that people know that it exists and can be used with any Openstack. Please confirm whether this is something possible?... Thanks again!.. Vad -- On Fri, Oct 10, 2014 at 12:18 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 2:11 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi, How to include a new vendor plug-in (aka mechanism_driver in ML2
Re: [openstack-dev] [Nova] Automatic evacuate
I think Adam is talking about this bp: https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically Correct - yes. Sorry about that. ; ) So it would seem the question is not whether to support auto-evac but how it should be handled. If not handled by Nova, it gets complicated. Asking a user to configure a custom Nagios trigger/action... not sure if we'd recommend that as our definition of ideal. - I can foresee Congress being used to control whether auto-evac is required and what other policies come into play by virtue of an unplanned host removal from service. But that seems like a bit of overkill. - I can foresee Nova/scheduler being used to perform the evac itself. Are they still pushing back? - I can foresee Ceilometer being used to capture service state and define how long a node should be inaccessible before it's considered offline. But that seems a bit out of scope for what Ceilometer was meant to do. I'm all about making this super easy to do a simple task though, at least so the settings are all defined in one place. Nova seems logical but I'm wondering if there is still resistance. So curious; how are these higher-level discussions initiated/facilitated? TC? Adam Lawson AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Mon, Oct 13, 2014 at 3:21 PM, Russell Bryant rbry...@redhat.com wrote: On 10/13/2014 06:18 PM, Jay Lau wrote: This is also a use case for Congress, please check use case 3 in the following link. https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit# Wow, really? That honestly makes me very worried about the scope of Congress being far too big (so early, and maybe period).
-- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [cinder] Any plan on cinder for kilo?
Hi, I noticed nova has already opened blueprints and specs for kilo, so I was wondering: what is the plan for cinder in kilo? If I want to contribute some code (add a new feature) to cinder, is the process the same as for nova (i.e., write a spec first)? Thanks! Yoo ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites
On 10/13/2014 01:17 PM, Morgan Fainberg wrote: Description of the problem: Without attempting an action on an endpoint with a current scoped token, it is impossible to know what actions are available to a user. Horizon makes some attempts to solve this issue by sourcing all of the policy files from all of the services to determine what a user can accomplish with a given role. This is highly inefficient as it requires processing the various policy.json files for each request in multiple places and presents a mechanism that is not really scalable to understand what a user can do with the current authorization. Horizon may not be the only service that (in the long term) would want to know what actions a token can take. This is also extremely useful for being able to actually support more restricted tokens as well. If I as an end user want to request a token that only has the roles required to perform a particular action, I'm going to need to have a way of knowing what those roles are. I think that is one of the main things missing to allow the role-filtered tokens option that I wrote up after the last Summit to be a viable approach: https://blog-nkinder.rhcloud.com/?p=101 I would like to start a discussion on how we should improve our policy implementation (OpenStack wide) to help make it easier to know what is possible with a current authorization context (Keystone token). The key feature should be that whatever the implementation is, it doesn’t require another round-trip to a third party service to “enforce” the policy which avoids another scaling point like UUID Keystone token validation. Here are a couple of ideas that we’ve discussed over the last few development cycles (and none of this changes the requirements to manage scope of authorization, e.g. project, domain, trust, ...): 1. Keystone is the holder of all policy files. 
Each service gets its policy file from Keystone and it is possible to validate the policy (by any other service) against a token provided they get the relevant policy file from the authoritative source (Keystone). Pros: This is nearly completely compatible with the current policy system. The biggest change is that policy files are published to Keystone instead of to a local file on disk. This also could open the door to having keystone build “stacked” policies (user/project/domain/endpoint/service specific) where the deployer could layer policy definitions (layering would allow for stricter enforcement at more specific levels, e.g. users from project X can’t terminate any VMs). I think that there are some additional advantages to centralizing policy storage (not enforcement). - The ability to centralize management of policy would be very nice. If I want to update the policy for all of my compute nodes, I can do it in one location without the need for external configuration management solutions. - We could piggy-back on Keystone's signing capabilities to allow policy to be signed, providing protection against policy tampering on an individual endpoint. Cons: This doesn’t ease up the processing requirement or the need to hold (potentially) a significant number of policy files for each service that wants to evaluate what actions a token can do. Are you thinking of there being a call to keystone that answers “what can I do with token A against endpoint B”? This seems similar in concept to the LDAP get effective rights control. There would definitely be some processing overhead to this, though you could set up multiple keystone instances and replicate the policy to spread out the load. It also might be possible to index the enforcement points by role in an attempt to minimize the processing for this sort of call. 2.
Each enforcement point in a service is turned into an attribute/role, and the token contains all of the information on what a user can do (effectively shipping the entire policy information with the token). Pros: It is trivial to know what a token provides access to: the token would contain something like `{“nova”: [“terminate”, “boot”], “keystone”: [“create_user”, “update_user”], ...}`. It would easily be possible to grant a glance “get image” plus nova “boot” capability, instead of needing to know which roles in the policy.json files of both glance and nova are required for booting a new VM. Cons: This would likely require a central registry of all the actions that could be taken (something akin to an IANA port list). Without a grouping to apply these authorizations to a user (e.g. keystone_admin would convey “create_project, delete_project, update_project, create_user, delete_user, update_user, ...”) this becomes unwieldy. The “roles” or “attributes” that convey capabilities are also relatively static instead of highly dynamic as they are today. This could also contribute to token-bloat. I think we really want to avoid additional token bloat. Thanks, -NGK I’m sure
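To make option 2 concrete: once a token carries explicit capabilities in the form shown above, enforcement reduces to a local dictionary lookup with no round-trip to Keystone. This is a sketch of the idea only, not a proposed token format or a real oslo API:

```python
# Sketch of enforcement against a capability-carrying token of the
# shape discussed above, e.g. {"nova": ["terminate", "boot"], ...}.
# The function name and error type are illustrative assumptions.


def enforce(token_capabilities, service, action):
    """Raise if the token does not convey `action` on `service`."""
    if action not in token_capabilities.get(service, ()):
        raise PermissionError("%s:%s not authorized" % (service, action))


token = {"nova": ["terminate", "boot"], "glance": ["get_image"]}
enforce(token, "nova", "boot")  # allowed, returns without raising
# enforce(token, "keystone", "create_user")  # would raise PermissionError
```

The token-bloat concern is visible directly in the sketch: the token payload grows linearly with every enforcement point the user is granted.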
Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?
On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com wrote: The OpenStack dev and docs team don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. I disagree about the gating part. If a vendor wants to have a link that shows they are compatible with openstack, they should be reporting test results on all patches. A link to a vendor driver in the docs should signify some form of testing that the community is comfortable with. I agree with Kevin here. If you want to play upstream, in whatever form that takes by the end of Kilo, you have to work with the existing third-party requirements and team to take advantage of being a part of things like upstream docs. Thanks, Kyle On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi, If the plan is to move ALL existing vendor specific plugins/drivers out-of-tree, then having a place-holder within the OpenStack domain would suffice, where the vendors can list their plugins/drivers along with their documentation as how to install and use etc. The main Openstack Neutron documentation page can explain the plugin framework (ml2 type drivers, mechanism drivers, service plugin and so on) and its purpose/usage etc, then provide a link to refer the currently supported vendor specific plugins/drivers for more details. That way the documentation will be accurate to what is in-tree and limit the documentation of external plugins/drivers to have just a reference link. So it's now the vendor's responsibility to keep their drivers up-to-date and their documentation accurate. The OpenStack dev and docs team don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. The built-in drivers such as LinuxBridge or OpenVSwitch etc can continue to be in-tree and their documentation will be part of main Neutron's docs.
So the Neutron is guaranteed to work with built-in plugins/drivers as per the documentation and the user is informed to refer the external vendor plug-in page for additional/specific plugins/drivers. Thanks, Vad -- On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com wrote: I think you will probably have to wait until after the summit so we can see the direction that will be taken with the rest of the in-tree drivers/plugins. It seems like we are moving towards removing all of them so we would definitely need a solution to documenting out-of-tree drivers as you suggested. However, I think the minimum requirements for having a driver being documented should be third-party testing of Neutron patches. Otherwise the docs will become littered with a bunch of links to drivers/plugins with no indication of what actually works, which ultimately makes Neutron look bad. This is my line of thinking as well, expanded to ultimately makes OpenStack docs look bad -- a perception I want to avoid. Keep the viewpoints coming. We have a crucial balancing act ahead: users need to trust docs and trust the drivers. Ultimately the responsibility for the docs is in the hands of the driver contributors so it seems those should be on a domain name where drivers control publishing and OpenStack docs are not a gatekeeper, quality checker, reviewer, or publisher. We have documented the status of hypervisor drivers on an OpenStack wiki page. [1] To me, that type of list could be maintained on the wiki page better than in the docs themselves. Thoughts? Feelings? More discussion, please. And thank you for the responses so far. Anne [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi Anne, Thanks for your immediate response!... Just to clarify... 
I have developed and am maintaining a Neutron plug-in (ML2 mechanism_driver) since Grizzly and now it is up-to-date with Icehouse. But it was never listed nor part of the main Openstack releases. Now I would like to have my plugin mentioned as a supported plugin/mechanism_driver for so and so vendor equipment in docs.openstack.org, but without having the actual plugin code posted in the main Openstack GIT repository. The reason is that I don't have the plan/bandwidth to go through the entire process of new plugin blueprint/development/review/testing etc as required by the Openstack development community. Because this is already developed, tested and released to some customers directly. Now I just want to get it into the official Openstack documentation, so that more people can get this and use it. The plugin package is made available to the public from the Ubuntu repository along with the necessary documentation. So people can directly get it from the Ubuntu repository and use it. All I need is to get listed in docs.openstack.org so that people know that it exists and can be used with any
Re: [openstack-dev] [neutron] allow-mac-to-be-updated
On Mon, Oct 13, 2014 at 3:31 PM, Chuck Carlino chuckjcarl...@gmail.com wrote: Hi, Is anyone working on this blueprint[1]? I have an implementation [2] and would like to write up a spec. This was registered by Aaron Rosen back in July of 2013, with no movement since then. I think it's safe to say he's not working on it now. I've moved the LP BP over to you Chuck, please also file a spec in neutron-specs. Thanks, Kyle Thanks, Chuck [1] https://blueprints.launchpad.net/neutron/+spec/allow-mac-to-be-updated [2] https://review.openstack.org/#/c/112129/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron][all] Naming convention for unused variables
(Context: https://review.openstack.org/#/c/117418/) I'm looking for some rough consensus on what naming conventions we want for unused variables in Neutron, and across the larger OpenStack python codebase since there's no reason for Neutron to innovate here. As far as I can see, there are two cases: 1. The I just don't care variable Eg: _, _, filename = path.rpartition('/') In python this is very commonly '_', but this conflicts with the gettext builtin so we should avoid it in OpenStack. Possible candidates include: a. 'x' b. '__' (double-underscore) c. No convention 2. I know it is unused, but the name still serves as documentation Note this turns up as two cases: as a local, and as a function parameter. Eg: out, _err = execute('df', path) Eg: def makefile(self, _mode, _other): return self._buffer I deliberately chose that second example to highlight that the leading-underscore convention collides with its use for private properties. Possible candidates include: a. _foo (leading-underscore, note collides with private properties) b. unused_foo (suggested in the Google python styleguide) c. NOQA_foo (as suggested in c/117418) d. No convention (including not indicating that variables are known-unused) As with all style discussions, everyone feels irrationally attached to their favourite, but the important bit is to be consistent to aid readability (and in this case, also to help the mechanical code checkers). Vote / Discuss / Suggest additional alternatives. -- - Gus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][all] Naming convention for unused variables
On 14 October 2014 14:28, Angus Lees g...@inodes.org wrote: (Context: https://review.openstack.org/#/c/117418/) I'm looking for some rough consensus on what naming conventions we want for unused variables in Neutron, and across the larger OpenStack python codebase since there's no reason for Neutron to innovate here. As far as I can see, there are two cases: 1. The I just don't care variable Eg:_, _, filename = path.rpartition('/') In python this is very commonly '_', but this conflicts with the gettext builtin so we should avoid it in OpenStack. Possible candidates include: a. 'x' b. '__' (double-underscore) c. No convention b works for me, its aesthetically close to the _ for Python itself. 2. I know it is unused, but the name still serves as documentation Note this turns up as two cases: as a local, and as a function parameter. Eg: out, _err = execute('df', path) Eg: def makefile(self, _mode, _other): return self._buffer I deliberately chose that second example to highlight that the leading- underscore convention collides with its use for private properties. Possible candidates include: a. _foo (leading-underscore, note collides with private properties) b. unused_foo (suggested in the Google python styleguide) c. NOQA_foo (as suggested in c/117418) d. No convention (including not indicating that variables are known-unused) I would say a) and don't signal function parameter use via the parameter names: the only reason to have unused parameters is when we are implementing a contract specified elsewhere, and in that case, the parameter name from elsewhere should be used. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
On 10/13/2014 07:11 PM, Christopher Yeoh wrote: On Mon, 13 Oct 2014 10:52:26 -0400 Jay Pipes jaypi...@gmail.com wrote: On 10/10/2014 02:05 AM, Christopher Yeoh wrote: I agree with what you've written on the wiki page. I think our priority needs to be to flesh out https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines so we have something to reference when reviewing specs. At the moment I see that document as something anyone should be able to document a project's API convention even if they conflict with another project for the moment. Once we've got a fair amount of content we can start as a group resolving any conflicts. Agreed that we should be fleshing out the above wiki page. How would you like us to do that? Should we have an etherpad to discuss individual topics? Having multiple people editing the wiki page offering commentary seems a bit chaotic, and I think we would do well to have the Gerrit review process in place to handle proposed guidelines and rules for APIs. See below for specifics on this... Honestly I don't think we have enough content yet to have much of a discussion. I started the wiki page https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines in the hope that people from other projects would start adding conventions that they use in their projects. I think its fine for the moment if its contradictory, we just need to gather what projects currently do (or want to do) in one place so we can start discussing any contradictions. Actually, I don't care all that much about what projects *currently* do. I want this API working group to come up with concrete guidelines and rules/examples of what APIs *should* look like. So I'd again encourage anyone interested in APIs from the various projects to just start dumping their project viewpoint in there. 
I went ahead and just created a repository that contained all the stuff that should be pretty much agreed-to, and a bunch of stub topic documents that can be used to propose specific ideas (and get feedback on) here: http://github.com/jaypipes/openstack-api Hopefully, you can give it a look and get a feel for why I think the code review process will be better than the wiki for controlling the deliverables produced by this team... I like the idea of a repo and using Gerrit for discussions to resolve issues. I don't think it works so well when people are wanting to dump lots of information in initially. Unless we agree to just merge anything vaguely reasonable and then resolve the conflicts later when we have a reasonable amount of content. Otherwise stuff will get lost in gerrit history comments and people's updates to the document will overwrite each other. I guess we could also start fleshing out in the repo how we'll work in practice too (eg once the document is stable what process do we have for making changes - two +2's is probably not adequate for something like this). We can make it work exactly like the openstack/governance repo, where ttx has the only ability to +2/+W approve a patch for merging, and he tallies a majority vote from the TC members, who vote -1 or +1 on a proposed patch. Instead of ttx, though, we can have an API working group lead selected from the set of folks currently listed as committed to the effort? Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][all] Naming convention for unused variables
+1 for readable, well-documented code /bikeshed On Mon, Oct 13, 2014 at 8:28 PM, Angus Lees g...@inodes.org wrote: (Context: https://review.openstack.org/#/c/117418/) I'm looking for some rough consensus on what naming conventions we want for unused variables in Neutron, and across the larger OpenStack python codebase since there's no reason for Neutron to innovate here. As far as I can see, there are two cases: 1. The I just don't care variable Eg:_, _, filename = path.rpartition('/') In python this is very commonly '_', but this conflicts with the gettext builtin so we should avoid it in OpenStack. Possible candidates include: a. 'x' b. '__' (double-underscore) c. No convention 2. I know it is unused, but the name still serves as documentation Note this turns up as two cases: as a local, and as a function parameter. Eg: out, _err = execute('df', path) Eg: def makefile(self, _mode, _other): return self._buffer I deliberately chose that second example to highlight that the leading- underscore convention collides with its use for private properties. Possible candidates include: a. _foo (leading-underscore, note collides with private properties) b. unused_foo (suggested in the Google python styleguide) c. NOQA_foo (as suggested in c/117418) d. No convention (including not indicating that variables are known-unused) As with all style discussions, everyone feels irrationally attached to their favourite, but the important bit is to be consistent to aid readability (and in this case, also to help the mechanical code checkers). Vote / Discuss / Suggest additional alternatives. -- - Gus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [cinder] Any plan on cinder for kilo?
On 08:09 Tue 14 Oct , yoo bright wrote: Hi, I noticed nova has already opened blueprint and specs for kilo, so I was wondering what is the plan on cinder for kilo? If I want to contribute some code(add new feature) for cinder, whether the step is the same with nova (need write specs first)? Thanks! Yoo The specs/kilo directory should now be available to propose specs to. -- Mike Perez ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] Forming the API Working Group
2014-10-13 16:52 GMT+02:00 Jay Pipes jaypi...@gmail.com: On 10/10/2014 02:05 AM, Christopher Yeoh wrote: I agree with what you've written on the wiki page. I think our priority needs to be to flesh out https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines so we have something to reference when reviewing specs. At the moment I see that document as something anyone should be able to document a project's API convention even if they conflict with another project for the moment. Once we've got a fair amount of content we can start as a group resolving any conflicts. Agreed that we should be fleshing out the above wiki page. How would you like us to do that? Should we have an etherpad to discuss individual topics? Having multiple people editing the wiki page offering commentary seems a bit chaotic, and I think we would do well to have the Gerrit review process in place to handle proposed guidelines and rules for APIs. See below for specifics on this... Speaking of the wiki page, I wrote it very matter-of-factly. As if this is the way things are. They’re not. The wiki page is just a starting point. If something was missed, add it. If something can be improved, improve it. Let’s try to keep it simple though. One problem with API WG members reviewing spec proposals that affect the API is finding the specs in the first place across many different projects repositories. I've said for a while now that I would love to have separate repositories -- ala the Keystone API in the openstack/identity-api repository -- that contains specifications for APIs in a single format (APIBlueprint was suggested at one point, but Swagger 2.0 seems to me to have more upside). I also think it would be ideal to have an openstack/openstack-api repo that would house guidelines and rules that this working group came up with, along with examples of appropriate usage. 
This repo would function very similarly to the openstack/governance [1] repo that the TC uses to flesh out proposals on community, release management, and governance changes. If people are OK with this idea, I will go ahead and create the repo and add the wiki page content as the initial commit, then everyone can simply submit patches to the document(s) using the normal Gerrit process, and we can iterate on these things using the same tools as other repositories. Thanks Jay, I much prefer this idea. I was concerned about how we would handle API rule conflicts using a wiki page, e.g. one project prefers CamelCase attribute names while another prefers snake_case. With Gerrit, we can propose each preferred rule as its own commit and discuss it there. That would be a nice way to build consensus on the rules. Thanks Ken ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [kolla] on Dockerfile patterns
I've been reading a bunch of the existing Dockerfiles, and I have two humble requests: 1. It would be good if the interesting code came from python sdist/bdists rather than rpms. This will make it possible to rebuild the containers using code from a private branch or even unsubmitted code, without having to go through a redhat/rpm release process first. I care much less about where the python dependencies come from. Pulling them from rpms rather than pip/pypi seems like a very good idea, given the relative difficulty of caching pypi content, and we also pull in the required C, etc libraries for free. With this in place, I think I could drop my own containers and switch to reusing kolla's for building virtual testing environments. This would make me happy. 2. I think we should separate out "run the server" from "do once-off setup". Currently the containers run a start.sh that typically sets up the database, runs the servers, creates keystone users and sets up the keystone catalog. In something like k8s, the container will almost certainly be run multiple times in parallel and restarted numerous times, so all those other steps go against the service-oriented k8s ideal and are at best wasted. I suggest making the container contain the deployed code and offer a few thin scripts/commands for entrypoints. The main replicationController/pod _just_ starts the server, and then we have separate pods (or perhaps even non-k8s container invocations) that do initial database setup/migrate, and post-install keystone setup. I'm open to whether we want to make these as lightweight/independent as possible (every daemon in an individual container), or limit it to one per project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one container). I think the differences are run-time scalability and resource-attribution vs upfront coding effort and are not hugely significant either way. Post-install catalog setup we can combine into one cross-service setup like tripleO does[1].
Although k8s doesn't have explicit support for batch tasks currently, I'm doing the pre-install setup in restartPolicy: onFailure pods currently and it seems to work quite well[2]. (I'm saying post install catalog setup, but really keystone catalog can happen at any point pre/post aiui.) [1] https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-endpoints [2] https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml -- - Gus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
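A minimal sketch of the once-off-setup-as-its-own-pod idea, along the lines of the nova-db-sync pod in [2]. The field names below follow present-day Kubernetes conventions, and the image name is made up for illustration; the actual 2014-era manifests in [2] use an older API schema:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nova-db-sync
spec:
  # Retry the once-off job until it exits 0, then leave it stopped;
  # the long-running services live in a separate replicationController
  # whose pod _just_ starts the server.
  restartPolicy: OnFailure
  containers:
  - name: nova-db-sync
    image: example/nova-base        # hypothetical image name
    command: ["nova-manage", "db", "sync"]
```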
Re: [openstack-dev] [api] Forming the API Working Group
On Mon, 13 Oct 2014 22:20:32 -0400 Jay Pipes jaypi...@gmail.com wrote: On 10/13/2014 07:11 PM, Christopher Yeoh wrote: On Mon, 13 Oct 2014 10:52:26 -0400 Jay Pipes jaypi...@gmail.com wrote: On 10/10/2014 02:05 AM, Christopher Yeoh wrote: I agree with what you've written on the wiki page. I think our priority needs to be to flesh out https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines so we have something to reference when reviewing specs. At the moment I see that document as something anyone should be able to document a project's API convention even if they conflict with another project for the moment. Once we've got a fair amount of content we can start as a group resolving any conflicts. Agreed that we should be fleshing out the above wiki page. How would you like us to do that? Should we have an etherpad to discuss individual topics? Having multiple people editing the wiki page offering commentary seems a bit chaotic, and I think we would do well to have the Gerrit review process in place to handle proposed guidelines and rules for APIs. See below for specifics on this... Honestly I don't think we have enough content yet to have much of a discussion. I started the wiki page https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines in the hope that people from other projects would start adding conventions that they use in their projects. I think its fine for the moment if its contradictory, we just need to gather what projects currently do (or want to do) in one place so we can start discussing any contradictions. Actually, I don't care all that much about what projects *currently* do. I want this API working group to come up with concrete guidelines and rules/examples of what APIs *should* look like. What projects currently do gives us a baseline to work from. It also should expose where we have currently have inconsistencies between projects. 
And whilst I don't have a problem with having some guidelines which suggest a future standard for APIs, I don't think we should be requiring any type of feature which has not yet been implemented in at least one, preferably two openstack projects and released and tested for a cycle. Eg standards should be lagging rather than leading. So I'd again encourage anyone interested in APIs from the various projects to just start dumping their project viewpoint in there. I went ahead and just created a repository that contained all the stuff that should be pretty much agreed-to, and a bunch of stub topic documents that can be used to propose specific ideas (and get feedback on) here: http://github.com/jaypipes/openstack-api Hopefully, you can give it a look and get a feel for why I think the code review process will be better than the wiki for controlling the deliverables produced by this team... I think it will be better in git (but we also need it in gerrit) when it comes to resolving conflicts and after we've established a decent document (eg when we have more content). I'm just looking to make it as easy as possible for anyone to add any guidelines now. Once we've actually got something to discuss then we use git/gerrit with patches proposed to resolve conflicts within the document. I like the idea of a repo and using Gerrit for discussions to resolve issues. I don't think it works so well when people are wanting to dump lots of information in initially. Unless we agree to just merge anything vaguely reasonable and then resolve the conflicts later when we have a reasonable amount of content. Otherwise stuff will get lost in gerrit history comments and people's updates to the document will overwrite each other. I guess we could also start fleshing out in the repo how we'll work in practice too (eg once the document is stable what process do we have for making changes - two +2's is probably not adequate for something like this). 
We can make it work exactly like the openstack/governance repo, where ttx has the only ability to +2/+W approve a patch for merging, and he tallies a majority vote from the TC members, who vote -1 or +1 on a proposed patch. Instead of ttx, though, we can have an API working group lead selected from the set of folks currently listed as committed to the effort? Yep, that sounds fine, though I don't think a simple majority is sufficient for something like api standards. We either get consensus or we don't include it in the final document. Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [bashate] towards inbox zero on bashate changes, release?
Hi, I took the liberty of rebasing and approving the fairly obvious and already +1'd bashate changes today that had been sitting for quite a while. What's left is minimal and falls into three categories: 1) changes for auto-detection. IMO, we should drop all these and just leave bashate as taking a list of files to check, and let test-harnesses fix it. Everyone using it at the moment seems fine without them: https://review.openstack.org/110966 (Introduce directories as possible arguements) https://review.openstack.org/126842 (Add possibility to load checks automatically) https://review.openstack.org/117772 (Implement .bashateignore handling) https://review.openstack.org/113892 (Remove hidden directories from discover) 2) status-quo changes requiring IMO greater justification https://review.openstack.org/126853 (Small clean-up) https://review.openstack.org/126842 (Add possibility to load checks automatically) https://review.openstack.org/127473 (Put all messages into separate package) 3) if/then checking; IMO change is a minor regression https://review.openstack.org/127052 (Fixed if-then check when then is not in the end of line) Maybe it is time for a release? One thing: does the pre-release check run over TOT devstack and ensure there are no errors? We don't want to release and then 10 minutes later gate jobs start failing. -i ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?
I agree with Kevin and Kyle. Even if we decide to use a separate tree for neutron plugins and drivers, they will still be regarded as part of upstream. These plugins/drivers need to prove they are well integrated with Neutron master in some way, and gating integration proves it is well tested and integrated. I believe it is a reasonable assumption and requirement that a vendor plugin/driver is listed in the upstream docs. This is the same kind of question as which vendor plugins are tested and worth documenting in the upstream docs. I hope you will work with the neutron team and meet the third-party requirements. Thanks, Akihiro On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery mest...@mestery.com wrote: On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com wrote: The OpenStack dev and docs team dont have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. I disagree about the gating part. If a vendor wants to have a link that shows they are compatible with openstack, they should be reporting test results on all patches. A link to a vendor driver in the docs should signify some form of testing that the community is comfortable with. I agree with Kevin here. If you want to play upstream, in whatever form that takes by the end of Kilo, you have to work with the existing third-party requirements and team to take advantage of being a part of things like upstream docs. Thanks, Kyle On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi, If the plan is to move ALL existing vendor specific plugins/drivers out-of-tree, then having a place-holder within the OpenStack domain would suffice, where the vendors can list their plugins/drivers along with their documentation as how to install and use etc.
The main Openstack Neutron documentation page can explain the plugin framework (ml2 type drivers, mechanism drivers, service plugins and so on) and its purpose/usage etc, then provide a link to the currently supported vendor specific plugins/drivers for more details. That way the documentation will be accurate to what is in-tree and limit the documentation of external plugins/drivers to have just a reference link. So it's now the vendor's responsibility to keep their drivers up-to-date and their documentation accurate. The OpenStack dev and docs team don't have to worry about gating/publishing/maintaining the vendor specific plugins/drivers. The built-in drivers such as LinuxBridge or OpenVSwitch etc can continue to be in-tree and their documentation will be part of the main Neutron docs. So Neutron is guaranteed to work with built-in plugins/drivers as per the documentation, and the user is directed to the external vendor plug-in page for additional/specific plugins/drivers. Thanks, Vad -- On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org wrote: On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com wrote: I think you will probably have to wait until after the summit so we can see the direction that will be taken with the rest of the in-tree drivers/plugins. It seems like we are moving towards removing all of them so we would definitely need a solution to documenting out-of-tree drivers as you suggested. However, I think the minimum requirements for having a driver being documented should be third-party testing of Neutron patches. Otherwise the docs will become littered with a bunch of links to drivers/plugins with no indication of what actually works, which ultimately makes Neutron look bad. This is my line of thinking as well, expanded to "ultimately makes OpenStack docs look bad" -- a perception I want to avoid. Keep the viewpoints coming. We have a crucial balancing act ahead: users need to trust docs and trust the drivers.
Ultimately the responsibility for the docs is in the hands of the driver contributors so it seems those should be on a domain name where drivers control publishing and OpenStack docs are not a gatekeeper, quality checker, reviewer, or publisher. We have documented the status of hypervisor drivers on an OpenStack wiki page. [1] To me, that type of list could be maintained on the wiki page better than in the docs themselves. Thoughts? Feelings? More discussion, please. And thank you for the responses so far. Anne [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan vadivel.openst...@gmail.com wrote: Hi Anne, Thanks for your immediate response!... Just to clarify... I have developed and maintaining a Neutron plug-in (ML2 mechanism_driver) since Grizzly and now it is up-to-date with Icehouse. But it was never listed nor part of the main Openstack releases. Now i would like to have my plugin mentioned as supported plugin/mechanism_driver for so and so vendor equipments in the docs.openstack.org, but without having the actual plugin code to
[openstack-dev] [gantt] Scheduler group meeting - Agenda 10/14
1) Forklift status 2) Kilo Summit sessions 3) Kilo BPs 4) Opens -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] [Rally] Using fuelclient as a library - battle report
Mike, I never mentioned us having any fork. What I said is that fuelclient is not currently usable as a library and that is why we are more inclined to write our own client that solves the limited scope of our problems. On Mon, Oct 13, 2014 at 11:53 AM, Mike Scherbakov mscherba...@mirantis.com wrote: Ilya, would that be possible to contribute your changes back to our upstream client? We are willing it to be evolved in this exact direction, though we never had enough resources for it. We will be happy to review and accept your requests. It should be easier for you too - instead of maintaining the fork, you will simply use upstream version. Thanks, On Fri, Oct 10, 2014 at 5:52 PM, Ilya Kharin ikha...@mirantis.com wrote: Hi guys, I agree with some of the issues Lukasz mentioned. All of them require some workaround to make it possible to use the client as a library. I can summarize some of the problems I encountered while working with fuelclient: - The distribution of the package is absent. The client cannot be specified as a dependency because it is not presented on PyPI and cannot be installed by pip (that is so for the current releases but not for the development branch). - The client cannot be initialized properly because it's designed as a singleton which is initialized on the import of a module. Initialization parameters can be specified either in a configuration file with a hardcoded filename (which can potentially be absent on the client-side host because of its location at /etc/fuel/client/config.yaml) or in environment variables. These limitations force us to specify the environment variables and then use inline imports. In my team we are thinking about our own implementation of the client as a solution. 
Best regards, Ilya On Mon, Oct 6, 2014 at 6:09 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Hi Lukasz, I have the same thoughts - we have to design a good Python library for dealing with Nailgun and this library has to be used by: * Fuel CLI * System Tests * Fuel Upgrade * OSTF * other scripts But it's a big deal and we definitely should have a separate blueprint for this task. Moreover, we have to carefully consider its architecture to be convenient not only for CLI usage. Thanks, Igor On Mon, Oct 6, 2014 at 4:49 PM, Lukasz Oles lo...@mirantis.com wrote: Hello all, I'm researching whether we can use the Rally project for some Fuel testing. It's part of the 100-nodes blueprint[1]. To write some Rally scenarios I used our Fuelclient library. In its current state it's really painful to use and it's not usable as a production tool. Here is the list of the biggest issues: 1. If the API returns a code other than 20x it exits. Literally, it calls sys.exit(). It should just raise an exception. 2. Using the API Client as a Singleton. In theory we can have more than one connection, but all new objects will use the default connection. 3. Cannot use a keystone token. It requires user and password. Server address and all credentials can be given via config file or environment variables. There is no way to set them during client initialization. All these issues show that the library was designed only with CLI in mind, especially issue no. 1. Now I know why ostf doesn't use fuelclient, and why Rally wrote their own client. And I can bet that the MOX team is also using their own version. I'm aware of the Fuelclient refactoring blueprint[1]. I reviewed it and gave +1 to most of the reviews. Unfortunately it focuses on CLI usage. The move to Cliff is a very good idea, but for a library it actually makes things worse [2], like moving data validation to the CLI or initializing objects using a single dictionary instead of normal arguments. I think instead of focusing on CLI usage we should focus on the library part.
To make it easier to use from other programs; after that we can focus on the CLI. It's very important now, when we are planning to support 100 nodes and more in the future, because more and more users will start using Fuel via the API instead of the UI. What do you think about this? Regards, [1] https://blueprints.launchpad.net/fuel/+spec/refactoring-for-fuelclient [2] https://review.openstack.org/#/c/117294/ -- Łukasz Oleś -- Mike Scherbakov #mihgen
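To make the three complaints concrete, here is a rough sketch of what a library-friendly client could look like. The class name, constructor signature, and get() helper are all hypothetical, not the real fuelclient API: errors raise an exception instead of calling sys.exit(), configuration is passed to the constructor rather than read from a hardcoded file or environment variables at import time, and a keystone token is accepted directly.

```python
import json
import urllib.error
import urllib.request


class FuelClientError(Exception):
    """Raised on non-2xx responses instead of calling sys.exit()."""


class FuelClient:
    # No module-level singleton: each instance carries its own
    # connection settings, so two clients can talk to two servers.
    def __init__(self, endpoint, token=None, user=None, password=None):
        self.endpoint = endpoint.rstrip('/')
        self.token = token          # a keystone token is usable directly
        self.user = user
        self.password = password

    def get(self, path):
        headers = {'X-Auth-Token': self.token} if self.token else {}
        req = urllib.request.Request(self.endpoint + path, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as exc:
            # Surface the failure to the caller; never exit the process.
            raise FuelClientError('HTTP %d for %s' % (exc.code, path))
```

A CLI can then be a thin wrapper that catches FuelClientError and turns it into an exit code, while library consumers like Rally or OSTF handle the exception however they like.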