Re: [openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator
On 13/11/14 23:25, Amrith Kumar wrote:

At the suggestion of Doug Hellmann, and following a conversation with him and Flavio at Summit, Doug suggested that I pose this question on the dev mailing list so someone from OSLO can communicate the answer to the entire community (rather than just the private email exchange that we had). Here's the situation. I'm using loopingcall.py as an example; this is not limited to this module, but it serves as an example. An OSLO incubator module, loopingcall, depends on another OSLO incubator module, timeutils. timeutils has graduated [drum-roll] and is now part of oslo.utils. There is also other project code that references timeutils. So, to handle the graduation of timeutils, the changes I'll be making are:

1. Remove timeutils from openstack-common.conf
2. Make the project code reference oslo.utils

But what of loopingcall? Should I

a. Update it and change the import(s) therein to point to oslo.utils, or
b. Sync the oslo-incubator code for loopingcall, picking up all changes at least up to and including the change in oslo-incubator that handles the graduation of oslo.utils.

In speaking with Doug and Flavio (after I submitted copious amounts of code that did (a) above), I've come to learn that I chose the wrong answer. The correct answer is (b). This doesn't have to be part of the same commit, and what I've ended up doing is this:

c. Leave timeutils in project/openstack/common and let oslo-incubator depend on it while migrating the project to use oslo.utils. In a subsequent commit, a sync from oslo-incubator can happen.

I'd like someone on OSLO to confirm this, and for other projects whose lead I followed, you may want to address these in the changes you have in flight or have already merged.

`b` is the right answer there. As a general rule - probably the easiest way to solve the above dilemma - people should *never* modify incubator modules in the project.
Sticking to this rule will automatically answer the question of how to update, maintain and consume code from oslo-incubator. If there are projects that picked `a` as the right answer, please, update your patches and follow the already well defined workflow for oslo-incubator. Doing otherwise will just make things harder for us who maintain oslo, for stable maintenance and for your own contributors. Amrith, thanks for bringing this up and for updating your patches, I know it's a pain and I appreciate your collaboration there. Cheers, Flavio P.S: Gentle note. Oslo is not an acronym. -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
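For illustration, the graduation sync (answer `b`) ultimately amounts to the incubated copy of loopingcall having its imports repointed at the released library, as in the following sketch. This is a hypothetical helper, not oslo-incubator's actual tooling, and `trove` here just stands in for any consuming project:

```python
import re

def repoint_graduated_import(line):
    # What the oslo-incubator sync does to a module such as loopingcall.py
    # once timeutils has graduated: the reference to the incubated copy
    # (<project>.openstack.common.timeutils) moves to the released library.
    return re.sub(r'from \w+\.openstack\.common import timeutils',
                  'from oslo.utils import timeutils', line)

print(repoint_graduated_import('from trove.openstack.common import timeutils'))
# -> from oslo.utils import timeutils
```

The project's own code (step 2 above) gets the same change by hand; only the incubator copies should come in via a sync.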
[openstack-dev] Re: horizon failed due to django compressor
ok, the reason seems to be a pyScss version conflict problem:

gongysh@localhost:/opt/stack/horizon$ python manage.py shell
Python 2.7.5 (default, Nov 3 2014, 14:26:24)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from django_pyscss.scss import DjangoScss
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/django_pyscss-1.0.6-py2.7.egg/django_pyscss/scss.py", line 9, in <module>
    from scss import (
ImportError: cannot import name dequote

we should use pyScss lower than 1.3:

gongysh@localhost:/opt/stack/neutron$ sudo pip freeze | grep -i pyscss
django-pyscss==1.0.6
pyScss==1.2.1

From: gong_ys2004 gong_ys2...@aliyun.com
Sent: Friday, 14 November 2014 09:13
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: horizon failed due to django compressor

Hi, I installed devstack on my Fedora 20 system. The process seemed fine, because I saw the normal output of stack.sh:

Horizon is now available at http://192.168.88.225/
Keystone is serving at http://192.168.88.225:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: admin
This is your host ip: 192.168.88.225
2014-11-14 00:03:05.159 | stack.sh completed in 249 seconds

but when I tried to access horizon at http://192.168.88.225/ it always said the following:

FilterError at /
/bin/sh: django_pyscss.compressor.DjangoScssFilter: command not found
Request Method: GET
Request URL: http://localhost/
Django Version: 1.6.7
Exception Type: FilterError
Exception Value: /bin/sh: django_pyscss.compressor.DjangoScssFilter: command not found
Exception Location: /usr/lib/python2.7/site-packages/compressor/filters/base.py in input, line 173
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path: ['/opt/stack/horizon/openstack_dashboard/wsgi/../..', '/usr/lib/python2.7/site-packages/XStatic_Spin-1.2.5.2-py2.7.egg',
'/usr/lib/python2.7/site-packages/XStatic_Rickshaw-1.5.0.0-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_QUnit-1.14.0.2-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_JSEncrypt-2.0.0.2-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_jquery_ui-1.11.0.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_JQuery.TableSorter-2.14.5.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_JQuery.quicksearch-2.0.3.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_JQuery_Migrate-1.2.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_jQuery-1.10.2.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Jasmine-1.3.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Font_Awesome-4.1.0.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Hogan-2.0.0.2-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_D3-3.1.6.2-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Bootstrap_SCSS-3.2.0.0-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Bootstrap_Datepicker-1.3.1.0-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Angular_Mock-1.2.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Angular_Cookies-1.2.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic_Angular-1.2.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/XStatic-1.0.1-py2.7.egg', '/usr/lib/python2.7/site-packages/python_swiftclient-2.3.1-py2.7.egg', '/usr/lib/python2.7/site-packages/python_novaclient-2.20.0-py2.7.egg', '/usr/lib/python2.7/site-packages/python_heatclient-0.2.12-py2.7.egg', '/usr/lib/python2.7/site-packages/python_glanceclient-0.14.2-py2.7.egg', '/usr/lib/python2.7/site-packages/python_cinderclient-1.1.1-py2.7.egg', '/usr/lib/python2.7/site-packages/pyScss-1.3.0.a1-py2.7-linux-x86_64.egg', '/usr/lib/python2.7/site-packages/django_pyscss-1.0.6-py2.7.egg', '/usr/lib/python2.7/site-packages/enum34-1.0.3-py2.7.egg', '/opt/stack/keystone', '/opt/stack/glance_store', '/opt/stack/glance', '/opt/stack/neutron', '/opt/stack/nova', 
'/opt/stack/horizon', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib64/python2.7/site-packages/gtk-2.0', '/usr/lib/python2.7/site-packages', '/opt/stack/horizon/openstack_dashboard']

I appreciate any help, thanks
Yong sheng gong
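The constraint described above - django-pyscss 1.0.6 imports `dequote`, which pyScss 1.3 removed - can be expressed as a small version guard. This is a hypothetical helper for illustration, not part of horizon or django-pyscss; pre-release suffixes such as the `1.3.0.a1` seen in the path dump are reduced to their numeric major.minor:

```python
def pyscss_compatible(version):
    # True if this pyScss version still exports `dequote`, i.e. < 1.3.
    numeric = []
    for part in version.split('.'):
        if not part.isdigit():
            break  # stop at pre-release tags like 'a1'
        numeric.append(int(part))
    return tuple(numeric[:2]) < (1, 3)

print(pyscss_compatible('1.2.1'))     # the working version above
print(pyscss_compatible('1.3.0.a1'))  # the broken one from the path dump
```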
Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?
THANKS A LOT! I hope this patch will get enough attention from core reviewers to give it a chance to arrive in the 2014.2.1 Juno release that is scheduled for the end of the month. /Ihar On 14/11/14 04:01, Xu Han Peng wrote: I opened a new bug and submitted a fix for this problem since it was introduced by my previous patch. https://bugs.launchpad.net/neutron/+bug/1392564 https://review.openstack.org/#/c/134432/ It will be great if you can have a look at the fix and comment. Thanks! Xu Han On 11/14/2014 05:54 AM, Ihar Hrachyshka wrote: Robert, Miguel, do you plan to take care of the bug and the fix, or do you need help? RDO depends on the fix, and we should introduce it before the next Juno release that includes the bad patch, so I would be glad to step in if you don't have spare cycles. /Ihar On 13/11/14 16:44, Robert Li (baoli) wrote: Nice catch. Since it's already merged, a new bug may be in order. -Robert On 11/13/14, 10:25 AM, Miguel Ángel Ajo majop...@redhat.com wrote: I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a subnet combination like this on a network: 1) IPv6 subnet, with DHCP enabled 2) IPv4 subnet, with isolated metadata enabled. https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py I haven't been able to test yet, but wanted to share it before I forget.
Miguel Ángel ajo @ freenode.net

-- Thanks, Xu Han
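The subnet combination Miguel describes can be captured by a predicate along these lines. This is a purely hypothetical sketch of the intended behaviour, not Neutron's actual dhcp-agent code; `subnets` and the `isolated` flag are invented for illustration:

```python
def should_enable_isolated_metadata(subnets):
    # Intended behaviour: the metadata proxy must be spawned when any *IPv4*
    # subnet is isolated. The regression was that the presence of an IPv6
    # DHCP subnet on the same network could suppress it.
    return any(s['ip_version'] == 4 and s['isolated'] for s in subnets)

network = [
    {'ip_version': 6, 'isolated': False},  # 1) IPv6 subnet, with DHCP enabled
    {'ip_version': 4, 'isolated': True},   # 2) IPv4 subnet, isolated metadata
]
print(should_enable_isolated_metadata(network))  # -> True
```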
[openstack-dev] [Fuel] Soft Code Freeze for 6.0 in action!
Hi all, it is time to freeze the fixing of Low and Medium bugs, as we have entered the SCF [1] phase. As in 5.1, let's make an exception for those patches which already have at least one +1 in gerrit. If core reviewers think that such patches can be landed, let's merge. Otherwise - please move the bug to the next milestone (6.1). I suggest updating [1] with this exception, as it seems to be useful for every release. Now, let's focus on Critical and High bugs. We have a bit more than 2 weeks to fix them [2]. We are keeping an exception for the 3.13 kernel in Ubuntu. If we don't land it in the next day or two, we will have to postpone the update to 6.1. Let's focus on this task and bring the new kernel in. [1] https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze [2] https://wiki.openstack.org/wiki/Fuel/6.0_Release_Schedule -- Mike Scherbakov #mihgen
Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage
Hi My thoughts: Shoe-horning the ephemeral partition into Cinder seems like a lot of pain for almost no gain[1]. The only gain I can think of would be that we could bring a node down, boot it into a special ramdisk that exposes the volume to the network, so cindery operations (e.g. migration) could be performed, but I'm not even sure if anyone is asking for that? Forcing Cinder to understand and track something it can never normally do anything with, seems like we're just trying to squeeze ourselves into an ever-shrinking VM costume! Having said that, preserve ephemeral is a terrible oxymoron, so if we can do something about it, we probably should. How about instead, we teach Nova/Ironic about a concept of no ephemeral? They make a partition on the first disk for the first image they deploy, and then they never touch the other part(s) of the disk(s), until the instance is destroyed. This creates one additional burden for operators, which is to create and format a partition the first time they boot, but since this is a very small number of commands, and something we could trivially bake into our (root?) elements, I'm not sure it's a huge problem. This gets rid of the cognitive dissonance of preserving something that is described as ephemeral, and (IMO) makes it extremely clear that OpenStack isn't going to touch anything but the first partition of the first disk. If this were baked into the flavour rather than something we tack onto a nova rebuild command, it offers greater safety for operators, against the risk of accidentallying a vital state partition with a misconstructed rebuild command. [1] for local disk, I mean. 
I still think it'd be nice for operators to be able to use a networked Cinder volume for /mnt/state/, but that presents a whole different set of challenges :) Cheers, -- Chris Jones On 13 Nov 2014, at 09:25, Robert Collins robe...@robertcollins.net wrote: Back in the day before the ephemeral hack (though that was something folk have said they would like for libvirt too - so its not such a hack per-se) this was (broadly) sketched out. We spoke with the cinder PTL at the time in portland, from memory. There was no spec, so here is my brain-dumpy-recollection... - actual volumes are a poor match because we wouldn't be running cinder-volume on an ongoing basis and service records would accumulate etc. - we'd need cross-service scheduler support to make cinder operations line up with allocated bare metal nodes (and to e.g. make sure both our data volume and golden image volume are scheduled to the same machine). - folk want to be able to do fairly arbitrary RAID( JBOD) setups and that affects scheduling as well, one way to work it is to have Ironic export capabilities and specify actual RAID setups via matching flavors - this is the direction the ephemeral work took us, and is conceptually straight forwardly extended to RAID. We did talk about doing a little JSON schema to describe RAID / volume layouts, which cinder could potentially use for user defined volume flavors too. One thing I think that is missing from your description is in this: To be clear, in TripleO, we need a way to keep the data on a local direct attached storage device while deploying a new image to the box. I think we need to be able to do this with a single drive shared between image and data - doing one disk image, one disk data would add substantial waste given the size of disks these days (and for some form factors like moonshot it would rule out using them at all). 
Of course, being able to do entirely network stored golden images might be something some deployments want, but we can't require them all to do that ;) -Rob On 13 November 2014 11:30, Clint Byrum cl...@fewbar.com wrote: Each summit since we created preserve ephemeral mode in Nova, I have some conversations where at least one person's brain breaks for a second. There isn't always alcohol involved before, there almost certainly is always a drink needed after. The very term is vexing, and I think we have done ourselves a disservice to have it, even if it was the best option at the time. To be clear, in TripleO, we need a way to keep the data on a local direct attached storage device while deploying a new image to the box. If we were on VMs, we'd attach volumes, and just deploy new VMs and move the volume over. If we had a SAN, we'd just move the LUN's. But at some point when you deploy a cloud you're holding data that is expensive to replicate all at once, and so you'd rather just keep using the same server instead of trying to move the data. Since we don't have baremetal Cinder, we had to come up with a way to do this, so we used Nova rebuild, and slipped it a special command that said don't overwrite the partition you'd normally make the 'ephemeral' partition. This works fine, but it is confusing and limiting. We'd like something better. I had an
Re: [openstack-dev] [all] Supported (linux) distributions
Tony Breeds wrote: I'm looking for a description of which linux distributions we as developers expect to support openstack on. I haven't found anything that summarises this. It's because it's a two-way street: Linux distributions support OpenStack (mainly by packaging all OpenStack requirements) and OpenStack supports distributions (mainly by supporting the versions of Python they ship). Depending on what you actually mean by support, the list changes. So far the closest I've come is: 1) http://docs.openstack.org/index.html We have:

- Installation Guide for Debian 7
- Installation Guide for openSUSE 13.1 and SUSE Linux Enterprise Server 11 SP3
- Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20
- Installation Guide for Ubuntu 14.04

That's the list of distributions that have install instructions in our doc. There are distributions out there packaging OpenStack which did not contribute this documentation (Gentoo comes to mind). 2) https://github.com/openstack-dev/devstack/blame/master/doc/source/overview.rst#L18-L33 which defines our CI strategy as:

- Ubuntu LTS + dev (14.04 and 14.10)
- Fedora $current and $current-1 (20 and 19)
- RHEL $current (7)

Those are the systems we run our own CI tests on and pledge not to break. That said, we run those tests from Python source code, not from distribution packages, so that pledge is obviously limited. These 2 lists don't have 100% overlap, so perhaps the union would be a reasonable starting place? If you are after the list of distributions with a well-known packaging of OpenStack, then yes, the union of those two lists + Gentoo sounds accurate to me. -- Thierry Carrez (ttx)
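Taking Thierry's suggestion literally, the "reasonable starting place" is just the union of the two lists (with Gentoo added per his note; distribution names abbreviated here for brevity):

```python
# Distributions with install guides in the OpenStack docs.
install_guides = {
    'Debian 7', 'openSUSE 13.1', 'SLES 11 SP3',
    'RHEL 7', 'CentOS 7', 'Fedora 20', 'Ubuntu 14.04',
}
# Systems exercised by the devstack-based CI strategy.
ci_tested = {'Ubuntu 14.04', 'Ubuntu 14.10', 'Fedora 19', 'Fedora 20', 'RHEL 7'}
# Known to package OpenStack without contributing an install guide.
known_packagers = {'Gentoo'}

supported_somewhere = sorted(install_guides | ci_tested | known_packagers)
print(supported_somewhere)
```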
Re: [openstack-dev] [sahara] Nominate Sergey Reshetniak to sahara-core
Congrats! Welcome to the core team! On Tue, Nov 11, 2014 at 9:02 PM, Matthew Farrellee m...@redhat.com wrote: On 11/11/2014 12:35 PM, Sergey Lukjanov wrote: Hi folks, I'd like to propose Sergey to sahara-core. He's done a lot of work on different parts of Sahara and he has a very good knowledge of the codebase, especially in the plugins area. Sergey has been consistently giving us very well thought out and constructive reviews for the Sahara project. Sahara core team members, please, vote +/- 2. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. +2 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc.
Re: [openstack-dev] [stable][nova] Doc build failure during backport
On 14/11/14 02:36, Fei Long Wang wrote: Greetings, Recently, I'm working on fixing Nova evacuate bugs for RBD. And both of the two patches have been merged in Kilo [1,2]. But during backporting them to Juno/Icehouse, one patch got a document build failure, see http://logs.openstack.org/26/131626/3/check/gate-nova-docs/789d9bd/console.html Therefore, I have to change the docstring format a little bit. But it seems some reviewers have concerns about this. Though personally I think it's ok, given we can even resolve conflicts by changing code during backporting. So may I get some suggestions/comments about this situation? Thanks. Personally, I don't see it as a fight worth fighting. Pinning of the version in master was controversial from the start [1]. Ideally, you would patch the master docstring, then meld that patch into yours. Though once the final sphinx 1.3 is released, master will be forced to fix all those issues, including the docstring you backport, so I don't see that there is value in pushing back the backport for such a minor reason. Let's not allow the next Juno release (2014.2.1), scheduled for the end of the month, to miss the fix for procedural reasons. [1]: https://bugs.launchpad.net/oslotest/+bug/1379998/comments/5 [1] https://review.openstack.org/131626 [2] https://review.openstack.org/131613
Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()
On 13/11/14 19:19, Dan Smith wrote: Unfortunately this model doesn't apply to Nova objects, which are persisted remotely. Unless I've missed something, SQLA doesn't run on Nova Compute at all. Instead, when Nova Compute calls object.save() this results in an RPC call to Nova Conductor, which persists the object in the DB using SQLA. Compute wouldn't be able to use common DB transactions without some hairy lifecycle management in Conductor, so Compute apis need to be explicitly aware of this. So just a note to Dan, this is an example of where I keep hearing this about Nova objects. I've discussed this with Dan and if I understand him correctly, I think the idea is that a Nova Compute call can be organized such that the objects layer interacts with the database layer in a more coarse-grained fashion, if that is desired, so if you really need several things to happen in one DB transaction, you should organize the relevant objects code to work that way. Instance.save() is a single thing. It implies that Instance.metadata, Instance.system_metadata, Instance.info_cache, Instance.security_groups, Instance.numa_topology, Instance.pci_requests, etc should all be written to the database atomically (or fail). We don't do it atomically and in a transaction right now, but only because db/api is broken into lots of small pieces (all of which existed before objects). If there was a good way to do this:

    with transaction:
        save_instance_data()
        save_instance_metadata()
        save_instance_system_metadata()
        ...etc

Then we'd do that at the backend, achieve atomicity, and the client (the compute node) wouldn't notice, or care, beyond the fact that it had assumed that was happening all along. It sounds like Mike's facade will provide us a nice way to clean up the db/api calls that the objects are using to persist data in such a way that we can do the above safely like we should have been doing all along. Agree. Does that make sense?
Still for me to get my head around is how often we are in fact organizing the bridge between objects / database such that we are using the database most effectively, and not breaking up a single logical operation into many individual transactions. I know that Nova objects doesn't mandate that this occur but I still want to learn if perhaps it tends to "encourage" that pattern to emerge - it's hard for me to make that guess right now because I haven't surveyed nova objects very much at all, as I've been really trying to stick with getting database patterns sane to start with. I don't agree that it encourages anything relating to how you interact with the database, one way or the other. Almost all of our objects are organized in the exact way that previously we had dicts-of-dicts-of-dicts and an RPC call to flush things to the database. We've changed very little of that access pattern. I think we should push back to Matt to provide a description of why he thinks that this is a problem. I don't think it's a problem. It puts a practical limit on the scope of an 'api call' which can be covered by a single database transaction though, because it would be difficult to arrange for 2 RPC calls to both use the same DB transaction on the remote end. I think we agree on this. Matt -- Matthew Booth Red Hat Engineering, Virtualisation Team Phone: +442070094448 (UK) GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
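Dan's "with transaction" pseudocode is the classic commit-or-rollback context manager. A minimal standalone illustration with sqlite3 (not Nova's actual backend code; table and column names are invented) shows why grouping the writes matters: if any save inside the block fails, none of them persist.

```python
import contextlib
import sqlite3

@contextlib.contextmanager
def transaction(conn):
    # Commit only if every write inside the block succeeds.
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instance_metadata (key TEXT PRIMARY KEY)')
conn.commit()

try:
    with transaction(conn):
        conn.execute("INSERT INTO instance_metadata VALUES ('a')")
        conn.execute("INSERT INTO instance_metadata VALUES ('a')")  # duplicate key -> fails
except sqlite3.IntegrityError:
    pass

# The first insert was rolled back along with the failed one.
print(conn.execute('SELECT COUNT(*) FROM instance_metadata').fetchone()[0])  # -> 0
```

Without the surrounding transaction, the first insert would have survived the failure of the second, which is exactly the partial-write hazard the thread is discussing.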
Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()
On 13/11/14 18:26, Mike Bayer wrote: On Nov 13, 2014, at 3:52 AM, Nikola Đipanov ndipa...@redhat.com wrote: On 11/13/2014 02:45 AM, Dan Smith wrote: I'm not sure if I'm seeing the second SELECT here either but I'm less familiar with what I'm looking at. compute_node_update() does the one SELECT as we said, then it doesn't look like self._from_db_object() would emit any further SQL specific to that row. I don't think you're missing anything. I don't see anything in that object code, or the other db/sqlalchemy/api.py code that looks like a second select. Perhaps he was referring to two *queries*, being the initial select and the following update? FWIW - I think an example Matt was giving me yesterday was block devices where we have:

    @require_context
    def block_device_mapping_update(context, bdm_id, values, legacy=True):
        _scrub_empty_str_values(values, ['volume_size'])
        values = _from_legacy_values(values, legacy, allow_updates=True)
        query = _block_device_mapping_get_query(context).filter_by(id=bdm_id)
        query.update(values)
        return query.first()

which gets called from object save(). OK well there, that is still a single UPDATE statement and then a SELECT. It's using an aggregate UPDATE so there is no load up front required. Unless _from_legacy_values() does something, that's still just UPDATE + SELECT, just not in the usual order. I'd suggest this method be swapped around to load the object first, then use the traditional flush process to flush it, as regular flush is a lot more reliable, so I'd agree this method is awkward and should be fixed, but I'm not sure there's a second SELECT there. Indeed, looks like there's just a single select here. Aggregate does, however, fetch twice. This was the first one I found, and I may have 'seen' this pattern in more places than it actually exists.
Matt -- Matthew Booth Red Hat Engineering, Virtualisation Team Phone: +442070094448 (UK) GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
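Mike's reading - a single aggregate UPDATE followed by one SELECT, "just not in the usual order" - is easy to see in isolation. The following is a sqlite3 stand-in for the SQLAlchemy query above, not Nova's code; the schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE bdm (id INTEGER PRIMARY KEY, volume_size INTEGER)')
conn.execute('INSERT INTO bdm VALUES (1, 10)')

def block_device_mapping_update(bdm_id, values):
    # Statement 1: aggregate UPDATE -- no prior load of the row, which is
    # why there is no "second SELECT" up front.
    conn.execute('UPDATE bdm SET volume_size = ? WHERE id = ?',
                 (values['volume_size'], bdm_id))
    # Statement 2: the SELECT that query.first() issues afterwards.
    return conn.execute('SELECT id, volume_size FROM bdm WHERE id = ?',
                        (bdm_id,)).fetchone()

print(block_device_mapping_update(1, {'volume_size': 20}))  # -> (1, 20)
```

The load-then-flush alternative Mike suggests would reverse the order (SELECT, modify in memory, UPDATE on flush), trading a statement order the ORM handles more reliably for the same two round trips.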
Re: [openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator
On 14/11/14 09:14, Flavio Percoco wrote: `b` is the right answer there.
As a general rule - probably the easiest way to solve the above dilemma - people should *never* modify incubator modules in the project. Sticking to this rule will automatically answer the question of how to update, maintain and consume code from oslo-incubator. Crazy idea: we should have a bot that -1's all the patches that modify oslo-incubator code without being marked by some special tag (OsloSync?). We've slipped several local modifications to those files before (I know two cases in Neutron, though I hardly monitor all the patch queue).
Re: [openstack-dev] [all] config options not correctly deprecated
FYI, I've forwarded this thread to the operators mailing list as I feel they will be very much interested in this discussion. BR Simon On Fri, Nov 14, 2014 at 1:37 AM, Sean Dague s...@dague.net wrote: On 11/13/2014 06:56 PM, Clint Byrum wrote: Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800: On 11/10/2014 05:00 AM, Daniel P. Berrange wrote: On Mon, Nov 10, 2014 at 09:45:02AM, Derek Higgins wrote: Tl;dr oslo.config wasn't logging warnings about deprecated config options; do we need to support them for another cycle? AFAIK, there has not been any change in oslo.config behaviour in the Juno release, as compared to previous releases. The oslo.config behaviour is that the generated sample config file contains all the deprecation information. The idea that oslo.config should log warnings is a decent RFE to make the use of deprecated config settings more visible. This is an enhancement though, not a bug. A set of patches to remove deprecated options in Nova was landed on Thursday [1]; these were marked as deprecated during the juno dev cycle and got removed now that kilo has started. Yes, this is our standard practice - at the start of each release cycle, we delete anything that was marked as deprecated in the previous release cycle, i.e. we give downstream users/apps 1 release cycle of grace to move to the new option names. Most of the deprecated config options are listed as deprecated in the documentation for nova.conf changes [2] linked to from the Nova upgrade section in the Juno release notes [3] (the deprecated cinder config options are not listed there, nor is the allowed_direct_url_schemes glance option). The sample nova.conf generated by oslo lists all the deprecations. For example, for cinder options it shows what the old config option name was:

    [cinder]
    #
    # Options defined in nova.volume.cinder
    #
    # Info to match when looking for cinder in the service
    # catalog.
Format is: separated values of the form: # service_type:service_name:endpoint_type (string value) # Deprecated group/name - [DEFAULT]/cinder_catalog_info #catalog_info=volume:cinder:publicURL Also note the deprecated name will not appear as an option in the sample config file at all, other than in this deprecation comment. My main worry is that there were no warnings about these options being deprecated in nova's logs (as a result they were still being used in tripleo), once I noticed tripleo's CI jobs were failing and discovered the reason I submitted 4 reverts to put back the deprecated options in nova[4] as I believe they should now be supported for another cycle (along with a fix to oslo.config to log warnings about their use). The 4 patches have now been blocked as they go against our deprecation policy. I believe the correct way to handle this is to support these options for another cycle so that other operators don't get hit when upgrading to kilo. While at that same time fix oslo.config to report the deprecated options in kilo. I have marked this mail with the [all] tag because there are other projects using the same deprecated_name (or deprecated_group) parameter when adding config options, I think those projects also now need to support their deprecated options for another cycle. AFAIK, there's nothing different about Juno vs previous release cycles, so I don't see any reason to do anything different this time around. No matter what we do there is always a possibility that downstream apps / users will not notice and/or ignore the deprecation. We should certainly look at how to make deprecation more obvious, but I don't think we should change our policy just because an app missed the fact that these were deprecated. So the difference to me is that this cycle we are aware that we're creating a crappy experience for deployers. In the past we didn't have anything in the CI environment simulating a real deployment so these sorts of issues went unnoticed. 
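The runtime warning Derek is asking for could behave roughly like the sketch below. The deprecated/new option names come from the quoted nova.conf excerpt ([DEFAULT]/cinder_catalog_info moved to [cinder]/catalog_info); the shim itself is a plain-Python illustration of the desired behaviour, not oslo.config's actual implementation.

```python
import warnings

# Map of deprecated (group, name) pairs to their replacements, taken
# from the quoted sample config. A sketch only, not oslo.config code.
DEPRECATED = {
    ("DEFAULT", "cinder_catalog_info"): ("cinder", "catalog_info"),
}

def resolve(group, name, raw_config):
    """Look up an option, warning if the caller used a deprecated name."""
    if (group, name) in DEPRECATED:
        new_group, new_name = DEPRECATED[(group, name)]
        warnings.warn(
            "Option [%s]/%s is deprecated; use [%s]/%s instead"
            % (group, name, new_group, new_name), DeprecationWarning)
        group, name = new_group, new_name
    return raw_config.get((group, name))

config = {("cinder", "catalog_info"): "volume:cinder:publicURL"}
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = resolve("DEFAULT", "cinder_catalog_info", config)
print(value)        # volume:cinder:publicURL
print(len(caught))  # 1: the deprecation was surfaced, not silent
```

With something like this in place, an operator still using the old name would see the warning in the service logs instead of silently losing the option at removal time.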
IMHO telling deployers that they have to troll the sample configs and try to figure out which deprecated opts they're still using is not an acceptable answer. I don't know if this is really fair, as all of the deprecated options do appear here: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html So the real bug is that in TripleO we're not paying attention to the appropriate stream of deprecations. Logs on running systems is a mighty big hammer when the documentation is being updated for us, and we're just not paying attention in the right place. BTW, where SHOULD continuous deployers pay attention for this stuff? Now that we do know, I think we need to address the issue. The first step is to revert the deprecated removals - they're not hurting anything, and if we wait another cycle we can fix oslo.config and then remove them once
Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated
On Thu, Nov 13, 2014 at 01:55:06PM -0800, Surojit Pathak wrote: Hi all, [Issue observed] If we issue 'nova reboot server', we get only the console output of the latest bootup of the server. The console output of the previous boot of the same server vanishes due to truncation[1]. If we reboot from within the VM instance [ #sudo reboot ], or reboot the instance with 'virsh reboot instance', the behavior is not the same: the console.log keeps growing, with the new output being appended. This loss of history makes some debugging scenarios difficult due to lack of information. Please point me to any solution/blueprint for this issue, if already planned. Otherwise, please comment on my analysis and proposals as solutions, below - [Analysis] Nova's libvirt driver on the compute node tries to do a graceful restart of the server instance by attempting a soft_reboot first. If soft_reboot fails, it attempts a hard_reboot. As part of soft_reboot, it brings down the instance by calling shutdown(), and then calls createWithFlags() to bring it up. Because of this, the qemu-kvm process for the instance gets terminated and a new process is launched. In QEMU, the chardev file is opened with O_TRUNC, and thus we lose the previous content of the console.log file. On the other hand, during 'virsh reboot instance', the same qemu-kvm process continues, and libvirt actually does a qemuDomainSetFakeReboot(). Thus the same file continues capturing the new console output. Nova and libvirt have support for issuing a graceful reboot via the QEMU guest agent. So if you make sure that is installed, and tell Nova to use it, then Nova won't have to stop and recreate the QEMU process and thus won't have the problem of overwriting the logs.
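The truncation Surojit describes comes down to the open flags: a fresh QEMU process reopens the chardev log with O_TRUNC, while a fake reboot keeps the existing file descriptor. A minimal generic illustration of the difference (plain Python, not QEMU code):

```python
import os
import tempfile

# Illustrate why reopening a log with O_TRUNC loses history, while an
# append-style open preserves it. This mimics the effect described in
# the thread; it is not Nova/QEMU code.
path = os.path.join(tempfile.mkdtemp(), "console.log")

with open(path, "w") as f:
    f.write("first boot\n")

# Simulate nova's soft_reboot: the process restarts and the log file is
# reopened with O_TRUNC, wiping the previous boot's output.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"second boot\n")
os.close(fd)
print(open(path).read())  # only "second boot\n" survives

# With O_APPEND instead, console history would accumulate.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
os.write(fd, b"third boot\n")
os.close(fd)
print(open(path).read())  # "second boot\nthird boot\n"
```

This is why the guest-agent reboot path avoids the problem: the QEMU process, and therefore the already-open log file descriptor, never goes away.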
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [all] config options not correctly deprecated
On Thu, Nov 13, 2014 at 05:20:47PM -0600, Ben Nemec wrote: On 11/10/2014 05:00 AM, Daniel P. Berrange wrote: On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote: Tl;dr oslo.config wasn't logging warnings about deprecated config options, do we need to support them for another cycle? AFAIK, there has not been any change in oslo.config behaviour in the Juno release, as compared to previous releases. The oslo.config behaviour is that the generated sample config file contains all the deprecation information. The idea that oslo.config should issue log warnings is a decent RFE to make the use of deprecated config settings more visible. This is an enhancement though, not a bug. A set of patches to remove deprecated options in Nova was landed on Thursday[1]; these were marked as deprecated during the Juno dev cycle and got removed now that Kilo has started. Yes, this is our standard practice - at the start of each release cycle, we delete anything that was marked as deprecated in the previous release cycle, i.e. we give downstream users/apps one release cycle of grace to move to the new option names. Most of the deprecated config options are listed as deprecated in the documentation for nova.conf changes[2] linked to from the Nova upgrade section in the Juno release notes[3] (the deprecated cinder config options are not listed there, nor is the allowed_direct_url_schemes glance option). The sample nova.conf generated by oslo lists all the deprecations. For example, for cinder options it shows what the old config option name was. [cinder] # # Options defined in nova.volume.cinder # # Info to match when looking for cinder in the service # catalog. Format is: separated values of the form: # service_type:service_name:endpoint_type (string value) # Deprecated group/name - [DEFAULT]/cinder_catalog_info #catalog_info=volume:cinder:publicURL Also note the deprecated name will not appear as an option in the sample config file at all, other than in this deprecation comment.
My main worry is that there were no warnings about these options being deprecated in nova's logs (as a result they were still being used in tripleo), once I noticed tripleo's CI jobs were failing and discovered the reason I submitted 4 reverts to put back the deprecated options in nova[4] as I believe they should now be supported for another cycle (along with a fix to oslo.config to log warnings about their use). The 4 patches have now been blocked as they go against our deprecation policy. I believe the correct way to handle this is to support these options for another cycle so that other operators don't get hit when upgrading to kilo. While at that same time fix oslo.config to report the deprecated options in kilo. I have marked this mail with the [all] tag because there are other projects using the same deprecated_name (or deprecated_group) parameter when adding config options, I think those projects also now need to support their deprecated options for another cycle. AFAIK, there's nothing different about Juno vs previous release cycles, so I don't see any reason to do anything different this time around. No matter what we do there is always a possibility that downstream apps / users will not notice and/or ignore the deprecation. We should certainly look at how to make deprecation more obvious, but I don't think we should change our policy just because an app missed the fact that these were deprecated. So the difference to me is that this cycle we are aware that we're creating a crappy experience for deployers. In the past we didn't have anything in the CI environment simulating a real deployment so these sorts of issues went unnoticed. IMHO telling deployers that they have to troll the sample configs and try to figure out which deprecated opts they're still using is not an acceptable answer. Now that we do know, I think we need to address the issue. 
The first step is to revert the deprecated removals - they're not hurting anything, and if we wait another cycle we can fix oslo.config and then remove them once deployers have had a reasonable chance to address the deprecation. I don't see any compelling reason to revert the deletions. They are not going to impact most users until Kilo is released, so most operators have 6 months of advance warning to prepare for this. We just need to make sure they are more aware of the fact that these will be removed. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [stable] Organizational changes to support stable branches
On 13/11/14 12:34, Thierry Carrez wrote: TL;DR: Every project should designate a Stable Branch Liaison. Hi everyone, Last week at the summit we discussed evolving the governance around stable branches, in order to maintain them more efficiently (and hopefully for a longer time) in the future. The current situation is the following: there is a single stable-maint-core review team that reviews all backports for all projects, making sure the stable rules are followed. This does not scale that well, so we started adding project-specific people to the single group, but they (rightfully) only care about one project. Things had to change for Kilo. Here is what we came up with: 1. We propose that integrated projects with stable branches designate a formal Stable Branch Liaison (by default, that would be the PTL, but I strongly encourage someone specifically interested in stable branches to step up). The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times. Where is the list of liaisons tracked? Do we have a page similar to the Oslo liaisons one? FYI I'd step in as a formal stable liaison for neutron (unless there are objections from the project PTL; added Kyle to CC). 2. We propose to set up project-specific review groups ($PROJECT-stable-core) which would be in charge of reviewing backports for a given project, following the stable rules. Originally that group would be the Stable Branch Liaison + stable-maint-core. The group is managed by stable-maint-core, so that we make sure any addition is well aware of the Stable Branch rules before they are added. The Stable Branch Liaison should suggest names for addition to the group as needed. 3.
The current stable-maint-core group would be reduced to stable branch release managers and other active cross-project stable branch rules custodians. We'll remove project-specific people and PTLs that were added in the past. The new group would be responsible for granting exceptions for all questionable backports raised by $PROJECT-stable-core groups, providing backport review help everywhere, maintaining the stable branch rules (and making sure they are respected), and educating proposed $PROJECT-stable-core members on the rules. 4. Each stable branch (stable/icehouse, stable/juno...) that we concurrently support should have a champion. Stable Branch Champions are tasked with championing support for a specific stable branch, making sure the branch stays in good shape and remains usable at all times. They monitor periodic job failures and enlist the help of others in order to fix the branches in case of breakage. They should also raise flags if for some reason they are blocked and don't receive enough support, in which case early abandonment of the branch will be considered. Adam Gandelman volunteered to be the stable/juno champion. Ihar Hrachyshka (was) volunteered to be the stable/icehouse champion. 5. To set expectations right and evolve the meaning of stable over time to gradually mean more "not changing", we propose to introduce support phases for stable branches. During the first 6 months of life of a stable branch (Phase I) any significant bug may be backported. During the next 6 months of life of a stable branch (Phase II), only critical issues and security fixes may be backported. After that and until end of life (Phase III), only security fixes may be backported. That way, at any given time, there is only one stable branch in Phase I support. 6. In order to raise awareness, all stable branch discussions will now happen on the -dev list (with prefix [stable]). The openstack-stable-maint list is now only used for periodic jobs reports, and is otherwise read-only.
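The proposed phase scheme maps mechanically onto branch age, which a sketch makes concrete (the 6/12-month boundaries are exactly those stated in the proposal; the function itself is just an illustration):

```python
def support_phase(age_months):
    """Return the stable-branch support phase per the proposed policy:
    Phase I (first 6 months): any significant bug may be backported;
    Phase II (months 6-12): only critical issues and security fixes;
    Phase III (12 months to end of life): only security fixes.
    """
    if age_months < 6:
        return "Phase I"
    if age_months < 12:
        return "Phase II"
    return "Phase III"

print(support_phase(3))   # Phase I
print(support_phase(9))   # Phase II
print(support_phase(18))  # Phase III
```

Since releases are 6 months apart, at most one branch is ever in Phase I, which is what keeps the wide-open backport window manageable.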
Let us know if you have any comment, otherwise we'll proceed to set those new policies up.
Re: [openstack-dev] [all] config options not correctly deprecated
On 13/11/14 23:56, Clint Byrum wrote: Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800: On 11/10/2014 05:00 AM, Daniel P. Berrange wrote: On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote: Tl;dr oslo.config wasn't logging warnings about deprecated config options, do we need to support them for another cycle? AFAIK, there has not been any change in oslo.config behaviour in the Juno release, as compared to previous releases. The oslo.config behaviour is that the generated sample config file contains all the deprecation information. The idea that oslo.config should issue log warnings is a decent RFE to make the use of deprecated config settings more visible. This is an enhancement though, not a bug. A set of patches to remove deprecated options in Nova was landed on Thursday[1]; these were marked as deprecated during the Juno dev cycle and got removed now that Kilo has started. Yes, this is our standard practice - at the start of each release cycle, we delete anything that was marked as deprecated in the previous release cycle, i.e. we give downstream users/apps one release cycle of grace to move to the new option names. Most of the deprecated config options are listed as deprecated in the documentation for nova.conf changes[2] linked to from the Nova upgrade section in the Juno release notes[3] (the deprecated cinder config options are not listed there, nor is the allowed_direct_url_schemes glance option). The sample nova.conf generated by oslo lists all the deprecations. For example, for cinder options it shows what the old config option name was. [cinder] # # Options defined in nova.volume.cinder # # Info to match when looking for cinder in the service # catalog.
Format is: separated values of the form: # service_type:service_name:endpoint_type (string value) # Deprecated group/name - [DEFAULT]/cinder_catalog_info #catalog_info=volume:cinder:publicURL Also note the deprecated name will not appear as an option in the sample config file at all, other than in this deprecation comment. My main worry is that there were no warnings about these options being deprecated in nova's logs (as a result they were still being used in tripleo), once I noticed tripleo's CI jobs were failing and discovered the reason I submitted 4 reverts to put back the deprecated options in nova[4] as I believe they should now be supported for another cycle (along with a fix to oslo.config to log warnings about their use). The 4 patches have now been blocked as they go against our deprecation policy. I believe the correct way to handle this is to support these options for another cycle so that other operators don't get hit when upgrading to kilo. While at that same time fix oslo.config to report the deprecated options in kilo. I have marked this mail with the [all] tag because there are other projects using the same deprecated_name (or deprecated_group) parameter when adding config options, I think those projects also now need to support their deprecated options for another cycle. AFAIK, there's nothing different about Juno vs previous release cycles, so I don't see any reason to do anything different this time around. No matter what we do there is always a possibility that downstream apps / users will not notice and/or ignore the deprecation. We should certainly look at how to make deprecation more obvious, but I don't think we should change our policy just because an app missed the fact that these were deprecated. So the difference to me is that this cycle we are aware that we're creating a crappy experience for deployers. In the past we didn't have anything in the CI environment simulating a real deployment so these sorts of issues went unnoticed. 
IMHO telling deployers that they have to troll the sample configs and try to figure out which deprecated opts they're still using is not an acceptable answer. I don't know if this is really fair, as all of the deprecated options do appear here: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html They're not, the options that were moved to the [cinder] section don't appear there along with at least one other option. So the real bug is that in TripleO we're not paying attention to the appropriate stream of deprecations. Logs on running systems is a mighty big hammer when the documentation is being updated for us, and we're just not paying attention in the right place. BTW, where SHOULD continuous deployers pay attention for this stuff? IMO, the logs. Now that we do know, I think we need to address the issue. The first step is to revert the deprecated removals - they're not hurting anything, and if we wait another cycle we can fix oslo.config and then remove them once deployers have had a reasonable chance to address the deprecation. In this case, we can just fix the templates. Are we broken? Yes. Can we fix it?
Re: [openstack-dev] [Neutron] Stale patches
Thanks for cleaning up the house! Best regards, Miguel Ángel Ajo On Friday, 14 November 2014 at 00:46, Salvatore Orlando wrote: There are a lot of neutron patches which, for different reasons, have not been updated in a while. In order to ensure reviewers focus on active patches, I have set a few patches (about 75) as 'abandoned'. No patch with an update in the past month, either patchset or review, has been abandoned. Moreover, only a part of the patches not updated for over a month have been abandoned. I took extra care in identifying which ones could safely be abandoned, and which ones were instead still valuable; nevertheless, if you find out I abandoned a change you're actively working on, please restore it. If you are the owner of one of these patches, you can use the 'restore change' button in gerrit to resurrect the change. If you're not the owner and wish to resume work on these patches, either contact any member of the neutron-core team in IRC or push a new patch. Salvatore
[openstack-dev] [all] A more dynamic wiki, introducing Categories
Hello folks, in the past months Shari and I have implemented more chunks of the taxonomy developed for us by Katherine Cranford (a volunteer expert). Using categories in the wiki pages can help us create dynamic pages and keep information more visible, well organized and discoverable. For example, we had a page listing all Teams. When someone needed to create a new Team, someone would have to: 1) create a new page for the team, save, 2) go to the Teams page, edit the page adding a link to the new page, save. Today instead the process looks like this: - Create a new page for the new team - Add the text [[Category: Teams]] to the rest of the text - save Done. The new page will be automatically shown on https://wiki.openstack.org/wiki/Category:Teams Category pages are just like any other wiki page: they can have text, images, can be edited, translated etc. And they automatically show the pages contained in their category. As an example, compare how the dynamic page for Programs https://wiki.openstack.org/wiki/Category:Programs and the 'static' version look: https://wiki.openstack.org/wiki/Programs I would apply to Programs the same redirect I created for Teams (unless someone stops me). Pages generated automatically this way are a huge improvement to navigation inside the wiki, and I suggest you all get familiar with the list of Categories shown in a nice tree on https://wiki.openstack.org/wiki/Category:Home and read the Taxonomy section in https://wiki.openstack.org/wiki/How_To_Use_The_Wiki As this is a wiki, your help is needed: think about adding pages you manage to an existing category and, when possible, consider moving 'static' pages to the newly created dynamic pages. Cheers, stef
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 11/11/14 09:30, Jiri Tomasek wrote: On 11/10/2014 12:19 PM, Matthias Runge wrote: On Thu, Oct 30, 2014 at 01:13:48PM +0100, Matthias Runge wrote: Hi, tl;dr: how to proceed in separating horizon and openstack_dashboard About a year ago now we agreed it makes sense to separate horizon and openstack_dashboard. At the past summit, we discussed this again. Currently, our repo contains two directories: horizon and openstack_dashboard; they both will need new names. We discussed a renaming in the past; the former consensus was: rename horizon to horizon_lib and rename openstack_dashboard to horizon. IMHO that doesn't make any sense and will confuse people a lot. I wouldn't object to renaming horizon to horizon_lib, although any other name, e.g. django-horizon, should be fine as well. openstack_dashboard is our official name; people from outside refer to the Dashboard as Horizon, so why not rename to openstack_horizon here? It is the official name, but I disagree that people refer more to Dashboard than to Horizon. People mostly talk about Horizon, and when they say Horizon they refer to the UI. Dashboard is not much used outside the Horizon community and is a bit confusing (overloaded). Small example - when you want to add the general overview page - the actual dashboard - that is where the *dashboard* name should be used. Otherwise we are adding a _Dashboard_ view into a Project's _dashboard_ into the OpenStack _Dashboard_ project. That's what I really find confusing :) My opinion and preference: * horizon_lib (framework) + horizon (UI) Thoughts? Opinions? Suggestions? From what was discussed at the contributors meetup, keeping the names 'horizon' for the lib (framework) and 'openstack_dashboard' for the dashboard seemed most convenient. And I happen to agree with that. Jirka Not sure it seemed the most convenient; I haven't seen agreement there. -- Jarda
Re: [openstack-dev] [glance] security and swift multi-tenant fixes on stable branch
On 2014-11-13 18:28:14 +0100 (+0100), Ihar Hrachyshka wrote: [...] I think those who maintain the glance_store module in downstream distributions will cherry-pick the security fix into their packages, so there is nothing to do in terms of stable branches to handle the security issue. [...] As a counterargument, some Oslo libs have grown stable branches for security backports and cut corresponding point releases on an as-needed basis so as to avoid introducing new features in stable server deployments. -- Jeremy Stanley The current glance stable/juno requirement for glance_store is >= 0.1.1. If you run stable/juno against glance_store 0.1.1 and try to create an image, you get (multi-tenant store): $ glance image-create --name image1 --container-format bare --disk-format raw <html> <head> <title>410 Gone</title> </head> <body> <h1>410 Gone</h1> Error in store configuration. Adding images to store is disabled.<br /><br /> </body> </html> (HTTP N/A) With the latest (0.1.9) glance_store, you get: $ glance image-create --name image1 --container-format bare --disk-format raw <html> <head> <title>500 Internal Server Error</title> </head> <body> <h1>500 Internal Server Error</h1> Failed to upload image 702d5865-8925-4d0d-b52c-c93833dc5eaa<br /><br /> </body> </html> (HTTP 500) Before glance_store was separated out it would have been straightforward to backport the relevant fixes to Glance's tightly coupled in-tree store code. I'm neutral on the mechanics, but I think we need to get to a point where if someone is running stable/juno and has a version of glance_store which satisfies what's specified in requirements.txt they should have secure, working code. -Stuart
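Stuart's point is that the requirements floor admits broken versions: any glance_store >= 0.1.1 satisfies requirements.txt, including 0.1.1 itself, which misbehaves. A minimal sketch of that check (simple numeric dotted versions only; real version comparison per PEP 440 is richer, and pip's resolver is not this code):

```python
def version_tuple(v):
    """Parse a simple dotted version like '0.1.9' into a comparable
    tuple. Only handles plain numeric dotted versions; a sketch."""
    return tuple(int(part) for part in v.split("."))

def satisfies(installed, minimum):
    """True if the installed version meets a '>=' requirements floor."""
    return version_tuple(installed) >= version_tuple(minimum)

# stable/juno says glance_store >= 0.1.1, so both of these "satisfy"
# requirements.txt -- but only 0.1.9 actually works, per the thread.
print(satisfies("0.1.1", "0.1.1"))  # True
print(satisfies("0.1.9", "0.1.1"))  # True
```

Which is why the fix is to raise the floor (or cut a fixed point release below it), so that every version the specifier admits is actually secure and working.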
Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]
On 12/11/14 02:45, Richard Jones wrote: I have set up a doodle poll to let folk enter their preferred times. It's in UTC/GMT (/London time, because doodle) so use something like http://everytimezone.com/ to figure that out :) https://doodle.com/47h3f35nad62ncnf Richard Quick question: why is the length of the time slot 2h? Since it should take just 1h, should I consider the beginning or end of the meeting to be relevant? It is challenging to find a time slot between other meetings, since I might have just a 1h slot there. -- Jarda
[openstack-dev] [Fuel] Failed upgrade chain - 5.1 - 5.1.1 - 6.0
Hi folks, Yesterday I performed the following upgrade chain: 5.1 - 5.1.1 - 6.0, and the upgrade to 6.0 failed. I did some research and figured out that Nailgun's container doesn't have Nailgun at all. So here's what happened. 1. When we're upgrading to 6.0 we already have three repos in the system: 5.1, 5.1.1 and 6.0. 2. When we start a new Nailgun container, it runs start.sh which runs puppet [1]. 3. And puppet fails to install the Nailgun package. Why did this happen? We have the same Nailgun version in both the 5.1.1 and 6.0 repos, but those packages have different dependencies. From yum's point of view, the packages are the same (since they have the same version), so yum tries to install the Nailgun from the 5.1.1 repo, but the container contains already-installed dependencies for the Nailgun from 6.0 (they were installed during the container build), and obviously it fails with failed-dependency errors (we have higher versions in the system). Well, the issue blocks complex upgrade chains and we definitely have to do something ASAP. In order to protect us from similar issues in the future I propose the following changes: 1. Increase the versions of all Fuel packages at least once per release. 2. Remove the package installation code from manifests - the container should not install packages; they should be installed during the container build process. What do you think, folks? [1]: https://github.com/stackforge/fuel-main/blob/master/docker/nailgun/start.sh#L11 Thanks, Igor
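The failure mode Igor describes follows from how RPM-based tooling compares packages: only the (epoch, version, release) triple matters, never the dependency list. A rough stand-in (real rpm comparison uses rpmvercmp and is more subtle; the nailgun EVR values here are made up for illustration):

```python
def is_newer(candidate_evr, installed_evr):
    """Crude stand-in for RPM package comparison: yum only looks at the
    (epoch, version, release) tuple, never at Requires:. A sketch."""
    return candidate_evr > installed_evr

# Same nailgun version string in the 5.1.1 and 6.0 repos: yum sees two
# identical packages, may pick either repo's copy, and the 6.0
# dependencies already baked into the container then conflict.
evr_511 = (0, "6.0.0", "1")
evr_60 = (0, "6.0.0", "1")
print(is_newer(evr_60, evr_511))  # False: the packages look identical

# Igor's proposal 1: bump the version (or release) every Fuel release,
# so the comparison can actually distinguish the two packages.
evr_60_fixed = (0, "6.0.0", "2")
print(is_newer(evr_60_fixed, evr_511))  # True
```

This is why bumping versions once per release (proposal 1) fixes the root cause, while installing packages at container-build time (proposal 2) merely removes the occasion for yum to make the wrong choice.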
Re: [openstack-dev] [Fuel] Failed upgrade chain - 5.1 - 5.1.1 - 6.0
Igor, I'm trying to figure out where the failure is. The 5.1.1 and 6.0 prebuilt containers contain the correct version of the nailgun RPM pre-installed. It shouldn't matter if the versions are identical (which is a problem for sure). Can you open a bug and post upgrade logs? I'm interested to see which pieces cause the failure; this is an area I didn't plan for in container upgrades. Best Regards, Matthew Mosesohn On Fri, Nov 14, 2014 at 3:43 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Hi folks, Yesterday I performed the following upgrade chain: 5.1 - 5.1.1 - 6.0, and the upgrade to 6.0 failed. I did some research and figured out that Nailgun's container doesn't have Nailgun at all. So here's what happened. 1. When we're upgrading to 6.0 we already have three repos in the system: 5.1, 5.1.1 and 6.0. 2. When we start a new Nailgun container, it runs start.sh which runs puppet [1]. 3. And puppet fails to install the Nailgun package. Why did this happen? We have the same Nailgun version in both the 5.1.1 and 6.0 repos, but those packages have different dependencies. From yum's point of view, the packages are the same (since they have the same version), so yum tries to install the Nailgun from the 5.1.1 repo, but the container contains already-installed dependencies for the Nailgun from 6.0 (they were installed during the container build), and obviously it fails with failed-dependency errors (we have higher versions in the system). Well, the issue blocks complex upgrade chains and we definitely have to do something ASAP. In order to protect us from similar issues in the future I propose the following changes: 1. Increase the versions of all Fuel packages at least once per release. 2. Remove the package installation code from manifests - the container should not install packages; they should be installed during the container build process. What do you think, folks?
[1]: https://github.com/stackforge/fuel-main/blob/master/docker/nailgun/start.sh#L11 Thanks, Igor
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
I think that it boils down to whether it's possible that distributions: 1. package the node-based tools (grunt, karma, protractor, ...) as installable programs, and 2. xstatic-package the bower-based packages that we use (probably a couple of dozen at least). We might even be able to get away without using grunt, though an alternative to its LiveReload facility (and one that doesn't then also depend on another node program like django-livereload does) would be required. I believe tox and django's runserver (and other manage commands) could suffice to replace the other functionality typically provided by grunt. Richard On 14 November 2014 18:51, Radomir Dopieralski openst...@sheep.art.pl wrote: On 13/11/14 23:30, Martin Geisler wrote: [...] While I agree that it's chaotic, I also think you make the problem worse than it really is. First, remember that the user who installs Horizon won't need to use the JavaScript-based *developer* tools such as npm, bower, etc. That is, I think Horizon developers will use these tools to produce a release -- a tarball -- and that tarball will be something you unpack on your webserver and then you're done. I base this on what I've seen in the project I've been working on. The release tarball you download here doesn't mention npm, bower, or any of the other tools: https://github.com/zerovm/swift-browser/releases The tools were used to produce the tarball and were used to test it, but they're not part of the released product. Somewhat similar to how GCC isn't included in the tarball if you download a pre-compiled binary. [...] Maybe a difference is that you don't (yet) install a web application like you install a system application. Instead you *deploy* it: you unpack files on a webserver, you configure permissions, you set up cache rules, you configure a database, etc. [...] I think I see where the misunderstanding is in this whole discussion. It seems it revolves around the purpose and role of the distribution.
From the naive point of view, the role of a Linux distribution is to just collect all the software from the respective upstream developers and put it in a single repository, so that it can be easily installed by the users. That's not the case. The role of a distribution is to provide a working ecosystem of software, that is: a) installed and configured in a consistent way, b) tested to work with the specific versions that it ships with, c) audited for security, d) maintained, including security patches, e) documented, including external tutorials and the like, f) supported, either by the community or by the companies that provide support, g) free of licensing issues and legal risks as much as possible, h) managed with the common system management tools. In order to do that, they can't just take a tarball and drop it in a directory. They always produce their own builds, to make sure it's the same thing that the source code specifies. They sometimes have to rearrange configuration files, to make them fit the standards in their system. They provide sane configuration defaults. They track security reports about all the installed components, and apply fixes, often before the security issue is even announced. Basically, a distribution adds a whole bunch of additional guarantees for the software it ships. Those are often long-term guarantees, as they will be supporting our software long after we have forgotten about it. You say that web development doesn't work like that. That may be true, but that's mostly because if you develop a new web application in-house and deploy it on your server, you don't really have such large legal risk, configuration complexity or support problems -- you just have to care about that single application, because the packagers from the distribution that you are using are taking care of all the rest of the software on your server. 
-- Radomir Dopieralski ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]
I did it that way so that the form wouldn't blow out completely, and I thought that for our scheduling purposes a precision of 2 hours would be good enough. On 14 November 2014 22:32, Jaromir Coufal jcou...@redhat.com wrote: On 12/11/14 02:45, Richard Jones wrote: I have set up a doodle poll to let folk enter their preferred times. It's in UTC/GMT (/London time, because doodle) so use something like http://everytimezone.com/ to figure that out :) https://doodle.com/47h3f35nad62ncnf Richard Quick question: Why is the length of the time slot 2h? Since it should take just 1h, should I consider the beginning or the end of the meeting to be relevant? It is challenging to find a time slot between other meetings, since I might have just a 1h slot there. -- Jarda ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] L2 gateway as a service
Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that? Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Failed upgrade chain - 5.1 - 5.1.1 - 6.0
Mat, I've filed a bug [1] and provided the puppet output from inside the Nailgun container [2]. If I remove /etc/yum.repos.d/5.1.1_nailgun - the puppet run works fine. [1]: https://bugs.launchpad.net/fuel/+bug/1392698 [2]: http://xsnippet.org/360279/raw/ Thanks, Igor On Fri, Nov 14, 2014 at 1:51 PM, Matthew Mosesohn mmoses...@mirantis.com wrote: Igor, I'm trying to figure out where the failure is. The 5.1.1 and 6.0 prebuilt containers contain the correct version of the nailgun RPM pre-installed. It shouldn't matter if the versions are identical (which is a problem for sure). Can you open a bug and post upgrade logs? I'm interested to see which pieces cause the failure, as this is an area I didn't plan for in container upgrades. Best Regards, Matthew Mosesohn On Fri, Nov 14, 2014 at 3:43 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: Hi folks, Yesterday I performed the following upgrade chain: 5.1 - 5.1.1 - 6.0, and upgrading to 6.0 failed. I did some research and figured out that Nailgun's container doesn't have Nailgun at all. So here's what happened. 1. When we're upgrading to 6.0, we already have three repos in the system: 5.1, 5.1.1 and 6.0. 2. When we start a new Nailgun container, it runs start.sh, which runs puppet [1]. 3. And puppet fails to install the Nailgun package. Why did it happen? We have the same Nailgun version in both the 5.1.1 and 6.0 repos, but those packages have different dependencies. From yum's point of view, the packages are the same (since they have the same version), so yum tries to install Nailgun from the 5.1.1 repo, but the container already has the dependencies for the Nailgun from 6.0 installed (they were installed while building the container), and obviously it fails with failed-dependency errors (we have higher versions in the system). Well, the issue blocks complex upgrade chains, and we definitely have to do something ASAP. In order to protect us from similar issues in the future, I propose the following changes: 1. 
Increase the versions of all Fuel packages at least once per release. 2. Remove package installation code from manifests - the container should not install packages; they should be installed during the container build process. What do you think, folks? [1]: https://github.com/stackforge/fuel-main/blob/master/docker/nailgun/start.sh#L11 Thanks, Igor ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
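[Editor's illustration] The failure mode described above comes down to the depsolver identifying a package by its name plus EVR (epoch-version-release) alone, never by which repo it came from. The sketch below is a deliberately naive illustration of that identity rule (real yum uses rpmvercmp, and the dependency strings here are hypothetical, not the actual Nailgun requirements):

```python
# Naive illustration: yum keys packages on name + EVR only, so two
# repos shipping "nailgun" with an identical EVR are indistinguishable
# to the depsolver, even though their dependency sets differ.
# (Real yum uses rpmvercmp; the "requires" values are made up.)

def same_package(pkg_a, pkg_b):
    """Naive identity check on (name, epoch, version, release)."""
    keys = ('name', 'epoch', 'version', 'release')
    return all(pkg_a[k] == pkg_b[k] for k in keys)


nailgun_511 = {'name': 'nailgun', 'epoch': 0, 'version': '6.0.0',
               'release': '1', 'repo': '5.1.1_nailgun',
               'requires': ['hypothetical-dep < 1.0']}
nailgun_60 = {'name': 'nailgun', 'epoch': 0, 'version': '6.0.0',
              'release': '1', 'repo': '6.0_nailgun',
              'requires': ['hypothetical-dep >= 1.0']}

# Identical EVR: yum may pick either repo's package, hence the broken
# dependencies. Bumping the release per Fuel release (proposal 1 above)
# makes the two packages distinguishable again.
identical = same_package(nailgun_511, nailgun_60)
```

This is exactly why proposal 1 (bump every package's version at least once per release) fixes the problem: a bumped release field changes the EVR, so the depsolver can tell the 6.0 build apart from the 5.1.1 one.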
Re: [openstack-dev] [Fuel] Failed upgrade chain - 5.1 - 5.1.1 - 6.0
Igor Kalnitsky ikalnit...@mirantis.com writes: Mat, I've filed a bug [1] and provided the puppet output from inside the Nailgun container [2]. If I remove /etc/yum.repos.d/5.1.1_nailgun - the puppet run works fine. [1]: https://bugs.launchpad.net/fuel/+bug/1392698 [2]: http://xsnippet.org/360279/raw/ Hello, I think I asked about a related problem during make iso on #fuel-dev a few days ago. The logs seem similar to me: http://paste.openstack.org/show/131837/ Maybe you want to add this reference to the bug as well. Best, Daniele ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 13/11/14 21:11, Matthew Farina wrote: I would like to take a moment to point out that developing system software is different from developing web applications. The way systems software is developed and often deployed is different from web applications. Horizon as it sits today appears to be web application development by systems software developers. This raises the barrier to entry for web application developers. The approach being proposed moves horizon into the realm of web application technologies that web application people use today. The debate as I'm reading it is about taking web application development processes and turning them into systems development processes which are often foreign to web application developers. How is this going to work out? How will web app people be willing to get involved? Why should this be treated the same? Most of OpenStack is a systems problem. This piece is a little different. To make it successful, should it get some wiggle room to work well in the space it's in? Note, I'm not saying it should be insecure or anything like that. There are just different approaches. Basically, you're saying we should lower standards to attract more people? I disagree with your request to handle Horizon differently from the rest of OpenStack: why? It worked quite well in the past. IMHO that's just wrong. When new folks show up: great! Everybody's welcome. We might need to educate people here. There are so many patterns in use that have been proven wrong in the past. Technically, it's possible to have several copies of the same library installed. But just because it's possible doesn't mean you should do it. Matthias ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Nominate Michael McCune to sahara-core
On 11/14/2014 03:55 AM, Sergey Lukjanov wrote: Congrats! Welcome to the core team! thanks =) mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 13/11/14 19:11, Donald Stufft wrote: As far as I’m aware npm supports TLS the same as pip does. That secures the transport between the end users and the repository so you can be assured that there is no man in the middle. Security-wise npm (and pip) are about ~95% (made-up numbers, but you can get the gist) as effective as the OS package managers. Oh, e.g. rpm allows packages to be cryptographically signed, and depending on your system's config, that is enforced. This is quite different from just TLS'ing a connection. Matthias ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On Nov 14, 2014, at 7:48 AM, Matthias Runge mru...@redhat.com wrote: On 13/11/14 19:11, Donald Stufft wrote: As far as I’m aware npm supports TLS the same as pip does. That secures the transport between the end users and the repository so you can be assured that there is no man in the middle. Security-wise npm (and pip) are about ~95% (made-up numbers, but you can get the gist) as effective as the OS package managers. Oh, e.g. rpm allows packages to be cryptographically signed, and depending on your system's config, that is enforced. This is quite different from just TLS'ing a connection. You do realize that TLS provides cryptographic proof of authenticity and integrity just like PGP does, right? (It also provides the cool benefit of privacy, which PGP signing does not.) Generally even with PGP signing you still have a number of online keys sitting on servers which are able to sign packages, and the tooling will accept their signatures. The essential difference is basically: with TLS you depend on the web server to not be compromised, with PGP signing you depend on the build server to not be compromised. In theory you *can* use PGP signing in a way that all of the signing keys are offline, however this requires having a person manually sign all artifacts that are created (and even then, you'd want them to also generate said artifacts to ensure that they were not compromised). However in the real world, most (if not all) systems involve online keys. All this isn't to say that TLS is 100% as good as using something like PGP for signatures though. PGP does have some good benefits, the major one being that it travels better/easier/at all. For instance a PGP signature can be transferred alongside a package file and hosted on untrusted mirrors, while relying on TLS means that you *must* trust the machine you're getting the files from. 
TLS is a fairly decent way of securing a package infrastructure though, it prevents all of the major attacks that PGP signing does in practice but it moves the high value target from the build machines to the web servers and makes mirroring require trusting the mirror. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/14/2014 06:30 AM, Martin Geisler wrote: That is, I think Horizon developers will use these tools to produce a release -- a tarball -- and that tarball will be something you unpack on your webserver and then you're done. I base this on what I've seen in the project I've been working on. The release tarball you download here doesn't mention npm, bower, or any of the other tools: https://github.com/zerovm/swift-browser/releases The tools were used to produce the tarball and were used to test it, but they're not part of the released product. Somewhat similar to how GCC isn't included in the tarball if you download a pre-compiled binary. When doing packages, I don't even use the tarball, but a git clone, which itself produces an orig.tar.xz file. I do that to allow more flexibility and to be able to do upstream code changes easily. So to build a Debian package, I will need to have all the tooling, just like I need GCC to build packages. I thought this needed to be cleared up. On 11/14/2014 06:30 AM, Martin Geisler wrote: Maybe a difference is that you don't (yet) install a web application like you install a system application. Instead you *deploy* it: you unpack files on a webserver, you configure permissions, you set up cache rules, you configure a database, etc. I really don't see why a web application should be any different from any other component of OpenStack. No, I won't deploy it, I will just apt-get install it... On 11/14/2014 06:30 AM, Martin Geisler wrote: A web app is something a single user installs on a system (www-data or a similar user) and then this user configures the system web server to serve this web app. The configuration part is the role of the package maintainer's script. At least in Debian, there's the facility to configure apache and https (if you respond positively to the debconf prompts about this), so Horizon is directly usable after you install the package. I don't want this feature to go away. 
On 11/14/2014 06:30 AM, Martin Geisler wrote: I agree that it would be cool to have web apps be as robust and general purpose as system apps. However, I think that day is a ways off. I'm not sure why you are saying this. Horizon works out of the box in Debian, and so do murano-dashboard and the sahara support. On 11/14/2014 06:30 AM, Martin Geisler wrote: The dependency solver is as good as the community needs it to be. Or put differently, if the JavaScript community is able to produce working software with npm, then they obviously produce it within the bounds of the capabilities of its dependency solver. I'm happy to believe that apt has a top-notch and highly tuned dependency solver. That doesn't really matter since it would be solving problems we don't have. Dependency solving is pure math. It's very hard to get right. I don't agree that some languages may need only something weaker, or that it's possible for the maintainers to adapt. It's just that it may, in some cases, be possible to work around defects if they exist, but everyone needs a robust dependency solver. On 11/14/2014 06:30 AM, Martin Geisler wrote: In my view, you're taking on way too much work by going into those details. I don't think I need or want you to do anything more than repack the tarball that npm retrieves -- I don't think you should run tests or generate documentation. Of course I need to run tests. That's a big part of the QA work, and I will certainly not give up on that. You will have a hard time convincing anyone within the OpenStack community that it's OK to not run unit tests. As for the docs, well, I do believe they're a big plus. On 11/14/2014 06:30 AM, Martin Geisler wrote: As a user or sysadmin, I would be happy to add a deb line to my sources.list and get Debian packages that wrap the node modules. This means that the packages would *not* be in Debian. Therefore, horizon couldn't be uploaded to Debian (as there would be some unavailable dependencies). 
That's absolutely not what I want to do. I want Horizon, just like the rest of OpenStack, to be fully in Debian. Cheers, Thomas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Separate code freeze for repos
Hi, There was an idea to make a separate code freeze for repos Could you please clarify what you mean? I think we should have a way to merge patches for the next release even if it's code freeze for the current one. Thanks, On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Folks, There was an idea to make a separate code freeze for repos, but we decided not to do it. Do we plan to try it this time? It is really painful to maintain a multi-level tree of dependent review requests and wait for a few weeks until we can merge new stuff in master. -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable] Organizational changes to support stable branches
Ihar Hrachyshka wrote: On 13/11/14 12:34, Thierry Carrez wrote: 1. We propose that integrated projects with stable branches designate a formal Stable Branch Liaison (by default, that would be the PTL, but I strongly encourage someone specifically interested in stable branches to step up). The Stable Branch Liaison is responsible for making sure backports are proposed for critical issues in their project, and for making sure proposed backports are reviewed. They are also the contact point for stable branch release managers around point release times. Where is the list of liaisons tracked? Do we have a page similar to the oslo liaisons one? FYI I'd step in as a formal stable liaison for neutron (unless there are objections from the project PTL; added Kyle to CC). I just added a section for Stable Branch liaisons on the cross-project liaisons page: https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch Please add yourself there, unless Kyle objects :) -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] improving PyPi modules design FHS (was: the future of angularjs development in Horizon)
On Nov 13, 2014, at 6:29 PM, Thomas Goirand z...@debian.org wrote: On 11/14/2014 06:40 AM, Donald Stufft wrote: Sure! That's how I do most of my Python modules these days. I don't just create them from scratch, I use my own debpypi script, which generates a template for packaging. But it can't be fully automated. I could almost do it in a fully automated manner for PEAR packages for PHP (see debpear in the Debian archive), but it's harder with Python and pip/PyPI. I would be interested to know what makes Python harder in this regard, I would like to fix it. The fact that the standard from PyPI is very fuzzy is one of the issues. There's nothing in the format (for example in the DOAP.xml record) that tells if a module supports Python 3, for example. Then the short and long descriptions aren't respected; often you get changelog entries there. Then there's no real convention for the location of the sphinx doc. There's also the fact that dependencies for Python have to be written by hand in a Debian package. See for example dependencies on argparse, distribute, ordereddict, which I never put in a Debian package as they're available in Python 2.7. Or the fact that there's no real unique place where dependencies are written in a PyPI package (is it hidden somewhere in setup.py, or is it explicitly written in requirements.txt?). Etc. In the PHP world, everything is much cleaner, in the package.xml, which is very easily parseable. (This is fairly off topic, so if you want to reply to this in private that’s fine): Let's just change the subject line, so that those not interested in the discussion can skip the topic entirely. Nothing that says if it supports py3: Yeah, this is a problem; you can somewhat estimate it using the Python 3 classifier though. The issue is that this is a non-mandatory tag. And often, it isn't set. Short and long descriptions aren’t respected: I’m not sure what you mean by "isn't respected"? 
In my templating script, I grab what's supposed to be the short and long description. But this leads to importing RST-format long descriptions that include unrelated things. In fact, I'm not even sure there are such things as long and short descriptions in a proper form, such that they could just be included in debian/control without manual work. I suspect this is just a difference between the two systems then. We do have such concepts as short and long description, but we support markup (via RST) in the long description, and obviously, since PyPI is not a curated index, there's nothing stopping people from doing whatever they want in those descriptions. Have to write dependencies by hand: Not sure what you mean by not depending on argparse, distribute, ordereddict, etc? argparse and ordereddict are often depended on because of Python 2.6. Right. I think this is an issue in Debian: we should have had a Provides: in Python 2.7, so that it wouldn't have mattered. I just hope this specific issue will fade away as Python 2.6 gets older and less used. For those particular cases, probably; the general issue likely won't go away though - it'll occur anytime a new version of Python adds a new module that is either already available separately or that someone writes a backport package for older versions of Python. On the plus side, the newer formats support conditional dependencies, so you can say things like: Requires-Dist: argparse; python_version == '2.6' which will cause it to only be a dependency on Python 2.6. The sdist format doesn't yet support this (although since setup.py is executable you can approximate it by generating a list of dependencies that varies depending on Python version). setuptools/distribute should only be depended on if the project is using entry points or something similar. If only everyone was using PBR... 
:) No unique place where dependencies are written: If the project is using setuptools (or is usable from pip) then dependencies should be inside the install_requires field in the setup.py. I can send some code for getting this information. Sadly it's not in a static form yet, so it requires executing the setup.py. Blindly executing setup.py before I can inspect it would be an issue. However, yes please, I'm curious how to extract the information, so please do send the code! I just woke up, so I'll extract it from pip and send it later today; however, the general gist is that you execute ``setup.py egg_info``, which will generate a .egg-info directory alongside the setup.py file, and inside of that is a requires.txt file which can be parsed to extract the dependencies. The gotchas here are that the egg_info command and the idea of dependencies at all are a setuptools feature, not distutils, so it only works if the project supports a setuptools-style setup.py. Even if they don't support it you can force the setup.py to use setuptools with a nasty
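[Editor's illustration] The requires.txt that ``setup.py egg_info`` writes is simple enough to parse by hand: unconditional requirement lines come first, and bracketed "[section]" headers introduce extras or (in the ":marker" form) environment-conditional dependencies that apply to the lines below. A rough sketch of such a parser (the helper name and the sample data are mine, not from pip):

```python
# Parse an egg-info requires.txt into {section_name: [requirements]}.
# '' is the default (unconditional) section; a "[:marker]" header marks
# dependencies that apply only when the environment marker holds.

def parse_requires_txt(text):
    """Return a dict mapping section name ('' for the default
    unconditional section) to a list of requirement strings."""
    sections = {'': []}
    current = ''
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]
            sections.setdefault(current, [])
        else:
            sections[current].append(line)
    return sections


# Hypothetical sample in the format described above.
sample = """\
pbr>=0.6,!=0.7,<1.0
six>=1.7.0

[:python_version=='2.6']
argparse
ordereddict
"""

deps = parse_requires_txt(sample)
```

This illustrates the conditional-dependency point above: a Debian packager could map the default section to Depends: and skip sections whose marker (e.g. python_version=='2.6') doesn't apply to the target Python.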
Re: [openstack-dev] [all] A more dynamic wiki, introducing Categories
Stefano Maffulli wrote: [...] Category pages are just like any other wiki page: they can have text, images, can be edited, translated etc. And they automatically show the pages contained in their category. As an example, see what the dynamic page for Programs https://wiki.openstack.org/wiki/Category:Programs and the 'static' version look like: https://wiki.openstack.org/wiki/Programs I would apply to Programs the same redirect I created for Teams (unless someone stops me). In this precise example, I feel like the dynamic page is much less usable than the static page, due to the deep hierarchy. On the static version, clicking on the program name would give us the main page for that program, while on the dynamic version, it would lead us to another list of categories/pages from which it's difficult to guess which one is the main one. We seem to bury the useful page down layers of category pages. Since the canonical list of programs is maintained in the governance repository (which should be published in a few days on its own website), I don't think we should duplicate it manually in the wiki anyway. My suggestion is therefore to recategorize all Category:Programs pages into Category:Teams pages, so that all teams (whether officially recognized as a program or not) appear on the same dynamic page as far as the wiki is concerned. If people are interested in getting the list of official programs, they should get it from the governance repo/site anyway. Pages generated automatically this way are a huge improvement to navigation inside the wiki, and I suggest you all get familiar with the list of Categories shown in a nice tree on: https://wiki.openstack.org/wiki/Category:Home +1 -- An autogenerated list of relevant pages shall make our wiki much more discoverable and browseable. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
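[Editor's illustration] For what it's worth, the recategorization suggested above is a one-line edit per page (the page name below is hypothetical); MediaWiki then lists the page on the dynamic Category:Teams page automatically, with no further maintenance:

```text
# On a team's wiki page (e.g. a hypothetical Wiki/SomeTeam), replace:
[[Category:Programs]]
# with:
[[Category:Teams]]
```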
Re: [openstack-dev] [Fuel] Separate code freeze for repos
Evgeniy, That means that the stable branch can be created for some repos earlier. For example, the fuel-web repo seems not to have critical issues for now, and I'd like the master branch of that repo to be opened for merging various stuff which shouldn't go to 6.0, without waiting until all other repos stabilize. 2014-11-14 16:42 GMT+03:00 Evgeniy L e...@mirantis.com: Hi, There was an idea to make a separate code freeze for repos Could you please clarify what you mean? I think we should have a way to merge patches for the next release even if it's code freeze for the current one. Thanks, On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Folks, There was an idea to make a separate code freeze for repos, but we decided not to do it. Do we plan to try it this time? It is really painful to maintain a multi-level tree of dependent review requests and wait for a few weeks until we can merge new stuff in master. -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator
On 11/14/2014 03:31 AM, Ihar Hrachyshka wrote: On 14/11/14 09:14, Flavio Percoco wrote: On 13/11/14 23:25 +, Amrith Kumar wrote: At the suggestion of Doug Hellmann, and relative to a conversation with him and Flavio at Summit. Doug suggested that I pose this question on the dev mailing list so someone from OSLO can communicate the answer to the entire community (rather than just the private email exchange that we had). Here’s the situation, I’m using loopingcall.py as an example, this is not limited to this module but serves as an example. An OSLO incubator module loopingcall depends on another OSLO incubator module timeutils. timeutils has graduated [drum-roll] and is now part of oslo.utils. There is also other project code that references timeutils. So, to handle the graduation of timeutils, the changes I’ll be making are: 1. Remove timeutils from openstack-common.conf 2. Make the project code reference oslo.utils But what of loopingcall? Should I a. Update it and change the import(s) therein to point to oslo.utils, or b. Sync the oslo-incubator code for loopingcall, picking up all changes at least up to and including the change in oslo-incubator that handles the graduation of oslo.utils. In speaking with Doug and Flavio, (after I submitted copious amounts of code that did (a)) above, I’ve come to learn that I chose the wrong answer. The correct answer is (b). This doesn’t have to be part of the same commit, and what I’ve ended up doing is this … c. Leave timeutils in project/openstack/common and let oslo-incubator depend on it while migrating the project to use oslo.utils. In a subsequent commit, a sync from oslo-incubator can happen. I’d like someone on OSLO to confirm this, and for other projects whose lead I followed, you may want to address these in the changes you have in flight or have already merged. `b` is the right answer there. 
As a general rule - probably the easiest way to solve the above dilemma - people should *never* modify incubator modules in the project. Sticking to this rule will automatically answer the question of how to update, maintain and consume code from oslo-incubator. Crazy idea: we should have a bot that -1's all the patches that modify oslo-incubator code without being marked by some special tag (OsloSync?). We've slipped several local modifications into those files before (I know of two cases in Neutron, though I hardly monitor the whole patch queue). We had this problem a lot in Cinder in the past. With education and monitoring from the cores we have been able to avoid it. The idea of monitoring for such commits is a good one. I wonder, however, if there is a way to do it with a hacking check? If there are projects that picked `a` as the right answer, please update your patches and follow the already well-defined workflow for oslo-incubator. Doing otherwise will just make things harder for us who maintain oslo, for stable maintenance, and for your own contributors. Amrith, thanks for bringing this up and for updating your patches; I know it's a pain and I appreciate your collaboration there. Cheers, Flavio P.S: Gentle note. Oslo is not an acronym. 
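[Editor's illustration] The "bot" idea above could amount to little more than diffing a project's copied openstack/common modules against a pristine oslo-incubator checkout and flagging any local divergence. A minimal sketch under those assumptions (the function name and directory layout are hypothetical; a real gate job would wire this into a gerrit check, and the demo at the end just exercises it on temporary directories):

```python
# Flag project-local modifications to copied oslo-incubator modules by
# comparing them byte-for-byte against an incubator checkout.
import filecmp
import os
import tempfile


def modified_incubator_files(project_common, incubator_common):
    """Return relative paths of .py files under project_common that
    differ from (or are missing in) incubator_common."""
    modified = []
    for root, _dirs, files in os.walk(project_common):
        for name in files:
            if not name.endswith('.py'):
                continue
            local = os.path.join(root, name)
            rel = os.path.relpath(local, project_common)
            upstream = os.path.join(incubator_common, rel)
            if not os.path.exists(upstream):
                modified.append(rel)
            elif not filecmp.cmp(local, upstream, shallow=False):
                modified.append(rel)
    return sorted(modified)


# Tiny self-contained demo: one locally patched file, one untouched.
proj = tempfile.mkdtemp()
incub = tempfile.mkdtemp()
with open(os.path.join(proj, 'loopingcall.py'), 'w') as f:
    f.write('locally patched copy')
with open(os.path.join(incub, 'loopingcall.py'), 'w') as f:
    f.write('pristine incubator copy')
for d in (proj, incub):
    with open(os.path.join(d, 'timeutils.py'), 'w') as f:
        f.write('identical in both trees')

result = modified_incubator_files(proj, incub)
```

A check like this would catch exactly the drift described above: loopingcall.py shows up as modified, while the unsynced-but-untouched timeutils.py does not.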
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 14/11/14 13:02, Richard Jones wrote: On 14 November 2014 18:51, Radomir Dopieralski openst...@sheep.art.pl wrote: On 13/11/14 23:30, Martin Geisler wrote: [...] Maybe a difference is that you don't (yet) install a web application like you install a system application. Instead you *deploy* it: you unpack files on a webserver, you configure permissions, you set up cache rules, you configure a database, etc. [...] In order to do that, they can't just take a tarball and drop it in a directory. They always produce their own builds, to make sure it's the same thing that the source code specifies. They sometimes have to rearrange configuration files, to make them fit the standards in their system. They provide sane configuration defaults. They track the security reports about all the installed components, and apply fixes, often before the security issue is even announced. [...] I think that it boils down to whether it is possible that distributions: 1. package the node-based tools (grunt, karma, protractor, ...) as installable programs, and 2. xstatic-package the bower-based packages that we use (probably a couple of dozen at least). We might even be able to get away without using grunt, though an alternative to its LiveReload facility (and one that doesn't then also depend on another node program like django-livereload does) would be required. I believe tox and django's runserver (and other manage commands) could suffice to replace the other functionality typically provided by grunt. We don't really need Xstatic for that. The packagers can just as well package the bower-based packages directly. We can use anything, really, as long as we follow a process that makes sure that Horizon can be packaged into the different distributions. That is, we need: 1. All dependencies explicit (with tests failing if a dependency is missing), 2. explicit version ranges, 3. no multiple versions of the same library, 4.
additions and upgrades of libraries moderated by the packagers, 5. ability to replace the development environment with packaged libraries from the system, 6. ability to build and test our software with the tools that can be used by all the distributions. As I said, this is more of a process thing than a tool thing -- I believe any tool can be used to follow this process, more or less automatically. -- Radomir Dopieralski
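Rules 1-3 above lend themselves to a mechanical check. The sketch below is illustrative only (the data shape and function name are invented): it scans a flattened list of (library, version spec) requirements collected from every component, flagging unpinned entries and conflicting versions of the same library.

```python
# Illustrative sketch of rules 1-3 from the list above, applied to a
# flattened bower.json/package.json-style requirements list. The data
# shape and function name are hypothetical.

def check_dependencies(requirements):
    """requirements: iterable of (library, version_spec) pairs collected
    from every component; returns a list of policy violations."""
    problems = []
    pinned = {}
    for name, spec in requirements:
        if not spec or spec in ("*", "latest"):
            # rule 2: every dependency needs an explicit version range
            problems.append("no explicit version range for %s" % name)
        elif name in pinned and pinned[name] != spec:
            # rule 3: no multiple versions of the same library
            problems.append("multiple versions of %s: %s vs %s"
                            % (name, pinned[name], spec))
        else:
            pinned.setdefault(name, spec)
    return problems
```

Rule 1 (all dependencies explicit) is covered by running this against the union of every component's declared requirements and letting tests fail on anything undeclared.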
Re: [openstack-dev] improving PyPi modules design FHS (was: the future of angularjs development in Horizon)
On Fri, Nov 14, 2014 at 08:44:24AM -0500, Donald Stufft wrote: On Nov 13, 2014, at 6:29 PM, Thomas Goirand z...@debian.org wrote: On 11/14/2014 06:40 AM, Donald Stufft wrote: [okay, well, actually Thomas Goirand wrote:] No, that's for arch independent *things*. Like for example, javascript. In Debian, these go in /usr/share/javascript. Python code used to live within /usr/share/pyshared too (but we stopped the symlink forest during the Jessie cycle). [and Donald Stufft replied:] Why does the FHS webpage say differently? From [1]: The /usr/share hierarchy is for all read-only architecture independent data files. Which is exactly what I wrote. Oh, maybe it's the data files that bothers you? Well, in some ways, javascript can be considered as data files. But let's take another example. PHP, java and perl library files are all stored in /usr/share as well (though surprisingly, ruby is in /usr/lib... but maybe because it also integrates compiled-in .so files). Yea it's the data files part (which is why I added the * * around it in my original message). Maybe the FHS confuses terminology here but I wouldn't, and I suspect the NPM maintainers feel the same way, classify software that is designed to be executed on the server as "data". One of the easiest ways to understand why Debian and other systems like to put architecture-independent interpreted language files (Perl, Python, JavaScript, etc) in /usr/share instead of /usr/lib actually goes back way further in the past: back to the time when /usr or parts of /usr were, well, shared between machines. The idea is that if there is a large set of files that will be absolutely, character-for-character, bit-for-bit identical if installed on different architectures, they may only be installed once and then reused from there.
Thus, interpreted source is put in /usr/share, while compiled object files (.a files, shared object files, shared libraries, Git helper binaries, etc) are put in /usr/lib, which will be different for each machine. The Debian package archive takes this one step sideways and says arch-independent data should be split into a separate Debian package, put in /usr/share, and not just installed the same way on any architecture, but *only exist in a single copy* in the Debian package archive. Thus, pure-Perl, pure-Python, pure-JavaScript (or just JavaScript, I guess ;)) packages will provide only a single Debian package that may be installed as-is on any architecture and put files in /usr/share/{perl,python,javascript,...} that will just work everywhere. More recently in Debian history, this already-existing split between arch-independent stuff in /usr/share and arch-dependent stuff in /usr/lib has been used and even extended once more for multiarch packages, but that's another story for another day :) So in short, the idea that anything arch-independent may live in /usr/share and be used unmodified by any machine on any architecture kind of makes sense to me personally and to the Debian project as a whole :) G'luck, Peter -- Peter Pentchev r...@ringlet.net r...@freebsd.org p.penc...@storpool.com
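The split Peter describes can be illustrated with a toy classifier. The extension lists are deliberately simplified and the function is hypothetical; real Debian policy is considerably more nuanced.

```python
import os.path

# Toy illustration of the /usr/share vs /usr/lib split described above:
# bit-for-bit identical (interpreted) files go under /usr/share, while
# compiled, per-architecture artifacts go under /usr/lib. The extension
# lists are simplified for the example.

ARCH_INDEPENDENT = {".py", ".pm", ".js", ".php"}   # interpreted source
ARCH_DEPENDENT = {".so", ".a", ".o"}               # compiled objects


def install_prefix(filename):
    """Return the install prefix a file would land under, or None if
    this toy policy has no opinion."""
    ext = os.path.splitext(filename)[1]
    if ext in ARCH_INDEPENDENT:
        return "/usr/share"
    if ext in ARCH_DEPENDENT:
        return "/usr/lib"
    return None
```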
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/11/2014 03:02 PM, Richard Jones wrote: json3 es5-shim angular angular-route angular-cookies angular-animate angular-sanitize angular-smart-table angular-local-storage angular-bootstrap angular-translate font-awesome boot underscore ng-websocket Just FYI, in Debian, the libjs-angularjs already carries: angular-route.js angular-cookies.js angular-animate.js angular-sanitize.js We also already have packaged: font-awesome underscore So, basically, I'd have to package: json3 es5-shim boot angular-smart-table angular-local-storage angular-translate ng-websocket That's a reasonable amount of work. Multiply this by 2 for the xstatic packages (if we keep using that), that's about 14 new packages. By the way, can't we use libjs-sockjs instead of ng-websocket? Last, I'm ok if we add all these, but please, let's do this in the beginning of the Kilo cycle. It was really hard to cope with it at the end of the freeze for Juno. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage
Excerpts from Chris Jones's message of 2014-11-14 00:42:48 -0800: Hi My thoughts: Shoe-horning the ephemeral partition into Cinder seems like a lot of pain for almost no gain[1]. The only gain I can think of would be that we could bring a node down, boot it into a special ramdisk that exposes the volume to the network, so cindery operations (e.g. migration) could be performed, but I'm not even sure if anyone is asking for that? Forcing Cinder to understand and track something it can never normally do anything with, seems like we're just trying to squeeze ourselves into an ever-shrinking VM costume! Having said that, "preserve ephemeral" is a terrible oxymoron, so if we can do something about it, we probably should. How about instead, we teach Nova/Ironic about a concept of "no ephemeral"? They make a partition on the first disk for the first image they deploy, and then they never touch the other part(s) of the disk(s), until the instance is destroyed. This creates one additional burden for operators, which is to create and format a partition the first time they boot, but since this is a very small number of commands, and something we could trivially bake into our (root?) elements, I'm not sure it's a huge problem. This gets rid of the cognitive dissonance of preserving something that is described as ephemeral, and (IMO) makes it extremely clear that OpenStack isn't going to touch anything but the first partition of the first disk. If this were baked into the flavour rather than something we tack onto a nova rebuild command, it offers greater safety for operators, against the risk of accidentally wiping a vital state partition with a misconstructed rebuild command. +1 A predictable and simple rule seems like it would go a long way to decoupling state preservation from rebuild, which I like very much.
There is, of course, the issue of decom then, but that has never been a concern for TripleO, and for OnMetal, they think we're a bit daft trying to preserve state while delivering new images anyway. :)
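Chris's "no ephemeral" rule is simple enough to state as code. The data model below is invented for illustration; it only captures the invariant that a rebuild may rewrite the first partition of the first disk and nothing else until the instance is destroyed.

```python
# Hypothetical sketch of the "no ephemeral" invariant discussed above:
# a rebuild may rewrite only the first partition of the first disk;
# every other partition is preserved untouched. The data model (a list
# of disks, each a list of partition names) is invented for illustration.

def partitions_to_reimage(disks):
    """Return the partitions a rebuild is allowed to rewrite."""
    if not disks or not disks[0]:
        return []
    return [disks[0][0]]  # first partition of first disk only


def partitions_preserved(disks):
    """Return every partition the rebuild must leave alone."""
    touched = set(partitions_to_reimage(disks))
    return [p for disk in disks for p in disk if p not in touched]
```

Baking the rule into the flavor, as suggested, would mean this decision is made once by the operator rather than per rebuild command.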
Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found
On 2014-11-14 00:34:14 -0500 (-0500), Wan, Sam wrote: Seems we need to use python-keystoneclient and python-openstackclient from git.openstack.org because those on pip don’t work. That's a bug we're (collectively) trying to prevent in the future. Services, even under development, should not depend on features only available in unreleased versions of libraries. But in latest update of stack.sh, it’s to use pip by default [...] And this is intentional, implemented specifically so that we can keep it from happening again. -- Jeremy Stanley
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
Thomas Goirand z...@debian.org writes: On 11/14/2014 06:30 AM, Martin Geisler wrote: That is, I think Horizon developers will use these tools to produce a release -- a tarball -- and that tarball will be something you unpack on your webserver and then you're done. I base this on what I've seen in the project I've been working on. The release tarball you download here doesn't mention npm, bower, or any of the other tools: https://github.com/zerovm/swift-browser/releases The tools were used to produce the tarball and were used to test it, but they're not part of the released product. Somewhat similar to how GCC isn't included in the tarball if you download a pre-compiled binary. When doing packages, I don't even use the tarball, but a git clone, which itself produces an orig.tar.xz file. I do that to allow more flexibility and to be able to do upstream code changes easily. That seems to be a choice you're making -- I think you could also use the upstream tarball as provided (let's say I also include unminified source files in the tarball). On 11/14/2014 06:30 AM, Martin Geisler wrote: Maybe a difference is that you don't (yet) install a web application like you install a system application. Instead you *deploy* it: you unpack files on a webserver, you configure permissions, you set up cache rules, you configure a database, etc. I really don't see why a web application should be any different from any other component of OpenStack. No, I won't deploy it, I will just apt-get install it... I'm a long time Debian user and web developer. I install system tools (web servers, databases, editors) and I deploy web applications. I believe that's the most common way to handle web applications today. On 11/14/2014 06:30 AM, Martin Geisler wrote: I agree that it would be cool to have web apps be as robust and general purpose as system apps. However, I think that day is a ways off. I'm not sure why you are saying this.
Horizon works out of the box in Debian, and so are murano-dashboard and the sahara support. That's cool! On 11/14/2014 06:30 AM, Martin Geisler wrote: The dependency solver is as good as the community needs it to be. Or put differently, if the JavaScript community is able to produce working software with npm, then they obviously produce it within the bounds of the capabilities of its dependency solver. I'm happy to believe that apt has a top-notch and highly tuned dependency solver. That doesn't really matter since it would be solving problems we don't have. Dependency solving is pure math. It's very hard to get it right. I don't agree that some language may need something weaker, and that it's possible for the maintainers to adapt. It's just that it may, in some cases, be possible to work around some defects if they exist, but everyone needs a robust dependency solver. I think you're misunderstanding the implication. If apt has a stronger dependency solver than npm, then that's fine. The argument that apt is stronger than npm is not an argument for moving node packages from npm to apt -- the Debian packages will still only use a subset of apt's dependency solver, namely the subset they use with npm. On 11/14/2014 06:30 AM, Martin Geisler wrote: In my view, you're taking on way too much work by going into those details. I don't think I need or want you to do anything more than repack the tarball that npm retrieves -- I don't think you should run tests or generate documentation. Of course, I need to run tests. That's a big part of the QA work, and I will certainly not give up on that. You will have a hard time convincing anyone within the OpenStack community that it's OK to not run unit tests. That's not what I said: the OpenStack developers will continue to test the software. I personally don't think it's the job of the downstream packagers to do this QA work.
(It's of course cool to run the tests on the system installed by your packages -- that test run would then install the needed tools using npm and bower and whatnot if that's how the upstream has set up the test framework.) On 11/14/2014 06:30 AM, Martin Geisler wrote: As a user or sysadmin, I would be happy to add a deb line to my sources.list and get Debian packages that wrap the node modules. This means that the packages would *not* be in Debian. Therefore, Horizon couldn't be uploaded to Debian (as there would be some unavailable dependencies). That's absolutely not what I want to do. I want Horizon, just like the rest of OpenStack, to be fully in Debian. You don't have to convince me -- I'm not going to deploy OpenStack anytime soon (apart from DevStack). So I'm not really the right customer for these packages. All I wanted to say is that if there is a cool OpenStack dashboard written as a web app, then I would be happy to deploy it like I deploy other web apps. If there were a package I could use, that would be cool, but it's by no means a show-stopper for someone like me.
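Martin's point that packaged node modules would exercise only a subset of any solver can be made concrete with npm's caret ranges, which map onto plain version bounds. The sketch below is illustrative and deliberately simplified: it ignores npm's special-casing of 0.x versions and pre-release tags.

```python
# Simplified illustration of npm-style "compatible with" version specs:
# "^1.2.3" maps to the half-open range [1.2.3, 2.0.0), i.e. plain bounds
# any packaging system can express. Ignores 0.x special cases and
# pre-release tags for the sake of the example.

def caret_bounds(spec):
    """'^1.2.3' -> ((1, 2, 3), (2, 0, 0)): lower inclusive, upper exclusive."""
    major, minor, patch = (int(x) for x in spec.lstrip("^").split("."))
    return (major, minor, patch), (major + 1, 0, 0)


def satisfies(version, spec):
    """True if a dotted version string falls inside a caret range."""
    v = tuple(int(x) for x in version.split("."))
    lo, hi = caret_bounds(spec)
    return lo <= v < hi
```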
Re: [openstack-dev] [Fuel] Separate code freeze for repos
Guys, The idea of separate unfreezing is cool itself, but we have to define some rules for deciding that fuel-web is stable. I mean, in fuel-web we have different projects, so when Fuel UI is stable, the fuel_upgrade or Nailgun may not be. - Igor On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Evgeniy, That means that the stable branch can be created for some repos earlier. For example, the fuel-web repo seems not to have critical issues for now and I'd like the master branch of that repo to be opened for merging various stuff which shouldn't go to 6.0, and not wait until all other repos stabilize. 2014-11-14 16:42 GMT+03:00 Evgeniy L e...@mirantis.com: Hi, There was an idea to make a separate code freeze for repos Could you please clarify what you mean? I think we should have a way to merge patches for the next release even if it's code freeze for the current. Thanks, On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Folks, There was an idea to make a separate code freeze for repos, but we decided not to do it. Do we plan to try it this time? It is really painful to maintain a multi-level tree of dependent review requests and wait for a few weeks until we can merge new stuff in master. -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc.
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 2014-11-14 15:10:59 +0100 (+0100), Martin Geisler wrote: [...] That's not what I said: the OpenStack developers will continue to tests the software. I personally don't think it's the job of the downstream packagers to do this QA work. (It's of course cool to run the tests on the system installed by your packages -- that test run would then install the needed tools using npm and bower and whatnot if that's how the upstream has setup the test framework.) [...] Just to quibble on this particular point... distro packagers are also developers. They often (more often than we'd like, and we do try to find ways to help avoid this where possible) need to carry their own patches to tweak the software to fit their deployment and operation model. Being able to rerun tests in-place with packaged versions of everything including their patches helps them confirm that what they distribute still works as intended. Further, the distro users are well within their rights to modify and respin these packages themselves, and will potentially want to be able to run these tests for the same reasons. We distribute our tests as part of our software because our tests *are* part of our software. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Separate code freeze for repos
There is a proposal to consider a repo as stable if there are no high/critical bugs and there were no such new bugs with this priority for the last 3 days. I'm ok with it. 2014-11-14 17:16 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com: Guys, The idea of separate unfreezing is cool itself, but we have to define some rules how to define that fuel-web is stable. I mean, in fuel-web we have different projects, so when Fuel UI is stable, the fuel_upgrade or Nailgun may be not. - Igor On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Evgeniy, That means that the stable branch can be created for some repos earlier. For example, fuel-web repo seems not to have critical issues for now and I'd like master branch of that repo to be opened for merging various stuff which shouldn't go to 6.0 and do not wait until all other repos stabilize. 2014-11-14 16:42 GMT+03:00 Evgeniy L e...@mirantis.com: Hi, There was an idea to make a separate code freeze for repos Could you please clarify what do you mean? I think we should have a way to merge patches for the next release event if it's code freeze for the current. Thanks, On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Folks, There was an idea to make a separate code freeze for repos, but we decided not to do it. Do we plan to try it this time? It is really painful to maintain multi-level tree of dependent review requests and wait for a few weeks until we can merge new stuff in master. -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. 
-- Vitaly Kramskikh, Software Engineer, Mirantis, Inc.
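The proposed stability rule reads naturally as a predicate. The bug record layout below is invented for illustration; a real implementation would query the project's bug tracker (Launchpad) rather than take a list of dicts.

```python
import datetime

# The rule proposed in this thread, as a predicate: a repo counts as
# "stable" when it has no open high/critical bugs and no such bug was
# filed in the last three days. The bug structure is hypothetical.

def repo_is_stable(bugs, now, quiet_days=3):
    """bugs: list of dicts with 'priority', 'open' and 'created' keys."""
    cutoff = now - datetime.timedelta(days=quiet_days)
    for bug in bugs:
        if bug["priority"] not in ("high", "critical"):
            continue
        if bug["open"]:
            return False          # open high/critical bug -> not stable
        if bug["created"] >= cutoff:
            return False          # new high/critical bug in quiet window
    return True
```

A per-repo check like this would let fuel-web branch for stable independently of repos that still have critical issues, which is exactly the separate-unfreezing idea above.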
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 2014-11-14 08:31:37 -0500 (-0500), Donald Stufft wrote: [...] with TLS you depend on the web server to not be compromised [...] Or in some cases, the CDN. ;) -- Jeremy Stanley
Re: [openstack-dev] [neutron] L2 gateway as a service
Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through a service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement of the design summit discussion in the ML? I see at least 3 specs which would be under such an umbrella spec: https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that? Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti
Re: [openstack-dev] [oslo][kite] oslo.messaging changes for message security
On Nov 13, 2014, at 8:47 PM, Jamie Lennox jamielen...@redhat.com wrote: Hi all, To implement kite we need the ability to sign and encrypt the message and the message data. This needs to happen at a very low level in the oslo.messaging stack. The existing message security review (https://review.openstack.org/#/c/109806/) isn't going to be sufficient. It allows us to sign/encrypt only the message data, ignoring the information in the context and not allowing us to sign the message as a whole. It would also intercept and sign notifications, which is not something that kite can do. Mostly this is an issue of how the oslo.messaging library is constructed. The choice of how data is serialized for transmission (including things like how you arrange context and message data in the payload) is handled individually by the driver layer rather than in a common higher location. All the drivers use the same helper functions for this and so it isn't a problem in practice. Essentially I need a stateful serializing/deserializing object (I need to store keys and hold things like a connection to the kite server) that either extends or replaces oslo.messaging._drivers.common.serialize_msg and deserialize_msg and their exception counterparts. There are a couple of ways I can see to do what I need: 1. Kite becomes a more integral part of oslo.messaging and the marshalling and verification code becomes part of the existing RPC path. This is how it was initially proposed; it does not provide a good story for future or alternative implementations. Oslo.messaging would either have a dependency on kiteclient, implement its own ways of talking to the server, or have some hack that imports kiteclient if available. 2. Essentially I add a global object loaded from conf to the existing common RPC file. Pro: the existing drivers continue to work as they do today. Con: global state held by a library. However, given the way oslo.messaging works, I'm not really sure how much of a problem this is.
We typically load transport from a predefined location in the conf file and we're not really in a situation where you might want to construct different transports with different parameters in the same project. 3. I create a protocol object out of the RPC code that kite can subclass and the protocol can be chosen by CONF when the transport/driver is created. This still touches a lot of places as the protocol object would need to be passed to all messages, consumers etc. It involves changing the interface of the drivers to accept this new object and changes in each of the drivers to work with the new protocol object rather than the existing helpers. 4. As the last option requires changing the driver interface anyway we try and correct the driver interfaces completely. The driver send and receive functions that currently accept a context and args parameters should only accept a generic object/string consisting of already marshalled data. The code that handles serializing and deserializing gets moved to a higher level and kite would be pluggable there with the current RPC being default. None of these options involve changing the public facing interfaces nor the messages emitted on the wire (when kite is not used). I've been playing a little with option 3 and I don't think it's worth it. There is a lot of code change and additional object passing that I don't think improves the library in general. Before I go too far down the path with option 4 I'd like to hear the thoughts of people more familiar with the library. Is there a reason that the drivers currently handle marshalling rather than the RPC layer? It may have been an artifact of the evolution of that code, but I seem to remember at some point that one of the drivers had a limitation either in the byte-values allowed or the number of bytes allowed in a message. They all seem to be doing roughly the same thing to construct the messages now, though, so I’m not sure if that’s really true. 
I know there is ongoing talk about evolving the oslo.messaging library, I unfortunately didn't make it to the sessions at summit. Has this problem been raised? How would it affect those talks? Security did come up, but I don’t see where we recorded any decisions in the etherpad: https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging Our bigger concern was lack of reviewers for the library, which will make any large-scale refactoring problematic. Is there explicit/implicit support for out of tree drivers that would disallow changing these interfaces? There’s implicit support, but we haven’t formally called those APIs stable. Does anyone have alternative ideas on how to organize the library for message security? Thanks for the help. Jamie
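Options 3/4 above amount to a pluggable protocol object. The classes below are a rough, hypothetical sketch and not oslo.messaging code: a default protocol that marshals context plus args into one payload, and a subclass standing in for kite that signs the whole message (context included) with an HMAC.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of the "protocol object" idea (options 3/4): the
# marshalling that today lives in driver-level helpers such as
# oslo.messaging._drivers.common.serialize_msg is hoisted into a
# pluggable object chosen from configuration. Class names are invented.

class PlainProtocol(object):
    """Default behaviour: marshal context + args into one payload."""

    def serialize(self, context, msg):
        return json.dumps({"context": context, "msg": msg})

    def deserialize(self, data):
        payload = json.loads(data)
        return payload["context"], payload["msg"]


class SignedProtocol(PlainProtocol):
    """Stand-in for a kite-backed protocol: signs the whole message,
    not just the args, so the context is covered too."""

    def __init__(self, key):
        self.key = key  # in kite, keys would come from the kite server

    def serialize(self, context, msg):
        body = super().serialize(context, msg)
        sig = hmac.new(self.key, body.encode("utf-8"),
                       hashlib.sha256).hexdigest()
        return json.dumps({"body": body, "sig": sig})

    def deserialize(self, data):
        envelope = json.loads(data)
        expected = hmac.new(self.key, envelope["body"].encode("utf-8"),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, envelope["sig"]):
            raise ValueError("bad signature")
        return super().deserialize(envelope["body"])
```

The drivers would then call `protocol.serialize()`/`protocol.deserialize()` instead of their own helpers, which is exactly the interface change that makes option 3 touch so many places.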
Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()
I don't think it's a problem. It puts a practical limit on the scope of an 'api call' which can be covered by a single database transaction though, because it would be difficult to arrange for 2 RPC calls to both use the same DB transaction on the remote end. I think we agree on this. Sure, but since RPC calls can be queued for *minutes* this is probably not a realistic problem, right? --Dan
Re: [openstack-dev] [neutron] L2 gateway as a service
Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be folded into a more generic effort for bringing existing datacenter L2 resources into Neutron. Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through a service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement of the design summit discussion in the ML? I see at least 3 specs which would be under such an umbrella spec: https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that?
Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti -- Igor Duarte Cardoso. http://igordcard.com @igordcard
Re: [openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator
On Nov 14, 2014, at 4:31 AM, Ihar Hrachyshka ihrac...@redhat.com wrote: Signed PGP part On 14/11/14 09:14, Flavio Percoco wrote: On 13/11/14 23:25 +, Amrith Kumar wrote: At the suggestion of Doug Hellmann, and relative to a conversation with him and Flavio at Summit. Doug suggested that I pose this question on the dev mailing list so someone from OSLO can communicate the answer to the entire community (rather than just the private email exchange that we had). Here’s the situation, I’m using loopingcall.py as an example, this is not limited to this module but serves as an example. An OSLO incubator module loopingcall depends on another OSLO incubator module timeutils. timeutils has graduated [drum-roll] and is now part of oslo.utils. There is also other project code that references timeutils. So, to handle the graduation of timeutils, the changes I’ll be making are: 1. Remove timeutils from openstack-common.conf 2. Make the project code reference oslo.utils But what of loopingcall? Should I a. Update it and change the import(s) therein to point to oslo.utils, or b. Sync the oslo-incubator code for loopingcall, picking up all changes at least upto and including the change in oslo-incubator that handles the graduation of oslo.utils. In speaking with Doug and Flavio, (after I submitted copious amounts of code that did (a)) above, I’ve come to learn that I chose the wrong answer. The correct answer is (b). This doesn’t have to be part of the same commit, and what I’ve ended up doing is this … c. Leave timeutils in project/openstack/common and let oslo-incubator depend on it while migrating the project to use oslo.utils. In a subsequent commit, a sync from oslo-incubator can happen. I’d like someone on OSLO to confirm this, and for other projects whose lead I followed, you may want to address these in the changes you have in flight or have already merged. `b` is the right answer there. 
As a general rule - probably the easiest way to solve the above dilemma - people should *never* modify incubator modules in the project. Sticking to this rule will automatically answer the question of how to update, maintain and consume code from oslo-incubator. Crazy idea: we should have a bot that -1's all the patches that modify oslo-incubator code without being marked by some special tag (OsloSync?). We've slipped several local modifications into those files before (I know of two cases in Neutron, though I hardly monitor all the patch queue). Anyone with energy to put into automating checks like this is welcome to come help us graduate libraries instead. :-) Doug If there are projects that picked `a` as the right answer, please update your patches and follow the already well-defined workflow for oslo-incubator. Doing otherwise will just make things harder for those of us who maintain oslo, for stable maintenance and for your own contributors. Amrith, thanks for bringing this up and for updating your patches; I know it's a pain and I appreciate your collaboration there. Cheers, Flavio P.S: Gentle note. Oslo is not an acronym. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
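For readers unfamiliar with the incubator sync workflow, the two project-side steps Amrith lists can be illustrated with the project's openstack-common.conf, which drives the sync tool (the module list and `base` project name below are hypothetical):

```ini
# openstack-common.conf (illustrative): the list of incubator modules
# this project syncs. After timeutils graduates into oslo.utils, its
# line is removed so future syncs stop copying it; loopingcall stays
# and is re-synced from the incubator, which by then imports timeutils
# from oslo.utils itself.
[DEFAULT]
module=loopingcall
base=myproject
```

A subsequent sync, typically run from an oslo-incubator checkout (e.g. `python update.py ../myproject`), then picks up the incubator's own graduation changes — which is option (b) above.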
Re: [openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator
On Nov 14, 2014, at 3:14 AM, Flavio Percoco fla...@redhat.com wrote: On 13/11/14 23:25 +, Amrith Kumar wrote: At the suggestion of Doug Hellmann, and relative to a conversation with him and Flavio at Summit. Doug suggested that I pose this question on the dev mailing list so someone from OSLO can communicate the answer to the entire community (rather than just the private email exchange that we had). Here’s the situation. I’m using loopingcall.py as an example; this is not limited to this module but serves as an example. An OSLO incubator module loopingcall depends on another OSLO incubator module timeutils. timeutils has graduated [drum-roll] and is now part of oslo.utils. There is also other project code that references timeutils. So, to handle the graduation of timeutils, the changes I’ll be making are: 1. Remove timeutils from openstack-common.conf 2. Make the project code reference oslo.utils But what of loopingcall? Should I a. Update it and change the import(s) therein to point to oslo.utils, or b. Sync the oslo-incubator code for loopingcall, picking up all changes at least up to and including the change in oslo-incubator that handles the graduation of oslo.utils. In speaking with Doug and Flavio (after I submitted copious amounts of code that did (a) above), I’ve come to learn that I chose the wrong answer. The correct answer is (b). This doesn’t have to be part of the same commit, and what I’ve ended up doing is this … c. Leave timeutils in project/openstack/common and let oslo-incubator depend on it while migrating the project to use oslo.utils. In a subsequent commit, a sync from oslo-incubator can happen. I’d like someone on OSLO to confirm this, and for other projects whose lead I followed, you may want to address these in the changes you have in flight or have already merged. `b` is the right answer there. As a general rule - probably the easiest way to solve the above dilemma - people should *never* modify incubator modules in the project. 
Sticking to this rule will automatically answer the question of how to update, maintain and consume code from oslo-incubator. Yes, that’s right. We’ve worked very hard to ensure that the incubator always works with released Oslo libraries, specifically so projects will not have to hack up versions of those modules in order to adopt the libraries. That will, at times, mean you have to adopt multiple libraries in a single patch. This is annoying from a “one patch, one change” standpoint, but it is preferable from the “always keep master working” standpoint. After Kilo, the rate of graduations will drop off significantly, so this won’t be an issue any more. Please bear with us in the meantime. If there are projects that picked `a` as the right answer, please update your patches and follow the already well-defined workflow for oslo-incubator. Doing otherwise will just make things harder for those of us who maintain oslo, for stable maintenance and for your own contributors. The Oslo team is under no obligation to support Oslo code modified outside of an Oslo code repository. I don’t know what stance the stable maintenance team takes on the subject, but I would expect them similarly to insist on following community practices. Doug Amrith, thanks for bringing this up and for updating your patches; I know it's a pain and I appreciate your collaboration there. Cheers, Flavio P.S: Gentle note. Oslo is not an acronym. -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [oslo] updated release instructions
Last cycle Thierry put together a tool to make releasing Oslo libraries easier for us. Using the tool will ensure that all of our releases are tracked consistently, but it’s going to mean a few procedural changes for us in launchpad, especially with the way we use milestones. I’ve updated https://wiki.openstack.org/wiki/Oslo/ReleaseProcess with the basic instructions, and I think all of our existing libraries are configured with the relevant series and milestones. Please look over the instructions and let me know if you have questions. I would like to ensure that each library lead (or someone they designate) is able to handle library releases this cycle so I am not a bottleneck for us. Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/14/2014 09:05 AM, Thomas Goirand wrote: That's a reasonable amount of work. Multiply this by 2 for the xstatic packages (if we keep using that), that's about 14 new packages. By the way, can't we use libjs-sockjs instead of ng-websocket? Last, I'm ok if we add all these, but please, let's do this in the beginning of the Kilo cycle. It was really hard to cope with it at the end of the freeze for Juno. Hear hear! And good work by all of the package maintainers. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] LeastNetwork scheduling for DHCP
Hi all, I would definitely be glad to work on the subject as well. However, I am not sure I fully understand Armando's last remark on our change. I will try to discuss it with him on IRC. Regards, Benjamin GRASSART From: S M, Praveen Kumar [mailto:praveen-sm.ku...@hp.com] Sent: Friday, November 7, 2014 09:27 To: Narasimhan, Vivekanandan; OpenStack Development Mailing List (not for usage questions) Cc: Beltur, Jayashree; GRASSART Benjamin; Sourabh Patwardhan (sopatwar); M, Shiva Kumar; A, Keshava Subject: RE: [Neutron] LeastNetwork scheduling for DHCP Hi Vivek, We are definitely interested in working on these blueprints collaboratively. We have a working implementation for our blueprint, received a few important comments from Armando, and are addressing them currently. Regards Praveen. From: Narasimhan, Vivekanandan Sent: Thursday, November 06, 2014 9:09 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Beltur, Jayashree; S M, Praveen Kumar; benjamin.grass...@thalesgroup.com; Sourabh Patwardhan (sopatwar) Subject: [Neutron] LeastNetwork scheduling for DHCP Hi Neutron Stackers, There is interest among vendors in bringing least-networks scheduling for DHCP into OpenStack Neutron. Currently there are the following blueprints, all of them trying to address this issue: https://review.openstack.org/111210 https://review.openstack.org/#/c/130912/ https://review.openstack.org/104587 We are trying to pull together all these BPs as one umbrella BP, to which volunteers from every side can contribute, to clear out this BP itself as an initial step. So we would like to collaborate to plan BP approval for these. Please respond if you are interested. -- Thanks, Vivek ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
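For readers unfamiliar with the idea, "least networks" scheduling simply means binding each new network to the DHCP agent currently hosting the fewest networks, instead of picking an agent at random. A minimal sketch of the selection logic (illustrative only; not the code proposed in any of the blueprints above, and the data structure is a stand-in for Neutron's agent bindings):

```python
# Illustrative "least networks" DHCP scheduling sketch.
# agents_to_networks is a hypothetical stand-in for the agent/network
# bindings Neutron keeps in its database.

def schedule_network(agents_to_networks, network_id):
    """Assign network_id to the least-loaded agent; return that agent's id.

    agents_to_networks: dict mapping agent id -> list of hosted network ids.
    """
    # Pick the agent hosting the fewest networks (ties broken arbitrarily).
    agent = min(agents_to_networks, key=lambda a: len(agents_to_networks[a]))
    agents_to_networks[agent].append(network_id)
    return agent

agents = {"dhcp-agent-1": ["net-a", "net-b"], "dhcp-agent-2": ["net-c"]}
print(schedule_network(agents, "net-d"))  # -> dhcp-agent-2 (fewest networks)
```

The interesting design questions in the blueprints are around doing this selection race-free against the database, not the selection rule itself.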
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 11/14/14, 4:24 AM, Jaromir Coufal jcou...@redhat.com wrote: On 11/11/14 09:30, Jiri Tomasek wrote: On 11/10/2014 12:19 PM, Matthias Runge wrote: On Thu, Oct 30, 2014 at 01:13:48PM +0100, Matthias Runge wrote: Hi, tl;dr: how to proceed in separating horizon and openstack_dashboard About a year ago now we agreed it makes sense to separate horizon and openstack_dashboard. At the past summit, we discussed this again. Currently, our repo contains two directories: horizon and openstack_dashboard; they both will need new names. We discussed a renaming in the past; the former consensus was: rename horizon to horizon_lib and rename openstack_dashboard to horizon. IMHO that doesn't make any sense and will confuse people a lot. I wouldn't object to renaming horizon to horizon_lib, although any other name, e.g. django-horizon, should be fine as well. openstack_dashboard is our official name; people from outside refer to the Dashboard as Horizon, so why not rename to openstack_horizon here? It is the official name, but I disagree that people refer more to Dashboard than to Horizon. People mostly talk about Horizon, and when they say Horizon they refer to the UI. Dashboard is not much used outside the Horizon community and is a bit confusing (overloaded). Small example - when you want to add the general overview page - the actual dashboard - that is how the *dashboard* name should be used. Otherwise we are adding a _Dashboard_ view into the Project's _dashboard_ in the OpenStack _Dashboard_ project. That's what I really find confusing :) My opinion and preference: * horizon_lib (framework) + horizon (UI) I agree with both the concept behind these names and the name selections given here. People don't think of Nova as the framework, nor Neutron as the framework. That's the server/service that we make REST calls against. 
I like: (horizon_lib or horizon_framework) and horizon ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [glance] security and swift multi-tenant fixes on stable branch
On 14/11/14 11:25 +, stuart.mcla...@hp.com wrote: On 2014-11-13 18:28:14 +0100 (+0100), Ihar Hrachyshka wrote: [...] I think those who maintain the glance_store module in downstream distributions will cherry-pick the security fix into their packages, so there is nothing to do in terms of stable branches to handle the security issue. [...] As a counterargument, some Oslo libs have grown stable branches for security backports and cut corresponding point releases on an as-needed basis so as to avoid introducing new features in stable server deployments. -- Jeremy Stanley The current glance stable/juno requirement for glance_store is >= 0.1.1. If you run stable/juno against glance_store 0.1.1 and try to create an image, you get (multi-tenant store): [snip] Before glance_store was separated out it would have been straightforward to backport the relevant fixes to Glance's tightly coupled in-tree store code. I'm neutral on the mechanics, but I think we need to get to a point where, if someone is running stable/juno and has a version of glance_store which satisfies what's specified in requirements.txt, they have secure, working code. I think releasing glance_store now with the security fix is fine. Distro packages will be updated as soon as 2014.2.1 is released, and the change introduced is backwards compatible. FWIW, we're adapting glance_store's development to follow oslo libraries' policies even for releases and versioning. Cheers, Flavio -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()
On 14/11/14 14:40, Dan Smith wrote: I don't think it's a problem. It puts a practical limit on the scope of an 'api call' which can be covered by a single database transaction though, because it would be difficult to arrange for 2 RPC calls to both use the same DB transaction on the remote end. I think we agree on this. Sure, but since RPC calls can be queued for *minutes* this is probably not a realistic problem, right? More violent agreement ;) I think scoping this to just conductor is appropriate and useful. Compare and swap at the object level would be a useful mechanism for safety across multiple rpc calls. Matt -- Matthew Booth Red Hat Engineering, Virtualisation Team Phone: +442070094448 (UK) GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
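The object-level compare-and-swap mentioned above can be sketched with a version counter: a save only succeeds if the version is still the one the caller read, which gives safety across multiple RPC calls without holding a database transaction open. The following is an illustrative stand-in (a dict instead of a real database; names are hypothetical), not Nova's actual NovaObject.save():

```python
# Illustrative compare-and-swap save, not Nova's implementation.

class ConcurrentUpdate(Exception):
    pass

# Stand-in "table": object id -> (version, data)
db = {42: (1, {"host": "compute-1"})}

def save(obj_id, expected_version, new_data):
    """Save new_data only if the stored version matches what we read."""
    version, _ = db[obj_id]
    if version != expected_version:
        # Someone else saved since we read; caller must re-read and retry.
        raise ConcurrentUpdate(f"version is {version}, expected {expected_version}")
    db[obj_id] = (version + 1, new_data)
    return version + 1

print(save(42, 1, {"host": "compute-2"}))  # -> 2
try:
    save(42, 1, {"host": "compute-3"})     # stale version, refused
except ConcurrentUpdate as exc:
    print("conflict:", exc)
```

In SQL terms this is the classic `UPDATE ... WHERE id = :id AND version = :expected` pattern, checking the affected row count.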
Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage
For decom (now zapping), I'm building it with config flags to either disable it entirely, or just disable the erase_disks steps. No comment on the daft bit :) But I do understand why you'd want to do it this way. https://review.openstack.org/#/c/102685/ On Fri Nov 14 2014 at 6:14:13 AM Clint Byrum cl...@fewbar.com wrote: Excerpts from Chris Jones's message of 2014-11-14 00:42:48 -0800: Hi My thoughts: Shoe-horning the ephemeral partition into Cinder seems like a lot of pain for almost no gain[1]. The only gain I can think of would be that we could bring a node down, boot it into a special ramdisk that exposes the volume to the network, so cindery operations (e.g. migration) could be performed, but I'm not even sure if anyone is asking for that? Forcing Cinder to understand and track something it can never normally do anything with, seems like we're just trying to squeeze ourselves into an ever-shrinking VM costume! Having said that, preserve ephemeral is a terrible oxymoron, so if we can do something about it, we probably should. How about instead, we teach Nova/Ironic about a concept of no ephemeral? They make a partition on the first disk for the first image they deploy, and then they never touch the other part(s) of the disk(s), until the instance is destroyed. This creates one additional burden for operators, which is to create and format a partition the first time they boot, but since this is a very small number of commands, and something we could trivially bake into our (root?) elements, I'm not sure it's a huge problem. This gets rid of the cognitive dissonance of preserving something that is described as ephemeral, and (IMO) makes it extremely clear that OpenStack isn't going to touch anything but the first partition of the first disk. 
If this were baked into the flavour rather than something we tack onto a nova rebuild command, it offers greater safety for operators, against the risk of accidentallying a vital state partition with a misconstructed rebuild command. +1 A predictable and simple rule seems like it would go a long way to decoupling state preservation from rebuild, which I like very much. There is, of course, the issue of decom then, but that has never been a concern for TripleO, and for OnMetal, they think we're a bit daft trying to preserve state while delivering new images anyway. :) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Heat] Convergence-poc status
Hi All, The latest convergence-poc code is available [1]. The reason for this fork was to avoid constant rebasing. The wiki [2] has also been updated to reflect the latest changes/deviations. One prominent deviation from the convergence blueprint is that observers and workers are not separate operating-system processes, but eventlets under heat-engine. A sample RandomString template used for verification is also available [3]. Verified for basic CREATE, UPDATE and DELETE operations. We are yet to verify ROLLBACK and CONCURRENT stack operations. Although we have tested using the RandomString resource, the code should work for all Heat resources. TODO: 1. Delete old resources after a successful update 2. Failure path for basic operations We expect the PoC to be complete by early next week. The graph lookup code is all dumped in the stack module and will look out of place; we intend to refactor it. Please review the code and share your thoughts. [1] https://github.com/anantpatil/heat-convergence-poc [2] https://wiki.openstack.org/wiki/Heat/ConvergenceDesign [3] https://github.com/anantpatil/heat-convergence-poc/tree/master/sample-templates Cheers, HP Heat Team. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found
just an fyi, i had the same issue. i 'pip uninstall'ed all the python-*clients and it worked fine... i assume it's something to do with master (as i had it configured previously) since devstack seems to pull in the pypi version. cheers, gord ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [api] Cross-Project Liaison for the API Working Group
Hello PTLs of Ceilometer, Heat, Horizon, and Trove, The purpose of the API Working Group [1] is to propose, discuss, review, and advocate for API guidelines for all OpenStack Programs to follow. We’re seeking API subject matter experts and a cross-project liaison [2] for each project to communicate plans for API updates, review API guidelines with their project's view in mind, and review the API Working Group guidelines as they are drafted. The liaison should be familiar with the project's REST API design and future planning for changes to it. The liaison should be a core reviewer for the project, but does not need to be the PTL. By default, the liaison will be the PTL. The liaison is the first line of contact for the API Working Group team members. The liaison may further delegate work to other subject matter experts. The liaison should be aware of and engaged in the API Working Group communication channels. This was discussed at the Kilo design summit and we already had representatives from many of the other projects sign up. We would like to have liaisons from your projects as well. Thanks, Everett [1] https://wiki.openstack.org/wiki/API_Working_Group [2] https://wiki.openstack.org/wiki/CrossProjectLiaisons#API_Working_Group ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] mapping of hypervisor support matrix to driver functions - agenda?
In preparation for the blueprint libvirt-kvm-systemz [1] we made an analysis of the mapping of the driver functions to the hypervisor support matrix features [2]. We could provide that information for the wiki page. Should we discuss that up front in a nova meeting? [1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz [2] https://wiki.openstack.org/wiki/HypervisorSupportMatrix Regards, Markus Zoeller IRC: markus_z Launchpad: mzoeller ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found
On 11/14/2014 09:09 AM, Jeremy Stanley wrote: On 2014-11-14 00:34:14 -0500 (-0500), Wan, Sam wrote: Seems we need to use python-keystoneclient and python-openstackclient from git.openstack.org because those on pip don’t work. That's a bug we're (collectively) trying to prevent in the future. Services, even under development, should not depend on features only available in unreleased versions of libraries. But in the latest update of stack.sh, it uses pip by default [...] And this is intentional, implemented specifically so that we can keep it from happening again. Patrick actually got to the bottom of a bug we had in devstack around this; we merged the fixes this morning. As Jeremy said, installing released versions from PyPI is intentional. If something wants to use features in a library, the library needs to cut a release. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] mapping of hypervisor support matrix to driver functions - agenda?
On Fri, Nov 14, 2014 at 05:26:44PM +0100, Markus Zoeller wrote: In preparation for the blueprint libvirt-kvm-systemz [1] we made an analysis of the mapping of the driver functions to the hypervisor support matrix features [2]. We could provide that information for the wiki page. Should we discuss that up front in a nova meeting? One of the items that came out of the design summit is to produce a formal document detailing the so-called capabilities of virtualization drivers, to replace what's currently in the wiki page you quote, and ultimately provide a much higher level of detail. Mikal and myself had volunteered to do this, and expect that it will finally take the form of a structured document living in the Nova git repository docs directory. Your mapping might prove to be useful input. Perhaps just create a new wiki page and upload it in whatever format you currently have it in, and we'll work forward from there. [1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz [2] https://wiki.openstack.org/wiki/HypervisorSupportMatrix Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] config options not correctly deprecated
On 11/13/2014 05:56 PM, Clint Byrum wrote: Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800: On 11/10/2014 05:00 AM, Daniel P. Berrange wrote: On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote: Tl;dr oslo.config wasn't logging warnings about deprecated config options; do we need to support them for another cycle? AFAIK, there has not been any change in oslo.config behaviour in the Juno release, as compared to previous releases. The oslo.config behaviour is that the generated sample config file contains all the deprecation information. The idea that oslo.config should issue log warnings is a decent RFE to make the use of deprecated config settings more visible. This is an enhancement though, not a bug. A set of patches to remove deprecated options in Nova landed on Thursday [1]; these were marked as deprecated during the Juno dev cycle and got removed now that Kilo has started. Yes, this is our standard practice - at the start of each release cycle, we delete anything that was marked as deprecated in the previous release cycle, i.e. we give downstream users/apps 1 release cycle of grace to move to the new option names. Most of the deprecated config options are listed as deprecated in the documentation for nova.conf changes [2] linked to from the Nova upgrade section in the Juno release notes [3] (the deprecated cinder config options are not listed here, along with the allowed_direct_url_schemes glance option). The sample nova.conf generated by oslo lists all the deprecations. For example, for cinder options it shows what the old config option name was. [cinder] # # Options defined in nova.volume.cinder # # Info to match when looking for cinder in the service # catalog. 
Format is: separated values of the form: # service_type:service_name:endpoint_type (string value) # Deprecated group/name - [DEFAULT]/cinder_catalog_info #catalog_info=volume:cinder:publicURL Also note the deprecated name will not appear as an option in the sample config file at all, other than in this deprecation comment. My main worry is that there were no warnings about these options being deprecated in nova's logs (as a result they were still being used in tripleo). Once I noticed tripleo's CI jobs were failing and discovered the reason, I submitted 4 reverts to put back the deprecated options in nova [4], as I believe they should now be supported for another cycle (along with a fix to oslo.config to log warnings about their use). The 4 patches have now been blocked as they go against our deprecation policy. I believe the correct way to handle this is to support these options for another cycle so that other operators don't get hit when upgrading to kilo, while at the same time fixing oslo.config to report the deprecated options in kilo. I have marked this mail with the [all] tag because there are other projects using the same deprecated_name (or deprecated_group) parameter when adding config options; I think those projects also now need to support their deprecated options for another cycle. AFAIK, there's nothing different about Juno vs previous release cycles, so I don't see any reason to do anything different this time around. No matter what we do there is always a possibility that downstream apps / users will not notice and/or ignore the deprecation. We should certainly look at how to make deprecation more obvious, but I don't think we should change our policy just because an app missed the fact that these were deprecated. So the difference to me is that this cycle we are aware that we're creating a crappy experience for deployers. In the past we didn't have anything in the CI environment simulating a real deployment so these sorts of issues went unnoticed. 
IMHO telling deployers that they have to troll the sample configs and try to figure out which deprecated opts they're still using is not an acceptable answer. I don't know if this is really fair, as all of the deprecated options do appear here: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html Fair enough, I didn't realize there was another source of deprecation details. I do want to point out that based on our historical method of handling deprecated opts (remove them early in the cycle after they've been deprecated), there is only a few weeks notice for CD'ers if they're relying on the release documentation for deprecations. So I don't think that completely removes the need for a logged message, but it does help quite a bit. So the real bug is that in TripleO we're not paying attention to the appropriate stream of deprecations. Logs on running systems is a mighty big hammer when the documentation is being updated for us, and we're just not paying attention in the right place. BTW, where SHOULD continuous deployers pay attention for this stuff? I'm with Derek that they should be able to use the logs. We log
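The warning behaviour this thread asks oslo.config to grow can be sketched in a few lines of plain Python. This is not oslo.config's actual API — the option names and the `load_config` helper below are hypothetical — it only illustrates the idea: when a value arrives under a deprecated name, log a warning and still honour it under the new name.

```python
# Illustrative sketch of deprecated-option warnings; NOT oslo.config.
import logging

logging.basicConfig(level=logging.WARNING)
LOG = logging.getLogger("config")

# Map of deprecated option name -> current name (hypothetical options).
DEPRECATED = {"cinder_catalog_info": "catalog_info"}

def load_config(raw):
    """Return a config dict, warning about (and translating) deprecated keys."""
    conf = {}
    for key, value in raw.items():
        if key in DEPRECATED:
            new = DEPRECATED[key]
            LOG.warning("Option %r is deprecated; use %r instead", key, new)
            key = new
        conf[key] = value
    return conf

conf = load_config({"cinder_catalog_info": "volume:cinder:publicURL"})
print(conf["catalog_info"])  # -> volume:cinder:publicURL, after a warning
```

With something like this in place, a deployer still using an old name would see the deprecation in the service logs for a full cycle before removal, which is exactly the signal TripleO's CI was missing.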
Re: [openstack-dev] [neutron] L2 gateway as a service
Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it would be answered, but that's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic effort to move existing datacenter L2 resources to Neutron. Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through the service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement from the design summit discussion on the ML? I see at least 3 specs which would be under such an umbrella spec: https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter-DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 
2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that? Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Igor Duarte Cardoso. http://igordcard.com @igordcard https://twitter.com/igordcard ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] config options not correctly deprecated
On 11/13/2014 06:37 PM, Sean Dague wrote: On 11/13/2014 06:56 PM, Clint Byrum wrote: Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800: On 11/10/2014 05:00 AM, Daniel P. Berrange wrote: On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote: Tl;dr oslo.config wasn't logging warnings about deprecated config options; do we need to support them for another cycle? AFAIK, there has not been any change in oslo.config behaviour in the Juno release, as compared to previous releases. The oslo.config behaviour is that the generated sample config file contains all the deprecation information. The idea that oslo.config should issue log warnings is a decent RFE to make the use of deprecated config settings more visible. This is an enhancement though, not a bug. A set of patches to remove deprecated options in Nova landed on Thursday [1]; these were marked as deprecated during the Juno dev cycle and got removed now that Kilo has started. Yes, this is our standard practice - at the start of each release cycle, we delete anything that was marked as deprecated in the previous release cycle, i.e. we give downstream users/apps 1 release cycle of grace to move to the new option names. Most of the deprecated config options are listed as deprecated in the documentation for nova.conf changes [2] linked to from the Nova upgrade section in the Juno release notes [3] (the deprecated cinder config options are not listed here, along with the allowed_direct_url_schemes glance option). The sample nova.conf generated by oslo lists all the deprecations. For example, for cinder options it shows what the old config option name was. [cinder] # # Options defined in nova.volume.cinder # # Info to match when looking for cinder in the service # catalog. 
Format is: separated values of the form: # service_type:service_name:endpoint_type (string value) # Deprecated group/name - [DEFAULT]/cinder_catalog_info #catalog_info=volume:cinder:publicURL Also note the deprecated name will not appear as an option in the sample config file at all, other than in this deprecation comment. My main worry is that there were no warnings about these options being deprecated in nova's logs (as a result they were still being used in TripleO). Once I noticed TripleO's CI jobs were failing and discovered the reason, I submitted 4 reverts to put back the deprecated options in nova[4], as I believe they should now be supported for another cycle (along with a fix to oslo.config to log warnings about their use). The 4 patches have now been blocked as they go against our deprecation policy. I believe the correct way to handle this is to support these options for another cycle so that other operators don't get hit when upgrading to Kilo, while at the same time fixing oslo.config to report the use of deprecated options in Kilo. I have marked this mail with the [all] tag because there are other projects using the same deprecated_name (or deprecated_group) parameter when adding config options; I think those projects also now need to support their deprecated options for another cycle. AFAIK, there's nothing different about Juno vs previous release cycles, so I don't see any reason to do anything different this time around. No matter what we do there is always a possibility that downstream apps / users will not notice and/or ignore the deprecation. We should certainly look at how to make deprecation more obvious, but I don't think we should change our policy just because an app missed the fact that these were deprecated. So the difference to me is that this cycle we are aware that we're creating a crappy experience for deployers. In the past we didn't have anything in the CI environment simulating a real deployment so these sorts of issues went unnoticed. 
IMHO telling deployers that they have to troll the sample configs and try to figure out which deprecated opts they're still using is not an acceptable answer. I don't know if this is really fair, as all of the deprecated options do appear here: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html So the real bug is that in TripleO we're not paying attention to the appropriate stream of deprecations. Logs on running systems is a mighty big hammer when the documentation is being updated for us, and we're just not paying attention in the right place. BTW, where SHOULD continuous deployers pay attention for this stuff? Now that we do know, I think we need to address the issue. The first step is to revert the deprecated removals - they're not hurting anything, and if we wait another cycle we can fix oslo.config and then remove them once deployers have had a reasonable chance to address the deprecation. In this case, we can just fix the templates. Are we broken? Yes. Can we fix it? YES! I would definitely appreciate the
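For illustration, the behaviour the thread asks oslo.config to add -- logging a warning when a value is supplied under a deprecated option name -- could look roughly like this. This is a self-contained sketch, not oslo.config's actual code; the `Opt` class and `resolve` helper are invented for the example.

```python
import logging

logging.basicConfig()
LOG = logging.getLogger("config")

class Opt:
    """Hypothetical minimal model of a config option with a deprecated alias."""
    def __init__(self, name, deprecated_name=None, default=None):
        self.name = name
        self.deprecated_name = deprecated_name
        self.default = default

def resolve(opt, raw_conf, warnings):
    """Return the option value, preferring the new name but falling back
    to the deprecated one with a recorded warning."""
    if opt.name in raw_conf:
        return raw_conf[opt.name]
    if opt.deprecated_name and opt.deprecated_name in raw_conf:
        msg = ("Option '%s' is deprecated; use '%s' instead"
               % (opt.deprecated_name, opt.name))
        warnings.append(msg)  # collected here; a real fix would LOG.warning(msg)
        return raw_conf[opt.deprecated_name]
    return opt.default

opt = Opt("catalog_info", deprecated_name="cinder_catalog_info",
          default="volume:cinder:publicURL")
warnings = []
# The operator's config still uses the old name, as in the TripleO case:
value = resolve(opt, {"cinder_catalog_info": "volume:cinder:internalURL"}, warnings)
print(value)        # volume:cinder:internalURL
print(warnings[0])  # deprecation warning naming both the old and new option
```

With such a warning emitted at startup, a continuously deployed system would surface deprecated usage in its logs rather than requiring deployers to diff sample config files.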
Re: [openstack-dev] [neutron] L2 gateway as a service
Hi Salvatore, Thanks for the review comments on the BP. Yes, Maruti’s BP supersedes the review https://review.openstack.org/#/c/100278 posted by Isaku, and we discussed with Isaku during the summit to embrace his BP. For the review https://review.openstack.org/#/c/93613 by Racha, we were not able to see how that idea maps to the implementation angle driven in Maruti’s BP. And Kevin’s idea of giving a neutron representation of external-attachment-points (good idea) can be used in the implementation of Maruti’s BP, wherein the attachment-points can be fed to the ‘net-gateway-create’ command, instead of giving physical-network switch names/ports. So we request Kevin, Racha and others to peruse Maruti’s BP and post comments/questions on the same, more specifically from the resource representation perspective (REST APIs and CLIs). -- Thanks, Vivek From: Salvatore Orlando [mailto:sorla...@nicira.com] Sent: Friday, November 14, 2014 10:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] L2 gateway as a service Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted in a more generic work for moving existing datacenter L2 resources to Neutron. 
Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through the service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement of the design summit discussion on the ML? I see at least 3 specs which would be under such an umbrella spec: https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that? 
Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti -- Igor Duarte Cardoso. http://igordcard.com @igordcard https://twitter.com/igordcard
Re: [openstack-dev] [oslo] Graduating oslo-incubator/reports
On Nov 12, 2014, at 8:28 PM, Solly Ross sr...@redhat.com wrote: Hello Doug et al., I would like to get the ball rolling on graduating oslo-incubator/reports into oslo.reports. What do I have to do on my end to move forward with the graduation process? We need a spec filled out using the template in specs/graduation-template.rst and a blueprint registered under the oslo-incubator project on launchpad. After the spec is approved, the rest of the steps are detailed in https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary. With the move away from namespace packages, we have some tool changes up for review that we should land before we export any more libraries: https://review.openstack.org/#/q/status:open+project:openstack/oslo-incubator+branch:master+topic:bp/drop-namespace-packages,n,z There’s plenty to do until we land those, though. Thanks! Doug
Re: [openstack-dev] [oslo] Graduating oslo-incubator/reports
On Nov 14, 2014, at 12:04 PM, Doug Hellmann d...@doughellmann.com wrote: On Nov 12, 2014, at 8:28 PM, Solly Ross sr...@redhat.com wrote: Hello Doug et al., I would like to get the ball rolling on graduating oslo-incubator/reports into oslo.reports. What do I have to do on my end to move forward with the graduation process? We need a spec filled out using the template in specs/graduation-template.rst and a blueprint registered under the oslo-incubator project on launchpad. After the spec is approved, the rest of the steps are detailed in https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary. With the move away from namespace packages, we have some tool changes up for review that we should land before we export any more libraries: https://review.openstack.org/#/q/status:open+project:openstack/oslo-incubator+branch:master+topic:bp/drop-namespace-packages,n,z There’s plenty to do until we land those, though. Just in case anyone is looking for reviews, we need to tweak the cookiecutter template for the new repositories, too: https://review.openstack.org/#/q/status:open+project:openstack-dev/oslo-cookiecutter+branch:master+topic:bp/drop-namespace-packages,n,z Doug
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 14/11/14 15:46, Wood, Matthew David (HP Cloud - Horizon) wrote: On 11/14/14, 4:24 AM, Jaromir Coufal jcou...@redhat.com wrote: My opinion and preference: * horizon_lib (framework) + horizon (UI) I agree with both the concept behind these names and the name selections given here. People don't think of Nova as the framework, nor Neutron as the framework. That's the server/service that we make rest calls against. I like: (horizon_lib or horizon_framework) and horizon The concern with this raised by Matthias is that application developers who currently do import horizon to build plug-ins will be surprised and completely broken once they upgrade. Changing both names would alleviate that a little. Julie
Re: [openstack-dev] [neutron] L2 gateway as a service
Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other. [1] https://review.openstack.org/#/c/134179/ [2] https://review.openstack.org/#/c/100278/ [3] https://review.openstack.org/#/c/93613/ Then there was another discussion in the afternoon, but I am not 100% sure of the outcome. All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach and let the different alternatives develop independently, assuming they can all develop independently, and then let natural evolution take its course :) Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight. A good API for me might not be a good API for you, even though I strongly believe that a good API is one that: - is hard to use incorrectly - is clear to understand - does one thing, and one thing well So far I have been unable to be convinced why we'd need to cram more than one abstraction into one single API, as that violates the above mentioned principles. Ultimately I like the L2 GW API proposed by [1] and [2] because it's in line with those principles. I'd rather start from there and iterate. My 2c, Armando On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote: Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. 
Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted in a more generic work for moving existing datacenter L2 resources to Neutron. Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through the service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement of the design summit discussion on the ML? I see at least 3 specs which would be under such an umbrella spec: https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that? 
Salvatore On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti -- Igor Duarte Cardoso. http://igordcard.com @igordcard https://twitter.com/igordcard
Re: [openstack-dev] [all] A more dynamic wiki, introducing Categories
On 11/14/2014 02:50 PM, Thierry Carrez wrote: In this precise example, I feel like the dynamic page is much less usable than the static page, due to the deep hierarchy. Got it. The current taxonomy is trying to map precisely the hierarchy of Program-Projects-Teams, therefore it keeps the Nova project under Compute. In fact, the page of Nova https://wiki.openstack.org/wiki/Nova is somewhat mixing aspects of the Compute program with others more related to the project Nova and the teams. One thing I wanted to do was to reshuffle content, too, and split larger pages. But since we're going to move the official list of Programs outside of the wiki (thankfully) we may need to redesign the taxonomy and keep it leaner. I'll put more complex changes like this one on hold until the governance site is live. /stef
Re: [openstack-dev] [all] A more dynamic wiki, introducing Categories
On 2014-11-14 18:57:13 +0100 (+0100), Stefano Maffulli wrote: [...] I'll put more complex changes like this one on hold until the governance site is live. Which, incidentally, is awaiting https://review.openstack.org/132466 plus any change to merge in the governance repository now that the publication job is live (assuming I wasn't completely off the rails when I wrote it). -- Jeremy Stanley
Re: [openstack-dev] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .
On 11/13/14, 10:19 PM, Sachin Goswami wrote: In OpenStack Swift the XFS file system is integrated, which provides a maximum file system size of 8 exbibytes minus one byte (2^63 - 1 bytes). Not exactly. The Swift storage nodes keep their data on POSIX filesystems with support for extended attributes. While XFS filesystems are typically used, XFS is not required. We are studying use of LTFS integration with OpenStack Swift for scenarios like *Data Archival as a Service*. Was integration of LTFS with Swift considered before? If so, can you please share your study output? Will integration of LTFS with Swift fit into the existing Swift architecture? Assuming it's POSIX enough and supports extended attributes, a tape filesystem on a spinning disk might technically work, but I don't see it performing well at all. If you're talking about using actual tapes for data storage, I can't imagine that working out for you. Most clients aren't prepared to wait multiple minutes for HTTP responses while a tape laboriously spins back and forth, so they'll just time out.
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/14/2014 08:48 PM, Matthias Runge wrote: On 13/11/14 19:11, Donald Stufft wrote: As far as I’m aware npm supports TLS the same as pip does. That secures the transport between the end users and the repository so you can be assured that there is no man in the middle. Security wise npm (and pip) are about ~95% (made up numbers, but you can get the gist) of the effectiveness of the OS package managers. Oh, e.g. rpm allows packages to be cryptographically signed, and depending on your system's config, that is enforced. This is quite different from just tls'ing a connection. Matthias Just like the Debian Release file is signed into a Release.gpg. So, the RPM system signs every package, while in Debian, it's the full repository that is signed. That's 2 different approaches that both work. pip doesn't offer this kind of security, but at the same time, is there any kind of check for things that are uploaded to PyPI? Is there at least a peer review process? You do realize that TLS provides cryptographic proof of authenticity and integrity just like PGP does right? (It also provides the cool benefit of privacy which PGP signing does not). Do you realize that with the TLS system, you have to trust every and all CA, while with PGP, you only need to trust a single fingerprint? All this isn't to say that TLS is 100% as good as using something like PGP for signatures though. I don't agree. I don't trust the CNNIC or the Hong Kong post office, though their key is in every browser. I do trust the Debian PGP key generated by the Debian team. TLS is a fairly decent way of securing a package infrastructure though, it prevents all of the major attacks that PGP signing does in practice but it moves the high value target from the build machines to the web servers [...] And ... to a huge list of root CAs which you have to trust. Cheers, Thomas
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/14/2014 09:58 PM, Radomir Dopieralski wrote: On 14/11/14 13:02, Richard Jones wrote: We might even be able to get away without using grunt, though an alternative to its LiveReload facility (and one that doesn't then also depend on another node program like django-livereload does) would be required. I believe tox and django's runserver (and other manage commands) could suffice to replace the other functionality typically provided by grunt. We don't really need Xstatic for that. The packagers can as well package the bower-based packages directly. We can use anything, really, as long as we follow a process that makes sure that Horizon can be packaged into the different distributions. That is, we need: 1. All dependencies explicit (with tests failing if a dependency is missing), 2. explicit version ranges, 3. no multiple versions of the same library, 4. additions and upgrades of libraries moderated by the packagers, 5. ability to replace the development environment with packaged libraries from the system, 6. ability to build and test our software with the tools that can be used by all the distributions. What I liked a lot with the xstatic package thing is that it was *very* easy for me to be able to manage paths. Horizon just imports the xstatic package, and then the BASE_DIR or some symlink does the work. Whatever system we choose, I'd like it to be at least as simple as xstatic in this regard. Thomas
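To illustrate the pattern Thomas is referring to: each XStatic package exposes a BASE_DIR pointing at its static files, so settings code only needs the import, regardless of whether the files came from pip or from a distro package. The sketch below uses a stand-in module built at runtime (the name and path are invented for the example; in Horizon the import would be a real package such as one under xstatic.pkg):

```python
import sys
import types

# Build a stand-in for an XStatic-style package. A real one ships this
# BASE_DIR in its __init__.py; the path here is purely illustrative.
fake_pkg = types.ModuleType("xstatic_example")
fake_pkg.BASE_DIR = "/usr/lib/python3/dist-packages/xstatic/pkg/example/data"
sys.modules["xstatic_example"] = fake_pkg

import xstatic_example  # in Horizon: e.g. a package under xstatic.pkg

# Django-style settings then just reference the package's BASE_DIR, so
# swapping pip-installed files for system-packaged ones needs no code change.
STATICFILES_DIRS = [
    ("lib/example", xstatic_example.BASE_DIR),
]
print(STATICFILES_DIRS[0][1])
```

The point of the convention is that the packager, not Horizon, decides where the files physically live.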
Re: [openstack-dev] [Nova] v2 or v3 for new api
Do you have a blueprint for the new feature? If you want it accepted upstream, you'll need a blueprint and spec: https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle On Wed, Nov 12, 2014 at 2:44 PM, Pasquale Porreca pasquale.porr...@dektech.com.au wrote: Hello I am working on an api for a new feature in nova, but I am wondering what is the correct way to add a new extension: should it be supported by v2, v3 or both? BR -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 11/14/2014 10:10 PM, Martin Geisler wrote: Of course, I need to run tests. That's a big part of the QA work, and I will certainly not give up on that. You will have a hard time convincing anyone within the OpenStack community that it's OK to not run unit tests. That's not what I said: the OpenStack developers will continue to test the software. I personally don't think it's the job of the downstream packagers to do this QA work. (It's of course cool to run the tests on the system installed by your packages -- that test run would then install the needed tools using npm and bower and whatnot if that's how the upstream has set up the test framework.) What happens is that the environment within the distribution, inevitably, will be different from the one run on the gate. There's going to be different versions of many components and so on. So it is very important for me to also run these unit tests, to make sure that everything continues to work. Yes, the build-dependencies will pull the same components as pulled by npm/bower, though they may be installed in different paths, and maybe using different versions. On 11/14/2014 10:21 PM, Jeremy Stanley wrote: On 2014-11-14 15:10:59 +0100 (+0100), Martin Geisler wrote: [...] Just to quibble on this particular point... distro packagers are also developers. They often (more often than we'd like, and we do try to find ways to help avoid this where possible) need to carry their own patches to tweak the software to fit their deployment and operation model. Being able to rerun tests in-place with packaged versions of everything including their patches helps them confirm that what they distribute still works as intended. Further, the distro users are well within their rights to modify and respin these packages themselves, and will potentially want to be able to run these tests for the same reasons. We distribute our tests as part of our software because our tests *are* part of our software. Exactly! Let me give a few examples... 
In Jessie, Nova carries patches so that there is support for Ceph. Until a few days ago, there was still an issue with live migration over NFS. This has just been fixed (thanks to Mehdi!), and running unit tests at build time confirmed that. Another example. Jessie will be released with Icehouse Horizon, but with Django 1.7. The gate didn't test that, but I did. Most patches landed in Juno, though Icehouse will never be tested with Django 1.7 in the gate. Luckily, my package runs these unit tests and I can confirm that it continues to work with Django 1.7 in Jessie. Hoping these give you an insight as to why it's *really* important to run unit tests at build time for distributions, Cheers, Thomas Goirand (zigo) P.S: I also run tempest tests over a Debian package installation to make sure OpenStack is also functional. But that's another story... :)
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On Nov 14, 2014, at 1:57 PM, Thomas Goirand z...@debian.org wrote: On 11/14/2014 08:48 PM, Matthias Runge wrote: On 13/11/14 19:11, Donald Stufft wrote: As far as I’m aware npm supports TLS the same as pip does. That secures the transport between the end users and the repository so you can be assured that there is no man in the middle. Security wise npm (and pip) are about ~95% (made up numbers, but you can get the gist) of the effectiveness of the OS package managers. Oh, e.g. rpm allows packages to be cryptographically signed, and depending on your system's config, that is enforced. This is quite different from just tls'ing a connection. Matthias Just like the Debian Release file is signed into a Release.gpg. So, the RPM system signs every package, while in Debian, it's the full repository that is signed. That's 2 different approaches that both work. pip doesn't offer this kind of security, but at the same time, is there any kind of check for things that are uploaded to PyPI? Is there at least a peer review process? The entirety of PyPI is signed. It’s not possible to get a copy of our equivalent to Release.gpg that isn’t cryptographically proven to have been sent by a server possessing our RSA private key. No, PyPI is not a curated repository, nor are any of the language repos that I’m aware of. That really has nothing to do with securely fetching a particular package, it only has to do with whether the contents of said package are safe to use. It means that people installing a package from PyPI have to decide if they trust the author of the package prior to installing it, but if they do trust that author then it is roughly as safe to install from PyPI as it is to install from Debian. 
The Linux distros are curated repositories so you need to decide if you want to trust the gatekeepers of the distro instead of the authors of the software you’re using (or really you probably need to trust both since a malicious author could likely hide back doors that would go unnoticed during packaging as a .deb). You do realize that TLS provides cryptographic proof of authenticity and integrity just like PGP does right? (It also provides the cool benefit of privacy which PGP signing does not). Do you realize that with the TLS system, you have to trust every and all CA, while with PGP, you only need to trust a single fingerprint? You absolutely do not need to trust every single CA, or even any CAs at all. If I recall npm pins which CA they trust. Pip doesn’t (yet) do this because of some historical reasons but it’s on my list of things as well. It’s no harder to limit the set of CAs or even individual certificates that are accepted as valid than it is to limit the set of PGP keys you trust. All this isn't to say that TLS is 100% as good as using something like PGP for signatures though. I don't agree. I don't trust the CNNIC or the Hong Kong post office, though their key is in every browser. I do trust the Debian PGP key generated by the Debian team. See above, you’re operating under a misconception that TLS mandates using the same set of CAs as the browsers use. TLS is a fairly decent way of securing a package infrastructure though, it prevents all of the major attacks that PGP signing does in practice but it moves the high value target from the build machines to the web servers [...] And ... to a huge list of root CAs which you have to trust. Already discussed above. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 2014-11-15 02:57:15 +0800 (+0800), Thomas Goirand wrote: [...] Do you realize that with the TLS system, you have to trust every and all CA, while with PGP, you only need to trust a single fingerprint? [...] Technically not true *if* the package retrieval tools implement certificate pinning rather than trusting any old CA (after all, they're not Web browsers, so they could in theory do that with minimal impact). Too bad https://github.com/pypa/pip/issues/1168 hasn't gotten much traction. -- Jeremy Stanley
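The pinning idea under discussion -- trusting one specific certificate's fingerprint rather than any CA in a bundle -- can be illustrated with a rough stdlib sketch. The DER bytes and names below are placeholders, not pip's or npm's actual implementation; a real client would obtain the peer's DER-encoded certificate from ssl.SSLSocket.getpeercert(binary_form=True) during the handshake.

```python
import hashlib

# The client ships one hard-coded SHA-256 fingerprint of the expected
# certificate. Here the "certificate" is placeholder bytes so the sketch
# is self-contained; in practice this constant would be the fingerprint
# of the repository's real cert (or its issuing CA cert).
PINNED_FINGERPRINT = hashlib.sha256(b"example-der-certificate").hexdigest()

def check_pin(der_cert_bytes, pinned=PINNED_FINGERPRINT):
    """Accept the connection only if the presented certificate's
    SHA-256 fingerprint matches the pinned value."""
    return hashlib.sha256(der_cert_bytes).hexdigest() == pinned

print(check_pin(b"example-der-certificate"))  # True  - matches the pin
print(check_pin(b"some-other-certificate"))   # False - any other cert, even
                                              # one signed by a "valid" CA
```

This is why pinning shifts the trust question from "do I trust every root CA" to "do I trust this one key" -- much closer to the single-fingerprint PGP model Thomas describes.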
Re: [openstack-dev] TC election by the numbers
This. It's certainly not a fun thing to do (trying to guide a community of disjoint folks) and it likely comes with little recognition when successful, but IMHO we surely need more of this active technical leadership vs. blessing of projects; of course the boundary between being an engaging leadership and one that is perceived as trying to dictate technical goals for projects must be approached carefully (and with thoughtfulness) to avoid creating more problems than solutions... On Nov 10, 2014, at 8:06 AM, Zane Bitter zbit...@redhat.com wrote: I think you missed the most important option I mentioned in the thread - for the TC to engage more with the community and take an active technical leadership role in the design of OpenStack as a whole.
Re: [openstack-dev] [Neutron] LeastNetwork scheduling for DHCP
Benjamin, Feel free to reach out. If you are referring to my -2, that was just provisional. Before we can go ahead and see an improved scheduling capability for DHCP, you guys need to resolve the conflict between the overlapping blueprints, working together or giving up one in favor of the other. Cheers, Armando On 14 November 2014 07:28, GRASSART Benjamin benjamin.grass...@thalesgroup.com wrote: Hi all, I would definitely be glad to work on the subject as well. However I am not sure I fully understand Armando's last remark on our change. I will try to discuss it with him on IRC. Regards, Benjamin GRASSART *From:* S M, Praveen Kumar [mailto:praveen-sm.ku...@hp.com] *Sent:* Friday, November 7, 2014 09:27 *To:* Narasimhan, Vivekanandan; OpenStack Development Mailing List (not for usage questions) *Cc:* Beltur, Jayashree; GRASSART Benjamin; Sourabh Patwardhan (sopatwar); M, Shiva Kumar; A, Keshava *Subject:* RE: [Neutron] LeastNetwork scheduling for DHCP Hi Vivek, We are definitely interested in working on these blueprints collaboratively. We have a working implementation for our blueprint and received a few important comments from Armando which we are addressing currently. Regards Praveen. *From:* Narasimhan, Vivekanandan *Sent:* Thursday, November 06, 2014 9:09 PM *To:* OpenStack Development Mailing List (not for usage questions) *Cc:* Beltur, Jayashree; S M, Praveen Kumar; benjamin.grass...@thalesgroup.com; Sourabh Patwardhan (sopatwar) *Subject:* [Neutron] LeastNetwork scheduling for DHCP Hi Neutron Stackers, There is an interest among vendors to bring Least Networks scheduling for DHCP into OpenStack Neutron. 
Currently there are the following blueprints, all of them trying to address this issue: https://review.openstack.org/111210 https://review.openstack.org/#/c/130912/ https://review.openstack.org/104587 We are trying to pull all of these BPs together into one umbrella BP, to which volunteers from every side can contribute, and to get that BP cleared as an initial step. So we would like to collaborate to plan BP approval for these. Please respond if you are interested. -- Thanks, Vivek
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On Nov 14, 2014, at 2:39 PM, Jeremy Stanley fu...@yuggoth.org wrote: On 2014-11-15 02:57:15 +0800 (+0800), Thomas Goirand wrote: [...] Do you realize that with the TLS system, you have to trust each and every CA, while with PGP, you only need to trust a single fingerprint? [...] Technically not true *if* the package retrieval tools implement certificate pinning rather than trusting any old CA (after all, they're not Web browsers, so they could in theory do that with minimal impact). Too bad https://github.com/pypa/pip/issues/1168 hasn't gotten much traction. Yeah, the primary reason that hasn't been done is that up until recently we (PyPI) were relying on the TLS certificate provided by Fastly, and they were unwilling to promise to keep using a particular CA for the next N years. We now have dedicated IP addresses with them, so we can provide them with whatever certificate we want; now it's just a matter of selecting CAs and the political process. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
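The certificate pinning Jeremy describes comes down to comparing the served certificate's fingerprint against a value shipped with the tool, instead of trusting every CA in the system store. A minimal sketch of that check (the helper names are hypothetical, not pip's actual implementation; a real client would get the DER bytes from `ssl.SSLSocket.getpeercert(binary_form=True)` after the handshake):

```python
import hashlib
import hmac

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def matches_pin(cert_der: bytes, pinned_fingerprint: str) -> bool:
    """Accept the connection only if the served cert matches the pin.

    Unlike browser-style validation, this trusts exactly one
    certificate rather than every CA in the trust store.
    compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(fingerprint(cert_der), pinned_fingerprint)
```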
Re: [openstack-dev] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .
There were some discussions over the past years. I raised the question of Swift tape support in my keynote in Boston in 2011 (http://www.slideshare.net/noggin143/cern-user-story) but there was limited interest. LTFS makes it more likely but we should not underestimate the challenges. Ensuring bulk recall/migration (mounting tapes takes minutes), inventory catalogs to find the right tape and robotics (multiple interfaces to ask for a tape to be mounted) means it is not just a POSIX support question. There was a blog in 2012 regarding a Glacier competitor (http://www.buildcloudstorage.com/2012/08/cold-storage-using-openstack-swift-vs.html) but I don't think things have progressed much beyond that. It would need to be tiered (i.e. migrate whole collections rather than files) and a local catalog would be needed to map containers to tapes. Timeouts would be an issue since we are often waiting hours for recall (to ensure that multiple recalls for the same tape are grouped). It is not an unsolvable problem but it is not just a 'use LTFS' answer. Tim On 14 Nov 2014, at 19:06, Samuel Merritt s...@swiftstack.com wrote: On 11/13/14, 10:19 PM, Sachin Goswami wrote: In OpenStack Swift - xfs file system is integrated which provides a maximum file system size of 8 exbibytes minus one byte (2^63-1 bytes). Not exactly. The Swift storage nodes keep their data on POSIX filesystems with support for extended attributes. While XFS filesystems are typically used, XFS is not required. We are studying use of LTFS integration with OpenStack Swift for scenarios like *Data Archival as a Service*. Was integration of LTFS with Swift considered before? If so, can you please share your study output? Will integration of LTFS with Swift fit into existing Swift architecture? Assuming it's POSIX enough and supports extended attributes, a tape filesystem on a spinning disk might technically work, but I don't see it performing well at all.
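Tim's points about catalogs and grouped recalls can be made concrete: the archive tier needs a mapping from container to tape, and pending recall requests should be batched per tape so that one mount (which takes minutes) serves many objects. A toy sketch of that bookkeeping, with illustrative names only (this is not Swift or LTFS code):

```python
from collections import defaultdict

class TapeCatalog:
    """Map containers to tapes and batch recall requests per tape."""

    def __init__(self, container_to_tape):
        self._catalog = dict(container_to_tape)
        self._pending = defaultdict(list)  # tape id -> queued requests

    def request_recall(self, container, obj):
        """Queue a recall; the actual tape mount happens later, in bulk."""
        tape = self._catalog[container]
        self._pending[tape].append((container, obj))
        return tape

    def next_batch(self):
        """Serve the tape with the most queued requests first, so one
        slow mount is amortized over many recalls."""
        if not self._pending:
            return None, []
        tape = max(self._pending, key=lambda t: len(self._pending[t]))
        return tape, self._pending.pop(tape)
```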
If you're talking about using actual tapes for data storage, I can't imagine that working out for you. Most clients aren't prepared to wait multiple minutes for HTTP responses while a tape laboriously spins back and forth, so they'll just time out.
Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
Sounds like this is tooz[1] ;) The api for tooz (soon to be an oslo library @ https://review.openstack.org/#/c/122439/) is around coordination and 'service-group' like behavior so I hope we don't have duplicates of this in 'oslo.healthcheck' instead of just using/contributing to tooz instead. https://github.com/stackforge/tooz/blob/master/tooz/coordination.py#L63 CoordinationDriver - watch_join_group - unwatch_join_group - join_group - get_members - ... Tooz has backends that use [redis, zookeeper, memcache] to achieve the above API (it also has some locking support for distributed locks as well). Feel free to jump on #openstack-state-management if u want more info (jd and the enovance guys and myself have developed that library for this kind of purpose). -josh On Nov 13, 2014, at 10:58 PM, Jastrzebski, Michal michal.jastrzeb...@intel.com wrote: Also, on Common approach to HA session we moved something like oslo.healthcheck (or whatever it will be called), common lib for service-group like behavior. In my opinion it's pointless to implement zookeeper management in every project separately (its already in nova..). Might be worth looking closely into this topic.
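For readers unfamiliar with the CoordinationDriver calls listed above, a toy in-memory model of the group-membership semantics (join_group / get_members) looks like this. This is only an illustration of the concept, not tooz itself; tooz backs these operations with redis/zookeeper/memcache and adds watches, heartbeats, and distributed locks:

```python
class InMemoryCoordinator:
    """Toy model of tooz-style group membership (illustration only)."""

    def __init__(self):
        self._groups = {}

    def create_group(self, group_id):
        self._groups.setdefault(group_id, set())

    def join_group(self, group_id, member_id):
        # Real drivers also track liveness via heartbeats, so a
        # crashed member eventually disappears from the group.
        self._groups[group_id].add(member_id)

    def leave_group(self, group_id, member_id):
        self._groups[group_id].discard(member_id)

    def get_members(self, group_id):
        return set(self._groups[group_id])
```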
Re: [openstack-dev] [all] A more dynamic wiki, introducing Categories
Thank you Stefano (and Shari and Katherine)! I think this will help. Is there a way to add new categories? I expect that as people start using this, more categories will be requested. And, is there a wiki page where this info is hosted? --Rocky -Original Message- From: Stefano Maffulli [mailto:stef...@openstack.org] Sent: Friday, November 14, 2014 3:01 AM To: OpenStack-dev@lists.openstack.org Subject: [openstack-dev] [all] A more dynamic wiki, introducing Categories Hello folks in the past months Shari and I have implemented more chunks of the taxonomy developed for us by Katherine Cranford (a volunteer expert). Using categories in the wiki pages can help us create dynamic pages and keep information more visible, well organized and discoverable. For example, we had a page listing all Teams. When someone needed to create a new Team, someone would have to: 1) create a new page for the team, save, 2) go to the Teams page, edit page adding a link to the new page, save. Today instead the process looks like this: - Create a new page for the new team - Add the text [[Category: Teams]] to the rest of the text - save Done. The new page will be automatically shown on https://wiki.openstack.org/wiki/Category:Teams Category pages are just like any other wiki page: they can have text, images, can be edited, translated etc. And they automatically show the pages contained in their category. As an example, see how the dynamic page for Programs https://wiki.openstack.org/wiki/Category:Programs and the 'static' version look like: https://wiki.openstack.org/wiki/Programs I would apply to Programs the same redirect I created for Teams (unless someone stops me). 
Pages generated automatically this way are a huge improvement to navigation inside the wiki, and I suggest you all get familiar with the list of Categories shown in a nice tree at: https://wiki.openstack.org/wiki/Category:Home and read the Taxonomy section in: https://wiki.openstack.org/wiki/How_To_Use_The_Wiki As this is a wiki, your help is needed: think about adding pages you manage to an existing category and, when possible, consider moving 'static' pages to the newly created dynamic pages. Cheers, stef
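The workflow Stefano describes is literally a one-line tag in the page source. A hypothetical new team page would contain something like the following (the page name and text are made up; only the `[[Category: Teams]]` tag matters, and it automatically lists the page on https://wiki.openstack.org/wiki/Category:Teams):

```wikitext
== Example Team ==
The Example Team works on examples.

[[Category: Teams]]
```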
Re: [openstack-dev] [all] A more dynamic wiki, introducing Categories
On 2014-11-14 20:04:33 +0000 (+0000), Rochelle Grober wrote: [...] Is there a way to add new categories? I expect that as people start using this, more categories will be requested. [...] Categories emerge automatically as you tag pages into them. No separate category creation step is required. -- Jeremy Stanley
Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
Arg, sorry for the spam, mail.app was still trying to send it multiple times for some reason... -Josh From: Joshua Harlow harlo...@outlook.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Friday, November 14, 2014 11:45 AM Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops Sounds like this is tooz[1] ;) The api for tooz (soon to be an oslo library @ https://review.openstack.org/#/c/122439/) is around coordination and 'service-group' like behavior so I hope we don't have duplicates of this in 'oslo.healthcheck' instead of just using/contributing to tooz instead. https://github.com/stackforge/tooz/blob/master/tooz/coordination.py#L63 CoordinationDriver - watch_join_group - unwatch_join_group - join_group - get_members - ... Tooz has backends that use [redis, zookeeper, memcache] to achieve the above API (it also has some locking support for distributed locks as well). Feel free to jump on #openstack-state-management if u want more info (jd and the enovance guys and myself have developed that library for this kind of purpose). -josh On Nov 13, 2014, at 10:58 PM, Jastrzebski, Michal michal.jastrzeb...@intel.com wrote: Also, on Common approach to HA session we moved something like oslo.healthcheck (or whatever it will be called), common lib for service-group like behavior. In my opinion it's pointless to implement zookeeper management in every project separately (its already in nova..). Might be worth looking closely into this topic.
Re: [openstack-dev] [Cinder] Support LVM on a shared LU
Hello, Duncan, Mike, 11:13 (DuncanT) DuncanT mtanino, You can and should submit the code even if the spec isn't approved as long as it isn't looking contentious, but I will certainly take a look Based on the comment in cinder__unofficial_meeting on Wednesday, I posted both the updated cinder-spec and the code. Could you review the spec and code? Spec: https://review.openstack.org/#/c/129352/ Code: https://review.openstack.org/#/c/92479/ The code is still work in progress, but most of the functions are already implemented. Please check that the code doesn't break existing cinder code. For your reference, here are all the links related to this proposal. Blueprints: * https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage * https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage Spec: * nova: https://review.openstack.org/#/c/127318/ * cinder: https://review.openstack.org/#/c/129352/ Gerrit Review: * nova: https://review.openstack.org/#/c/92443/ * cinder: https://review.openstack.org/#/c/92479/ Regards, Mitsuhiro Tanino mitsuhiro.tan...@hds.com HITACHI DATA SYSTEMS
Re: [openstack-dev] [api] Cross-Project Liaison for the API Working Group
On Nov 14, 2014, at 1:43 PM, Jay Pipes jaypi...@gmail.com wrote: On 11/14/2014 05:13 PM, Everett Toews wrote: The liaison should be a core reviewer for the project, but does not need to be the PTL. By default, the liaison will be the PTL. Anyway, the outcome of the email exchange with Eoghan was that I recommended he submit a core for the API liaison to start, and that I would raise the issue on the ML to see if those conditions may be loosened to include non-cores. And... here is that issue being raised :) I'm totally fine with that. Ideally it's the person who is the most passionate about the API from the team, regardless of core status. The current wording on the Cross-Project Liaisons page is "The liaison should be a core reviewer for the project, but does not need to be the PTL." I think the key word there is _should_. Naturally, it's preferable to want a core reviewer in this role because they have more say about what gets into the code base, but it's not an absolute must. We can loosen the wording further, but I think it's okay to proceed with a non-core reviewer, especially if that person is passionate about APIs. Everett
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 14/11/14 16:21, Adam Young wrote: Example: I don't need Grunt to run a web server. I need Apache for that. Grunt does not need to be in the distro, mod_wsgi does. I will need every tool required to run e.g. unit tests or selenium tests to be packaged. Why? Because our builders don't have access to the internet and I want to be sure the packaged version of horizon still passes tests. Matthias
Re: [openstack-dev] [cinder] [nova] Consistency groups?
Hi Phil, Do you have a libvirt volume driver on the Nova side for DRBD? Regarding getting consistency group information to the Nova nodes, can you help me understand the steps you need to go through? 1. Create a consistency group 2. Create a volume and add the volume to the group (repeat this step until all volumes are created and added to the group) 3. Attach a volume in the group 4. Create a snapshot of the consistency group Do you set up the volume on the Nova side at step 3? We currently don't have a group-level API that sets up all volumes in a group. Is it possible for you to detect whether a volume is in a group when attaching a single volume, and set up all volumes in the same group? Otherwise, it sounds like we need to add a group-level API for this purpose. Thanks, Xing -Original Message- From: Philipp Marek [mailto:philipp.ma...@linbit.com] Sent: Friday, November 14, 2014 2:58 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [cinder] [nova] Consistency groups? Hi, I'm working on the DRBD Cinder driver, and am looking at the Nova side, too. Is there any idea how Cinder's consistency groups should be used on the Nova nodes? DRBD has easy support for consistency groups (a DRBD resource is a collection of DRBD volumes that share a single, serialized connection) and so can guarantee write consistency across multiple volumes. [ Which does make sense anyway; eg. with multiple iSCSI connections one could break down because of STP or other packet loss, and then the storage backend and/or snapshots/backups/etc. wouldn't be consistent anymore.] What I'm missing now is a way to get the consistency group information to the Nova nodes. I can easily put such a piece of data into the transmitted transport information (along with the storage nodes' IP addresses etc.)
and use it on the Nova side; but that also means that on the Nova side there'll be several calls to establish the connection, and several for tear down - and (to exactly adhere to the API contract) I'd have to make sure that each individual volume is set up (and closed) in exactly that order again. That means quite a few unnecessary external calls, and so on. Is there some idea, proposal, etc., that says that *within a consistency group* all volumes *have* to be set up and shut down *as a single logical operation*? [ well, there is one now ;] Because in that case all volume transport information can (optionally) be transmitted in a single data block, with several iSCSI/DRBD/whatever volumes being set up in a single operation; and later calls (for the other volumes in the same group) can be simply ignored as long as they have the same transport information block in them. Thank you for all pointers to existing proposals, ideas, opinions, etc. Phil -- : Ing. Philipp Marek : LINBIT | Your Way to High Availability : DRBD/HA support and consulting http://www.linbit.com : DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
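Philipp's proposed contract (set up all volumes of a consistency group as one logical operation, then ignore later per-volume calls that carry the same transport block) can be sketched as an idempotent connector. This is an illustration of the suggested semantics, not Cinder or Nova code, and all names are hypothetical:

```python
class GroupConnector:
    """Connect every volume of a consistency group in one operation;
    repeated calls carrying the same transport block become no-ops."""

    def __init__(self, backend_connect):
        # backend_connect does the real work (iSCSI/DRBD setup) once
        # per group, for all volumes described in the transport block.
        self._backend_connect = backend_connect
        self._seen_blocks = set()

    def connect_volume(self, transport_block):
        """Per-volume API entry point.

        transport_block must be hashable (e.g. a tuple of the group's
        volume ids plus storage-node addresses) and identical for every
        volume in the same group, so later calls can be recognized.
        """
        if transport_block in self._seen_blocks:
            return False  # group already set up; nothing to do
        self._seen_blocks.add(transport_block)
        self._backend_connect(transport_block)
        return True
```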
Re: [openstack-dev] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .
Excerpts from Samuel Merritt's message of 2014-11-14 10:06:53 -0800: On 11/13/14, 10:19 PM, Sachin Goswami wrote: In OpenStack Swift - xfs file system is integrated which provides a maximum file system size of 8 exbibytes minus one byte (2^63-1 bytes). Not exactly. The Swift storage nodes keep their data on POSIX filesystems with support for extended attributes. While XFS filesystems are typically used, XFS is not required. We are studying use of LTFS integration with OpenStack Swift for scenarios like *Data Archival as a Service*. Was integration of LTFS with Swift considered before? If so, can you please share your study output? Will integration of LTFS with Swift fit into existing Swift architecture? Assuming it's POSIX enough and supports extended attributes, a tape filesystem on a spinning disk might technically work, but I don't see it performing well at all. If you're talking about using actual tapes for data storage, I can't imagine that working out for you. Most clients aren't prepared to wait multiple minutes for HTTP responses while a tape laboriously spins back and forth, so they'll just time out. Agreed. You'd need to have a separate API for freezing and thawing data, I think, similar to the way glacier works. However, my understanding of glacier is that it is simply a massive bank of cheap disks which are largely kept powered off until either a ton of requests for data on a single disk arrive, or a certain amount of time has passed. The benefit of this is that there is no intermediary storage required. The disks are either online, and you can read your data, or offline, and you have to wait.
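The Glacier-style model described above separates requesting data from its availability: a thaw is asynchronous, and reads fail fast until the data is back online instead of holding an HTTP connection open for minutes. A toy state machine for that API shape (hypothetical names, not a real Swift interface):

```python
FROZEN, THAWING, ONLINE = "frozen", "thawing", "online"

class ArchiveObject:
    """Object in a freeze/thaw archival tier (toy model)."""

    def __init__(self, data):
        self.data = data
        self.state = ONLINE

    def freeze(self):
        self.state = FROZEN

    def request_thaw(self):
        # The actual recall (tape mount / disk spin-up) happens in
        # the background; the client polls or gets notified later.
        if self.state == FROZEN:
            self.state = THAWING

    def complete_thaw(self):
        if self.state == THAWING:
            self.state = ONLINE

    def read(self):
        if self.state != ONLINE:
            # Fail fast so clients retry later instead of timing out.
            raise RuntimeError("data not online; request a thaw and retry")
        return self.data
```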