Re: [openstack-dev] [python3] Enabling py37 unit tests
On 10/11/18 1:35 AM, Goutham Pacha Ravi wrote:
> Thanks Corey for starting this effort. I proposed changes to
> manila repos to use your template [1] [2], but the interpreter's not
> being installed. Do you need to make any bindep changes to enable the
> "universe" ppa and install python3.7 and python3.7-dev?

Your best bet on Debian-based distributions is probably to install python3-all *or* python3-all-dev. The -all suffix means all available versions (so, in the case of Debian Sid currently, it will install both Python 3.6 and 3.7). python3-all-dev also pulls in python3-all, so you never need *both*. You also don't need the -dev variant unless you have Python packages that need Python.h (i.e.: embedded C code in a Python module).

So, just switch to that, and you're good to go *forever*, independently of which Python version is available in a given OS version! :)

I hope this helps,
Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
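Translated into gate terms, the advice above could live in each project's bindep.txt. A sketch, assuming the standard bindep profile syntax (the profile tags may need adjusting for a given job):

```ini
# bindep.txt sketch: pull in every supported Python 3 interpreter on
# Debian/Ubuntu nodes, instead of hardcoding python3.7
python3-all [platform:dpkg]
# only needed when building C extensions (anything that includes Python.h)
python3-all-dev [platform:dpkg]
```

With this, the same bindep entry keeps working as distros add or drop interpreter versions, which is the point of the -all packages.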
[openstack-dev] [barbican] No ca_file in the KeystonePassword class
Hi,

Trying to implement kms_keymaster in Swift (to enable encryption), I have found out that Castellan's KeystonePassword doesn't include any option for root CA certificates (nor an insecure=True option). In such a configuration, it's not easy to test. So my question is: has anyone from the Barbican team thought about this, and/or is there any workaround for this? Going to production without any possibility to test with fake certs is a little bit annoying... :P

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [oslo] No complaints about rabbitmq SSL problems: could we have this in the logs?
On 10/31/18 2:40 PM, Mohammed Naser wrote:
> For what it’s worth: I ran into the same issue. I think the problem lies a
> bit deeper, because it’s a problem with kombu: when debugging, I saw that
> oslo.messaging tried to connect and hung afterwards.
>
> Sent from my iPhone
>
>> On Oct 31, 2018, at 2:29 PM, Thomas Goirand wrote:
>>
>> Hi,
>>
>> It took me a long long time to figure out that my SSL setup was wrong
>> when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
>> (or Heat itself) never warned me that something was wrong; I just got
>> nothing working, and no log at all.
>>
>> I'm sure I wouldn't be the only one happy to have this type of
>> problem yelled out loud in the logs. Right now, it does work if I
>> turn off SSL, though I'm still not sure what's wrong in my setup, and
>> I'm given no clue whether the issue is on the rabbitmq-server or on the
>> client side (i.e.: Heat, in my current case).
>>
>> Just a wishlist... :)
>> Cheers,
>>
>> Thomas Goirand (zigo)

I've opened a bug here:
https://bugs.launchpad.net/oslo.messaging/+bug/1801011

Cheers,

Thomas Goirand (zigo)
[openstack-dev] [oslo] No complaints about rabbitmq SSL problems: could we have this in the logs?
Hi,

It took me a long long time to figure out that my SSL setup was wrong when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo (or Heat itself) never warned me that something was wrong; I just got nothing working, and no log at all.

I'm sure I wouldn't be the only one happy to have this type of problem yelled out loud in the logs. Right now, it does work if I turn off SSL, though I'm still not sure what's wrong in my setup, and I'm given no clue whether the issue is on the rabbitmq-server or on the client side (i.e.: Heat, in my current case).

Just a wishlist... :)
Cheers,

Thomas Goirand (zigo)
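Until something like this lands in oslo.messaging, operators can at least smoke-test the broker's TLS endpoint themselves and get the error surfaced in a log. A minimal sketch using only the standard library (not oslo code; the "fake broker" below merely simulates a rabbitmq that was never switched to TLS, which answers the handshake with a plain AMQP greeting):

```python
import logging
import socket
import ssl
import threading

logging.basicConfig(level=logging.INFO,
                    format="%(levelname)s %(name)s: %(message)s")
LOG = logging.getLogger("amqp-tls-check")

# Stand-in for a misconfigured broker: a plain-TCP listener that replies
# to the TLS ClientHello with a non-TLS AMQP protocol header.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def fake_plain_broker():
    conn, _ = server.accept()
    conn.recv(4096)                        # swallow the ClientHello
    conn.sendall(b"AMQP\x00\x00\x09\x01")  # plain AMQP greeting, not TLS
    conn.close()

threading.Thread(target=fake_plain_broker, daemon=True).start()

def check_broker_tls(host, port, timeout=5.0):
    """Attempt a TLS handshake; return an error string on failure, else None."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # we only test the handshake here
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host):
                return None
    except (ssl.SSLError, OSError) as exc:
        return "%s: %s" % (type(exc).__name__, exc)

error = check_broker_tls("127.0.0.1", port)
if error:
    # This is the "yell out loud in the logs" part the thread asks for.
    LOG.error("TLS handshake with broker 127.0.0.1:%d failed: %s", port, error)
print("handshake failed:", error is not None)
```

The handshake fails immediately instead of hanging, and the failure reason lands in the log rather than being silently swallowed.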
[openstack-dev] Announcing OpenStack Cluster Installer (OCI)
Hi,

After about 6 months of development, I'm proud to announce the first fully working version of OCI. Here's the description:

OCI (OpenStack Cluster Installer) is software that provisions an OpenStack cluster automatically. The package installs a provisioning machine, which consists of a DHCP server, a PXE boot server, a web server, and a puppet-master. Once computers in the cluster boot for the first time, a Debian live system is served by OCI to act as a discovery image. This live system then reports the hardware features back to OCI. The computers can then be installed with Debian from that live system, configured with a puppet-agent that will connect to the puppet-master of OCI. After Debian is installed, the server reboots into it, and OpenStack services are then provisioned on these machines, depending on their role in the cluster.

Currently, OCI can only install a highly available Swift cluster. In the future, it will be able to deploy full compute clouds. Stay tuned, or contribute: now is the perfect time to influence OCI's design. OCI has been deployed in production at Infomaniak and has been used for deploying a cross-data-center, fully redundant Swift cluster. We're currently working on adding the compute feature to it.

Internally, OCI uses the Puppet modules of puppet-openstack, and is fully packaged and tested in Debian Sid, Buster and Stretch (including the Puppet modules). It is available from your closest Debian mirror in Sid and Buster, and also through the unofficial stretch-rocky.debian.net backport repository. A simple "apt-get install openstack-cluster-installer" is enough to install all needed modules.

For further information, see:
https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer

To get in touch, contribute, or ask for support, please join the team's IRC channel #debian-openstack on the OFTC network.
Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] Proposal for a process to keep up with Python releases
On 10/26/18 7:11 PM, Zane Bitter wrote:
> On 26/10/18 5:09 AM, Thomas Goirand wrote:
>> On 10/22/18 9:12 PM, Zane Bitter wrote:
>>> On 22/10/18 10:33 AM, Thomas Goirand wrote:
>>>> This can only happen if we have supporting distribution packages for
>>>> it. IMO, this is a call for using Debian Testing or even Sid in the gate.
>>>
>>> It depends on which versions we choose to support, but if necessary yes.
>>
>> If what we want is early detection of problems with the latest
>> versions of Python, then there aren't many alternatives.
>
> I think a lot depends on the relative timing of the Python release, the
> various distro release cycles, and the OpenStack release cycle. We
> established that for 3.7 that's the only way we could have done it in
> Rocky; for 3.8, who knows.

No need for a crystal ball... Python 3.8 is scheduled to be released in summer 2019. As Buster is to be frozen early that same year, Buster should be out before Python 3.8. So, there's a good chance that Python 3.8 will be in Debian Sid/Bullseye before anywhere else again, probably just after the release of the OpenStack T release, meaning OpenStack will most likely be broken again in Debian Sid.

> I agree that bugs with future versions of Python are always worth fixing
> ASAP, whether or not we are able to test them in the gate.

:)
Re: [openstack-dev] Proposal for a process to keep up with Python releases
On 10/22/18 9:12 PM, Zane Bitter wrote:
> On 22/10/18 10:33 AM, Thomas Goirand wrote:
>> This can only happen if we have supporting distribution packages for it.
>> IMO, this is a call for using Debian Testing or even Sid in the gate.
>
> It depends on which versions we choose to support, but if necessary yes.

If what we want is early detection of problems with the latest versions of Python, then there aren't many alternatives. I don't really understand why you write that it "depends on which version we choose to support". That's the kind of answer I find very frustrating when I submit a bug and am told "we don't support this version". My reasoning is: the earlier we detect and fix problems, the better, and that's orthogonal to which versions of Python we want to support. Delaying bugfixes and compatibility with the latest Python version leads nowhere; the best approach is to test with it when possible (even in a non-voting mode).

Cheers,

Thomas Goirand (zigo)
[openstack-dev] [horizon] xstatic-bootstrap-datepicker and twitter-bootstrap dependency
Hi,

The python3-xstatic-bootstrap-datepicker Debian package runtime-depends on libjs-twitter-bootstrap-datepicker, which itself depends on libjs-twitter-bootstrap, which is produced by the twitter-bootstrap source package. twitter-bootstrap will go away from Debian Buster, as per https://bugs.debian.org/907724

So, a few questions here:
- Do I really need to have libjs-twitter-bootstrap-datepicker depend on libjs-twitter-bootstrap (which is version 2 of Bootstrap)?
- Is Horizon using Bootstrap 3?
- What action does the Horizon team suggest to keep Horizon working in Debian?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [all] [tc] [api] Paste Maintenance
On 10/22/18 12:55 PM, Chris Dent wrote:
>> My assumption is that it's "something we plan to minimally maintain
>> because we depend on it", in which case all options would work: the
>> exact choice depends on whether there is anybody interested in helping
>> maintain it, and where those contributors prefer to do the work.
>
> Thus far I'm not hearing any volunteers. If that continues to be the
> case, I'll just keep it on Bitbucket, as that's the minimal change.

Could you please move it to GitHub, so that at least it's easier to check out? Mercurial is always a pain...

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] Proposal for a process to keep up with Python releases
On 10/19/18 5:17 PM, Zane Bitter wrote:
> We have traditionally held to the principle that we want each release to
> support the latest release of CentOS and the latest LTS release of
> Ubuntu, as they existed at the beginning of the release cycle.[2]
> Currently this means in practice one version of py2 and one of py3, but
> in the future it will mean two, usually different, versions of py3.

It's not very nice to forget about the Debian case, which usually closely precedes Ubuntu. If you want to support Ubuntu better, then supporting Debian better helps. I usually hit the issues before everyone else, as Sid is the distro that is updated most often. Therefore, please make sure to include Debian in your proposal.

> For unit tests, the most important thing is to test on the versions of
> Python we target. It's less important to be using the exact distro that
> we want to target, because unit tests generally won't interact with
> stuff outside of Python.

One of the recurring problems I'm facing in Debian is that not only is the Python 3 version lagging behind, but the OpenStack dependencies are also lagging behind the distro. Often, the answer is "we don't support this or that version of X", which of course is very frustrating. One thing which would be super nice would be a non-voting gate job that also tests with the latest version of every Python dependency, so we get to see breakage early. We stopped seeing such breakage when we decided it broke too often and chose to hide problems behind the global-requirements mechanism.

And sometimes, we get weird interactions. For example, taskflow was broken on Python 3.7 before this patch:
https://salsa.debian.org/openstack-team/libs/python-taskflow/commit/6a10261a8a147d901c07a6e7272dc75b9f4d0988
which broke multiple packages using it. Funny thing: it looks like it wouldn't have happened if we didn't have a pre-release of Python 3.7.1 in Sid, apparently. Anyway, this can happen again.
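One way to get that early signal without blocking anyone is a non-voting unit-test job against the newest interpreter. A hypothetical zuul sketch (assuming a py37 tox job such as openstack-tox-py37 exists in the shared job library; the exact job name is an assumption):

```yaml
# .zuul.yaml sketch: run py37 unit tests on every change, but don't let
# failures block the gate while 3.7 support is still being worked on
- project:
    check:
      jobs:
        - openstack-tox-py37:
            voting: false
```

Once the job is consistently green, flipping it to voting is a one-line change.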
> So, for example, (and this is still under active debate) for Stein we
> might have gating jobs for py35 and py37, with a periodic job for py36.
> The T jobs might only have voting py36 and py37 jobs, but late in the T
> cycle we might add a non-voting py38 job on master so that people who
> haven't switched to the U template yet can see what, if anything,
> they'll need to fix.

This can only happen if we have supporting distribution packages for it. IMO, this is a call for using Debian Testing or even Sid in the gate.

> We'll run the unit tests on any distro we can find that supports the
> version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian
> unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a
> particular Python version before trying to test it.

I very much agree with that.

> Before the start of each cycle, the TC would determine which range of
> versions we want to support, on the basis of the latest one we can find
> in any distro and the earliest one we're likely to need in one of the
> supported Linux distros.

Releases of Python aren't aligned with OpenStack cycles. Python 3.7 appeared late in the Rocky cycle. Therefore, unfortunately, doing what you propose above doesn't address the issue.

> Integration Tests
> -----------------
>
> Integration tests do test, amongst other things, integration with
> non-openstack-supplied things in the distro, so it's important that we
> test on the actual distros we have identified as popular.[2] It's also
> important that every project be testing on the same distro at the end of
> a release, so we can be sure they all work together for users.

I find it very disturbing to see the project leaning toward only these 2 distributions. Why not SUSE & Debian?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [election][tc]Question for candidates about global reachout
On 09/14/2018 09:52 PM, Zhipeng Huang wrote:
> This is a joint question from mnaser and me :)
>
> For the candidates who are running for TC seats, please reply to this
> email to indicate if you are open to using certain social media apps in
> certain regions (like WeChat in China).

Even though I do use WeChat, because some of my Chinese friends only know that, I am strongly against using such a proprietary network, especially for open development. It's ok-ish if some Chinese contributors want to create a local community on WeChat. It's not ok if the whole project vouches for this type of network.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [goals][python3] mixed versions?
On 09/13/2018 12:52 AM, Chris Friesen wrote:
> On 9/12/2018 12:04 PM, Doug Hellmann wrote:
>
>>> This came up in a Vancouver summit session (the python3 one, I think).
>>> General consensus there seemed to be that we should have grenade jobs
>>> that run python2 on the old side and python3 on the new side and test
>>> the update from one to the other through a release that way.
>>> Additionally, there was thought that the nova partial job (and similar
>>> grenade jobs) could hold the non-upgraded node on python2, and that
>>> node would talk to a python3 control plane.
>>>
>>> I haven't seen or heard of anyone working on this yet though.
>>>
>>> Clark
>>
>> IIRC, we also talked about not supporting multiple versions of
>> Python on a given node, so all of the services on a node would need
>> to be upgraded together.
>
> As I understand it, the various services talk to each other using
> over-the-wire protocols. Assuming this is correct, why would we need to
> ensure they are using the same Python version?
>
> Chris

There are indeed a few cases where things can break, especially with character encoding. If you want an example of what may go wrong, here's one with Cinder and Ceph:
https://review.openstack.org/568813

Without the encodeutils.safe_decode() call, Cinder over Ceph was just crashing for me on Debian (Debian is full Python 3 now...). In this example, we're just going over the wire, and the data was supposed to be the same. Yet only an integration test could have detected it (and I discovered it running puppet-openstack on Debian).

Cheers,

Thomas Goirand (zigo)
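The referenced fix boils down to decoding bytes coming off the wire before handing them to code that expects text. A simplified sketch of what a helper like oslo's encodeutils.safe_decode does (the real oslo_utils function handles more incoming encodings and error strategies; this is just the core idea):

```python
def safe_decode(text, incoming='utf-8', errors='strict'):
    """Return text as str, decoding bytes if necessary.

    Simplified sketch of oslo_utils.encodeutils.safe_decode: bytes
    coming off the wire (e.g. from a Ceph driver) are decoded to str,
    while values that are already str pass through untouched.
    """
    if isinstance(text, bytes):
        return text.decode(incoming, errors)
    return text

# bytes from the wire become text; text is left alone
print(safe_decode(b'volume-00000001'))   # → volume-00000001
print(safe_decode('volume-00000001'))    # → volume-00000001
```

On Python 2 the bytes/str confusion was often silently papered over; on Python 3 the same code raises a TypeError or produces b'...'-prefixed garbage, which is exactly the class of breakage only an integration test across the wire will catch.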
Re: [openstack-dev] better name for placement
On 09/04/2018 06:25 PM, Jay Pipes wrote:
> On 09/04/2018 12:17 PM, Doug Hellmann wrote:
>> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400:
>>> On 09/04/2018 11:44 AM, Doug Hellmann wrote:
>>>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100:
>>>>> On Tue, 4 Sep 2018, Jay Pipes wrote:
>>>>>
>>>>>> Is there a reason we couldn't have openstack-placement be the
>>>>>> package name?
>>>>>
>>>>> I would hope we'd be able to do that, and probably should do that.
>>>>> 'openstack-placement' seems a fine PyPI package name for a thing
>>>>> from which you do 'import placement' to do some OpenStack stuff,
>>>>> yeah?
>>>>
>>>> That's still a pretty generic name for the top-level import, but I
>>>> think the only real risk is that the placement service couldn't be
>>>> installed at the same time as another package owned by someone else
>>>> that used that top-level name. I'm not sure how much of a risk that
>>>> really is.
>>>
>>> You mean if there was another Python package that used the package name
>>> "placement"?
>>>
>>> The alternative would be to make the top-level package something like
>>> os_placement instead?
>
> Either one works for me. Though I'm pretty sure that it isn't necessary.
> The reason it isn't necessary is because the stuff in the top-level
> placement package isn't meant to be imported by anything at all.

In a distro, no two packages can ship the same file. That's forbidden, and it has nothing to do with whether someone has to "import placement" or not. I'm just pointing this out, *not* saying that we should rename (I haven't spotted any conflict yet, and I understand the pain a rename would induce). This command returns nothing:

apt-file search placement | grep python3/dist-packages/placement

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core)
On 08/31/2018 05:45 PM, Eric Fried wrote:
> The openstack/placement project [1] and its core team [2] have been
> established in gerrit.
>
> I hereby nominate Chris Dent for membership in the placement-core team.
> He has been instrumental in the design, implementation, and stewardship
> of the placement API since its inception and has shown clear and
> consistent leadership.
>
> As we are effectively bootstrapping placement-core at this time, it
> would seem appropriate to consider +1/-1 responses from heavy placement
> contributors as well as existing cores (currently nova-core).
>
> [1] https://review.openstack.org/#/admin/projects/openstack/placement
> [2] https://review.openstack.org/#/admin/groups/1936,members

Just a nit-pick... It's a shame we call it just "placement". It could have been something like:

foo: OpenStack placement

Just like we have:

nova: OpenStack compute

No? Is it too late?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!)
On 08/30/2018 11:33 PM, Jeremy Stanley wrote:
> On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote:
> [...]
>> I really don't want this. I'm happy with things being sorted into
>> multiple lists, even though I'm subscribed to several of them.
>
> I understand where you're coming from

I'm coming from the time when OpenStack had a list on Launchpad where everything was mixed. We did the split because it was really annoying to have everything mixed.

> I was accustomed to communities where developers had one mailing
> list, users had another, and whenever a user asked a question on the
> developer mailing list they were told to go away and bother the user
> mailing list instead (not even a good, old-fashioned "RTFM" for
> their trouble).

I don't think that's what we are doing. Usually, when someone makes that mistake, we do reply, while at the same time pointing to the correct list.

> You're probably intimately familiar with at least
> one of these communities. ;)

I know what you have in mind! Indeed, on that list, it happens that some people are a bit harsh to users. Hopefully, the folks in OpenStack development aren't like this.

> As the years went by, it's become apparent to me that this is
> actually an antisocial behavior pattern

On the OpenStack lists, every day, some developers take the time to answer users. So I don't see what there is to fix.

> I believe OpenStack actually wants users to see the
> development work which is underway, come to understand it, and
> become part of that process.

Users are very much welcome on our -dev list. I don't think there's a problem here.

> Requiring them to have their
> conversations elsewhere sends the opposite message.

On many occasions, we've sent the correct message.

On 08/30/2018 11:45 PM, Jimmy McArthur wrote:
> IMO this is easily solved by tagging.
> If emails are properly tagged
> (which they typically are), most email clients will properly sort on
> rules and you can just auto-delete if you're 100% not interested in a
> particular topic.

This typically works for folks used to setting tags. It doesn't for newcomers, which is what you see with newbies coming to ask questions.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [all] Bringing the community together (combine the lists!)
On 08/30/2018 08:57 PM, Chris Friesen wrote:
> On 08/30/2018 11:03 AM, Jeremy Stanley wrote:
>
>> The proposal is simple: create a new openstack-discuss mailing list
>> to cover all the above sorts of discussion and stop using the other
>> four.
>
> Do we want to merge usage and development onto one list?

I really don't want this. I'm happy with things being sorted into multiple lists, even though I'm subscribed to several of them.

Thomas
Re: [openstack-dev] Paste unmaintained
On 08/08/2018 04:38 PM, Chris Dent wrote:
> On Wed, 8 Aug 2018, Thomas Goirand wrote:
>
>> I'd be more than happy to have better logging without the need for
>> paste/pastescript, but so far, that's the only way I found that worked
>> with uwsgi. Do you know any other way?
>
> Yes, use systemd or some other supervisor which is responsible for
> catching stderr. That's why I pointed to devstack and my container
> thing. Not because I think devstack is glorious or anything, but
> because the logging works and presumably something can be learned
> from that.
>
> Apparently what you're doing in the Debian packages doesn't work
> (without logging middleware), which isn't surprising because that's
> exactly how uwsgi and WSGI are supposed to work.
>
> What I've been trying to suggest throughout this subthread is that
> it sounds like however things are being packaged in Debian is not
> right, and that something needs to be changed. Also that your bold
> assertion that uwsgi doesn't work without Paste is only true in the
> narrow way in which you are using it (which is the wrong way to use
> it).

Thanks. I'll try to investigate then. However, the way you're suggesting mandates systemd, which is probably not desirable.

Cheers,

Thomas Goirand (zigo)
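For reference, the supervisor-based setup Chris describes might look something like the following systemd unit fragment (a sketch only; the service name and paths are hypothetical examples, not Debian packaging as shipped):

```ini
# /etc/systemd/system/heat-api.service (hypothetical sketch)
[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/heat/heat-api-uwsgi.ini
# let systemd capture everything the WSGI application writes to
# stdout/stderr into the journal, instead of relying on paste-logger
# inside uwsgi
StandardOutput=journal
StandardError=journal
```

The trade-off raised in the thread stands: this ties log capture to the supervisor (systemd, a container runtime, etc.) rather than to uwsgi itself.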
Re: [openstack-dev] Paste unmaintained
On 08/08/2018 10:43 AM, Chris Dent wrote:
> On Wed, 8 Aug 2018, Thomas Goirand wrote:
>
>> If you don't configure uwsgi to do any special logging, then the only
>> thing you'll see in the log file is client requests, without any kind of
>> logging from the wsgi application. To have proper logging, one needs to
>> add, in the uwsgi config file:
>>
>> paste-logger = true
>>
>> If you do that, then you need python3-pastescript installed, which
>> itself depends on the python3-paste package.
>>
>> Really, I don't see how an operator could run without the paste-logger
>> option activated. Without it, you see nothing in the logs.
>
> I'm pretty sure your statements here are not true. In the uwsgi
> configs for services in devstack, paste-logger is not used.

I have never mentioned devstack! :)

> Can you please point me to where you are seeing these problems?

In the Debian packages, if I don't set paste-logger = true, I will not see any debug output.

> Clearly something is confused somewhere. Is the difference in our
> experiences that both of the situations I describe above are happy
> with logging being on stderr, and you're talking about being able to
> configure logging to files, within the application itself?

If there's no paste-logger, what the application prints on stderr doesn't appear in the log file that uwsgi logs into. That's precisely what paste-logger fixes.

> If that's
> the case then my response would be: don't do that. Let systemd, or
> your container, or apache2, or whatever process/service orchestration
> system you have going manage that. That's what they are there for.

I'd be more than happy to have better logging without the need for paste/pastescript, but so far, that's the only way I found that worked with uwsgi. Do you know any other way?
Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 08/07/2018 06:10 PM, Corey Bryant wrote:
> I was concerned that there wouldn't be any
> gating until Ubuntu 20.04 (April 2020)

Same over here. I'm concerned that it takes another 2 years, which really, we cannot afford.

> but Py3.7 is available in bionic today.

Is Bionic going to be released with Py3.7? At DebConf18 in Taiwan, Doko didn't seem completely sure about it.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] Paste unmaintained
On 08/07/2018 05:17 PM, Doug Hellmann wrote:
> Excerpts from Thomas Goirand's message of 2018-08-07 16:57:59 +0200:
>> On 08/02/2018 04:27 PM, Doug Hellmann wrote:
>>>
>>> The last I heard, a few years ago Ian moved away from Python to
>>> JavaScript as part of his work at Mozilla. The support around
>>> paste.deploy has been sporadic since then, and was one of the reasons
>>> we discussed a goal of dropping paste.ini as a configuration file.
>>
>> Doug,
>>
>> It's nice to have the direct dependencies, but this doesn't cover
>> everything. If using uwsgi, if you want any kind of logging from the
>> wsgi application, you need to use PasteScript, which itself
>> runtime-depends on Paste. So, anything which potentially has an API
>> also depends indirectly on Paste.
>
> I'm not sure why that would be the case. Surely *any* middleware could
> set up logging?
>
> Doug

Doug,

If you don't configure uwsgi to do any special logging, then the only thing you'll see in the log file is client requests, without any kind of logging from the wsgi application. To have proper logging, one needs to add, in the uwsgi config file:

paste-logger = true

If you do that, then you need python3-pastescript installed, which itself depends on the python3-paste package.

Really, I don't see how an operator could run without the paste-logger option activated. Without it, you see nothing in the logs.

I hope this helps,
Cheers,

Thomas Goirand (zigo)
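Concretely, the uwsgi configuration described above might look like this (a sketch; the log path and the surrounding options are hypothetical examples, only the paste-logger line is the point):

```ini
[uwsgi]
; without paste-logger, only client request lines end up here
logto = /var/log/heat/heat-api.log
; route the WSGI application's own logging through PasteScript's
; logger plugin (requires python3-pastescript, which pulls in
; python3-paste)
paste-logger = true
```

This is what makes application-level log output land in the uwsgi log file rather than disappearing on stderr.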
Re: [openstack-dev] Paste unmaintained
> | …                                       | …                                                       |   … | Paste # MIT        |
> | tricircle                               | requirements.txt                                        |   7 | Paste>=2.0.2 # MIT |
> | trio2o                                  | requirements.txt                                        |   7 | Paste # MIT        |
> | trove                                   | requirements.txt                                        |  11 | Paste>=2.0.2 # MIT |
> | upstream-institute-virtual-environment  | elements/upstream-training/static/tmp/requirements.txt  | 147 | Paste==2.0.3       |

Doug,

It's nice to have the direct dependencies, but this doesn't cover everything. If using uwsgi, if you want any kind of logging from the wsgi application, you need to use PasteScript, which itself runtime-depends on Paste. So, anything which potentially has an API also depends indirectly on Paste.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 08/07/2018 03:24 PM, Sean Mooney wrote:
> so im not sure pushing for python 3.7 is the right thing to do. also i
> would not assume all distros will ship 3.7 in the near term. i have not
> checked lately, but i believe CentOS 7 only makes 3.4 and 3.6 available
> in the default repos. ubuntu 18.04 ships with 3.6 i believe

The current plan for Debian is that we'll be trying to push for Python 3.7 for Buster, which freezes in January. This freeze date means that it's going to be Rocky that ends up in the next Debian release. If Python 3.7 is a failure, then late November, we will remove Python 3.7 from Unstable and let Buster release with 3.6.

As for Ubuntu, it is currently unclear if 18.10 will be released with Python 3.7 or not, but I believe they are trying to do that. If not, then 19.04 will for sure be released with Python 3.7.

> im not sure about other linux distros, but since most openstack
> deployments are done on LTS releases of operating systems, i would
> suspect that python 3.6 will be the main python 3 version we see
> deployed in production for some time.

In short: that's wrong.

> having a 3.7 gate is not a bad idea, but priority-wise, having a 3.6
> gate would be much higher on my list.

Wrong list. One version behind.

> i think we as a community will have to decide on the minimum and
> maximum python 3 versions we support for each release and adjust as we
> go forward.

Whatever the OpenStack community decides is not going to change what distributions like Debian will do. This type of reasoning lacks a much-needed humility.

> i would suggest a min of 3.5 and a max of 3.6 for rocky.

My suggestion is that these bugs are of very high importance and should at least get attention. That the gate for Python 3.7 isn't ready, I can understand, as everyone's time is limited. This doesn't mean that the OpenStack community at large should just dismiss patches that are important for downstream.
> for stein perhaps bump that to a min of 3.6 and max of 3.7 but i think this
> is something that needs to be addressed community wide via a governance
> resolution rather than per project.

At this point, dropping 3.5 isn't a good idea either, even for Stein.

> it will also impact the external python libs we can depend on too which is
> another reason i think this needs to be a community wide discussion and
> goal that is informed by what distros are doing but not mandated by what
> any one distro is doing.
> regards
> sean.

Postponing any attempt to support anything current is always a bad idea. I don't see why there's even a controversy when one attempts to fix bugs that will, sooner or later, also hit the gate.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 08/06/2018 09:02 PM, Sean McGinnis wrote:
>> I didn't have time to investigate these, but at least Glance was
>> affected, and a patch was sent (as well as an async patch). None of them
>> has been merged yet:
>>
>> https://review.openstack.org/#/c/586050/
>> https://review.openstack.org/#/c/586716/
>>
>> That'd be ok if at least there were some reviews. It looks like nobody
>> cares but Debian & Ubuntu people... :(
>
> Keep in mind that your priorities are different than everyone else's. There
> are large parts of the community still working on Python 3.5 support (our
> officially supported Python 3 version), as well as smaller teams overall
> working on things like critical bugs.
>
> Unless and until we declare Python 3.7 as our new target (which I don't
> think we are ready to do yet), these kinds of patches will be on a best
> effort basis.

This is exactly what I'm complaining about. OpenStack upstream has very wrong priorities. If we really are to switch to Python 3, then we have to make sure we're current, because that's the version distros end up running. Or maybe we only care if "it works on devstack" (tm)?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 08/02/2018 10:43 AM, Andrey Kurilin wrote:
> There's also some "raise StopIteration" issues in:
> - ceilometer
> - cinder
> - designate
> - glance
> - glare
> - heat
> - karbor
> - manila
> - murano
> - networking-ovn
> - neutron-vpnaas
> - nova
> - rally
>
> Can you provide any traceback or steps to reproduce the issue for the
> Rally project?

I'm not sure there's any. The only thing I know is that it has some "raise StopIteration" statements, but I'm not sure if they are part of generators, in which case they should simply be replaced by "return" if you want the code to be Python 3.7 compatible.

I didn't have time to investigate these, but at least Glance was affected, and a patch was sent (as well as an async patch). None of them has been merged yet:

https://review.openstack.org/#/c/586050/
https://review.openstack.org/#/c/586716/

That'd be ok if at least there were some reviews. It looks like nobody cares but Debian & Ubuntu people... :(

Cheers,

Thomas Goirand (zigo)
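The "replace it with return" advice above comes from PEP 479, which is enforced by default from Python 3.7 on. A minimal sketch (not code from Rally or any of the projects listed) shows both the breakage and the fix:

```python
# PEP 479: from Python 3.7 on, a StopIteration that escapes inside a
# generator body is converted to RuntimeError instead of silently ending
# the iteration.

def first_broken(iterable):
    # The old idiom: letting next() raise StopIteration to terminate the
    # generator. On 3.7+ this raises RuntimeError instead.
    yield next(iter(iterable))

def first_fixed(iterable):
    # The 3.7-safe fix discussed above: catch it and plain "return".
    try:
        yield next(iter(iterable))
    except StopIteration:
        return

print(list(first_fixed([])))      # empty input: generator just ends cleanly
print(list(first_fixed([1, 2])))  # yields only the first item
```

The fixed variant behaves identically on 3.6 and earlier, so it is a safe patch to carry on every branch.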
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 07/12/2018 10:38 PM, Thomas Goirand wrote:
> Hi everyone!
>
> [...]

Here are more examples that show why we should be gating earlier with newer Python versions:

Nova: https://review.openstack.org/#/c/584365/
Glance: https://review.openstack.org/#/c/586716/
Murano: https://bugs.debian.org/904581
Pyghmi: https://bugs.debian.org/905213

There's also some "raise StopIteration" issues in:
- ceilometer
- cinder
- designate
- glance
- glare
- heat
- karbor
- manila
- murano
- networking-ovn
- neutron-vpnaas
- nova
- rally
- zaqar

It'd be nice to have these addressed ASAP.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
On 07/18/2018 06:42 AM, Ian Wienand wrote:
> While I'm reserved about the idea of full platform functional tests,
> essentially having a wide variety of up-to-date tox environments using
> some of the methods discussed there is, I think, a very practical way to
> be cow-catching some of the bigger issues with Python version updates.
> If we are to expend resources, my 2c worth is that pushing in that
> direction gives the best return on effort.
>
> -i

Hi Ian,

Thanks a lot for your reply, that's very useful.

I very much agree that testing the latest Qemu / libvirt could be a problem if it fails too often, and the same goes for other components; however, these need to be addressed anyway at some point. If we can't do it this way, then we have to define a mechanism to find out. Maybe a dsvm periodic task unrelated to a specific project would do? Anyway, my post was *not* about functional testing, so let's not talk about this.

What I would love to get addressed is catching problems with newer language updates. Having them early avoids downstream distributions doing the heavy work, which is not sustainable considering the number of people involved (about 1 or 2 per distro). For example, "async" becoming a keyword in Python 3.7 is something I would very much have liked to be caught by some kind of upstream CI running unit tests, rather than by Debian and Ubuntu package maintainers fixing the problems as we get FTBFS (Fails To Build From Source) bugs filed in the BTS, and when we find out by ourselves that some package cannot be installed or built. This happened with oslo.messaging, taskflow, etc. And that's just the new Python 3.7 things; there were numerous problems with Python 3.6 as well. Currently, it looks like Heat also has unit test failures in Sid (not sure yet what the issue is).
Waiting for Bionic to be released to start gating unit tests on Python 3.6 is IMO way too late; Debian Sid, for example, was running Python 3.6 about a year before that, and that's what I would like to see fixed. Using either Fedora or SuSE is fine by me, as long as it gets the latest Python language fast enough (does it go as fast as Debian testing?). If it's for doing unit testing only (ie: no functional tests using Qemu, libvirt and other components of this type), that looks like a good plan.

Cheers,

Thomas Goirand (zigo)
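The "async" breakage mentioned in this thread is easy to demonstrate in a few lines. This is a hedged sketch: the `send`/`run_async` names are illustrative, not oslo.messaging's actual API.

```python
import keyword

# On Python 3.7+, "async" is a hard keyword, so old signatures such as
# "def send(target, async=False)" stop even parsing -- the module fails
# to import with a SyntaxError before any test runs.
assert "async" in keyword.kwlist

legacy_source = "def send(target, async=False): pass"
try:
    compile(legacy_source, "<legacy>", "exec")
    parses = True
except SyntaxError:
    parses = False
print("legacy signature parses:", parses)

# Renaming the parameter keeps the module importable on every version
# ("run_async" is an illustrative name, not any project's real API):
def send(target, run_async=False):
    return (target, run_async)

print(send("compute", run_async=True))
```

Because the failure happens at parse time, even an otherwise trivial unit test job on 3.7 would have caught these before the FTBFS bugs were filed.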
[openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate
Hi everyone!

It's yet another of these emails where I'm going to complain out of frustration because of OpenStack having bugs when running with the newest stuff... Sorry in advance! :)

tl;dr: It's urgent, we need Python 3.7 uwsgi + SSL gate jobs.

Longer version:

When Python 3.6 reached Debian, I already forwarded a few patches. It went quite ok, but still... When switching services to Python 3 for Newton, I discovered that many services still had issues with uwsgi / mod_wsgi, and I spent a large amount of time trying to figure out ways to fix the situation. Some patches are still not merged, even though it was a community goal to have this support for Newton:

Neutron:
https://review.openstack.org/#/c/555608/
https://review.openstack.org/#/c/580049/

Neutron FWaaS:
https://review.openstack.org/#/c/580327/
https://review.openstack.org/#/c/579433/

Horizon tempest plugin:
https://review.openstack.org/#/c/575714/

Oslotest (clearly, the -1 is from someone considering only Devstack / venv, not understanding packaging environments):
https://review.openstack.org/#/c/571962/

Designate: as far as I know, it still doesn't support uwsgi / mod_wsgi (please let me know if this changed recently).

There may be more; I didn't have much time to investigate some projects which are less important to me.

Now, both Debian and Ubuntu have Python 3.7. Every package which I upload to Sid needs to support it. Yet, OpenStack's CI is still lagging behind with Python 3.5, and there's lots of things currently broken. We've fixed most "async" stuff, though we are failing to rebuild oslo.messaging (from Queens) with Python 3.7: unit tests are just hanging doing nothing.

I'm very happy to do small contributions to each and every component here and there whenever it's possible, but this time, it's becoming a little bit frustrating. I even got replies like "hum ... OpenStack only supports Python 3.5" a few times. That's not really acceptable, unfortunately.
So moving forward, what I think needs to happen is:
- Get each and every project to actually gate using uwsgi for the API, using both Python 3 and SSL (any other test environment is *NOT* a real production environment).
- The gating has to happen with whatever is the latest Python 3 version available. Best would even be if we could have that *BEFORE* it reaches distributions like Debian and Ubuntu.

I'm aware that there have been some attempts in the OpenStack infra to have Debian Sid (which is probably the distribution getting the updates the fastest). This effort needs to be restarted, and some (non-voting?) gate jobs need to be set up using whatever the latest thing is. If it cannot happen with Sid, then I don't know, choose another platform, and do the Python 3-latest gating... The current situation with the gate still doing Python 3.5 only jobs is just not sustainable anymore.

Moving forward, Python 2.7 will die. When this happens, moving faster with Python 3 versions will be mandatory for everyone, not only for fools like me who made the switch early. :)

Cheers,

Thomas Goirand (zigo)

P.S: A big thanks to everyone who was helpful in making the switch to Python 3 in Debian, especially Annp and the rest of the Neutron team.
Re: [openstack-dev] [neutron][fwaas] How to reproduce in Debian (was: Investigation into debian/l3/wsgi/fwaas error)
On 06/21/2018 04:48 PM, Nate Johnston wrote:
> I will continue to debug the issue tomorrow. I see no linkage at this
> point with any of the previously listed constraints on this scenario.
> So I am going to copy Brian Haley for his L3 expertise, as well as the 3
> FWaaS cores to see if this directs their thoughts in any particular
> direction. I hope to continue the investigation tomorrow.
>
> Thanks,
>
> Nate Johnston
> njohnston

Hi there,

As per the IRC discussion, let me explain to everyone the difference between what I've done in Debian, and what's in the other distributions. First, I would like to highlight the fact that this isn't Debian specific at all.

1/ Why doing: neutron-server -> neutron-api + neutron-rpc-server

On other distros, we use Python 2, therefore neutron-server can be used, and that works with or without SSL. In Debian, since we've switched to Python 3, an Eventlet based API daemon cannot work with SSL, due to a bug in Eventlet itself. That bug has been known since 2015, with no fix coming. What happens is that when a client connects to the API server, Eventlet's monkey patching makes the sockets non-blocking, which breaks the SSL handshake. As a consequence, the only way to run Neutron with Python 3 and SSL is to avoid neutron-server, and use either uwsgi or mod_wsgi. In Debian, most daemons are using uwsgi when possible. In such a mode, the WSGI application is /usr/bin/neutron-api. But this WSGI application, as it's not a daemon (it's an API only, served by a web server), cannot have a thread listening to the RPC bus. So instead, there's neutron-rpc-server to do that job.
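The split can be pictured with a minimal sketch (assumed shapes only, this is not Neutron's code): under uwsgi the process exposes nothing but a WSGI callable, so the long-lived RPC consumer has to move to a second daemon.

```python
def application(environ, start_response):
    # Conceptually what /usr/bin/neutron-api exposes to uwsgi: the HTTP
    # API and nothing else -- no background thread, no bus connection.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"API-only process\n"]

def rpc_server_main():
    # Conceptually what neutron-rpc-server runs instead: the long-lived
    # RPC bus consumer that a web-served WSGI app cannot host itself.
    return "consuming the RPC bus"

# Exercise the WSGI callable directly, without a web server:
captured = {}
body = application({}, lambda status, headers: captured.update(status=status))
print(captured["status"], body)
```

This is why both processes need the same --config-file options: they are two halves of what used to be one neutron-server.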
2/ Bugs already fixed but not merged in neutron for this mode

An Nguyen Phong (annp on IRC) has fixed stuff in neutron for the mode of operation described above, but it's not yet merged:

https://review.openstack.org/#/c/555608/

Without this patch, the l3 agent doesn't know about ml2 extensions, it's impossible to pass startup --config-file= parameters correctly, and the openvswitch agent never applies security-group firewall rules. Please consider reviewing this patch and merging it.

3/ How to reproduce the Debian environment

You can always simply install stuff by hand with packages, but that's boringly long to do. The easiest way is to pop a fresh Stretch, and have puppet run in it to install everything for you. Here are the steps:

a) Boot up a Stretch machine with access to the net.
b) git clone https://github.com/openstack/puppet-openstack-integration
c) cd puppet-openstack-integration
d) git review -d 577281
   This will re-enable FWaaS for the l3 agent. Hopefully, we'll get to the point where this patch can be applied and FWaaS re-enabled.
e) edit all-in-one.sh line 69:

--- a/all-in-one.sh
+++ b/all-in-one.sh
@@ -66,7 +66,7 @@
 export GEM_HOME=`pwd`/.bundled_gems
 gem install bundler --no-rdoc --no-ri --verbose
 set -e
-./run_tests.sh
+SCENARIO=scenario001 ./run_tests.sh
 RESULT=$?
 set +e
 if [ $RESULT -ne 0 ]; then

f) git commit -a -m "test scenario001"
g) ./all-in-one.sh

Note that you may as well test scenarios 2 & 4, which also use OVS, or scenario 3, which uses linuxbridge. After approx one hour, you'll get a full Debian all-in-one installation using packages. If you're not used to it, all the code is in /usr/lib/python3/dist-packages. You may edit code there for your tests.
If you need to re-run a single test, you can do this:

cp /tmp/openstack/tempest/etc/tempest.conf /etc/tempest
cd /var/lib/tempest
tempest_debian_shell_wrapper \
    tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops

Just have a look at /usr/bin/tempest_debian_shell_wrapper, it's a tiny shell script to run tests easily.

Also, feel free to attempt switching to firewall_v2 in the configuration files in /etc/neutron, and then restart the daemons. By default, it's still v1, but if it works with v2, we'll happily apply patches in puppet-openstack for that (which will apply to all distros).

If you need me, just type "zigo" on IRC (I'm in most channels, including #openstack-neutron and #openstack-fwaas), and I'll reply if it's office hours in Geneva/France, or late in my evening.

I hope the above helps,
Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [puppet-openstack][announce][debian] repository address (was: puppet-openstack now has full Debian support)
On 06/20/2018 04:23 PM, Thomas Goirand wrote:
> or using the Debian backports repository at:
>
> http://stretch-queens.infomaniak.ch/debian

I really meant:

deb http://stretch-queens.debian.net/debian \
    strech-queens-backports main
deb http://stretch-queens.debian.net/debian \
    strech-queens-backports-nochange main

which is the official URL for the 2 Stretch backports repositories. Please always use that address, as we may point it somewhere else at some point (maybe at the Infomaniak global mirror).

Cheers,

Thomas Goirand (zigo)
[openstack-dev] [puppet-openstack][announce][debian] puppet-openstack now has full Debian support
Dear Stackers,

I am glad/overjoyed/jazzed to announce the global availability of puppet-openstack for Debian. Indeed, a few minutes ago, the CI turned all green for Debian:

https://review.openstack.org/#/c/576416

(note: the red one for CentOS is to be ignored, it looks like a non-deterministic error)

This is after 3 months of hard work, and more than 50 patches, sometimes on upstream code bases (for example in Cinder, Sahara, and Neutron), often because of Python 3 or uwsgi/mod_wsgi related problems. Some of these patches aren't merged upstream yet, but are included in the Debian packages already. Also note that Debian fully supports SSL and ipv6 endpoints.

I'd like here to publicly thank all of the puppet-openstack core reviewers for their help and enthusiasm. A big thanks to mnaser, tobasco, EmilienM and mwhahaha. Guys, you've been really awesome and helpful with me. Also a big thanks to those upstream who helped fix the bits explained above, and especially annp for fixing the neutron-rpc-server related problems, with the patch still pending reviews at:

https://review.openstack.org/#/c/555608/

All of these puppet modules are available directly in Debian in packaged form. To get them, simply do:

apt-get install openstack-puppet-modules

in Debian Sid, or use the Debian backports repository at:

http://stretch-queens.infomaniak.ch/debian

Still to fix is neutron-fwaas, which seems to not like either Python 3 or using neutron-api over uwsgi (I'm not sure which of these yet). Upstream neutron developers are currently investigating this. For this reason, the neutron firewall extension is currently disabled for the l3-agent, but will be reactivated as soon as a proper fix is found. Also, Ceph in Debian is currently way behind (so we have to use the upstream Debian repository for Stretch), as it lacks proper Python 3 support, and there's still no Luminous release uploaded to Sid. I intend to attempt to fix this, to get a chance to get this in time for Buster.
Cheers,

Thomas Goirand (zigo)
[openstack-dev] minimum libvirt version for nova-compute
Hi,

Trying to get puppet-openstack to validate with Debian, I was surprised that mounting an encrypted volume didn't work for me. Here's the stack dump with libvirt 3.0.0 from Debian Stretch:

  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1463, in attach_volume
    guest.attach_device(conf, persistent=True, live=live)
  File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 303, in attach_device
    self._domain.attachDeviceFlags(device_xml, flags=flags)
  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 186, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 144, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 125, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python3/dist-packages/eventlet/support/six.py", line 625, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/libvirt.py", line 585, in attachDeviceFlags
    if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
libvirt.libvirtError: internal error: unable to execute QEMU command 'object-add': Incorrect number of padding bytes (57) found on decrypted data

After switching to libvirt 4.3.0 (my own backport from Debian Testing), it does work. So, while the minimum version of libvirt seems to be enough for normal operation, it isn't for encrypted volumes. Therefore, I wonder if Nova shouldn't declare a minimum version of libvirt higher than it claims at the moment. I'm raising this especially because we had this topic a few weeks ago.

Thoughts anyone?

Cheers,

Thomas Goirand (zigo)
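A version gate of the kind suggested here is straightforward. This is only a hedged sketch: the 4.3.0 minimum for encrypted volumes is drawn from this report, not an official Nova constant, and the function name is hypothetical.

```python
# Assumed constants: 3.0.0 is Stretch's libvirt (works for normal
# operation); 4.3.0 is what this report suggests encrypted volumes need.
MIN_LIBVIRT_VERSION = (3, 0, 0)
MIN_LIBVIRT_LUKS_VERSION = (4, 3, 0)

def supports_encrypted_volumes(libvirt_version):
    """Return True when attaching encrypted volumes is expected to work."""
    # Tuples compare element-wise, so (4, 3, 0) >= (3, 0, 0) etc. just work.
    return tuple(libvirt_version) >= MIN_LIBVIRT_LUKS_VERSION

print(supports_encrypted_volumes((3, 0, 0)))  # Stretch's libvirt: False
print(supports_encrypted_volumes((4, 3, 0)))  # the backported libvirt: True
```

Checking a feature-specific minimum at attach time, rather than raising the global minimum, would let normal operation keep working on older libvirt.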
Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?
On 06/14/2018 01:10 PM, Erlon Cruz wrote:
> Hi Thomas,
>
> The reserved_percentage *is* taken into account for non thin provisioning
> backends. So you can use it to spare the space you need for backups. It
> is a per backend configuration.

Oh. Reading the doc, I thought it was only for thin provisioning; it's nice if it works with "normal" cinder LVM then... :P

When you say "per backend", does it mean it can be set differently on each volume node?

> If you have already tried to use it and that is not working, please let
> us know what release you are using, because despite this being the
> current (and proper) behavior, it might not have been like this in the past.
>
> Erlon

Will do, thanks.

Cheers,

Thomas
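As a rough model of the behaviour Erlon describes, the capacity math looks something like this (a simplified sketch, not Cinder's actual scheduler code; the function name is made up for illustration):

```python
def free_capacity_ok(total_gb, allocated_gb, request_gb, reserved_percentage):
    """Would a new allocation fit without eating into the reserve?"""
    reserved_gb = total_gb * reserved_percentage / 100.0
    usable_gb = total_gb - reserved_gb
    return allocated_gb + request_gb <= usable_gb

# A 10 TB backend with reserved_percentage = 20 keeps 2 TB off-limits,
# which is what leaves room for backup snapshots:
print(free_capacity_ok(10000, 8000, 1000, 20))  # False: would eat the reserve
print(free_capacity_ok(10000, 7000, 1000, 20))  # True
```

Under this model, setting reserved_percentage high enough to cover your largest volume guarantees the backup snapshot always has space.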
[openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?
Hi,

When using cinder-backup, it first makes a snapshot, then sends the backup wherever it's configured. The issue is, to perform a backup, one needs to make a snapshot of a volume, meaning that one needs the size of the volume as empty space to be able to make the snapshot. So, let's say I have a cinder volume of 1 TB: this means I need 1 TB of empty space on the volume node to be able to do a backup of that volume.

My question is: is there a way to tell cinder to reserve an amount of space for this kind of operation? The only thing I saw was reserved_percentage, but this looks like it's for thin provisioning only.

If this doesn't exist, would such a new option be accepted by the Cinder community, as a per volume node option? Or should we do it as a global setting?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [all] [release] How to handle "stable" deliverables releases
On 06/11/2018 11:53 AM, Thierry Carrez wrote:
> Hi everyone,
>
> As some of the OpenStack deliverables get more mature, we need to adjust
> our release policies to best handle the case of deliverables that do not
> need to be updated that much. This discussion started with how to handle
> those "stable" libraries, but is actually also relevant for "stable"
> services.
>
> Our current models include cycle-tied models (with-intermediary,
> with-milestones, trailing) and a purely cycle-independent model. Main
> OpenStack deliverables (the service components that you can deploy to
> build an OpenStack cloud) are all "released" on a cycle. Libraries are
> typically maintained per-cycle as well. What happens if no change is
> pushed to a service or library during a full cycle? What should we do
> then?
>
> Options include:
>
> 1/ Force artificial releases, even if there are no changes
> 2/ Do not force releases, but still create branches from latest releases
> 2bis/ Like 2, but only create the branch when needed
> 3/ Do not force releases, and reuse stable branches from cycle to cycle
> 4/ Stop worrying about stable branches at all for those "stable" things

FYI, for downstream distribution maintainers, any evolution away from 1/ is fine: it's a bit silly for us to rebuild a new package when there's no need for it. It's a waste of time for the package maintainer, and for users who have to download the new version, etc. We're not really concerned by branches; all we care about is whether there's a new tag to be packaged.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [TC] Stein Goal Selection
On 06/04/2018 08:59 PM, Ivan Kolodyazhny wrote:
> I hope we'll have Ubuntu 18.04 LTS on our gates for this activity soon.
> It becomes important not only for developers but for operators and
> vendors too.

By the time the project is gating on Python 3.6, most likely there's going to be 3.7 or even 3.8 in Debian Sid, and I'll get all the broken stuff alone again... Can't we try to get Sid in the gate, at least in non-voting mode, so we get to see problems early rather than late? As developers, we should always aim for the future, and Bionic should already be considered the past release to maintain, rather than the one to focus on. If we can't get Sid, then should we at least consider the non-LTS (always latest) Ubuntu releases?

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition
On 06/06/2018 10:14 PM, Sean McGinnis wrote:
> On 06/06/2018 03:04 PM, Doug Hellmann wrote:
>> I have started submitting a series of patches to fix up the tox.ini
>> settings for projects as a step towards running "python3 first" [1].
>> The point of doing this now is to give teams a head start on
>> understanding the work involved as we consider whether to make this
>> a community goal.
>
> I would ask that you stop.
>
> While I think this is useful as a quick way of finding out which projects
> will require additional work here and which don't, this is just creating
> a lot of work and overlap.
>
> Some teams are not ready to take this on right now. So unless you are
> planning on actually following through with making the failing ones work,
> it is just adding to the set of failing patches in their review queue.
>
> Other teams are already working on this and working through the failures
> due to the differences between python 2 and 3. So these just end up being
> duplication and a distraction for limited review capacity.

Sean,

Reading these words is very disappointing to me. I very much like the coordination work that Doug is engaging in, and it'd be very frustrating if some projects were refusing to clean up their technical debt regarding Python 3 support. As I wrote in my mail to Doug, the biggest issue I've seen is projects not really running their functional tests under real world conditions. Anyone pushing in that direction should be warmly welcomed. At some point, this should be a top priority so we all get rid of this transition work once and for all, without having that one annoying project that is still lagging behind (see how I'm *not* naming anyone on purpose...). If you're not happy about the way Doug is doing it, just make it happen the way you prefer. As long as it's done soon, everyone will be happy.
Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition
3, aren't running functional tests the same way that operators will (ie: with SSL and uwsgi/mod_wsgi), and because of that, some problems aren't being detected. Just a hint: to run something like neutron-api with ipv4, ipv6 and SSL, you'd run it this way:

/usr/bin/uwsgi_python35 \
    --https-socket [::]:9696,/usr/local/share/ca-certificates/puppet_openstack.crt,/etc/neutron/ssl/private/debian-stretch-ovh-gra1-0004341140.pem \
    --ini /etc/neutron/neutron-api-uwsgi.ini \
    --pyargv "--config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/fwaas_driver.ini"

Notice the [::], meaning ipv4 AND ipv6 at the same time, and how --pyargv needs a quoted list of arguments. Hopefully, this is slowly being addressed, though I would very much like to see a general move to using uwsgi, preferably with SSL and ipv6 turned on and tested. It'd also be nice if projects could ship the uwsgi ini file that they use for functional tests, as it looks like it sometimes depends on the project (for example, I had to add rem-header: content-length for nova-placement-api to avoid connection reset by peer, but I'm not sure if it is even harmful for other projects). Here's an example from the Debian packaging:

https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api-uwsgi.ini

See how I'm deliberately *not* setting "pyargv" there, and prefer it to be set by the init script / systemd service (so that it can be dynamic and load the configuration file needed for the activated plugin), and that http-socket / https-socket is also dynamic, ie: --https-socket is used on the command line if a pair of certificate + private key is found on the hard disk under /etc/neutron/ssl. See https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api.init.in and https://salsa.debian.org/openstack-team/debian/openstack-pkg-tools/tree/debian/queens/init-template to understand how it's built.
Hoping that these packaging-related insights are helpful for the project at large,
Cheers,

Thomas Goirand (zigo)
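The --pyargv quoting rule described above can be automated in an init script; here is a small sketch of doing so (the paths are the examples from this message, and build_uwsgi_cmd is a hypothetical helper, not part of any packaging tool):

```python
import shlex

def build_uwsgi_cmd(ini_path, config_files):
    # All --config-file options must travel as ONE quoted string after
    # --pyargv, since uwsgi hands that string to the WSGI app as its argv.
    pyargv = " ".join("--config-file=%s" % f for f in config_files)
    cmd = ["uwsgi_python35", "--ini", ini_path, "--pyargv", pyargv]
    # shlex.quote() wraps the space-containing pyargv value in quotes:
    return " ".join(shlex.quote(part) for part in cmd)

print(build_uwsgi_cmd(
    "/etc/neutron/neutron-api-uwsgi.ini",
    ["/etc/neutron/neutron.conf",
     "/etc/neutron/plugins/ml2/ml2_conf.ini"]))
```

Building the pyargv string programmatically like this is what lets the init script stay dynamic and pick config files per activated plugin.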
Re: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon
On 05/30/2018 04:13 PM, Ivan Kolodyazhny wrote:
> Hi Thomas,
>
> > As my python3-xstatic-font-awesome removes the embedded fonts
>
> It sounds like you broke xstatic-* packages for Debian and use something
> we don't test with Horizon at all.
>
> Speaking about the Rocky/master version, our upper-constraint is
> XStatic-Font-Awesome===4.7.0.0 [1]. We don't test horizon with font
> awesome v 5.0.10.
>
> > Second, it'd be nice if Horizon could adapt and use the new v5
> > font-awesome, so that the problem is completely solved.
>
> +1. I'll put my +2/A once somebody provides a patch for it with a
> detailed description of how I can test it. Unfortunately, the Horizon
> team has a very limited set of resources, so we can't adopt new
> versions of xstatic-* fast :(.
>
> [1] https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L61
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/

Ivan,

The point of Xstatic packages is that, in distributions, they depend on the asset which is packaged separately, so that there's no duplication of data in the distro. In this case, the python3-xstatic-font-awesome package depends on the fonts-font-awesome package. And it is the latter that got updated in Debian. I don't maintain it, so it's not my fault. This broke many packages, including openstackdocstheme.

Of course, I could revert what was previously done, and have python3-xstatic-font-awesome contain the fonts data again. But that's not desirable. What we really want is to have Horizon fixed, using a newer version of font-awesome (ie: v5) if possible. Using only glyphs from fa-solid-900 would make it possible to have Horizon work with both v4 and v5, which would be even better (of course, package maintainers would have to set correct links to the right font file, but that's a packaging detail).
Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon
Hi Radomir, I'm adding the Debian bug as Cc. On 05/28/2018 08:35 AM, Radomir Dopieralski wrote: > I did a quick search for all the glyphs we are using: > > ~/dev/horizon(master)> ag 'fa-' | egrep -o 'fa-[a-z-]*' | sort | uniq > fa- > fa-angle-left > [...] Thanks for your investigation. I did a quick test, and loaded all of these glyphs into a test HTML page. As far as I can see, only 4 glyphs aren't in fa-solid-900: fa-cloud-upload fa-pencil fa-share-square-o fa-sign-out It'd be nice if we could replace these with glyphs present in fa-solid-900, so we could simply replace the old v4 font with fa-solid-900 only. Your thoughts? Cheers, Thomas Goirand (zigo)
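For anyone without ag (the silver searcher) installed, the glyph inventory above can be reproduced with plain grep. The directory and sample file below are stand-ins for a real Horizon checkout, not actual Horizon paths:

```shell
# Reproduce the glyph search with plain grep instead of ag.
# /tmp/fa-demo/horizon is a throwaway stand-in for a Horizon checkout.
mkdir -p /tmp/fa-demo/horizon
cat > /tmp/fa-demo/horizon/example.html <<'EOF'
<span class="fa fa-pencil"></span>
<span class="fa fa-cloud-upload"></span>
<span class="fa fa-pencil"></span>
EOF
# -r: recurse, -h: no file names, -o: print only the matched glyph names.
grep -rho 'fa-[a-z-]*' /tmp/fa-demo/horizon | sort -u
# prints fa-cloud-upload and fa-pencil, one per line
```

Pointed at an actual checkout, this gives the same de-duplicated glyph list as the ag pipeline quoted above.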
[openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon
Hi Horizon team! I'm not sure if you're aware of that, but the upstream authors of fontawesome decided it was a good idea to break everyone. See this Debian bug entry: https://bugs.debian.org/899124 So, what happened is that fontawesome-webfont has been split into 3 sets of fonts: solid, regular and brands fonts. Thus there is no drop-in replacement for the old fontawesome-webfont.xxx. As my python3-xstatic-font-awesome removes the embedded fonts, and just points to /usr/share/fonts-font-awesome, Horizon is broken and cannot even be installed currently in Debian Sid. Of course, I'm considering reverting the removal of the data folder from the xstatic package, but it then defeats the purpose of Xstatic, which is avoiding duplication of static files already packaged in the distribution. So, ideally, I would like to know first if I can use fa-solid-900 for Horizon, or if other glyphs are in use (it's very much possible that only fa-solid-900 stuff is in use, but I really don't know how to check for that the correct way). If that's the case, then I can work around the issue (at least temporarily), and symlink stuff in the data folder to the new fa-solid-900 files. Second, it'd be nice if Horizon could adapt and use the new v5 font-awesome, so that the problem is completely solved. Cheers, Thomas Goirand (zigo)
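The symlink workaround mentioned above could look roughly like this. This is a hypothetical sketch: the directory is a temporary stand-in rather than Debian's real /usr/share/fonts-font-awesome, and only the .woff2 variant is shown (a real package would also handle .woff and .ttf):

```shell
# Hypothetical sketch: make the old v4 file name resolve to the new v5
# solid font, so packages expecting fontawesome-webfont.* keep working.
FONTDIR=$(mktemp -d)
touch "$FONTDIR/fa-solid-900.woff2"   # stand-in for the v5 font file
ln -s fa-solid-900.woff2 "$FONTDIR/fontawesome-webfont.woff2"
ls -l "$FONTDIR/fontawesome-webfont.woff2"
```

This only papers over the file-name change; glyphs that moved out of the solid set would still render wrong, which is why the glyph inventory matters.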
[openstack-dev] [neutron] Status of neutron-rpc-server
Hi, I'd like to know the status of neutron-rpc-server. As I switched the Debian package from neutron-server to neutron-api using uwsgi, I tried using it, and it seems to kind of work, if I apply this patch: https://review.openstack.org/#/c/555608 Is there anything else that I should know? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
On 05/20/2018 06:24 PM, Matthew Treinish wrote: > On Sun, May 20, 2018 at 03:05:34PM +0200, Thomas Goirand wrote: >> Thanks for these details. What exactly is the trouble with the Swift >> backend? Do you know? Is anyone working on fixing it? At my company, >> we'd be happy to work on that (if of course, it's not too time demanding). >> > > Sorry, I didn't mean the swift backend, but swift itself under python3: > > https://wiki.openstack.org/wiki/Python3#OpenStack_applications_.28tc:approved-release.29 > > If you're trying to deploy everything under python3, I don't think you'll be > able to deploy swift. But if you already have a swift running, then the glance > backend should work fine under python 3. Of course I know Swift isn't Python 3 ready. And that's sad... :/ However, we did also experience issues with the swift backend last week. Hopefully, with the switch to uwsgi, it's going to work. I'll let you know if that's not the case. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
On 05/07/2018 01:36 PM, Jean-Philippe Evrard wrote: > We've been juggling with python3, ansible and multiple distros for a while now. > That dance hasn't been fruitful: many hidden issues, either due to > ansible modules, or our own modules, or upgrade issues. > > I've recently decided to simplify the python2/3 story. > > Queens and all the stable branches will be python2 only (python3 will > not be used anymore, to simplify the code). > > For Rocky, we plan to use the distribution packages for the python stack > as much as possible, if they're recent enough for our source installs. > Ubuntu 16.04 will have python2, SUSE has python2, CentOS has no > appropriate package, so we are pip installing things (and using > python2). > So... If people work on Ubuntu 18.04 support, we could try a python3-only > system. Nobody is working on it right now. /me raises hand! At the moment, I've got a nearly full Queens stack up and running on top of Debian Stretch, using Python 3 only (and of course all of that is also uploaded to Debian Sid). The puppet-openstack scenario001 works fully without SSL; last week I could fix cinder, today Glance (thanks to Matt) and Nova, and it looks like I've got one remaining issue with Neutron that I didn't have time to investigate fully yet (got to dig in the logs). But it's looking good. Hopefully, next week I'll be able to tell everything works. So, have a try with Debian? :) Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
On 05/14/2018 06:42 PM, Victoria Martínez de la Cruz wrote: > Hi, > > Jumping in now as I'm helping with py3 support efforts on the manila side. > > In manila we have support for both Apache WSGI and the built-in server > (which depends on eventlet). Would it be a possible workaround to rely > on the Apache WSGI server while we wait for eventlet issues to be sorted > out? Is there any chance the upper constraints will be updated soon-ish > and this can be fixed in a newer eventlet version? Probably we can update the upper-constraints file, though the newer eventlet doesn't have a fix. The issue has been there for more than two years, and nobody seems to be working on it. > This is the only change that's preventing us from being fully py3 compatible, > hence it's a big deal for us. I don't think Eventlet is a blocker, as long as you're supporting uwsgi and Apache2. The case of Glance not supporting Apache2 is a real issue though, for Ubuntu at least, since they don't want uwsgi to be promoted to main (ie: it's in Universe, and they don't support it for security). As for Debian, since it looks like I managed to find a solution for running everything in uwsgi, I'll probably do that. I have, btw, recently joined the team maintaining uwsgi in Debian, and managed to fix all RC bugs on it. Like Matt, I do prefer to be able to restart only *one* daemon at a time, which is why I don't really like setting up everything with mod_wsgi. Though in such a setup, I wonder what the point is of still using Apache for proxying the requests. Is there any added value in doing that? Also, does anyone know if uwsgi uses the Python subinterpreter thing, which is the reason why mod_wsgi is outperforming everything else? If I'm not mistaken, the API is described at: https://www.python.org/dev/peps/pep-0554/ (ie: PEP 554), and that, if I understand correctly, works around the global interpreter lock issue. As far as I can see, uwsgi doesn't use that, so Apache should still be outperforming uwsgi.
Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
On 05/08/2018 09:16 PM, Matthew Treinish wrote: > Although, I don't think glance uses oslo.service even in the case where it's > using the standalone eventlet server. It looks like it launches eventlet.wsgi > directly: > > https://github.com/openstack/glance/blob/master/glance/common/wsgi.py I can confirm this through my (bad) experience last week with Eventlet over SSL. Cheers, Thomas Goirand (zigo)
[openstack-dev] Setting-up NoVNC 1.0.0 with nova
Hi there! The novnc package in Debian and Ubuntu is getting very old. So I thought about upgrading to 1.0.0, which has lots of very nice newer features, like the full screen mode, and so on. All seemed to work; however, when trying to connect to the console of a VM, NoVNC attempts to connect to https://example.com:6080/websockify and then fails (with a 404). So I was wondering: what's missing in my setup so that there's a /websockify URL? Is there some missing code in nova-novncproxy so that it would forward this URL to /usr/bin/websockify? If so, has anyone started working on it? Also, what's the status of NoVNC with Python 3? I saw lots of print statements, which are easy to fix, though I even wonder if the code in the python-novnc package is useful. Who's using it? Nova-novncproxy? That's unlikely, since I didn't package a Python 3 version for it. Cheers, Thomas Goirand (zigo)
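For context, the URL the browser (and thus noVNC) ends up hitting is derived from nova's configuration. A minimal sketch, with an illustrative host name (this is background for the question above, not a fix for the /websockify 404):

```ini
[vnc]
enabled = true
# URL the browser is sent to in order to reach the console page
# served by nova-novncproxy.
novncproxy_base_url = https://example.com:6080/vnc_auto.html
# Where nova-novncproxy itself listens.
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
```

Note that noVNC 1.0.0 reportedly renamed vnc_auto.html to vnc_lite.html, so the base URL likely needs adjusting when packaging the new release.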
Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
On 05/19/2018 07:54 PM, Matthew Treinish wrote: > On Sat, May 19, 2018 at 07:04:53PM +0200, Thomas Goirand wrote: >> using: >> - RBD backend >> - swift backend >> - swift+rgw > > As for the backend store choice, I don't have any personal experience using > any of these 3 as a backend store. That being said, your choice of store > should be independent from getting glance-api deployed behind uwsgi > and a webserver. > > Although, you might have trouble with swift on py3, because IIRC that still > isn't working (unless something changed recently). But the store config is > really independent from getting the api to receive and handle api requests > properly. Thanks for these details. What exactly is the trouble with the Swift backend? Do you know? Is anyone working on fixing it? At my company, we'd be happy to work on that (if, of course, it's not too time demanding). >>> The issues glance has with running in a wsgi app are related to its >>> use of async tasks via taskflow (which includes the tasks api and >>> image import stuff). This shouldn't be hard to fix, and I've had >>> patches up to address these for months: >>> >>> https://review.openstack.org/#/c/531498/ >>> https://review.openstack.org/#/c/549743/ >> >> Do I need to backport these patches to Queens to run Glance the way I >> described? Will it also fix running Glance with mod_wsgi? > > These patches are independent of getting things working for you. They > are only required for 2 API features in glance to work: the tasks api and > the image import api (which was added in queens). You don't need either > to upload images by default, and the patches will only ever be necessary > if you have something using those APIs (which personally I've never > encountered in the wild). There is also no test coverage in tempest or > any external test suite using these apis that I'm aware of, so your CI > likely won't even be blocked by this.
> (which is how this situation arose in the first place) Alright, so hopefully I'm very close to having Debian gate properly in puppet-openstack upstream. As far as I can tell, Glance and Cinder are the only pieces still failing with SSL (everything already works without SSL), so I must be very close to a nice result (after nearly two months already). Thanks again for all the very valuable details you provided. I have to admit that I was starting to lose faith in the project, because of all the frustration of not finding a working solution. I'll let the list know when I have something that fully works and gates with puppet-openstack, of course. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
On 05/08/2018 07:55 PM, Matthew Treinish wrote: > When I added the uwsgi chunked transfer encoding support to glance, I wrote up a doc about running glance under apache here: > > https://docs.openstack.org/glance/latest/admin/apache-httpd.html > > Which includes how you have to configure things to get it working, and a section on why mod_wsgi doesn't work. Thanks for that. Could you also push a uWSGI .ini configuration example file, as well as the mod_proxy example? There are so many options in uwsgi that I don't want to risk doing something wrong. I've pasted my config at the end of this message. Do you think it's also OK to use SSL directly with uwsgi, using the --https option? What about the 104 error that I've been experiencing? Is it because I'm not using mod_proxy? BTW, there's no need to manually do the symlink; you can instead use: a2ensite uwsgi-glance-api.conf Cheers, Thomas Goirand (zigo)

[uwsgi]
### Generic UWSGI config ###
# Override the default size for headers from the 4k default.
buffer-size = 65535
# This avoids error 104: "Connection reset by peer"
rem-header = Content-Length
# This is running standalone
master = true
# Threads and processes
enable-threads = true
processes = 4
# uwsgi recommends this to prevent thundering herd on accept.
thunder-lock = true
plugins = python3
# This ensures that file descriptors aren't shared between the WSGI
# application processes.
lazy-apps = true
# Log from the wsgi application: needs python3-pastescript as a runtime dependency.
paste-logger = true
# Automatically kill workers if the master dies.
no-orphans = true
# Exit instead of brutal reload on SIGTERM.
die-on-term = true

### OpenStack service specific ###
# This is the standard port for the WSGI application, listening on all available IPs.
http-socket = :9292
logto = /var/log/glance/glance-api.log
name = glance-api
uid = glance
gid = glance
chdir = /var/lib/glance
wsgi-file = /usr/bin/glance-wsgi-api
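Regarding the mod_proxy question above, an untested sketch of terminating SSL in Apache and proxying plain HTTP to the uwsgi http-socket. Host name, certificate paths, and the backend port are all assumptions; uwsgi would have to move off port 9292 (e.g. http-socket = 127.0.0.1:9191) so Apache can bind it:

```apache
# Untested sketch: Apache terminates TLS and forwards plain HTTP to uwsgi.
# Requires: a2enmod ssl proxy proxy_http
<VirtualHost *:9292>
    ServerName glance.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/glance.pem
    SSLCertificateKeyFile /etc/ssl/private/glance.key
    # uwsgi reconfigured to listen on 127.0.0.1:9191 instead of :9292
    ProxyPass        "/" "http://127.0.0.1:9191/"
    ProxyPassReverse "/" "http://127.0.0.1:9191/"
</VirtualHost>
```

With mod_proxy_uwsgi available, the two ProxyPass lines could instead speak the uwsgi protocol to a uwsgi-socket, but the plain HTTP variant matches the http-socket config pasted above.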
Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
On 05/08/2018 06:22 PM, Matthew Treinish wrote: >> Glance - Has issues with image upload + uwsgi + eventlet [1] > > This actually is a bit misleading. Glance works fine with image upload and > uwsgi. That's the only configuration of glance in a wsgi app that works, because > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides > an alternate interface to read chunked requests, which enables this to work. > If you look at the bugs linked off that release note about image upload, > you'll see they're all fixed. Hi Matt, I'm quite happy to read the above. Just to make sure... Can you confirm that Glance + Python 3 + uwsgi with SSL will work using the below setup? using:
- RBD backend
- swift backend
- swift+rgw
If so, then I'll probably end up pushing for such a uwsgi setup. If I understand you correctly, it won't work with Apache mod_wsgi, because of the chunked transfer encoding, which is what made it fail when I tried using the RBD backend. Right? > The issues glance has with running in a wsgi app are related to its > use of async tasks via taskflow (which includes the tasks api and > image import stuff). This shouldn't be hard to fix, and I've had > patches up to address these for months: > > https://review.openstack.org/#/c/531498/ > https://review.openstack.org/#/c/549743/ Do I need to backport these patches to Queens to run Glance the way I described? Will it also fix running Glance with mod_wsgi? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
On 05/08/2018 06:01 PM, Graham Hayes wrote: > Glance - Has issues with image upload + uwsgi + eventlet [1] Yeah, as far as I can see, my experience from last week is that there's no working mode of operation with Glance, Python 3 and SSL:
- It doesn't work with eventlet, with an SSL handshake failure, unless we remove this line: https://github.com/eventlet/eventlet/blob/master/eventlet/green/ssl.py#L342 (of course, removing the set_nonblocking() line in eventlet is *not* a solution)
- It doesn't work with uwsgi (connection reset by peer, IIRC)
- It doesn't work with Apache (Content-Length issue when uploading images)
The only mode that I didn't test (yet) is using eventlet without SSL, and then using HAProxy to do the SSL part. Maybe using Apache with mod_proxy will work too; I'll probably test that as well, and see which one integrates more easily with puppet-openstack. I don't see why the above mod_proxy or haproxy deployment wouldn't work, but after all the frustrations I had with Glance last week, I'm expecting anything... So, to generalize, yeah, we definitely need to fix this issue with Eventlet ASAP, but also fix Glance so that it can work with uwsgi and Apache mod_wsgi. Cheers, Thomas Goirand (zigo)
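The HAProxy idea above could be sketched as follows. This is untested; the certificate path and the backend port are assumptions (glance-api would be reconfigured with bind_host=127.0.0.1 and bind_port=9191, without SSL), and the global/defaults sections are omitted:

```haproxy
# Untested sketch: TLS terminated in HAProxy, plain eventlet glance-api behind.
frontend glance_api_tls
    bind *:9292 ssl crt /etc/haproxy/glance.pem
    mode http
    default_backend glance_api

backend glance_api
    mode http
    # glance-api listening on 127.0.0.1:9191 without SSL (assumed config)
    server glance1 127.0.0.1:9191 check
```

This sidesteps the eventlet green SSL accept() bug entirely, since eventlet never sees a TLS socket.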
[openstack-dev] [all] Eventlet + SSL + Python 3 = broken monkey patching leading to completely broken glance-api
Hi, It took me nearly a week to figure this out, as I'm not really an expert in Eventlet, OpenSSL and all, but now I've pinpointed a big problem. My tests were around Glance, which I was trying to run over SSL and Eventlet, though it seems to be a general issue with SSL + Python 3. In the normal setup, when I do: openstack image list then I get: Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error')) (more detailed stack dump at the end of this message [1]) Though, with Eventlet 0.20.0, if in /usr/lib/python3/dist-packages/eventlet/green/ssl.py line 352, I comment out set_nonblocking(newsock) in the accept() function of the GreenSSLSocket, then everything works. Note that:
- This also happens with the latest Eventlet, 0.23.0
- There's no problem without SSL
- There's no commit on top of 0.23.0 relevant to the issue
The issue was reported 2 years ago: https://github.com/eventlet/eventlet/issues/308 It's marked with "importance-bug" and "need-contributor", but nobody did anything about it. I also tried running with libapache2-mod-wsgi-py3, but then I'm hitting another bug: https://bugs.launchpad.net/glance/+bug/1518431 What's going on there is that glanceclient spits out a 411 error complaining about content length. That issue is seen *only* when using Apache and mod_wsgi. So, I'm left with no solution here: Glance never works over SSL and Python 3. Something's really wrong and should be fixed. Please help! This also pinpoints something: our CI is *not* covering the SSL case, or mod_wsgi, when really, it should. We should be having tests with:
- mod_wsgi
- eventlet
- uwsgi
and all of the above with and without SSL, plus Python 2 and 3, plus with the file or swift backend. That's 24 possible problem combinations, which IMO we should all cover. We don't need to run all tests, but maybe just make sure that at least the daemon works, which isn't the case at the moment for most of these use cases.
The only setups that work are:
- eventlet with or without SSL, using Python 2
- eventlet without SSL with Python 3
- apache with or without SSL, without the swift backend
As far as I understand, we're only testing with eventlet, with Python 2 and 3, without SSL, and with the file backend. That's 2 setups out of 24... Can someone work on fixing this? Cheers, Thomas Goirand (zigo)

[1]
Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error'))
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 326, in connect
    ssl_context=context)
  File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 329, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.5/ssl.py", line 385, in wrap_socket
    _context=self)
  File "/usr/lib/python3.5/ssl.py", line 760, in __init__
    self.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 996, in do_handshake
    self._sslobj.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 641, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error
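The "24 possibilities" figure above comes from 3 WSGI servers × 2 SSL modes × 2 Python versions × 2 backends. A quick sketch enumerating that test matrix:

```python
# Enumerate the deployment combinations from the message above:
# 3 WSGI servers x SSL on/off x Python 2/3 x file/swift backend = 24.
from itertools import product

servers = ["mod_wsgi", "eventlet", "uwsgi"]
ssl_modes = ["ssl", "no-ssl"]
pythons = ["py2", "py3"]
backends = ["file", "swift"]

matrix = list(product(servers, ssl_modes, pythons, backends))
print(len(matrix))          # 24 combinations to cover
for combo in matrix[:3]:    # show the first few as a taste
    print("/".join(combo))
```

Against that matrix, the three working setups listed above really do amount to only a handful of cells.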
Re: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400 bad request
', u'id': u'a7233852-e3f1-4129-b34e-c607aef5172e', u'subnetpool_id': None, u'name': u'tempest-setUp-2113966350-subnet'}, router: {u'status': u'ACTIVE', u'external_gateway_info': {u'network_id': u'c6cf6d80-fcbb-46e6-aefd-17f41b5c57b1', u'enable_snat': True, u'external_fixed_ips': [{u'subnet_id': u'34e589e9-86d2-4f72-a0c3-7990406561b1', u'ip_address': u'172.24.5.13'}]}, u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', u'tags': [], u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': u'2018-05-16T15:10:27Z', u'admin_state_up': True, u'distributed': False, u'updated_at': u'2018-05-16T15:10:29Z', u'ha': False, u'flavor_id': None, u'revision_number': 2, u'routes': [], u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'id': u'bdf13d72-c19c-4ad1-b57d-ed6da9c569b3', u'name': u'tempest-setUp-2113966350-router'} And right after that, we can only see clean-up calls (removing routers, DELETE calls, etc.). Looking at the magnum-api log shows issues in glanceclient right before the 500 error. So, something's probably going on there, with a bad glanceclient request. Having a look into magnum.conf doesn't show anything suspicious concerning [glance_client] though, so I went to look into tempest.conf. And there, it shows no [magnum] section, and I believe that's the issue. Your tempest package/whatever hasn't been built with the magnum plugin, and there's nothing configured for magnum like: [magnum]/image_id and such. Maybe that still works though, because of default values? I wasn't able to completely figure it out, so I hope this helps... Did you try to debug this in a VM? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
On 05/14/2018 03:30 PM, Akihiro Motoki wrote: > Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor > the variable that tells devstack to use Python 3? > > Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi > and libapache2-mod-wsgi-py3), and at a quick look, the only difference is > the module specified in the LoadModule Apache directive. > I haven't tested it yet, but it seems worth exploring. > > Akihiro libapache2-mod-wsgi-py3 is what's in use in all Debian packages for OpenStack, and it works well, including for Horizon. Cheers, Thomas Goirand (zigo)
[openstack-dev] [neutron] neutron-server declaring itself as up too early
Hi, It looks to me (I'm not sure yet...) like neutron-server is declaring itself as up when it's not. As a consequence, puppet-openstack just fails on me by running "neutron net-list" too early. Could it be that the systemd notify is called at the wrong place? If so, how could this be fixed? Cheers, Thomas Goirand (zigo)
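To make the question concrete, here is a minimal, stdlib-only sketch of the systemd Type=notify handshake being discussed. This is not neutron's actual code (which presumably goes through oslo.service); it only illustrates that "declaring itself as up" means sending READY=1 to $NOTIFY_SOCKET, and that doing so before the API can actually serve requests creates exactly the race described above:

```python
# Minimal sketch of the sd_notify readiness message (not neutron's code).
import os
import socket

def sd_notify(state: bytes = b"READY=1") -> bool:
    """Send a readiness message the way systemd's Type=notify expects.

    A well-behaved daemon calls this only once it can really serve
    requests; calling it earlier makes clients (e.g. puppet running
    "neutron net-list") race against the service.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd
    if addr.startswith("@"):  # abstract socket namespace
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state, addr)
    return True
```

If neutron-server sends this (directly or via a library) before its RPC/API workers are ready, systemd marks the unit active too early, which would explain the behavior above.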
Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
On 05/11/2018 05:14 PM, Akihiro Motoki wrote: > Hi zigo and horizon plugin maintainers, > > Horizon itself already supports Django 2.0, and the horizon unit tests cover > Django 2.0 with Python 3.5. > > A question to all is whether we change the upper bound of Django from > <2.0 to <2.1. > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. > (Note that Django 1.11 will continue to be used for python 2.7 environments.) All this is nice, thanks for working on Django 2.x. But Debian Buster will be released with Django 1.11 and Python 3.6. So what I need, as far as Debian is concerned, is:
- Python 3.6 & Django 1.11 for Rocky (that's for Debian Buster).
- Python 3.6, probably even 3.7, and Django 2.0 for Stein (that's for after Buster is released).
Cheers, Thomas Goirand (zigo)
[openstack-dev] [horizon] Scheduling switch to django >= 2.0
Hi, It has been decided that, in Debian, we'll switch to Django 2.0 after Buster is released. Buster is to be frozen next February. This means that we have roughly one more year before Django 1.x goes away. Hopefully, Horizon will be ready for it, right? Hoping this helps, Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104)
On 05/02/2018 10:25 AM, Chris Dent wrote: > On Wed, 2 May 2018, Thomas Goirand wrote: > >> What was disturbing was that, doing the same request with curl worked >> perfectly. Even more disturbing, it looked like I was having the issue >> nearly always in virtualbox, but not always on real hardware, where it >> sometimes worked. > > What was making the request in the first place? It fails in X, but > works in curl. What is X? For example, nova-compute querying nova-placement-api. Another example: openstackclient. It happened to me trying to configure keystone when running puppet-openstack, for example, but on the command line directly as well, simply trying to add users, projects, etc. This looks to me like a general problem in all of the OpenStack WSGI applications. >> Anyway, finally, I figured out that adding: >> >> --rem-header Content-Length > > You added this arg to what? As a parameter to uwsgi, so that it removes the Content-Length header that the WSGI application sends. >> This, however, looks like a workaround rather than a fix, and I wonder if >> there's a real issue somewhere that needs to be fixed in a better way, >> maybe in openstackclient or some other component... > > Yeah, it sounds like something could be setting a bad value for the > content length header and uwsgi is timing out while trying to read > that much data (meaning, it is believing the content-length header) > but there isn't anything actually there. > > Another option is that there are buffer size problems in the uwsgi > configuration, but it's hard to speculate because it is not clear > what requests and tools you're actually talking about here. When attempting to google for the issue, I saw a lot of people who had this problem fixed by adding --buffer-size 65535, as uwsgi's default 4k header buffer was not enough. I also have this option set, as it seems a reasonable thing to have, but that was not enough to fix the problem. Only the --rem-header thing did.
If you want to try, you can simply use the stretch-queens.debian.net repository with Glance (or simply Debian Sid), and edit /etc/glance/glance-api-wsgi.ini to change the uwsgi parameters (I've just switched Glance to uwsgi, since it now works...). I haven't checked with Glance, but since I saw the problem with nova-placement-api, cinder-api and keystone, I don't see why it wouldn't happen there. Cheers, Thomas Goirand (zigo)
[openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104)
Hi there! I've spent a month knocking my head against the wall trying to get uwsgi working with all of the OpenStack API uwsgi applications. Indeed, when OpenStack components (for example, nova-compute) were talking to uwsgi, they received a 104 error (ie: connection reset by peer) before getting an answer. What was disturbing was that doing the same request with curl worked perfectly. Even more disturbing, it looked like I was having the issue nearly always in virtualbox, but not always on real hardware, where it sometimes worked. Anyway, finally, I figured out that adding: --rem-header Content-Length fixed everything. I was able to spawn instances in virtualbox. This, however, looks like a workaround rather than a fix, and I wonder if there's a real issue somewhere that needs to be fixed in a better way, maybe in openstackclient or some other component... Thoughts anyone? Cheers, Thomas Goirand (zigo)
[openstack-dev] sqlalchemy-migrate and networking-mlnx still depend on tempest-lib
Hi, Everyone migrated away from tempest-lib to tempest, but there are still 2 packages remaining that use the old, deprecated tempest-lib. Does anyone volunteer for the job? It'd be nice if that happened, so we could completely get rid of the tempest-lib packages in distros and everywhere else. I can review patches in sqla-migrate, as I'm still a core reviewer there, though I'm not sure I know enough to do it myself. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [all][tc] final stages of python 3 transition
On 04/25/2018 11:40 PM, Jeremy Stanley wrote: > It may be worth considering how this interacts with the switch of > our default test platform from Ubuntu 16.04 (which provides Python > 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to > 3.6 before we change most remaining jobs over to Python 3.x versions > then it gives us a chance to spot differences between 3.5 and 3.6 at > that point. I don't think you'll find many issues, as all Debian and Gentoo packages were built against Python 3.6, and hopefully prometheanfire and I have reported the issues we found. > So I guess that raises the question: switch to Python 3.5 by default > for most jobs in Rocky and then have a potentially more disruptive > default platform switch with Python 3.5->3.6 at the beginning of > Stein, or wait until the default platform switch to move from Python > 2.7 to 3.6 as the job default? I can see some value in each option. I'd love to see gating on both Python 3.5 and 3.6 if possible. Also, can we restart the attempts at (non-voting) gating jobs with Debian Sid? That's always where we get all updates first. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
On 04/13/2018 02:48 PM, Thomas Goirand wrote: > On 03/17/2018 09:34 AM, Emilien Macchi wrote: >> ## Challenges >> >> - Some services aren't fully Python 3 > > In my experience switching everything to Py3 in Debian, the only issues > were: > > - manila-ui > - networking-mlnx Of course, I also forgot Swift, which isn't Py3 ready. But that's so famous that I didn't mention it. BTW, any progress on upstream Swift WRT Py3 support? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
On 03/17/2018 09:34 AM, Emilien Macchi wrote: > ## Challenges > > - Some services aren't fully Python 3 In my experience switching everything to Py3 in Debian, the only issues were: - manila-ui - networking-mlnx The Mellanox driver will probably be dropped from Debian, so the only collateral damage is manila-ui, which is being worked on upstream. The other one that isn't Py3 ready *in stable* is trove-dashboard. I have sent backport patches, but they were not approved because the stable gate is having issues: https://review.openstack.org/#/c/554680/ https://review.openstack.org/#/c/554681/ https://review.openstack.org/#/c/554682/ https://review.openstack.org/#/c/554683/ The team had plans to make these pass (by temporarily fixing the gate), but so far it hasn't happened. At the packaging level, this won't be an issue for Rocky, and for Queens (which you probably don't care about), you could just add these patches at the packaging level. I hope this helps, Cheers, Thomas Goirand (zigo)
[openstack-dev] Removing networking-mlnx from Debian?
Hi, Is networking-mlnx actively maintained? It doesn't look like it to me: there's still no Queens release. It also fails to build in Debian, apparently with no Python 3 support. Without any reply from an active maintainer, I'll ask for this package to be removed from Debian. Please let me know. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote: >> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried >> in "." and "debian/tmp") >> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/ >> dh_install: missing files, aborting > > That seems like a problem in the Debian packaging system, not in > libvirt. It sure is. As I wrote, it should be a minor packaging issue. > I double-checked with the upstream folks, and the install > rules for the Wireshark plugin don't have /*/ in there. That part (ie: the path with *) isn't a mistake; it's because Debian has multiarch support, so for example we get paths like this (just a random example from my laptop): /usr/lib/i386-linux-gnu/pulseaudio /usr/lib/x86_64-linux-gnu/pulseaudio > Note: You don't even have to build the versions from 'Buster', which are > quite new. Just the slightly more conservative libvirt 3.2.0 and QEMU > 2.9.0 -- only if it's possible. Actually, for *official* backports, the policy is to always update to whatever is in testing, until testing is frozen. I could maintain an unofficial backport in stretch-stein.debian.net, though. > That said ... I just spent time comparing the release notes of libvirt 3.0.0 > and libvirt 3.2.0[1][2]. By using libvirt 3.2.0 and QEMU 2.9.0, Debian users > will be spared a lot of critical bugs (see the full list in [3]) in > the CPU comparison area. > > [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg0.html > -- Release of libvirt-3.2.0 > [2] > https://www.redhat.com/archives/libvirt-announce/2017-January/msg3.html > -- Release of libvirt-3.0.0 > [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html So, because of these bugs, would you already advise Nova users to use libvirt 3.2.0 for Queens? 
Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
On 04/04/2018 10:45 AM, Kashyap Chamarthy wrote:
> Answering my own questions about Debian --
>
> From looking at the Debian Archive[1][2], these are the versions for
> 'Stretch' (the current stable release) and in the upcoming 'Buster'
> release:
>
>     libvirt | 3.0.0-4+deb9u2      | stretch
>     libvirt | 4.1.0-2             | buster
>
>     qemu    | 1:2.8+dfsg-6+deb9u3 | stretch
>     qemu    | 1:2.11+dfsg-1       | buster
>
> I also talked on the #debian-backports IRC channel on the OFTC network,
> where I asked:
>
> "What I'm essentially looking for is: "How can 'stretch' users get
> libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository.
> As they are proposed to be least common denominator versions across
> distributions."
>
> And two people said: Then the versions from 'Buster' could be backported
> to 'stretch-backports'. The process for that is to: "ask the maintainer
> of those package and Cc to the backports mailing list."
>
> Any takers?
>
> [0] https://packages.debian.org/stretch-backports/
> [1] https://qa.debian.org/madison.php?package=libvirt
> [2] https://qa.debian.org/madison.php?package=qemu

Hi Kashyap,

Thanks for considering Debian, asking me, and giving me enough time to answer! Here are my thoughts. I updated the wiki page as you suggested [1]. As I wrote on IRC, we don't need to care about Jessie, so I removed Jessie and added Buster/Sid.

tl;dr: just skip this section & go to the conclusion

Backport of libvirt/QEMU/libguestfs in more detail
---
I already attempted the backports from Debian Buster to Stretch. All 3 components (libvirt, qemu & libguestfs) could be built without extra dependencies, which is a very good thing.

- libvirt 4.1.0 compiled without issue, though the dh_install phase failed with this error:

  dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried in "." and "debian/tmp")
  dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
  dh_install: missing files, aborting

  Without more investigation than this build log, it's likely a minor fix in the debian/*.install files would make it possible to backport the package.

- qemu 2.11 built perfectly with zero changes.

- libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as a build-depends (fdisk is now a separate package in Buster).

So it looks easy to backport these 3 *AT THIS TIME*. [2] However, without a crystal ball, nobody can tell how hard it will be to backport these *IN A YEAR FROM NOW*.

Conclusion:
---
If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0 is fine, please choose 3.0.0 as the minimum. If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is fine, please choose 2.8.0 as the minimum. If you don't absolutely need new features from libguestfs 1.36 and 1.34 is fine, please choose 1.34 as the minimum. If you do need these new features, I'll do my best to adapt. :)

About the Buster freeze & OpenStack Stein backports to Debian Stretch
---
Now, about Buster. As you know, Debian doesn't have planned release dates, though here are the stats showing that, roughly, there's a new Debian every 2 years and the freeze takes about 6 months:

https://wiki.debian.org/DebianReleases#Release_statistics

With this logic, and considering Stretch was released last year in June, Buster will probably start its freeze after Stein is released. If the Debian freeze happens later, good for me: I'll have more time to make Stein better. But then Debian users will probably expect an OpenStack Stein backport to Debian Stretch, and that's where it can become tricky to backport these 3 packages.

The end
---
I hope the above isn't too long and helps in taking the best decision,

Cheers, Thomas Goirand (zigo)

[1] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions
[2] I'm not shouting, just highlighting the important part! :)
Re: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
On 03/27/2018 04:39 PM, Tom Barron wrote: > On 27/03/18 15:53 +0200, Thomas Goirand wrote: >> Building the packages worked surprisingly well. I was secretly expecting >> more failures. The only real collateral damage is: >> >> - manila-ui (no Py3 support upstream) > > Just a note of thanks for calling our attention to this issue. > manila-ui had been rather neglected and is getting TLC now. > We'll certainly get back to you when we've got it working with Python 3. > > -- Tom Barron Sure, no pb! I do understand it may take time, no worries. I just find it a bit frustrating, as I like the Manila project. Please continue to send me patches to try. Hopefully, you'll get there soon. If you wish, I can also explain to you how to build the Debian package for manila-ui if you want to try yourself in Sid. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
On 03/27/2018 04:20 PM, Jeremy Stanley wrote: > On 2018-03-27 15:53:34 +0200 (+0200), Thomas Goirand wrote: > [...] >> I'd really love to have Sid as a possible image in the infra > [...] > > This has unfortunately been rotting too far down my to do list for > me to get to it. I'd love to have debian-sid nodes to test some > stuff on as well--especially clients/libraries/utilities--since a > lot of my (workstation and portable) systems are running it. If > someone is interested in and has time to work on this, I'm happy to > provide guidance and review their changes. It involves fixing DIB to make it produce a Sid image, right? I tried to make recent DIB work. I really did. But seriously, it was a horrible experience and I gave up. :( Cheers, Thomas Goirand (zigo)
[openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
Hi,

As some of you already know, after some difficult times when I left my past employer, I'm back! And I don't plan on giving up, ever... :)

The repositories:
=================
Today, it's my pleasure to announce the general availability of Debian packages for the Queens OpenStack release. These are available in official Debian Sid (as usual), and also as (unofficial) Stretch backports. These packages have been tested successfully with Tempest. Here's the address of the (unofficial) backport repositories:

deb http://stretch-queens.debian.net/debian stretch-queens-backports main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports main
deb http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main

The repository key is here:

wget -O - http://stretch-queens.debian.net/debian/dists/pubkey.gpg | apt-key add -

Please note that stretch-queens.debian.net is just an IN CNAME pointer to the server of my new employer, Infomaniak, and that the real server name is: stretch-queens.infomaniak.ch. So, that server is of course located in Geneva, Switzerland. Thanks to my employer for sponsoring that server, and for allowing me to build these packages during my work time.

What's new in this release
==========================
1/ Python 3
---
The new stuff is... the full switch to Python 3! As far as I understand, apart from Gentoo, no other distribution has switched to Python 3 yet. Both RDO and Ubuntu are planning to do it for Rocky (at least that's what I've been told). So once more, Debian is on the edge. :) While there is still dual Python 2/3 support for clients (with priority to Python 3 for binaries in /usr/bin), all services have been switched to Py3. Building the packages worked surprisingly well. I was secretly expecting more failures. 
The only real collateral damage is:

- manila-ui (no Py3 support upstream)

As the Horizon package switched to Python 3, it's unfortunately impossible to keep these plugins on Python 2, and therefore manila-ui is now (from a Debian packaging standpoint) RC buggy, and shall be removed from Debian Testing. Also, Django 2 will sooner or later be the only option in Debian Sid. It'd be great if Horizon's patches could be merged, and plugins adapted ASAP.

Also, one Neutron plugin isn't released upstream yet for Queens, and since the Neutron package switched to Python 3, the old Pike plugin package is also considered RC buggy (and it doesn't build with Queens anyway):

- networking-mlnx

The fate of the above packages is currently unknown. Hopefully, there's going to be upstream work to make them packageable (which means, for today's Debian, Python 3.6 compatible); if not, there will be no choice but to remove them from Debian. As for networking-ovs-dpdk, it needs more work on OVS itself to support DPDK, and I still haven't found the time for it yet.

As a more general thing, it'd be nice if there was Python 3.6 in the gate. Hopefully, this will happen with the Bionic release and the infra switching to it. It's been a recurring problem, though, that Debian Sid experiences issues before the other distros (ie: before Ubuntu, for example), because it gets updates first. So I'd really love to have Sid as a possible image in the infra, so we could use it for (non-voting) gating.

2/ New debconf unified templates
---
The debconf templates used to be embedded within each package. This isn't the case anymore: all of them are now stored in openstack-pkg-tools if they are not service specific. Hopefully, this will help having better coverage for translations. The postinst scripts can also optionally create the service tenant and user automatically. 
The system also does less by default (ie: it won't even read your configuration files unless the user explicitly asks for config handling), and API endpoints can now use FQDNs and https as well.

3/ New packages/services
---
We've added Cloudkitty and Vitrage. Coming soon: Octavia and Vitrage. Unfortunately, at this point, cloudkitty-dashboard still contains non-free files (ie: embedded minified javascripts). Worse, some of them cannot even be identified (I couldn't find out what upstream version they were). So even if this package is ready, I can't upload it to Debian in such a state.

Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi
On 03/11/2018 08:12 PM, Lance Bragstad wrote: > Hey Thomas, > > Outside of the uwsgi config, are you following a specific guide for your > install? I'd like to try and recreate the issue. > > Do you happen to have any more logging information? > > Thanks

Hi Lance,

Thanks for your proposal to try to diagnose the issue. Here's the Debian package:

http://stretch-queens.infomaniak.ch/keystone/

(it's 13.0.0-6, but that's really a backport for Stretch...)

To use that version of Keystone, you will need this Queens repository:

deb http://stretch-queens.infomaniak.ch/debian stretch-queens-backports main
deb-src http://stretch-queens.infomaniak.ch/debian stretch-queens-backports main
deb http://stretch-queens.infomaniak.ch/debian stretch-queens-backports-nochange main
deb-src http://stretch-queens.infomaniak.ch/debian stretch-queens-backports-nochange main

This repository contains a full Queens backport for Stretch, btw, and also holds a version of keystone (with Apache), so make sure you're using the correct uwsgi version from above.

Cheers, Thomas Goirand (zigo)
[openstack-dev] [cinder] [oslo] cinder.conf generation is broken for my_ip, building non-reproducibly
Hi, When inspecting Cinder's (Queens release) cinder.conf, I can see:

# Warning: Failed to format sample for my_ip
# unhashable type: 'HostAddress'

So it seems there's an issue in either Cinder or Oslo. How can I investigate and fix this? It's very likely that I'm once more the only person in the OpenStack community who is really checking config file generation (it used to be like that for past releases), and therefore the only one who noticed it. Also, looking at the code, this seems to be yet another instance of "package cannot be built reproducibly" [1], with the build host's config leaking into the configuration (well, once that's fixed...). Indeed, in the code I can read:

cfg.HostAddressOpt('my_ip',
                   default=netutils.get_my_ipv4(),
                   help='IP address of this host'),

This means that, once that's repaired, building Cinder will write something like this:

#my_ip = 1.2.3.4

with 1.2.3.4 being the value of netutils.get_my_ipv4(). This is easily fixed by adding something like this:

sample_default=''

I'm writing this here for Cinder, but there have been numerous cases like this already, the most common mistake being the hostname of the build host leaking into the configuration. While this is easily fixed at the packaging level by fixing the config file after generating it with oslo.config, often that config file is also built with the sphinx doc, and then that file isn't built reproducibly. That's harder to detect, and easier to fix upstream.

Cheers, Thomas Goirand (zigo)

[1] https://reproducible-builds.org/
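To illustrate the idea, here is a minimal sketch — not the real oslo.config implementation; the Opt class and sample_line() method below are made up for the example — showing why a host-dependent default breaks reproducible sample generation, and how a static sample_default fixes it:

```python
import socket

def get_my_ipv4():
    # Environment-dependent: differs on every build host.
    try:
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return '127.0.0.1'

class Opt:
    """Toy stand-in for an oslo.config option definition."""
    def __init__(self, name, default=None, sample_default=None):
        self.name = name
        self.default = default
        self.sample_default = sample_default

    def sample_line(self):
        # Prefer the static sample_default, so generated sample config
        # files don't leak the build host's IP address.
        value = self.default if self.sample_default is None else self.sample_default
        return '#%s = %s' % (self.name, value)

opt = Opt('my_ip', default=get_my_ipv4(), sample_default='<host_ip_address>')
print(opt.sample_line())  # → #my_ip = <host_ip_address>
```

With a sample_default set, the emitted sample line is identical on every build host, which is exactly what reproducible builds require.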
Re: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi
On 03/11/2018 08:12 PM, Lance Bragstad wrote: > Hey Thomas, > > Outside of the uwsgi config, are you following a specific guide for your > install? Under the packages that I maintain in Debian, there's nothing more to do than "apt-get install keystone", reply to a few Debconf questions, and you get a working installation. That is to say, I don't think I made any mistake here. > I'd like to try and recreate the issue. If you wish, I can build a package for you to try. Would that be ok? Would you prefer to use Sid or Stretch? It's rather easy to do, as the revert to Apache is just a single git commit. > Do you happen to have any more logging information? That's what was really frustrating: no log at all on the server side, just on the client... Cheers, Thomas Goirand (zigo)
[openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi
Hi, I've attempted to switch Keystone to using uwsgi instead of Apache in the Debian packages for Queens. Unfortunately, I had random failures with error 104, in both the client output and the keystone logs. 104 is in fact "TCP connection reset by peer" (and this shows in the logs). So I've switched back, but I'd prefer using uwsgi if possible. Here are the parameters I had in the .ini for uwsgi:

http-socket = :35357
wsgi-file = /usr/bin/keystone-wsgi-admin
buffer-size = 65535
master = true
enable-threads = true
processes = 12
thunder-lock = true
plugins = python3
lazy-apps = true
paste-logger = true
logto = /var/log/keystone/keystone-admin.log
name = keystone-admin
uid = keystone
gid = keystone
chdir = /var/lib/keystone
die-on-term = true

Has this happened to anyone else? Is there one option above which is wrong? Why is this happening? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [horizon][ptg] Horizon PTG Highlights
On 03/06/2018 11:08 PM, Ivan Kolodyazhny wrote: > * Angular and XStatic packages versions > o testing and updating were done mostly manually by Radomir and Rob > o we agreed to update XStatic packages in Rocky if they have > suitable for Horizon versions and we've got capacity for this Just a quick input here. Having to upgrade JS libs in Debian which I don't maintain myself can be a long and painful process, especially for high-profile packages like libjs-jquery. I'd appreciate it a lot if I could get a ping before/when this happens, and it has to happen as early as possible in the cycle. Also, you don't *HAVE* to upgrade them *ALL* in a single cycle and give downstream package maintainers so much work! :) Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
On 03/02/2018 07:28 PM, Kaz Shinohara wrote: > Hi Thomas(zigo), > > I found an issue which is included in > https://review.openstack.org/#/c/548924/ (you did cherry pick last > night) > In short, this issue makes it impossible to install heat-dashboard.. > > I landed fix for this. https://review.openstack.org/#/c/549214/ > > Could you kindly pick up this for your package ? > Sorry again for your inconvenience. > > Regards, > Kaz(kazsh) Hi, I've added the patch, thanks for it. So now, I'm embedding that patch for fixing unit tests, plus: https://review.openstack.org/#/c/547468/ https://review.openstack.org/#/c/549214/ Indeed, it'd be nice to have all of them officially backported to Queens, as you suggested on IRC. It'd be even better to completely remove the embedded stuff and use xstatic packages. There's already an XStatic package for font-awesome which can be used. I do believe it would be very much OK to add such a requirement to heat-dashboard, since it is already one of the requirements for Horizon. Altogether, thanks a lot for your care. As always, the OpenStack community turns out to be understanding, reactive, and simply awesome! :) Cheers, Thomas Goirand (zigo)
[openstack-dev] [requirements] Let's switch from pyldap to python-ldap >= 3
Hi, Pyldap started as a fork of python-ldap, but recently its features were merged into the python-ldap module >= 3, and pyldap is now deprecated. Let's switch to python-ldap >= 3 and remove pyldap from requirements. This has already been done in Debian: the python-ldap source package now carries a python-pyldap transitional package, which installs python-ldap. What's the procedure? Should I first send a patch to the global-reqs repo? Cheers, Thomas Goirand (zigo)
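Since both pyldap and python-ldap >= 3 install the same top-level "ldap" module, consuming code needs no change — only the requirement line does. A hedged sketch of how a consumer can cope with either (or neither) being installed:

```python
# Both pyldap and python-ldap >= 3 expose the same "ldap" module name,
# so switching the requirement is transparent to importers.
try:
    import ldap  # from python-ldap >= 3 (or the legacy pyldap fork)
    HAS_LDAP = True
except ImportError:
    # Neither package installed; callers can check HAS_LDAP.
    HAS_LDAP = False

print('ldap available:', HAS_LDAP)
```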
Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
On 02/23/2018 09:29 AM, Xinni Ge wrote: > Hi there, > > We are aware of the embedded javascript issue, and working on it now; > the patch will be submitted later. > > As for the unittest failure, we are still investigating it. We will > contact you as soon as we find out the cause. > > Sorry to bring troubles to you. We will be grateful if you could wait > for a little longer. > > Best Regards, > > Xinni Hi, Thanks for this message. This lowers the frustration! :) Let me know if there's any patch I could review. Cheers, Thomas Goirand (zigo)
[openstack-dev] Xen API dom0 plugin code in os-xenapi doesn't have Python 3 support
Hi, I hope this reaches Bob Ball. I'm bringing the patch review conversation here, because I don't think Gerrit is a good enough medium. For a bit of context, I've opened this patch review: https://review.openstack.org/544809 which fixes Python 3 support within os_xenapi/dom0/etc/xapi.d/plugins. It feels like this piece of code is: - out of place - not Python 3 compliant - annoying for downstream distributions - never used in OpenStack. Bob, yes, I can remove the code in the Debian package; that's not hard to do, it's just annoying. But why would you add such a burden on each and every downstream distribution? Wouldn't there be a better place to put the CentOS Python code? Couldn't we get an RPM package to install on all XenAPI servers before they join the OpenStack cluster? To me, there are 2 alternatives: 1/ Accept this patch, so that at least the code builds/installs in downstream distributions 2/ Remove the code completely from os-xenapi I'd prefer the latter, but I don't mind much. Your thoughts? Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [ptg] Release cycles, stable branch maintenance, LTS vs. downstream consumption models
On 02/24/2018 03:42 PM, Thierry Carrez wrote: > On Tuesday afternoon we'll have a discussion on release cycle duration, > stable branch maintenance, and LTS vs. how OpenStack is consumed downstream. > > I set up an etherpad at: > https://etherpad.openstack.org/p/release-cycles-ptg-rocky > > Please add the topics you'd like to cover. I really wish I could be there. Is there any way I could attend remotely? Like via someone with Skype or something... Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [ptl][all][python3] collecting current status of python 3 support in projects
On 02/23/2018 12:29 AM, Doug Hellmann wrote: > I am trying to update the wiki document with the current state of > support for Python 3 projects as part of preparing for a discussion > about moving from "Python 2 first, then 3" to "Python 3 first, then > 2" development. > > I have added the missing libraries and services (at least those > managed by the release team) and done my best to figure out if there > are unit and functional/integration test jobs for each project. > > I need your help to verify the information I have collected and fill in > any gaps. > > Please look through the tables in [1] and if your projects' status > is out of date either update the page directly or email me (off > list) with the updates. > > Thanks! > Doug > > [1] > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects Hi Doug! As I've been working over the course of this week on switching all of Debian OpenStack to Py3, I have a bit of experience with it. Unfortunately, I can only tell about unit tests, as I haven't run functional tests yet. Mostly, it's working well, even under Python 3.6 in Sid. What I've often seen, though, is the tooling, and especially the sphinx docs, expecting Python 2 to be there. For example (and that's just an example, I'm not pointing fingers at any project here...), generating the sphinx doc of Cinder calls binaries in the "tools" folder (ie: tools/generate_driver_list.py) which have "#! /usr/bin/env python" as their first line. Of course, under my Python-3-only environment, it just fails miserably, and I had to patch the files. Another example would be Congress generating its lexer with some Python 2 style exceptions (those with a comma instead of "as"). I fixed that at build time with Victor's sixer tool (which really is awesome, thanks for it Victor!). Then there's Nova, which annoyed me when generating the doc because of seemingly a bug in the Python 3 version of blockdiag (I may be wrong, but I don't think Nova itself is at fault here). 
I would have more details like this, but I guess you understand the general issue I'm raising: we need to get rid of Python 2 completely, because otherwise it's expected to be the default. So I'm really looking forward to it happening upstream. LET'S KILL PYTHON 2 SUPPORT !!! :) More seriously, it'd be nice if all the docs tooling effectively switched to Python 3; otherwise other issues will be reported. Also, it is annoying to see that manila-ui isn't Python 3 ready at all. I guess I'll simply skip manila-ui for this release (since all of Horizon is already switched to Python 3 on my side). I'm expecting more of these Horizon plugins not to be ready (I haven't completely finished that part...). I hope this helps, Cheers, Thomas Goirand (zigo)
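As an example of the kind of Python-2-only construct mentioned above (for instance in the generated Congress lexer), the legacy comma-based except clause is a hard syntax error under Python 3, while the "as" form works on both:

```python
# Python 2 only (SyntaxError under Python 3):
#     except ValueError, e:
# Works on both Python 2.6+ and Python 3:
try:
    int('not a number')
except ValueError as e:
    message = str(e)

print(message)
```

This is exactly the class of rewrite that tools like sixer automate across a code base.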
Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
On 02/21/2018 05:54 PM, Corey Bryant wrote: > On Wed, Feb 21, 2018 at 9:35 AM, Thomas Goirand <z...@debian.org> wrote: >> Hi there! >> >> I'm having big trouble packaging heat-dashboard for Debian. I hope I can >> get help through this list. >> >> In here: >> >> heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/js/ >> >> we have minified *only* versions of Javascript. > > There's also a bug open for this: > https://bugs.launchpad.net/heat-dashboard/+bug/1747687 > > Regards, > Corey Thanks for the link and for filing this bug. Cheers, Thomas Goirand (zigo)
[openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
Hi there! I'm having big trouble packaging heat-dashboard for Debian. I hope I can get help through this list. In here:

heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/js/

we have *only* minified versions of the Javascript.

1/ Why are there only minified versions? That's non-free to me, Debian, and probably any other distro caring about OpenStack.
2/ Why do we even have a folder called "vendors"? Doesn't this sound like really bad practice?
3/ Why are there so many angular-*.min.js files? Do we need them all?
4/ Why isn't the package using xstatic-angular and friends?

As it stands, I can't upload heat-dashboard to Debian for Queens, and it's been removed from Horizon... :( Oh, and I almost forgot! When running unit tests, I get:

PYTHON=python$i NOSE_WITH_OPENSTACK=1 \
    NOSE_OPENSTACK_COLOR=1 \
    NOSE_OPENSTACK_RED=0.05 \
    NOSE_OPENSTACK_YELLOW=0.025 \
    NOSE_OPENSTACK_SHOW_ELAPSED=1 \
    DJANGO_SETTINGS_MODULE=heat_dashboard.test.settings \
    python$i /home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py test heat_dashboard.test --settings=heat_dashboard.test.settings

No local_settings file found.
Traceback (most recent call last):
  File "/home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py", line 23, in <module>
    execute_from_command_line(sys.argv)
[ ... some stack dump ...]
  File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line 147, in acquire
    self._do_open()
  File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line 119, in _do_open
    self.lockfile = open(self.path, 'a')
PermissionError: [Errno 13] Permission denied: '/usr/lib/python3/dist-packages/openstack_dashboard/local/_usr_lib_python3_dist-packages_openstack_dashboard_local_.secret_key_store.lock'

What thing is attempting to write to my read-only /usr, while Horizon is correctly installed, and writing its secret key material as it should, in /var/lib/openstack-dashboard? 
It's probably due to me, but here, how can I make the heat-dashboard unit tests behave during the package build? What's this "No local_settings file found" thing? Other dashboards didn't complain this way... Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens
On 02/16/2018 03:42 PM, Petr Kovar wrote: > On Thu, 15 Feb 2018 09:31:19 +0100 > Thomas Goirand wrote: > >> Hi, >> >> Since I'm getting some pressure from other DDs to actively remove Py2 >> support from my packages, I'm very much considering switching all of the >> Debian packages for Queens to using exclusively Py3. I would have like >> to read some opinions about this. Is it a good time for such move? I >> hope it is, because I'd like to maintain as few Python package with Py2 >> support at the time of Debian Buster freeze. >> >> Also, doing Queens, I've noticed that os-xenapi is still full of py2 >> only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: >> >> https://review.openstack.org/544809 > > Hey Thomas, slightly off-topic to this, but would it be a good idea to > resurrect OpenStack install guides for Debian if Debian packages are still > maintained? Yes it would. I'm not sure where to start, since all the doc has moved to individual projects. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Debian OpenStack packages switching to Py3 for Queens
Hi, Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have liked to read some opinions about this. Is it a good time for such a move? I hope it is, because I'd like to maintain as few Python packages with Py2 support as possible at the time of the Debian Buster freeze. Also, doing Queens, I've noticed that os-xenapi is still full of py2-only stuff in os_xenapi/dom0. Can we get those fixes in? Here's my patch: https://review.openstack.org/544809 Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Qinling package description (was: Technical Committee Status update, February 2nd)
On 02/02/2018 11:52 AM, Thierry Carrez wrote: > == Recently-approved changes == > > * New project team: Qinling (Function as a Service) [1] > * Goal updates: ironic > > [1] https://review.openstack.org/#/c/533827/ Sorry for this usual "no description" ranting, but I believe it's for the best. While Qinling seems like a nice project, its description is IMO not very descriptive. I had to go on the AWS website to understand what AWS Lambda is. Nowhere could I read which languages Qinling supports. While I understand that a just-born project cannot have meaningful documentation, having almost no project description isn't going to make it very attractive to new contributors. Could we get this improved? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Help with python-pint: fixing failing tests with the latest python-numpy
Hi list! Pint is a (test) dependency for horizon (and therefore, an indirect dependency for all Horizon plugins). Since the update of python-numpy in Debian Sid, pint fails to build. The issue was reported to the Debian BTS: https://bugs.debian.org/876921 and to the upstream GitHub: https://github.com/hgrecco/pint/issues/577 None of these bug entries received patches. I attempted to understand it myself, though it is above my skills. Could anyone help and provide a patch? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Switching to longer development cycles
On 12/15/2017 05:52 PM, Matt Riedemann wrote: > On 12/15/2017 9:15 AM, Thomas Goirand wrote: >> Not only that. Everyone is lagging a few release behind, and currently, >> upstream OpenStack don't care backporting to older releases. > > Can you clarify this please? The nova team is definitely backporting > fixes to pike, ocata and newton. Newton isn't EOL yet *because* nova has > held it up backporting fixes that we think are important enough to get > in there before we EOL the branch. I very much appreciate what has been done with the CVE fixes. Thanks a lot for this, especially as it looked quite tricky and way above the level of patch I could backport by myself in a safe way. > If you're talking about LTS, that's a different story, but please don't > say upstream OpenStack doesn't care about backporting fixes. That might > be a per-project statement, but in general it's untrue. After re-reading myself, I noticed that it could be read in a variety of ways. Sorry for this, that's typical of me, maybe because I'm not a native English speaker. :( Let me attempt to correct myself. First, it wasn't "upstream don't care about anyone, upstream is bad". It was more: upstream currently doesn't have support in place for a long enough time for its security bugfixes to be relevant to distros. In more detail: Distributions are all advertising 5 years of support. For my own case, and considering the last Debian release, Newton was out a year ago, a bit before the Debian Stretch freeze. Stretch was then released on the 17th of June, while Newton was officially EOL on the 11th of October. This means that, officially, Debian received 4 months of official support during the lifetime of its release, which is supposed to be at least 3 years, and preferably 5 (if we account for the LTS effort). 
So even without talking about OpenStack LTS, I hope everyone understands that for me & Debian, the *official* security support is as good as nonexistent when dealing with Debian Stable. Luckily, as always within this awesome OpenStack community, almost everyone from individual projects has been super helpful and helped when I asked. However, even with very nice people, this helpfulness has limits, and longer official support would definitely help. Anyway, all this was to say: I'm convinced that releasing less often will help. I don't think backporting from master to Pike, Ocata and Newton has so much value, but it's a lot of effort upstream. And in Debian's case, the Ocata backport wasn't needed. Even if we're not talking about LTS, I'm sure having half the number of backports may help extend the life of stable releases. I hope it's clearer this time, Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Switching to longer development cycles
On 12/14/2017 12:44 AM, Clint Byrum wrote: > We can take stock of the intermediate releases over the last year, and make > sure they all work together once a year. Chris Jones mentioned that we should > give users time between milestones and release. I suggest we release an > intermediary and _support it_. Let distros pick those up when they need to > ship > new features. I don't think this will happen. > Let users jump ahead for a few projects when they need the bug fixes. And with that, I don't agree. New releases aren't there to fix bugs; bugs should be fixed in stable too, otherwise you face new issues when trying to get a bugfix. And that's why we have stable. > I understand the belief that nobody will run the intermediaries. Not only that. Everyone is lagging a few releases behind, and currently, upstream OpenStack doesn't care about backporting to older releases. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Switching to longer development cycles
On 12/14/2017 03:16 PM, Ed Leafe wrote: > On Dec 14, 2017, at 3:01 AM, Thomas Goirand wrote: >> >> As a package maintainer who no longer can follow the high pace, I very >> much support this change. > > So you’re saying that you would be ignoring any intermediate releases? > > -- Ed Leafe I used to package each and every b1, b2 and b3, be ready for rc1, and be the first to release a working *and tested* release of OpenStack in Debian. Since I'm no longer paid to do that, I just can't. I already skipped Ocata, and yes, I will ignore these, and start the packaging work *after* the official release. That is, of course, unless some companies offer to sponsor my work again... So far, each and every such proposal went nowhere. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Switching to longer development cycles
On 12/13/2017 05:17 PM, Thierry Carrez wrote: > So... What do you think ? As a package maintainer who no longer can follow the high pace, I very much support this change. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Removing internet access from unit test gates
On 11/21/2017 04:04 PM, Jeremy Stanley wrote: > Removing network access from the machines running these jobs won't > work, of course, because our job scheduling and execution service > needs to reach them over the Internet to start jobs, monitor > progress and collect results. Right. > As you noted, faking Python out with > envvars pointing it at nonexistent HTTP proxies might help at least > where tests attempt to make HTTP(S) connections to remote systems. > The Web is not all there is to the Internet however, so this > wouldn't do much to prevent use of remote DNS, NTP, SMTP or other > non-HTTP(S) protocols. In my experience, in the vast majority of cases, setting up the env vars is enough to prevent problems. There were a few (non-test-failure-producing) DNS accesses, IIRC with glanceclient, but that's all I noticed. > As such, any proxy envvar setting would > need to happen within the scope of tox itself so that it will be > able to set up the virtualenv prior to configuring the proxy vars > for the ensuing tests. This is a workable solution, though it involves patching absolutely all projects' tox.ini files. I'm certain I don't have enough available time to produce 300+ tox.ini patches and follow their review. Wouldn't there be a more global approach? > It might be easiest for you to work out the > tox.ini modification on one project (it'll be self-testing at least) > and then once the pilot can be shown working the conversation with > the community becomes a little easier. Doing this kind of patch at first on a few projects' tox.ini, absolutely! I might even start with Horizon and PBR (yes, there's a problem there as well... which I haven't reported yet). Though generalizing it to 300+ patches, I'm really not sure. Your thoughts? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
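[Editor's note] Concretely, the per-project tox.ini patch discussed here could look something like the following. This is a hedged sketch, not a fragment from any real project; the exact section layout varies per project, and the caveat noted in the comment is an assumption about tox behavior:

```ini
# Hypothetical tox.ini fragment: point every proxy variable at a dead
# local port (9, the discard port) so that any unit test accidentally
# reaching for the network over HTTP(S) fails immediately.
# Caveat (assumption): depending on the tox version, setenv may also be
# applied to the pip install step, so dependency installation might need
# to happen beforehand or with the proxy variables unset.
[testenv]
setenv =
    http_proxy = http://127.0.0.1:9/
    https_proxy = http://127.0.0.1:9/
    HTTP_PROXY = http://127.0.0.1:9/
    HTTPS_PROXY = http://127.0.0.1:9/
```

As Jeremy notes in the quoted text, this only catches HTTP(S) traffic; DNS, NTP or SMTP access would slip through.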
[openstack-dev] Removing internet access from unit test gates
Hi everyone! I believe all distributions are running unit tests of packages, either at build time, or later on (for example in autopkgtest). In Debian, it's done mostly at build time. There, it's forbidden to access the internet, and on purpose, the Debian buildd servers won't provide internet access when building. Unfortunately, a number of OpenStack packages have internet-accessing unit tests. I believe this is by mistake, or lack of knowledge of some upstream developers. It's unfortunately often discovered after the fact, like for example https://bugs.debian.org/882266 (note: no finger pointing here, this is just yet another instance of the issue and just an example out of many others, and probably the first time in Horizon). One easy way, at least in Python, is to define the environment variables:

http_proxy=127.0.0.1:9
https_proxy=127.0.0.1:9
HTTP_PROXY=127.0.0.1:9
HTTPS_PROXY=127.0.0.1:9

This way, there's no way to make an accidental implementation mistake. This is of course not desirable in the functional/integration testing environment. The only way forward that I see is having internet access removed from unit tests in the gate, or probably just the above variables set. Your thoughts? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
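[Editor's note] To illustrate why the dead-proxy trick works, here is a small self-contained sketch (mine, not from the mail; it assumes nothing is listening on local port 9, the discard port): once the variables are set, any library that honors them fails with a connection error instead of silently reaching the network.

```python
import os
import urllib.error
import urllib.request

# Point every proxy variable at a local port where nothing listens.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ[var] = "http://127.0.0.1:9"

# urllib reads proxy settings from the environment, so this request never
# leaves the machine: connecting to the fake proxy is refused immediately.
try:
    urllib.request.urlopen("http://example.org/", timeout=2)
    blocked = False
except urllib.error.URLError:
    blocked = True

print("accidental network access blocked:", blocked)
```

The same environment variables are honored by requests and most other HTTP clients, which is why setting them at the build level catches most accidental accesses.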
Re: [openstack-dev] Upstream LTS Releases
On 11/08/2017 05:27 PM, Samuel Cassiba wrote: > ie. deployment-focused development > teams already under a crunch as contributor count continues to decline > in favor of other projects inside and out of OpenStack. Did you even think that one of the reasons for such a decline is that OpenStack is moving too fast, and has no LTS? Some major public clouds (which I will purposely not name) are still running Kilo, which was released 3 years ago! 3 or 5 years of support for an LTS version is the industry standard, and OpenStack is doing only 1 year. This has driven people away, and will continue to do so if nothing is done. Instead of thinking "this will be more work", why don't you think of the LTS as an opportunity to only release OpenStack Chef for the LTS? That'd be a lot less work indeed, and IMO that's a very good opportunity for you to scale down. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Call for sponsorship: hardware for Debian OpenStack packages functional testing
tl;dr: I need hardware to run tempest on Debian + OpenStack. I wouldn't refuse sponsorship of my work either. Dear everyone, As you may know, I have been packaging OpenStack in Debian nearly since it existed (ie: since the Cactus release). I used to be a Mirantis employee, though like many others from the company, I was "let go" last year (that's the wording of the Mirantis marketing people... though I didn't want to go!). At the moment, I'm still unemployed, even though I have very serious opportunities offered to me. Anyway, the thing is, every time my professional situation changes, I lose access to the hardware used to do functional testing of the Debian OpenStack packages. Currently, I don't have a server to run on, so I cannot check if Pike works as expected. I already had some offers from companies to use hardware that they also would host. However, this also feels unsustainable over a long period of time. I would very much prefer to have such hardware hosted within the Debian infrastructure. Which is why I am hereby calling for sponsorship of such hardware. Note that I already made such a request to the DSA team (Debian System Administrators), and it was denied because they don't want to make OpenStack a special case. Normally, DDs are supposed to test packages themselves when uploading to Debian. There are 3 types of setup that my current scripts are able to support:

1/ A Xen VM.
2/ A KVM VM.
3/ A Debian live system running on bare metal, which is reset using IPMI (using an ipmitool command).

It's that last one which performs the best, because it runs on bare metal, which avoids nested virtualization. Also, reinstalling the system means simply doing a reset and waiting for the server to be up again. Last, the system runs on a tmpfs, and IOPS are therefore way faster than on a normal disk (HDD / SSD). 
The local HDD is then used as a scratch disk for testing Cinder and Swift, instead of a local loopback in the case of KVM or Xen (so again, much faster). It also needs to have IPMI, and preferably also KVM over IP. The speed of the system used to do the functional testing is important, because the time for setting up the system is around 20 minutes (on option 3 above, slower in other cases), then it takes roughly 1 hour to run the functional test with tempest. Typically, such a debug process is run multiple times, iteratively, fixing one problem after another. The hardware I last used was a multi-core 64-bit x86 system with 32 GB of RAM, and an SSD scratch disk (100 GB of a single SSD is enough), plus a server to run PXE network boot: a tftp server, dhcp server, and apache to provide the squashfs image to the server. That's about what I need. Also, to be able to PXE boot the server, I need a 2nd server to run dhcp, pxe and apache. On that server, I would run Xen to be able to also install a Jenkins server to do package builds on each git push, which avoids a lot of RC bugs in Debian, and therefore speeds up packaging. So, all together, I'm searching for someone to sponsor:

- A 32 GB RAM server with at least 2 cores, a 100 GB SSD, and at least 2 NICs

and either:

- A 2nd server with a minimum of 1 GB RAM & 20 GB HDD and 2 NICs

but preferably:

- 64 or, even better, 128 GB RAM, so I can host Jenkins servers and Debian repositories (one per release, using virtualization), with a large enough HDD to host the full set of packages per release: a pair of 1 TB HDDs or more using RAID1 (or even better: 4 HDDs with RAID10 for better performance) seems a good choice to me.

If you are able to sponsor such hardware, and send it either to the University of British Columbia, or to Bytemark in the UK, please get in touch with me. Last thing. A number of companies have offered to sponsor my work packaging OpenStack for Debian: at least 4 companies already. 
It really feels like a number of companies have been using my work over the years. However, it never went through. As I've been unemployed for a long time, I will probably accept a job not directly related to the packaging of OpenStack. So if you wish me to continue what I've been doing, sponsoring is welcome too. To such a sponsor, I can offer more than just the packaging: I can offer my help deploying OpenStack, and maintaining it in production, plus whatever that company will need related to that, and this either on Debian or Ubuntu (I can provide support for both, even if my heart is on the Debian side). I also would accept any job that includes OpenStack Debian packaging, and if it can be done remotely, from my home. The risk if this doesn't happen is that the Debian packaging of OpenStack stops. It was already the case for Mitaka, and I decided to do Newton in my free time. I probably won't be able to do that again for Queens if I'm not paid for it: it's clearly not a sustainable situation. Cheers, Thomas Goirand (zigo) _
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/11/2017 01:17 PM, Jesse Pretorius wrote: > On 10/10/17, 9:26 PM, "Doug Hellmann" wrote: > >> I still think we should just be looking at these files as sample >> data and not active configuration files when we put them into the sdist >> or wheel. > > There is agreement from all distributions bar one on this approach so far, so > I’d like to dig into this approach more to see if we can find a solution that > works for everyone. > > Thomas, do I understand the problem correctly that if we considered these > sample files then the complexity for Debian appears to be that it would not > want the sample config files in, for example, /usr/share/nova but instead in > /usr/share/nova-common ? Is the use of the –common directory something that’s > Debian specific, or do any other distributions do it? Hoping to let you understand a bit more where I am, here are a few more details. What I do in Debian also goes into Ubuntu, so it's not Debian-specific. The thing is, we don't have a nova package; the list is:

nova-api
nova-cells
nova-common
nova-compute
nova-compute-ironic
nova-compute-kvm
nova-compute-lxc
nova-compute-qemu
nova-conductor
nova-console
nova-consoleauth
nova-consoleproxy
nova-doc
nova-placement-api
nova-scheduler
python-nova
python3-nova

The nova-common package is the one that holds all the configuration file logic. So /usr/share/nova-common is the natural place to put configuration files. However, if we had *samples* (ie: files that are examples, not to be used in production), they would go in the nova-doc package, somewhere in /usr/share/doc/nova-doc, probably even within an example folder in this path. Now, there's something about which I'm still not completely sure. Are you willing to push for *samples*, or real configuration files which we can actually use? 
Depending on your answer, the destination location will be different, and probably the only thing we need is some logic to disable the installation of config files (and then we just do it at the packaging level as we've been used to doing so far). Also, in the packaging process, there are a few tweaks that are necessary for distro integration. A quick example: in the nova package, I've set sensible defaults for [DEFAULT]/pybasedir, [DEFAULT]/bindir, [DEFAULT]/state_path. Even if you were shipping a nova.conf in the correct location, the Debian package would have to fix these directives. I've seen that Ubuntu does something similar also. I still haven't had any reply to my PBR change proposal. Your thoughts? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [zun] Client long description on PyPi
Hi, Could someone write something relevant, instead of the current placeholder? See here: https://pypi.python.org/pypi/python-zunclient and see that "this is a hard requirement". Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/10/2017 01:16 PM, Jesse Pretorius wrote: > Regardless, given that this is not functionality currently > available for setuptools, distutils or pbr it would seem > that this functionality (or another applicable workaround) > would have to be carried by Debian packaging until such > time that such a facility exists in the python tooling. We don't control setuptools/distutils directly. The only thing we control is PBR. So my proposal is:

1/ Design a new config_files directive in setup.cfg and patch PBR so that it understands it. Considering a python module named foo, this could end up like this:

[files]
config_files_dest = PYBASE/foo
config_files = foo.cfg

then PYBASE would be expanded to /usr/lib/python2.7/dist-packages in the case of Debian + Python 2.7. Or, as another example:

[files]
config_files_dest = foo
config_files = foo.cfg

then foo.cfg would end up in /usr/share/foo/foo.cfg unless the env var OSLO_CONFIG_FILES_DEST is set (for example to /usr/share/foo-common, or to /etc/foo). I don't really mind whatever is decided for the non-packaging use case (ie: pip install ?).

2/ Make PBR read the destination path from something configurable at the packaging level, which would overwrite the default behavior, so that we could have whatever the package maintainer decides.

3/ When the new PBR release has the new feature, let individual packages use the new config_files directive, which would simply list the config files with the preselected destination.

Does this seem like a good plan? Please comment on the above. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
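[Editor's note] To make the proposal concrete, here is a toy sketch, my own reading of the rules above and not actual PBR code; the function name, the PYBASE expansion via sysconfig, and the /usr/share fallback are all assumptions for illustration:

```python
import os
import sysconfig

def resolve_config_dest(dest: str) -> str:
    """Toy resolution of a hypothetical [files] config_files_dest value."""
    # A packaging-level override wins, as proposed with OSLO_CONFIG_FILES_DEST.
    override = os.environ.get("OSLO_CONFIG_FILES_DEST")
    if override:
        return override
    # PYBASE expands to the interpreter's package directory, e.g.
    # /usr/lib/python2.7/dist-packages on Debian with Python 2.7.
    if dest.startswith("PYBASE"):
        return sysconfig.get_paths()["purelib"] + dest[len("PYBASE"):]
    # A bare name lands under the conventional shared-data prefix.
    return os.path.join("/usr/share", dest)

print(resolve_config_dest("foo"))
```

With this reading, `resolve_config_dest("foo")` yields `/usr/share/foo` unless the package maintainer exports `OSLO_CONFIG_FILES_DEST` to redirect everything, which matches point 2/ of the proposal.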
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/09/2017 12:36 PM, Jesse Pretorius wrote: > There has been an objection from the OpenStack package maintainer for > Debian, but that objection has been to the use of the relative path of > /etc. Thomas Goirand, could you please indicate whether you support the > use of the relative path of /share given the currently available > functionality in setuptools and pbr? See my other post, which is IMO the most important point, and one we may have missed in this thread: I don't think data_files is there to handle configuration files, but *only* data files. Probably Monty will be able to confirm that fact. If one day we want to mix data files (ie, things to push in /usr/share, typically) and config files, then everything will fall apart. So, maybe we should design a config_files statement in setup.cfg? >> 2) Looking at the Ubuntu packaging for OpenStack projects, we have > quite a few places where oslo-config-generator or oslo-policy-generator > is used to generate sample configuration files as part of the package > build; Historically, I believe it's there because I started doing so in Debian, and then the Ubuntu people picked up what I did for a few projects, then thought it wasn't a bad idea. Also, there's an important bit too: we're patching keystonemiddleware so that it includes the old type of keystone auth variables when generating the config files. Here's the patch: https://anonscm.debian.org/cgit/openstack/libs/python-keystonemiddleware.git/tree/debian/patches/re-add-missing-auth-options.patch FYI, this is very important for Debian's CI, as it's still using the old type of keystone auth and expecting correct configuration files. Even if it were using the new type (ie: keystone v3), then something would need to be patched in keystonemiddleware to generate that (as currently, the keystonemiddleware entry point produces ... no auth directives at all!). BTW, the use of default config files in the CI is very intentional on my side, as I do want to test them. 
Over the years, I realized Debian was the *only* distro in the world that was gating on that. I noticed it when I understood nobody else was reporting bugs about them. Other distros are probably using config management scripts, I guess. So anyway, the point was: we'd better regenerate all services' config files ourselves to make sure our patch is really taken into account. I also prefer to have config files wrapped at 140 columns rather than the default 80. That's only my opinionated preference, and just an example of why we may prefer to do things ourselves. > I might have missed it in my read through of this thread but it > would be awesome if those could be integrated as part of this process as > well as the originating project would then be able to provide some level > for assurance to the content of generated files in downstream distributions. Well, if OpenStack as an upstream was doing everything correctly, it'd be a huge win. Though really, the correct way to do things is to give package maintainers the freedom to do what they want. In this case, giving a way to say "please drop my config files THERE --->", so that we can write nice automations, is the correct thing to do. > As mentioned in [4] these should be auto-generated. Some projects do > this and submit samples into the repo from time to time, others have > just left a stub with an explanation of how to generate it. The correct thing to do is to *not* ship potentially wrong samples. Indeed, the config files depend on the version of the underlying libs, which can potentially change without prior notice. But let's not rehash this again, it has been written *many* times over *multiple* years. :) >> I'd also be +1 on a packaging SIG; I might have missed it, but what is a SIG?!? > This > e-mail message may contain confidential or privileged information > intended for the recipient. Any dissemination, distribution or copying > of the enclosed material is prohibited. 
If you receive this transmission > in error, please notify us immediately by e-mail at ab...@rackspace.com > and delete the original message. Your cooperation is appreciated. Hum... Should I mention that this message footer is inappropriate for a *public* mailing list? Will I be persecuted for quoting you? :) Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/10/2017 01:21 PM, Jesse Pretorius wrote: > > > On 10/10/17, 12:08 PM, "Jesse Pretorius" > wrote: > >>$ python setup.py install --skip-build --root /tmp/keystone >> --install-data / > > Apologies – I copied the wrong command, it should have been: > > $ python setup.py install --root /tmp/keystone --install-data / Doesn't "--install-data" carry a different meaning than config files? To me, its semantics were about data files, not config files. Which leads me to the idea that "data_files" in setup.cfg is probably the wrong way to describe config files. Typically, in distros, we'd have data files (for example, timezone data, pictures, docs, etc.) in /usr/share, while config files live in /etc. Aren't we mixing 2 concepts here? For example, if we take openstack-doc-tools's setup.cfg, it has under data_files:

data_files =
    share/openstack-doc-tools/sitemap = sitemap/*
    share/openstack-doc-tools/cleanup = cleanup/*

Typically, for openstackdocstheme, I'd prefer these files to end up under:

/usr/lib/python2.7/dist-packages/openstackdocstheme

(sed s/2.7/3/ if using Python 3) With your method, wouldn't these files end up in the wrong location, ie outside of /usr? What if openstackdocstheme has both config files and "real" data files? Probably, we need a method to handle both cases: config files and data files. In fact, I don't think PBR's data_files is designed for handling config files at all. Which is the very reason why I think it's broken to use it the proposed way. Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/09/2017 11:20 AM, Luigi Toscano wrote:
> On Saturday, 7 October 2017 12:30:54 CEST Thomas Goirand wrote:
>> Though people doing the packaging will suffer. Please don't throw the
>> baby out with the bathwater, and let's fix the issue.
>
> This is an overstatement.

Pardon me if I'm wrong, but if you think that's an overstatement, then
you are admitting at least partly that I'm right, no? :) Like everyone
here, I'm only trying to find a solution that works for everyone. I
don't think pushing config files to the correct location is *THAT*
hard, is it?

With PBR, we had a *very* long-standing issue with the module version
of projects, until Monty had the brilliant idea of proposing a patch to
read the version from the environment variable OSLO_PACKAGE_VERSION.
Once this was done, I hard-wired this env var to be read from
debian/changelog (using dpkg-parsechangelog and a bit of sed magic).
These days, it's just included in openstack-pkg-tools's pkgos.make,
which everyone (in both Debian and Ubuntu) is using, and we don't even
have to think about it. I'd love for us to have a similar answer here:
something fully automated that works for everyone.

> For example, sahara has been installing those files for a long time
> under data_files. When the patch changed the location from /usr, I
> just changed the way those files are copied:
> https://review.rdoproject.org/r/#/c/9722/
> (this does not happen for Ubuntu and Debian packaging right now, so
> there are duplicated files).

The very point is: it is happening for all distros, meaning at least
four workarounds have to be written, which is silly.

> This means that:
> 1) if the files are not installed through data_files, you have to
> manually copy them using the packaging script;
> 2) if the files are installed using data_files, you either move them
> to the right position or delete that copy and still copy the proper
> files to the right place

And 2) is more work: you'd need to rm -rf /usr/etc.
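The OSLO_PACKAGE_VERSION trick mentioned above can be sketched roughly like this. The changelog version string and the sed expression are illustrative assumptions, not the real implementation; the actual logic lives in openstack-pkg-tools's pkgos.make and reads real dpkg-parsechangelog output:

```shell
# Simulated line of `dpkg-parsechangelog` output for a hypothetical
# package at Debian version 2:10.0.0-1 (epoch 2, upstream 10.0.0,
# Debian revision 1).
parsed="Version: 2:10.0.0-1"

# Strip the "Version: " prefix, the optional epoch, and the Debian
# revision, keeping only the upstream version that PBR should use.
version=$(printf '%s\n' "$parsed" | \
    sed -n 's/^Version: \([0-9]*:\)\{0,1\}\([^-]*\).*/\2/p')

# PBR reads this environment variable at build time instead of
# deriving a version from git tags.
export OSLO_PACKAGE_VERSION="$version"
echo "$OSLO_PACKAGE_VERSION"
```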
Instead, we could have absolutely everything completely automated, once
and for all, in all distros, with files copied to the right place. In
Debian, I'd write something that parses debian/control, checks whether
a foo-common package exists, and then does the right thing (tm) so
config files are pushed to the correct location. But instead, right
now, we have to deal with the issue by hand. Repeat this three dozen
times, multiply by four distros, and consider how much time we're
losing, just because config files are dropped in /usr/etc. I'm still
convinced we can do better, toward perfect automation, and that the
problem isn't that hard to solve!

>> Let's get there then. In the mean while, don't break stuff.
>
> Nothing is broken, see above.

You'll have a hard time convincing me that config files written to
/usr/etc are not a broken feature. :)

Cheers,

Thomas Goirand (zigo)
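A minimal sketch of the debian/control idea above, assuming the usual one-binary-per-source layout. The control file contents and the keystone-common package name are hypothetical, made up only to show the lookup:

```shell
# Create a toy debian/control declaring a -common binary package.
mkdir -p /tmp/pkgos-demo/debian
cat > /tmp/pkgos-demo/debian/control <<'EOF'
Source: keystone

Package: python3-keystone
Architecture: all

Package: keystone-common
Architecture: all
EOF

pkg="keystone"
# If a ${pkg}-common binary package is declared, config files should be
# shipped there (under /etc); otherwise they go into the main package.
if grep -q "^Package: ${pkg}-common$" /tmp/pkgos-demo/debian/control; then
    config_pkg="${pkg}-common"
else
    config_pkg="$pkg"
fi
echo "config files go into: $config_pkg"
```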
Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg
On 10/09/2017 08:44 AM, Jean-Philippe Evrard wrote:
> If you want to include the fix for PBR, or refresh it, don't hesitate
> to propose a follow-up patch.

That's the issue. I would very much like to propose a follow-up patch,
but Robert has disagreed with its very principle, so I don't see a
route to getting a patch validated, unless some key core contributors
to PBR agree with me first. Also, I don't think it's a good idea to
carry a Debian-specific patch; I would much prefer reaching a
consensus. Such a consensus could be along the lines of: we've failed
to get this issue tackled in a timely manner at the setuptools level,
therefore it's time to act and adopt the pragmatic solution.

> In any case, I don't think it's a good idea to wait for things to
> happen, and expect that uniformity will happen naturally. This is a
> step in the right direction.
>
> On top of that we are still early in the Queens cycle, so it leaves a
> chance to the packagers to react. I think it's good timing.

I very much agree that it's high time this problem was solved. Let's be
constructive and fix the situation! Starting this thread was definitely
a very good idea, especially considering that different people have
different opinions.

Cheers,

Thomas Goirand (zigo)