Re: [openstack-dev] [Nova] The unbearable lightness of specs
Top-posting since I am writing this as a summary email, with some (very) rough proposals on improvements going forward (*)

* Specs have a number of positives that we should not discount:
** Absolutely necessary to sign off on the idea and direction before writing code
** Serve as a way for operators to give feedback
** Serve as documentation once the work has landed (provided that they are kept up to date)

* The current process has a number of shortcomings too (some of these are my own comments, and some are thoughts that people brought up that I incorporated):
** An approval process that creates a bottleneck on a (needlessly?) small team of people
** Tools and a review culture that do not work well for the kind of communication that needs to happen
** Requesting the same format and process for all proposed work, causing delays where they are not necessary and in turn exacerbating the load on the spec-core team. Some problems require a lot less written design discussion than others, but we treat them all the same. Also, due to the general design of Nova, it is significantly harder to make some design decisions without looking at the code too, which the current spec process discourages.
** Coupling the spec review process to a particular release - this has a number of drawbacks that are probably worth their own email, some of which are technical in nature and some of which are social. (A good point was also made that this makes the already poor tooling even worse, as the previous discussion is lost.)

We should also take into account the history behind the current Nova process, and that it was also meant to give people more confidence about the prospects of their code landing in a certain release. This might be something we want to consider in parallel with figuring out the changes to the release cycles that are also happening.
Going forward - some ideas on first steps we could take to improve (purely my own, not a digest from the thread):

* Default to no spec, and be clear on what grounds we are asking for one. Currently this is hard to do, in part I believe because posting a spec in Gerrit carries far more weight than just opening a BP in Launchpad. One idea could be to have a BP repository (that gets mirrored in LP, maybe) that requires only a subset of the info, and require 2 cores (or a certain number of contributors) to vote negatively before a full-blown spec is required.
* Consider specs approved indefinitely once they are merged, and if they miss a release - no big deal, but reserve the right to block the patches should circumstances change. Do release planning separately.
* Start to talk about improvements to tooling. I feel it has been our (OpenStack's) desire to stick to what we know even when it's clear that the tools are sub-par for the job. The integrated release dictates a lot of that, and it might be time to start those discussions.

N.

(*) I feel more discussions on this list could benefit from one

On 06/24/2015 01:42 PM, Daniel P. Berrange wrote:

On Wed, Jun 24, 2015 at 11:28:59AM +0100, Nikola Đipanov wrote:

Hey Nova,

I'll cut to the chase and keep this email short for brevity and clarity:

Specs don't work! They do nothing to facilitate good design happening, if anything they prevent it. The process layered on top, with only a minority (!) of cores being able to approve them, yet being a prereq of getting any work done, makes sure that the absolute minimum that people can get away with will be proposed. This in turn guarantees that no good design collaboration will happen. To add insult to injury, Gerrit and our spec template are a horrible tool for discussing design. Also, the spec format itself works for only a small subset of the design problems Nova development is faced with.

I'd like to see some actual evidence to back up a sweeping statement such as "Specs don't work."
"They do nothing to facilitate good design happening, if anything they prevent it."

Comparing Nova today with Nova before specs were introduced, I think that specs have had a massive positive impact on the amount of good design and critique that is happening. Before specs, the average blueprint had no more than 3 lines of text in its description. Occasionally a blueprint would link to a wiki page or Google doc with some design information, but that was very much the exception.

When I was reviewing features in Nova before specs came along, I spent a lot of time just trying to figure out what on earth the code was actually attempting to address, because there was rarely any statement of the problem being addressed, or any explanation of the design that motivated the code. This made life hard for reviewers trying to figure out if the code was acceptable to merge. It was pretty bad for contributors trying to implement new features too, as they could spend weeks or months writing and submitting code, only to be rejected at the end because the lack of any design discussions meant they missed some
Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting
We seem to be agreeing that having third party CI tools not vote -1 is a good idea. Personally I think it would be more beneficial to make it a rule rather than a recommendation.

John

From: Edgar Magana edgar.mag...@workday.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Friday, 26 June 2015 19:04
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting

Totally agreed!

Edgar

From: Salvatore Orlando
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Thursday, June 25, 2015 at 3:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting

Edgar, in a nutshell my point is that if we want to remove voting rights from every CI, I'm fine with it. However, I think what's being discussed in this thread is already captured very well by [1], and I believe the policy it outlines is perfectly fine for Neutron's purposes.

Salvatore

[1] http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/thirdparty-ci.rst

On 25 June 2015 at 17:08, Edgar Magana edgar.mag...@workday.com wrote:

Thanks for your response, Salvatore. I am not sure what your position on this topic is. Are you fine with removing voting rights from all CIs?
Edgar

From: Salvatore Orlando
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Thursday, June 25, 2015 at 7:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting

On 25 June 2015 at 16:08, John Davidge (jodavidg) jodav...@cisco.com wrote:

Hi all,

Recent neutron third party CI issues have got me thinking again about a topic which we discussed in Vancouver: Should any third party CI have voting rights for neutron patches in Gerrit?

Why should this be a decision for Neutron only?

I'd like to suggest that they shouldn't. A -1 from a third party CI tool can often be an indication that the CI tool itself or the third party plugin is broken, rather than there being issues with the patch under review. I don't think there are many cases where a third party CI tool has caught a genuine issue that Jenkins has missed. With the current voting rights these CI tools cause a lot of noise when they experience problems. As far as I am aware no 3rd party CI tool has better coverage than the upstream one.

Some 3rd party CIs exercise different code paths and might uncover an issue that the upstream CI did not cover. There will surely be people claiming this has happened a lot of times, and that even a single issue found is invaluable; I would agree with that, but I also think that a 3rd party CI does not have to vote to be useful.

I'm not suggesting that the results of these tests be removed from the page altogether - there are some cases where their results are useful to the patch author/reviewer - but removing voting rights (or at least -1 rights) would save a patch from a -1 that might not be particularly meaningful.

Frankly, I find the overwhelming number of CI messages - and email notifications - even more annoying than random -1s. Thankfully you can hide the former and filter out the latter.
From the perspective of a 3rd party CI maintainer I could use myself as an example; I maintain a CI which has now been broken for about 48 hours. I am busy with other tasks and cannot look at it now. I might be a terrible person for this, but that's my problem. If the CI was not voting, at least I would not have annoyed people. (fwiw, I've disabled my CI now.)

Also, I believe we already agreed that a working CI is no longer a requirement, as long as the plugin/driver maintainers can provide reasonable proof that their integration works?

Salvatore

Thoughts?

John

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [VPNaas]How to load kernel module with IPSec?
Curious as to what operating system you are using and which release? Are you running under DevStack or doing an OpenStack install?

Regards,

Paul Michali (pc_m)

On Mon, Jun 29, 2015 at 6:31 AM Zhi Chang chang...@unitedstack.com wrote:

Hi, all. I have some questions about how to load the kernel module for IPSec. I'm using Openswan to build VPNaaS, and there is an error message saying "no kernel code presently loaded" when I run "ipsec verify". My solution is running "service ipsec start" on the host to load the kernel module. Everything goes okay when I run it. But I think the solution is too ungraceful. Does anyone have a simpler solution to this problem instead of running "service ipsec start"? Thx.

Zhi Chang
[openstack-dev] [TripleO] diskimage-builder 1.0.0
Hello all,

DIB has come a long way and we seem to have a fairly stable interface for the elements and the image creation scripts. As such, I think it's about time we commit to a major version release. Hopefully this can give our users the (correct) impression that DIB is ready for use by folks who want some level of interface stability.

AFAICT our bug list does not have any major issues that might require us to break our interface, so I don't see any harm in 'just going for it'. If anyone has input on fixes/features we should consider including before a 1.0.0 release, please speak up now. If there are no objections by next week I'd like to try and cut a release then. :)

Cheers,
Greg
[openstack-dev] [cinder] Huawei CI's problem has been solved, and it is reporting normally now
Hi Mike,

We have solved the problem with the Huawei CI, and it is running and reporting stably now. The logs are accessible. The most recent patches it has posted results to are:

https://review.openstack.org/#/c/147726/
https://review.openstack.org/#/c/147738/

We would very much appreciate it if you could consider removing the -2 review from the Huawei driver, thanks! :)

Best regards,
Liu
Re: [openstack-dev] [puppet] gate-puppet-*-puppet-beaker-rspec-dsvm-centos7 failures
On 06/29/2015 08:50 AM, Emilien Macchi wrote:

Hello, Some of you probably noticed gate-puppet-*-puppet-beaker-rspec-dsvm-centos7 failures happening quite often lately [1].

So first, I submitted an elastic-recheck query: https://review.openstack.org/196673

And a patch in puppet-openstack_extras which should fix our issue: https://review.openstack.org/#/c/196663/

The latter patch does not yet pass on old versions of Puppet; I'm figuring that out, but any help is welcome.

Best,
--
Emilien Macchi
Re: [openstack-dev] [mistral] Liberty 1 milestone is released
Hi again,

Because of some issues during the release process, we had to release the Mistral client under version 1.0.0, which is considered the starting version number for Liberty development. 0.3.0 should be ignored. Global requirements will be updated soon. My apologies for the inconvenience.

Renat Akhmerov @ Mirantis Inc.

On 26 Jun 2015, at 17:25, Renat Akhmerov rakhme...@mirantis.com wrote:

Hi all,

The Liberty 1 milestone for the Mistral server and Mistral client 0.3.0 have been released! Visit the corresponding release pages to see information on fixed bugs and implemented blueprints:

https://launchpad.net/mistral/liberty/liberty-1
https://launchpad.net/python-mistralclient

Thanks to the Mistral team!

Renat Akhmerov @ Mirantis Inc.
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
Is the V3 API going to be a task API like Nova desires (someday it will happen in Nova too)? If so, then it seems like a natural fit for this (aka submit a request, get back a task JSON object that can be polled on; one of the polling states it reports back is 'WAITING' or 'BLOCKED' or ...)

Dulko, Michal wrote:

That's right, it might be painful. A V3 API implementation would also be hard, because then we would need different manager behavior for requests from V2 and V3... So maybe we need some config flag with a deprecation procedure scheduled?

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Monday, June 29, 2015 2:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote:

There are also some similar situations when we actually don't lock on resources. For example - a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshots/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions, so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API, except Mike might kill me...
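The state-based exclusion Michal describes boils down to an atomic compare-and-swap on the resource's state: claim the volume by moving it into a busy state, or fail fast with VolumeIsBusy so the retry is on the caller's side. A minimal sketch of that idea (the state names, dict-backed store, and exception class here are illustrative, not Cinder's actual code):

```python
import threading

class VolumeIsBusy(Exception):
    """The volume is already claimed by another operation."""

# Illustrative in-memory stand-in for the volume state column.
_lock = threading.Lock()
volumes = {"vol-1": "available"}

def claim(vol_id, expected="available", busy="creating-from"):
    # Atomic compare-and-swap: only one caller can move the volume
    # out of the expected state; everyone else fails fast and retries.
    with _lock:
        if volumes.get(vol_id) != expected:
            raise VolumeIsBusy(vol_id)
        volumes[vol_id] = busy

def release(vol_id, state="available"):
    # Return the volume to a usable state once the operation finishes.
    with _lock:
        volumes[vol_id] = state
```

With this scheme, a source volume or cgsnapshot being used to create something would stay claimed for the duration of the operation, so a concurrent delete would get VolumeIsBusy instead of racing.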
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
That's right, it might be painful. A V3 API implementation would also be hard, because then we would need different manager behavior for requests from V2 and V3... So maybe we need some config flag with a deprecation procedure scheduled?

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Monday, June 29, 2015 2:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote:

There are also some similar situations when we actually don't lock on resources. For example - a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshots/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions, so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API, except Mike might kill me...
Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities
FWIW, I liked what you were proposing in the other thread. In thinking about the deployment flow in the Tuskar-UI, I think it would enable exposing and setting the nested stack parameters easily (you choose various resources as displayed in a widget, click a reload/refresh button, and new parameters are exposed).

I agree, I was thinking something similar too. There's a step to pick the larger decisions (implementations of resource types) and then a refresh that will ask Heat to recalculate the full set of parameters.

What might also be neat is if something like heatclient then had support to automatically generate stub yaml environment files based on the output of template-validate. So it could spit out a yaml file that had a parameter_defaults: section with all the expected parameters and their default values; that way the user could then just edit that stub to complete the required inputs.

This is similar to what the Tuskar API was looking to do. I think it'd be awesome to see Heat support it natively.
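The stub-environment idea above could be as simple as walking the parameter schema that template-validate returns and emitting a parameter_defaults: section. A rough sketch, assuming the input is a dict of {name: {'Default': ...}} (the exact shape of the validate output is an assumption here, not heatclient's actual API):

```python
def stub_environment(parameters):
    # Build a YAML environment stub with one parameter_defaults entry
    # per template parameter, pre-filled with the template's default.
    lines = ["parameter_defaults:"]
    for name, schema in sorted(parameters.items()):
        default = schema.get("Default", "")
        lines.append("  %s: %s" % (name, default))
    return "\n".join(lines) + "\n"

# Example: two hypothetical parameters from a validate call.
print(stub_environment({
    "Flavor": {"Default": "baremetal"},
    "ImageId": {"Default": ""},
}))
```

The user would then fill in the blank values (here ImageId) before passing the file back with -e.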
[openstack-dev] [neutron] Scenario test for VPN getting socket error
For review 159746, we are seeing that a traceback is occurring (PS26) that appears to be caused by two rootwrap commands running at the same time and trying to use the same socket for communication with the daemon. This happens in one functional job, and not the other. The difference is that the failing job (dsvm-functional-sswan) has its own rootwrap function, instead of using ip_lib.IPWrapper().

There are two main questions here. One is whether or not we need the custom rootwrap logic, or if ip_lib methods can be used to mount the desired paths and run the commands as needed. There is a mounting of /etc and /var/run. I'm guessing that the former is to allow customizing of ipsec config files for the connection being created. I'm not sure why /var/run is mounted (and if that is interfering with the rootwrap daemon operation).

The other is why this issue is happening with the one job and not the other (or with general use of IPWrapper where long-running and short-running commands happen at once), and how to resolve it. Both jobs do the same tasks, only with different (external) VPN driver processes. In the failure case, an execute() is done from ip_lib's get_devices() for a find operation, and a second execute() is done by send_ip_addr_adv_notif() for an arping operation (long). In the working case, these two operations seem to happen simultaneously w/o incident.

Any thoughts on what may be happening would be appreciated!

Regards,
Paul Michali (pc_m)

Ref: https://review.openstack.org/#/c/159746/28/neutron_vpnaas/tests/functional/common/test_scenario.py
[openstack-dev] [neutron] How to catch security group updates in ML2 mechanism driver?
Hi there,

For my team's networking backend, we want to catch security group updates in our ML2 mechanism driver code. Currently we're doing this by monkey patching the AgentNotifierApi:

    # This section monkeypatches the
    # AgentNotifierApi.security_groups_rule_updated method to ensure that
    # the Calico driver gets told about security group updates at all
    # times. This is a deeply unpleasant hack. Please, do as I say, not
    # as I do.
    #
    # For more info, please see issues #635 and #641.
    original_sgr_updated = rpc.AgentNotifierApi.security_groups_rule_updated

    def security_groups_rule_updated(self, context, sgids):
        LOG.info("security_groups_rule_updated: %s %s" % (context, sgids))
        mech_driver.send_sg_updates(sgids, context)
        original_sgr_updated(self, context, sgids)

    rpc.AgentNotifierApi.security_groups_rule_updated = (
        security_groups_rule_updated
    )

But, as the comment says, this is a hack. Is there a better way?

Many thanks,
Neil
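One shape a "better way" could take is an explicit publish/subscribe registry, where the plugin notifies interested drivers instead of drivers patching the notifier. The sketch below is a generic illustration of that pattern only; the function and event names are hypothetical, not an existing Neutron API:

```python
# Minimal pub/sub registry: drivers subscribe to named events instead of
# monkey-patching the code that emits them.
_callbacks = {}

def subscribe(event, callback):
    # Register a callback to be invoked whenever 'event' is notified.
    _callbacks.setdefault(event, []).append(callback)

def notify(event, **kwargs):
    # Called by the emitting side (hypothetically, the plugin after a
    # security group rule update) to fan out to all subscribers.
    for callback in _callbacks.get(event, []):
        callback(**kwargs)
```

With something like this, a mechanism driver would call subscribe('security_groups_rule_updated', ...) once at init time, and the plugin would call notify(...) at the point where it currently invokes the AgentNotifierApi, with no patching required.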
Re: [openstack-dev] [VPNaas]How to load kernel module with IPSec?
Ah, so Icehouse... From what I recall, there were two problems running RHEL-type operating systems with *Swan. First, they use LibreSwan instead of OpenSwan. Second, there were some config/setup problems with StrongSwan-based connections. Recently, there were some commits to resolve these issues. For the kernel issue that you have, see commit 72e1f670, which creates a LibreSwan driver and deals with the kernel module loading. You may need to backport that fix to run VPN under CentOS.

Regards,

Paul Michali (pc_m)

On Mon, Jun 29, 2015 at 8:26 AM Zhi Chang chang...@unitedstack.com wrote:

Hi, thanks for your reply. My OS is CentOS 6.5 doing an OpenStack install, and my OpenStack version is I.

Regards,
Zhi Chang

------------------ Original ------------------
From: Paul Michali p...@michali.net
Date: Mon, Jun 29, 2015 06:37 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [VPNaas]How to load kernel module with IPSec?

Curious as to what operating system you are using and which release? Are you running under DevStack or doing an OpenStack install?

Regards,

Paul Michali (pc_m)

On Mon, Jun 29, 2015 at 6:31 AM Zhi Chang chang...@unitedstack.com wrote:

Hi, all. I have some questions about how to load the kernel module for IPSec. I'm using Openswan to build VPNaaS, and there is an error message saying "no kernel code presently loaded" when I run "ipsec verify". My solution is running "service ipsec start" on the host to load the kernel module. Everything goes okay when I run it. But I think the solution is too ungraceful. Does anyone have a simpler solution to this problem instead of running "service ipsec start"? Thx.
Zhi Chang
Re: [openstack-dev] For those interested in reading distributed systems papers
Sounds good!

Best Regards,
--
Accela Zhao

On Mon, Jun 29, 2015 at 9:53 AM, Joshua Harlow harlo...@outlook.com wrote:

Since I found this class's site useful and thought others might also...

https://courses.engr.illinois.edu/cs525/sched.htm

(it even appears actively in use! since openstack is mentioned in some slides and papers!)

It has a bunch of papers which IMHO are relevant to openstack (and various other neat distributed systems slides/papers and links...), nice stuff around scheduling, P2P algorithms, reliability...

Good fun weekend reading :-P

-Josh
Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0
On 06/29/2015 08:44 AM, Gregory Haynes wrote:

Hello all, DIB has come a long way and we seem to have a fairly stable interface for the elements and the image creation scripts. As such, I think it's about time we commit to a major version release. Hopefully this can give our users the (correct) impression that DIB is ready for use by folks who want some level of interface stability.

As someone who is using it quite happily in production, I'd love the sense that it is, in fact, production ready. :)

AFAICT our bug list does not have any major issues that might require us to break our interface, so I dont see any harm in 'just going for it'. If anyone has input on fixes/features we should consider including before a 1.0.0 release please speak up now. If there are no objections by next week I'd like to try and cut a release then. :)

Cheers,
Greg
Re: [openstack-dev] [Nova] The unbearable lightness of specs
On 06/29/2015 11:32 AM, Thierry Carrez wrote:

Nikola Đipanov wrote:

It's not only about education - I think Gerrit is the wrong medium to have a design discussion and do design work. Maybe you disagree, as you seem to imply that it worked well in some cases? I've recently seen in more than a few cases how a spec review can easily spiral into a collection of random comments that are hard to put together into a coherent discussion that you could call design work. If you throw the expectation of approval into the mix, I think it basically causes the opposite of good design collaboration to happen.

On Gerrit not being the right tool for specs... Using code review tools to iterate on specs creates two issues:

* Minor comments

Line-by-line code review tools are excellent for reviewing the correctness of lines of code. When switching to specs, you retain some of that "review correctness of all lines" mindset and tend to spot mistakes in the details more than mistakes in the general idea. That, in turn, results in -1 votes that don't really mean the same thing.

* Extra process

Code review tools are designed to produce final versions of documents. For specs we use a template to enforce a minimal amount of detail, but that is already too much for most small features. To solve that issue, we end up having to binary-decide when something is significant enough to warrant a full spec. As with any line in the sand, the process ends up being too much for things that are just beyond the line, and too little for things that are just before it.

IMHO the ideal tool would allow you to start with a very basic description of what feature you want to push. Then a discussion can start, and the spec can be refined to answer new questions or detail the already-sketched-out answers. Simple features can be approved really quickly using a one-sentence spec, while more complex features will develop into a full-fledged detailed document before they get approved. One size definitely doesn't fit all.
And the discussion-based review (as opposed to line-by-line review) discourages nitpicking on style.

You *can* do this with Gerrit: discourage detail review, encourage idea review, and start small and develop the document in future patchsets as needed. It's just that the tool doesn't really encourage that behavior, and the overhead for simple features still means we can't track smallish features with it. As we introduce new tools we might switch the feature approval process to something else. In the meantime, my suggestion would be to use smaller templates, start small and go into details only if needed, and discourage nitpicking -1s.

I fully agree with the above, FWIW. This is *exactly* what I hinted at in the summary email, when I suggested a BP repository, with a problem statement patch that could then potentially evolve into a full-blown spec if needed.

I feel that Gerrit is bad at keeping an easily review-able history of a discussion even for code reviews, and this problem is worse for written text (as you point out), so looking at other tools might be useful at some point.

N.
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote:

There are also some similar situations when we actually don't lock on resources. For example - a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshots/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions, so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API, except Mike might kill me...
[openstack-dev] [puppet] gate-puppet-*-puppet-beaker-rspec-dsvm-centos7 failures
Hello,

Some of you probably noticed gate-puppet-*-puppet-beaker-rspec-dsvm-centos7 failures happening quite often lately [1]. They are likely due to our local mirror being very slow or unresponsive, which seems to happen because of a bad configuration of the 'epel' Yumrepo resource:

https://github.com/stahnma/puppet-module-epel/blob/master/manifests/init.pp#L81-L90

According to Fedora administrators, we should use metalink instead of baseurl.

While I'm preparing a pull request for the EPEL Puppet module and doing some testing to make sure it fixes our CI, I would ask our contributors to carefully check the job status when they send a patch. In the meantime, if they notice an error like [1], please just do 'recheck' to re-trigger Zuul.

Thanks,

[1] http://logs.openstack.org/01/196301/1/check/gate-puppet-ceilometer-puppet-beaker-rspec-dsvm-centos7/d39efaa/console.html#_2015-06-28_16_06_19_946

--
Emilien Macchi
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
On Sun, Jun 28, 2015 at 1:16 PM, Duncan Thomas duncan.tho...@gmail.com wrote: We need mutual exclusion for several operations. Whether that is done by entity queues, locks, state-based locking at the api layer, or something else, we need mutual exclusion. Our current api does not lend itself to looser consistency, and I struggle to come up with a sane api that does - nobody doing an operation on a volume wants it to happen maybe, at some time... What about deletes? They can happen later on, which can help in these situations I think. -- Avishay Traeger
Re: [openstack-dev] [ceilometer][all] the max length of an id
On 27/06/2015 4:29 AM, Chris Dent wrote: Providing or displaying meaning is why we have other fields or lookup by reference. An id and title are not the same thing. The easiest way to follow the rules (that is to create unique, universal, persistent, portable and meaningless identifiers) is to use solely a UUID and nothing else when creating resource ids. If you then need to get additional information on that thing you know you've got a good id with which to get that information because it is an actual identifier, by definition. i take it this applies to namespacing as well? i'm just curious if there is a reasoning behind the 255 length id limit? was there a reason UUIDs were not acceptable? or is this a byproduct of arbitrary, custom ids from the early days? cheers, -- gord
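To make Chris's point concrete: a canonical UUID is a fixed 36 characters, far inside any 255-character column, so a UUID-only id policy would never bump into the limit being asked about. A quick stdlib check:

```python
import uuid

# A version-4 UUID in its canonical hex-and-dashes rendering is always
# 36 characters: 32 hex digits plus 4 dashes.
resource_id = str(uuid.uuid4())
print(len(resource_id))  # 36
assert len(resource_id) <= 255
```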
Re: [openstack-dev] [neutron] How to catch security group updates in ML2 mechanism driver?
Yes, look at this patch: https://review.openstack.org/#/c/174588/ On Mon, Jun 29, 2015 at 3:42 PM, Neil Jerram neil.jer...@metaswitch.com wrote: Hi there, For my team's networking backend, we want to catch security group updates in our ML2 mechanism driver code. Currently we're doing this by monkey patching the AgentNotifierApi:

# This section monkeypatches the AgentNotifierApi.security_groups_rule_updated
# method to ensure that the Calico driver gets told about security group
# updates at all times. This is a deeply unpleasant hack. Please, do as I say,
# not as I do.
#
# For more info, please see issues #635 and #641.
original_sgr_updated = rpc.AgentNotifierApi.security_groups_rule_updated

def security_groups_rule_updated(self, context, sgids):
    LOG.info("security_groups_rule_updated: %s %s" % (context, sgids))
    mech_driver.send_sg_updates(sgids, context)
    original_sgr_updated(self, context, sgids)

rpc.AgentNotifierApi.security_groups_rule_updated = (
    security_groups_rule_updated
)

But, as the comment says, this is a hack. Is there a better way? Many thanks, Neil -- Best Regards, The G.
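Stripped of the Neutron specifics, the snippet above is the classic wrap-and-delegate monkeypatch: save the original method, install a wrapper that runs an extra hook, then call through. A self-contained illustration of just that pattern (class and method names here are stand-ins, not Neutron's):

```python
class Notifier:
    """Stand-in for the class being patched."""
    def rule_updated(self, sgids):
        return "notified %s" % sgids

calls = []  # records what the driver's hook saw

# Keep a reference to the original so the wrapper can delegate to it.
_original = Notifier.rule_updated

def rule_updated(self, sgids):
    calls.append(sgids)            # the driver's extra hook runs first
    return _original(self, sgids)  # then upstream behaviour is preserved

Notifier.rule_updated = rule_updated

print(Notifier().rule_updated([1, 2]))  # notified [1, 2]
print(calls)                            # [[1, 2]]
```

The patch linked above is the better way: a proper notification hook, so drivers subscribe instead of rebinding methods on a class they don't own.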
Re: [openstack-dev] [QA][Tempest] Proposing Jordan Pittier for Tempest Core
On Mon, Jun 22, 2015 at 04:23:30PM -0400, Matthew Treinish wrote: Hi Everyone, I'd like to propose we add Jordan Pittier (jordanP) to the tempest core team. Jordan has been a steady contributor and reviewer on tempest over the past few cycles and he's been actively engaged in the Tempest community. Jordan has had one of the higher review counts on Tempest for the past cycle, and he has consistently been providing reviews that show insight into both the project internals and its future direction. I feel that Jordan will make an excellent addition to the core team. As per the usual, if the current Tempest core team members would please vote +1 or -1(veto) to the nomination when you get a chance. We'll keep the polls open for 5 days or until everyone has voted. So, after 5 days it's been all positive feedback. Welcome to the team Jordan. -Matt Treinish
Re: [openstack-dev] [neutron] How to catch security group updates in ML2 mechanism driver?
Cool, thank you! On 29/06/15 14:08, Gal Sagie wrote: Yes, look at this patch: https://review.openstack.org/#/c/174588/
Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities
We could do likewise in the environment:

resource_registry:
  OS::TripleO::ControllerConfig: puppet/controller-config.yaml
  ...

constraints:
  OS::TripleO::ControllerConfig:
    - allowed_values:
        - puppet/controller-config.yaml
        - foo/other-config.yaml

These constraints would be enforced at stack validation time such that the environment would be rejected if the optional constraints were not met. I like this approach. Originally, I was thinking it might be cleaner to encode the relationship in the opposite direction. Something like this in puppet/controller-config.yaml:

implements: OS::TripleO::ControllerConfig

But then, you leave it up to the external tools (a UI, etc) to know how to discover these implementing templates. If they're explicitly listed in a list as in your example, that helps UIs / APIs more easily present these choices. Maybe it could work both ways. Yeah the strict interface definition is basically the TOSCA approach referenced by Thomas in my validation thread, and while I'm not opposed to that, it just feels like overkill for this particular problem. I don't see any mutually exclusive logic here, we could probably consider adding resource_registry constraints and still add interfaces later if it becomes apparent we really need them - atm I'm just slightly wary of adding more complexity to already complex templates, and also of relying on deep introspection to match up interfaces (when we've got no deep validation capabilities at all in heat atm) vs some simple rules in the environment. Sounds like we've got enough consensus on this idea to be worth raising a spec, I'll do that next week. I had originally been thinking of it like slagle describes, from the child up to the parent as well. What I like about that approach is that it achieves a more pluggable model when you think about extensions that aren't accepted or applicable in TripleO upstream.
If someone comes along and adds a new ControllerConfig to your above example, they have to edit whatever environment you're talking about that defines the constraints (I'll call it overcloud-something.yaml for now). This becomes a problem from a packaging point of view, especially when you factor in non-TripleO integrators (without revealing too much inside baseball, think partner integrations). How do I add in an extra package (RPM, DEB, whatever) that provides that ControllerConfig and have it picked up as a valid option? We don't want to be editing the overcloud-something.yaml because it's owned by another package and there's the potential for conflicts if multiple extra implementations start stepping on each other. An interface/discovery sort of mechanism, which I agree is more complex, would be easier to work with in those cases.
[openstack-dev] [mistral] Team meeting - 06/29/2015
Hi, This is a reminder that we’ll have a team meeting today at 16.00 UTC at #openstack-meeting. Agenda: * Review action items * Current status (progress, issues, roadblocks, further plans) * Duplicating messages on executors * Liberty-2 planning * Open discussion Feel free to propose your own topics (either by replying to this email or modifying https://wiki.openstack.org/wiki/Meetings/MistralAgenda). Thanks Renat Akhmerov @ Mirantis Inc.
Re: [openstack-dev] Let's get rid of tablib and cliff-tablib
Hi, On 29/06/2015 11:03, Thomas Goirand wrote: cliff-tablib is used for the unit tests of things like python-neutronclient. The annoying bit is that cliff-tablib depends on tablib, which itself is a huge mess. It has loads of 3rd party embedded packages and most of them aren't Python 3.x compatible. tablib includes copies of various dependencies in its tablib/packages/ directory. Some of them are for Python 2, others are for Python 3. It would be better to use dependencies (requirements in setup.py), not copies. Did you try to contact the tablib authors to ask them to completely remove tablib/packages/? setup.py uses a different list of packages on Python 2 and Python 3. I tried python3 setup.py install: the bytecode compilation of markup.py fails with an obvious SyntaxError, since the code is for Python 2. But there is also markup3.py, which is compiled successfully. Even if the compilation of markup.py fails, python setup.py install succeeds with exit code 0. What is your problem? setup.py should be fixed to skip markup.py on Python 3, and to skip markup3.py on Python 2. A workaround is to manually remove the file, depending on the Python major version. Note: pip install tablib works on Python 3 (pip uses the binary wheel package). I've seen that for python-openstackclient, recently, cliff-tablib was added. Let's do the reverse, and remove cliff-tablib whenever possible. If we really want to keep using cliff-tablib, then someone has to do the work to port tablib to Python 3 (good luck with that...). cliff-tablib is used in tests. If you remove the cliff-tablib dependency, tests will obviously fail. What do you propose? Modify the tests to reimplement cliff-tablib? Remove the tests? Victor
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
There are also some similar situations when we actually don’t lock on resources. For example – a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshot/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions so retrying a request would be on the user side. From: Duncan Thomas [mailto:duncan.tho...@gmail.com] Sent: Sunday, June 28, 2015 12:16 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot We need mutual exclusion for several operations. Whether that is done by entity queues, locks, state-based locking at the api layer, or something else, we need mutual exclusion. Our current api does not lend itself to looser consistency, and I struggle to come up with a sane api that does - nobody doing an operation on a volume wants it to happen maybe, at some time... On 28 Jun 2015 07:30, Avishay Traeger avis...@stratoscale.com wrote: Do we really need any of these locks? I'm sure we could come up with some way to remove them, rather than make them distributed. On Sun, Jun 28, 2015 at 5:07 AM, Joshua Harlow harlo...@outlook.com wrote: John Griffith wrote: On Sat, Jun 27, 2015 at 11:47 AM, Joshua Harlow harlo...@outlook.com wrote: Duncan Thomas wrote: We are working on some sort of distributed replacement for the locks in cinder, since file locks are limiting our ability to do HA. I'm afraid you're unlikely to get any traction until that work is done. I also have a concern that some backends do not handle load well, and so benefit from the current serialisation.
It might be necessary to push this lock down into the driver and allow each driver to choose its locking model for snapshots. IMHO (and I know this isn't what everyone thinks) I'd rather have cinder (and other projects) be like this from Top Gear ( https://www.youtube.com/watch?v=xnWKz7Cthkk ) where that Toyota truck is virtually indestructible, vs. trying to be a high-maintenance Ferrari (when most OpenStack projects do a bad job of trying to be one). So, maybe for a time (and I may regret saying this) we could consider focusing on reliability and consistency, being the Toyota, vs. handling some arbitrary amount of load (trying to be a Ferrari). Also I'd expect/think operators would rather prefer a Toyota at this stage of OpenStack :) Ok enough analogies, ha. Well said Josh, I guess I've been going about this all wrong by not using the analogies :) Exactly!! IMHO that should be the new OpenStack mantra: 'built from components/projects that survive like a Toyota truck', haha. Part 2 (https://www.youtube.com/watch?v=xTPnIpjodA8) and part 3 (https://www.youtube.com/watch?v=kFnVZXQD5_k) are funny/interesting also :-P Now we just need OpenStack to be that reliable and tolerant of failures/calamities/... -Josh On 27 Jun 2015 06:18, niuzhenguo niuzhen...@huawei.com wrote: Hi folks, Currently we use a lockfile to protect the create operations from concurrent deletes of the source volume/snapshot; we use exclusive locks on both the delete and create sides, which will ensure that: 1. If a create of VolA from snap/VolB is in progress, any delete requests for snap/VolB will wait until the create is complete. 2. If a delete of snap/VolA is in progress, any create from snap/VolA will wait until the snap/VolA delete is complete.
But the exclusive locks will also result in: 3. If a create of VolA from snap/VolB is in progress, any other create requests from snap/VolB will wait until the create is complete. So create operations from the same volume/snapshot cannot proceed in parallel; please reference bp [1]. I’d like to change the current filelock or introduce a new lock to oslo.concurrency. Proposed change: Add exclusive (write) locks for delete operations and shared (read) locks for create operations.
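The shared(read)/exclusive(write) split proposed above can be sketched with a small in-process reader-writer lock. This is only an illustration of the semantics (the real work would live in oslo.concurrency and eventually a distributed primitive, neither of which this stands in for):

```python
import threading

class ReadWriteLock:
    """Many concurrent readers (creates from a snapshot) OR one writer
    (delete of that snapshot) -- the exclusion table from the proposal."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:          # creates wait for an in-flight delete
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:  # delete waits for all creates
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = ReadWriteLock()
lock.acquire_read()
lock.acquire_read()   # two parallel creates from the same snapshot: allowed
lock.release_read()
lock.release_read()
lock.acquire_write()  # delete now gets exclusive access
lock.release_write()
print("ok")
```

This removes case 3 (create blocking create) while keeping cases 1 and 2.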
Re: [openstack-dev] [VPNaas]How to load kernel module with IPSec?
Hi, thanks for your reply. My OS is CentOS 6.5 and I am doing an OpenStack install; my OpenStack version is I. Regards, Zhi Chang -- Original -- From: Paul Michali p...@michali.net; Date: Mon, Jun 29, 2015 06:37 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org; Subject: Re: [openstack-dev] [VPNaas]How to load kernel module with IPSec? Curious as to what operating system you are using and which release? Are you running under DevStack or doing an OpenStack install? Regards, Paul Michali (pc_m) On Mon, Jun 29, 2015 at 6:31 AM Zhi Chang chang...@unitedstack.com wrote: Hi, all I have some questions about how to load the kernel module for IPSec. I'm using Openswan to build VPNaaS and there is an error message saying no kernel code presently loaded when I run ipsec verify. My solution is running service ipsec start on the host to load the kernel module. Everything goes okay when I run it. But I think the solution is too ungraceful. Does anyone have a simpler solution to this problem instead of running service ipsec start? Thx. Zhi Chang
[openstack-dev] [VPNaas]How to load kernel module with IPSec?
Hi, all I have some questions about how to load the kernel module for IPSec. I'm using Openswan to build VPNaaS and there is an error message saying no kernel code presently loaded when I run ipsec verify. My solution is running service ipsec start on the host to load the kernel module. Everything goes okay when I run it. But I think the solution is too ungraceful. Does anyone have a simpler solution to this problem instead of running service ipsec start? Thx. Zhi Chang
[openstack-dev] [gnocchi][ceilometer] help with gnocchi measures api (return empty list)
Hi I have installed gnocchi on my devstack (stable v1) and am trying to check the flow, autoscaling with heat etc.. I have encountered an issue with measures fetching; it looks like the ceilometer compute agent does send measures to gnocchi, as I can see the REST calls in the logs all ending with 202 statuses. For example: 35.248.18.191 - - [29/Jun/2015:10:47:11 +] POST /v1/resource/instance/819267ea-6fcb-418b-a197-ff8b65e94234/metric/cpu/measures HTTP/1.1 202 208 - python-requests/2.7.0 CPython/2.7.6 Linux/3.13.0-45-generic Or 135.248.18.191 - - [29/Jun/2015:10:47:12 +] POST /v1/resource/instance/819267ea-6fcb-418b-a197-ff8b65e94234/metric/cpu_util/measures HTTP/1.1 202 207 - python-requests/2.7.0 CPython/2.7.6 Linux/3.13.0-45-generic When I fetch the measures I always get an empty list as a response: curl -X GET -H X-Auth-Token: 7a20206f3820412488edc7c9a0db7b29 http://135.248.18.191:8041/v1/resource/instance/819267ea-6fcb-418b-a197-ff8b65e94234/metric/cpu_util/measures | python -mjson.tool [] I have tried with swift and with file storage, getting the same result. Also I have seen the measures files were created in the fs stack@tshtilma-gnocchi-devstack:/opt/stack/data/gnocchi$ ls 11788bea-c270-4078-8536-d028b10e5d33 4737b486-658b-43c4-a1c5-66f5ee4ff06c 98336517-5aeb-4230-9294-d3dd12fe2756 b752676e-306e-44a3-b422-8dd675956ad5 ff343aa2-8ac1-42d8-965c-b1997ab435ad 3c590d88-4a4b-4b59-82ff-1ea7cf330f42 4abc98d8-70f1-4efe-8716-2cf51a9d7783 9c04b04d-8aa9-4bf9-9162-8b16c6651f73 cb3ecc08-89aa-4c6f-8ff0-2c89ed0e1eb0 locks 3ea375e0-e91d-4842-a4c2-261c1eeaf85e 7719230a-63e0-47aa-9fa6-a8a610e2bafc a0de628e-eac2-4fdd-8c07-fa72a787e101 d4984544-b892-407e-acc9-7b5534b4a41b measure 43fd3650-d60e-40eb-a183-35f315373bd1 88d7564e-e4a4-4438-8602-0d8f4c9103a1 b4341cde-532e-4715-8367-fd1ce0c927f3 f13c033b-606b-4e68-b1bb-58bee540d3e8 stack@tshtilma-gnocchi-devstack:/opt/stack/data/gnocchi$ cd measure/
stack@tshtilma-gnocchi-devstack:/opt/stack/data/gnocchi/measure$ ls 4abc98d8-70f1-4efe-8716-2cf51a9d7783 cb3ecc08-89aa-4c6f-8ff0-2c89ed0e1eb0 ff343aa2-8ac1-42d8-965c-b1997ab435ad All other calls work as expected. For example: stack@tshtilma-gnocchi-devstack:~/devstack$ curl -X GET -H X-Auth-Token: 7a20206f3820412488edc7c9a0db7b29 http://135.248.18.191:8041/v1/resource/instance/819267ea-6fcb-418b-a197-ff8b65e94234/metric/cpu_util/ | python -mjson.tool { archive_policy: { aggregation_methods: [ std, count, 95pct, min, max, sum, median, mean ], back_window: 0, definition: [ { granularity: 0:05:00, points: 12, timespan: 1:00:00 }, { granularity: 1:00:00, points: 24, timespan: 1 day, 0:00:00 }, { granularity: 1 day, 0:00:00, points: 30, timespan: 30 days, 0:00:00 } ], name: low }, created_by_project_id: 5394701c-e992-4cc3-aa65-e59fee108295, created_by_user_id: a4e65de3-51ec-4530-a8e9-67416001f3b0, id: ff343aa2-8ac1-42d8-965c-b1997ab435ad, name: cpu_util, resource: { created_by_project_id: 5394701c-e992-4cc3-aa65-e59fee108295, created_by_user_id: a4e65de3-51ec-4530-a8e9-67416001f3b0, ended_at: null, id: 819267ea-6fcb-418b-a197-ff8b65e94234, project_id: 023099a6-81d5-43cb-aa0a-6f0a75d60628, revision_end: null, revision_start: 2015-06-29T10:50:22.143077+00:00, started_at: 2015-06-29T10:28:12.664034+00:00, type: instance, user_id: 02772acb-bd6e-4307-b4ce-9b906be716f2 } } Any help will be greatly appreciated Tomer local.conf [[local|localrc]] ADMIN_PASSWORD=password MYSQL_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=tokentoken enable_plugin gnocchi https://github.com/openstack/gnocchi stable/v1.0 enable_service q-lbaas enable_service ceilometer-api,ceilometer-collector enable_service ceilometer-acompute enable_service ceilometer-alarm-notifier,ceilometer-alarm-evaluator disable_service n-net enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta #enable_service heat,h-api,h-eng,h-api-cfn
enable_service gnocchi-api #IMAGE_URLS=http://ftp.free.fr/mirrors/fedora.redhat.com/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2; #GNOCCHI_COORDINATOR_URL=redis://localhost:6379?timeout=5 #Enable an eager processing of the ceilometer pipeline (every 10sec): CEILOMETER_PIPELINE_INTERVAL=10
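A thought, purely a guess on my part: a 202 only confirms a measure was accepted into the incoming queue (which matches the pending files under the measure/ directory in the listing above); measures become visible in GET .../measures per granularity only after the background aggregation has processed them, so an empty list right after POST is not by itself an error. The coarse shape of that aggregation, for the 5-minute level of the 'low' policy, is just bucketed means, roughly:

```python
from datetime import datetime, timedelta

def five_min_bucket_mean(measures):
    """Group (timestamp, value) points into 5-minute windows and average
    them -- the shape of what a mean/5-minute archive policy level stores.
    (Illustrative only; not gnocchi's actual implementation.)"""
    buckets = {}
    for ts, value in measures:
        key = ts.replace(minute=(ts.minute // 5) * 5, second=0, microsecond=0)
        buckets.setdefault(key, []).append(value)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

start = datetime(2015, 6, 29, 10, 0)
raw = [(start + timedelta(minutes=m), float(m)) for m in range(10)]
# Ten 1-minute samples collapse into two 5-minute aggregated points.
print(five_min_bucket_mean(raw))
```

So it may be worth checking that whatever service does this processing is actually running in the devstack setup, and waiting at least one granularity period before fetching.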
Re: [openstack-dev] [all]deprecating [test-]requirements-PYN.txt
Excerpts from Robert Collins's message of 2015-06-29 15:59:12 +1200: Hi, so we're nearly ready to deprecate the python-version-specific requirements files. Once we have infra's requirements cross checking jobs all copacetic again, we should be able to move forward. There isn't a specific spec for this in pbr, and I wanted to get some broad input into the manner of the deprecation. As a reminder, we have several bits of context to consider. Firstly, we're aligning with upstream packaging precepts, so we want to remove all non-deployment-specific usage of requirements.txt and similar files. Secondly, the Python version specific files are incompatible with universal wheels, which are desirable because our infrastructure only knows how to build one wheel when a tag is made, and it means less redundant downloads for users with multiple python versions. Thirdly, we can't do any backwards incompatible changes in pbr without breaking any existing users of $thing. Because we're a setup_requires, and setuptools can't handle version dependencies of setup_requires. So whatever we do will affect all stable branches immediately, in all gate jobs. I think we should do three things: - error if universal builds are requested and python versioned requirements files are present. That may break some of the Oslo stable libs, since not all of them were ready for Python 3 last cycle, and certainly not before. Have you done any analysis to find those libs so we can get patches ready preemptively? Doug - warn on stdout if versioned requirements files are present - start reflecting the 'test' extra into tests_require in the setup_kwargs The downside of this is that it will warn indefinitely for existing stable branches. But I think that that is tolerable. If it's not, we could write a timestamp somewhere and only warn once/day, but I think that that is likely to lead to confusion, not clarity.
-Rob
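For readers following along, the universal-wheel-friendly replacement for separate requirements-py2/py3 files is a single requirements.txt using environment markers, roughly like this (package names and versions are illustrative, not any project's real list):

```
# one requirements.txt for every interpreter; markers gate per-version deps
six>=1.9.0
futures>=3.0; python_version < '3.0'   # only installed on Python 2
```

With markers, one wheel can be built per tag and pip resolves the right dependency set at install time.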
Re: [openstack-dev] [Nova] The unbearable lightness of specs
Nikola Đipanov wrote: It's not only about education - I think Gerrit is the wrong medium to have a design discussion and do design work. Maybe you disagree as you seem to imply that it worked well in some cases? I've recently seen on more than a few cases how a spec review can easily spiral into a collection of random comments that are hard to put together in a coherent discussion that you could call design work. If you throw in the expectation of approval into the mix, I think it basically causes the opposite of good design collaboration to happen. On Gerrit not being the right tool for specs... Using code review tools to iterate on specs creates two issues: * Minor comments Line-by-line code review tools are excellent for reviewing the correctness of lines of code. When switching to specs, you retain some of that review correctness of all lines mindset and tend to spot mistakes in the details more than mistakes in the general idea. That, in turn, results in -1 votes that don't really mean the same thing. * Extra process Code review tools are designed to produce final versions of documents. For specs we use a template to enforce a minimal amount of details, but those are already too much for most small features. To solve that issue, we end up having to binary-decide when something is significant enough to warrant a full spec. As with any line in the sand, the process ends up being too much for things that are just beyond the line, and too little for things that are just before. IMHO the ideal tool would allow you to start with a very basic description of what feature you want to push. Then a discussion can start, and the spec can be refined to answer new questions or detail the already-sketched-out answers. Simple features can be approved really quickly using a one-sentence spec, while more complex features will develop into a full-fledged detailed document before they get approved. One size definitely doesn't fit all.
And the discussion-based review (as opposed to line-by-line review) discourages nitpicking on style. You *can* do this with Gerrit: discourage detail review, encourage idea review, and start small and develop the document in future patchsets as needed. It's just not really built to encourage that behavior, and the overhead still means we can't track smallish features with it. As we introduce new tools we might switch the feature approval process to something else. In the mean time, my suggestion would be to use smaller templates, start small and go into details only if needed, and discourage nitpicking -1s. -- Thierry Carrez (ttx)
[openstack-dev] [gnocchi][ceilometer] help with gnocchi measures api (return empty)
Hi I have installed gnocchi on my devstack (stable v1) and am trying to check the flow, autoscaling with heat etc.. I have encountered an issue with measures fetching; it looks like the ceilometer compute agent does send measures to gnocchi, as I can see the REST calls in the logs all ending with 201 statuses. For example:
Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG
On 06/26/2015 11:10 AM, Dmitry Tantsur wrote: On 06/26/2015 04:57 PM, Joe Gordon wrote: To address this, nova has the following document: http://docs.openstack.org/developer/nova/api_microversion_history.html Btw 2.3 looks big, were it really one feature (and one commit, as we're talking about people deploying from master)? Yes, it was one feature and one commit. -jay
Re: [openstack-dev] Let's get rid of tablib and cliff-tablib
Excerpts from Victor Stinner's message of 2015-06-29 12:01:27 +0200: Hi, Le 29/06/2015 11:03, Thomas Goirand a écrit : cliff-tablib is used for the unit tests of things like python-neutronclient. The annoying bit is that cliff-tablib depends on tablib, which itself is a huge mess. It has loads of 3rd party embedded packages and most of them aren't Python 3.x compatible. tablib includes copies of various dependencies in its tablib/packages/ directory. Some of them are for Python 2, others are for Python 3. It would be better to use dependencies (requirements in setup.py), not copies. Do you try to contact tablib authors to ask them to remove completly tablib/packages/? tablib is managed by Kenneth Reitz, and as with his requests library he feels vendoring is the best way to distribute dependencies. For a while I've had a to-do on my list to rewrite those formatters to not use tablib, it just hasn't been a high priority. Doug setup.py uses a different list of packages on Python 2 and Python 3. I tried python3 setup.py install: the bytecode compilation of markup.py fails with an obvious SyntaxError, the code is for Python 2. But there is also markup3.py which is compiled successfully. Even if the compilation of the markup.py fails, python setup.py install succeed with the exit code 0. What is your problem? setup.py should be fixed to skip markup.py on Python 3, and skip markup3.py on Python 2. A workaround is to remove manually the file depending on the Python major version. Note: pip install tablib works on Python 3 (pip uses the binary wheel package). I've seen that for python-openstackclient, recently, cliff-tablib was added. Let's do the reverse, and remove cliff-tablib whenever possible. If we really want to keep using cliff-tablib, then someone has to do the work to port tablib to Python3 (good luck with that...). cliff-tablib is used in tests. If you remove the cliff-tablib dependency, tests will obviously fail. What do you propose? 
Modify tests to reimplement cliff-tablib? Remove tests? Victor __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
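Victor's suggested fix (skip markup.py on Python 3 and markup3.py on Python 2) could be sketched roughly as follows. This is a hypothetical helper, not tablib's actual setup.py; only the file names follow tablib's tablib/packages/ layout:

```python
import sys

# Hypothetical helper illustrating the suggestion above: choose which of the
# vendored markup modules should NOT be installed, based on the interpreter's
# major version. The selection logic itself is an assumption for illustration.
def modules_to_exclude(py_major):
    """Return vendored module paths to skip at install time."""
    if py_major >= 3:
        # markup.py uses Python 2 only syntax and fails byte-compilation on 3
        return ["tablib/packages/markup.py"]
    # markup3.py is the Python 3 variant, useless on Python 2
    return ["tablib/packages/markup3.py"]

print(modules_to_exclude(sys.version_info[0]))
```

A real setup.py would feed this into its package/module lists (or, better, drop the vendored copies entirely as Victor suggests).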
Re: [openstack-dev] [ceilometer] virtual mid-cycle planning
On Fri, 26 Jun 2015, Chris Dent wrote: Ceilometer contributors and other interested parties, It's been pointed out that the topic titles at https://etherpad.openstack.org/p/ceilometer-liberty-midcycle and the agenda items and descriptions at https://etherpad.openstack.org/p/ceilometer-liberty-midcycle-agenda are a bit sparse, and thus it is hard to decide whether you are interested in a session or not.

The use of etherpads here is intentional: if you have knowledge about the topic or questions that you think need to be asked, put them on the etherpad. Prad and I are facilitating the process of choosing the agenda, _not_ defining it. The defining needs to be done as a group. If no one steps up to define a topic then it is pretty clear we don't need to talk about that one. So: if you care about a topic, get in there on https://etherpad.openstack.org/p/ceilometer-liberty-midcycle-agenda and write something about it.

Also please keep in mind that not all topics will be addressed. Only those topics for which there is demonstrated interest, sufficient quorum and sufficient time overlap will be addressed. Thanks. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under them.

On 29 June 2015 at 17:22, Joshua Harlow harlo...@outlook.com wrote: Is the V3 API going to be a task API like Nova desires (someday it will happen in Nova too)? If so then it seems like a natural fit for this (aka submit a request, get back a task JSON object that can be polled on; one of the polling states it reports back is 'WAITING' or 'BLOCKED' or ...)

Dulko, Michal wrote: That's right, it might be painful. A V3 API implementation would also be hard, because then we would need different manager behavior for requests from V2 and V3... So maybe we need some config flag with a deprecation procedure scheduled?

*From:* Duncan Thomas [mailto:duncan.tho...@gmail.com] *Sent:* Monday, June 29, 2015 2:46 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote: There's also some similar situations when we actually don't lock on resources. For example, a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshot/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API except Mike might kill me... 
-- Duncan Thomas
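For what it's worth, the atomic state change plus state-based exclusion Michal describes usually boils down to a conditional UPDATE (a compare-and-swap on the status column), so two concurrent requests cannot both claim the same volume. A rough sketch with illustrative table, column, and state names, not Cinder's actual schema:

```python
import sqlite3

class VolumeIsBusy(Exception):
    """Raised when the volume is already claimed by another operation."""

def claim_for_snapshot(conn, volume_id):
    # The WHERE clause makes the transition atomic: only one concurrent
    # caller can flip 'available' to 'creating_snapshot'; the loser sees
    # rowcount == 0 and the error is pushed back to the user to retry.
    cur = conn.execute(
        "UPDATE volumes SET status = 'creating_snapshot' "
        "WHERE id = ? AND status = 'available'", (volume_id,))
    conn.commit()
    if cur.rowcount == 0:
        raise VolumeIsBusy(volume_id)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO volumes VALUES ('vol-1', 'available')")
claim_for_snapshot(conn, "vol-1")          # first claim succeeds
try:
    claim_for_snapshot(conn, "vol-1")      # second claim is rejected
except VolumeIsBusy:
    print("busy")                          # → busy
```

As the thread notes, the hard part isn't the mechanism, it's that surfacing VolumeIsBusy is a tenant-visible behaviour change.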
Re: [openstack-dev] [QA][Tempest] Proposing Jordan Pittier for Tempest Core
Thanks a lot! I just want to say that I am happy about this and I look forward to continuing to work on Tempest with you all. Cheers, Jordan

On Mon, Jun 29, 2015 at 3:59 PM, Matthew Treinish mtrein...@kortar.org wrote: On Mon, Jun 22, 2015 at 04:23:30PM -0400, Matthew Treinish wrote: Hi Everyone, I'd like to propose we add Jordan Pittier (jordanP) to the Tempest core team. Jordan has been a steady contributor and reviewer on Tempest over the past few cycles and he's been actively engaged in the Tempest community. Jordan has had one of the higher review counts on Tempest for the past cycle, and he has consistently been providing reviews that show insight into both the project internals and its future direction. I feel that Jordan will make an excellent addition to the core team. As per usual, if the current Tempest core team members would please vote +1 or -1 (veto) on the nomination when you get a chance. We'll keep the polls open for 5 days or until everyone has voted. So, after 5 days it's been all positive feedback. Welcome to the team, Jordan. -Matt Treinish
[openstack-dev] Fwd: [cinder][oslo] Locks for create from volume/snapshot
On 29 June 2015 at 18:18, Joshua Harlow harlo...@outlook.com wrote: Duncan Thomas wrote: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under them.

Sounds like certain tasks shouldn't have been accepted in the first place then, no? Sounds like before acceptance of a piece of work there needs to be some verification that what is being requested doesn't conflict with what is underway/planned.

That should be fun to code - you'd need to have current and future states of all tasks, and an atomic transactional way of changing the future state, from any API service... I'd say it's better to accept all tasks and report failure for the ones that had their resource go away before they got executed... far less is needed in the way of transactional atomic primitives - you can just lock the state of each needed resource; as long as it is always in some defined order, you don't deadlock, and if a resource is found in a bad state, release (in reverse order) and fail the task.

After all, you don't try to hire a contractor to fix your plumbing on the 23rd of the month if your house is scheduled to be demolished on the 21st (analogies ftw)... If cancelling a contractor is cheap and booking one at the last second is expensive (or your schedule is very busy and unclear), then maybe you do...
Re: [openstack-dev] Fwd: [cinder][oslo] Locks for create from volume/snapshot
Duncan Thomas wrote: On 29 June 2015 at 18:18, Joshua Harlow harlo...@outlook.com wrote: Duncan Thomas wrote: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under them.

Sounds like certain tasks shouldn't have been accepted in the first place then, no? Sounds like before acceptance of a piece of work there needs to be some verification that what is being requested doesn't conflict with what is underway/planned.

That should be fun to code - you'd need to have current and future states of all tasks, and an atomic transactional way of changing the future state, from any API service... I'd say it's better to accept all tasks and report failure for the ones that had their resource go away before they got executed... far less is needed in the way of transactional atomic primitives - you can just lock the state of each needed resource; as long as it is always in some defined order, you don't deadlock, and if a resource is found in a bad state, release (in reverse order) and fail the task.

Sure, sounds like something like an 'execution planner' (for lack of a better term), or you just look at how you'd do this in the real world and mimic that (us humans seem to have found a way to do this without too much issue, so I don't see why computers shouldn't be able to mirror something similar).

After all, you don't try to hire a contractor to fix your plumbing on the 23rd of the month if your house is scheduled to be demolished on the 21st (analogies ftw)... If cancelling a contractor is cheap and booking one at the last second is expensive (or your schedule is very busy and unclear), then maybe you do... 
Or just don't be nutty and book contractors after your house is demolished; less nutty people ftw ;)
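Duncan's lock discipline above (acquire resource locks in one fixed global order, release in reverse, bail out if a resource is in a bad state) can be sketched in a few lines. Resource ids, the sort-by-id ordering, and the 'error' state check are all illustrative assumptions:

```python
import threading

# Toy resource table: one lock and one state per resource id. A real service
# would keep state in a database; this just demonstrates the ordering rule.
LOCKS = {rid: threading.Lock() for rid in ("snap-7", "vol-1", "vol-2")}
STATE = {"vol-1": "available", "vol-2": "available", "snap-7": "error"}

def run_task(resource_ids, action):
    ordered = sorted(resource_ids)   # the single defined global order
    held = []
    try:
        for rid in ordered:
            LOCKS[rid].acquire()
            held.append(rid)
            if STATE[rid] == "error":
                # Resource found in a bad state: fail the task; the finally
                # block releases whatever we hold, in reverse order.
                raise RuntimeError("%s is in a bad state" % rid)
        return action()
    finally:
        for rid in reversed(held):
            LOCKS[rid].release()

print(run_task(["vol-2", "vol-1"], lambda: "ok"))   # → ok
```

Because every task sorts its resource ids the same way, no two tasks can hold each other's locks in opposite orders, which is what rules out deadlock.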
[openstack-dev] Out Of Office
I am currently travelling and am out of the office until Thursday the 9th of July; I will be picking up emails periodically.
Re: [openstack-dev] [all] grenade with external plugins for the big tent - how to use
On Sun, Jun 28, 2015 at 2:02 PM, Chris Dent chd...@redhat.com wrote: I started extracting ceilometer from devstack into its own plugin. This is working in local tests. It revealed that it would likely be problematic without there also being a grenade plugin. So I started that too.

I should be able to look at the patches below this afternoon, but a couple of thoughts real quick... You might need to do this in the other order, i.e. build the Ceilometer plugin for Grenade first, because it will need to make sure the right bits are included for the target phase, which doesn't use stack.sh and the DevStack plugin support. The Grenade plugin can handle both, looking to see if the target DevStack config has Ceilometer set as an external plugin, and if so include the right bits, otherwise handle the existing in-repo stuff. The DevStack plugin shouldn't change too much re function names, so once the right includes are in place it should go smoothly. Most of the trouble we've seen is getting the right things included in the target Grenade phase. I'll loop back on this later today (hopefully) and see how close I really am here... ;) dt -- Dean Troyer dtro...@gmail.com
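The "look at the target DevStack config" check Dean describes might amount to something as small as the helper below. The function and config text are hypothetical; only the `enable_plugin <name> <giturl> [branch]` line format follows DevStack's documented plugin convention:

```python
# Hypothetical sketch: decide whether a DevStack local config enables
# Ceilometer as an external plugin. DevStack plugins are enabled with lines
# of the form "enable_plugin <name> <giturl> [branch]".
def uses_external_plugin(localrc_text, name="ceilometer"):
    for line in localrc_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "enable_plugin" and parts[1] == name:
            return True
    return False

conf = "enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer\n"
print(uses_external_plugin(conf))   # → True
```

A Grenade plugin could use a check like this to choose between the external-plugin path and the existing in-repo handling.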
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
So then you ask, how does such a thing occur? Well, ask yourself how you'd know not to schedule a contractor on the 23rd when your house gets demolished on the 21st. You'd probably be keeping a piece of paper around that says XYZ task is scheduled on ABC date, and before adding a new one you'd make sure that it doesn't conflict with any of the planned ones. Sounds like something that could be stored in a database (or a text file, or other...); of course it can get a lot more complex (aka jeez, that sounds like a query planner in a way) but let's not go there right now :-P

Joshua Harlow wrote: Duncan Thomas wrote: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under them. Sounds like certain tasks shouldn't have been accepted in the first place then, no? Sounds like before acceptance of a piece of work there needs to be some verification that what is being requested doesn't conflict with what is underway/planned. After all, you don't try to hire a contractor to fix your plumbing on the 23rd of the month if your house is scheduled to be demolished on the 21st (analogies ftw)... -Josh

On 29 June 2015 at 17:22, Joshua Harlow harlo...@outlook.com wrote: Is the V3 API going to be a task API like Nova desires (someday it will happen in Nova too)? If so then it seems like a natural fit for this (aka submit a request, get back a task JSON object that can be polled on; one of the polling states it reports back is 'WAITING' or 'BLOCKED' or ...) Dulko, Michal wrote: That's right, it might be painful. A V3 API implementation would also be hard, because then we would need different manager behavior for requests from V2 and V3... So maybe we need some config flag with a deprecation procedure scheduled? 
*From:* Duncan Thomas [mailto:duncan.tho...@gmail.com] *Sent:* Monday, June 29, 2015 2:46 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote: There's also some similar situations when we actually don't lock on resources. For example, a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshot/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API except Mike might kill me... 
-- Duncan Thomas
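Josh's "piece of paper" check could be a toy conflict test like the one below, run before a task is accepted. The single rule (nothing scheduled on a resource after a planned demolition) and all names and dates are made up for illustration; none of this is an OpenStack API:

```python
from datetime import date

# The schedule: (resource, task, date) entries already accepted.
planned = [("house", "demolish", date(2015, 6, 21))]

def accept(resource, task, when, schedule):
    """Accept the task only if no demolition is planned on or before its date."""
    for res, other_task, other_when in schedule:
        if res == resource and other_task == "demolish" and other_when <= when:
            return False          # conflicts with the planned demolition
    schedule.append((resource, task, when))
    return True

print(accept("house", "fix plumbing", date(2015, 6, 23), planned))  # → False
print(accept("house", "paint", date(2015, 6, 20), planned))         # → True
```

The thread's open question is exactly where this check lives and how to make it atomic across concurrent API services, which the toy version sidesteps.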
Re: [openstack-dev] [Fuel][Fuel-Library] Nominate Aleksandr Didenko for fuel-library core
+1
[openstack-dev] [Magnum] Continuing with heat-coe-templates
Hello team, I've been doing work in Magnum recently to align our templates with the upstream templates from larsks/heat-kubernetes[1]. I've also been porting these changes to the stackforge/heat-coe-templates[2] repo. I'm currently not convinced that maintaining a separate repo for Magnum templates (stackforge/heat-coe-templates) is beneficial for Magnum or the community. Firstly it is very difficult to draw a line on what should be allowed into the heat-coe-templates. We are currently taking out changes[3] that introduced useful autoscaling capabilities in the templates but that didn't fit the Magnum plan. If we are going to treat the heat-coe-templates in that way then this extra repo will not allow organic development of new and old container engine templates that are not tied into Magnum. Another recent change[4] in development is smart autoscaling of bays which introduces parameters that don't make a lot of sense outside of Magnum. There are also difficult interdependency problems between the templates and the Magnum project such as the parameter fields. If a required parameter is added into the template the Magnum code must be also updated in the same commit to avoid functional test failures. This can be avoided using Depends-On: #xx feature of gerrit, but it is an additional overhead and will require some CI setup. Additionally we would have to version the templates, which I assume would be necessary to allow for packaging. This brings with it is own problems. As far as I am aware there are no other people using the heat-coe-templates beyond the Magnum team, if we want independent growth of this repo it will need to be adopted by other people rather than Magnum commiters. I don't see the heat templates as a dependency of Magnum, I see them as a truly fundamental part of Magnum which is going to be very difficult to cut out and make reusable without compromising Magnum's development process. 
I would propose to delete/deprecate the usage of heat-coe-templates and continue with the usage of the templates in the Magnum repo. How does the team feel about that? If we do continue with the large effort required to try and pull out the templates as a dependency, then we will need to increase the visibility of the repo and greatly increase the reviews/commits on it. We also have a fairly significant backlog of work to align the heat-coe-templates with the templates in the Magnum repo. Thanks, Tom [1] https://github.com/larsks/heat-kubernetes [2] https://github.com/stackforge/heat-coe-templates [3] https://review.openstack.org/#/c/184687/ [4] https://review.openstack.org/#/c/196505/
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
Duncan Thomas wrote: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under them.

Sounds like certain tasks shouldn't have been accepted in the first place then, no? Sounds like before acceptance of a piece of work there needs to be some verification that what is being requested doesn't conflict with what is underway/planned. After all, you don't try to hire a contractor to fix your plumbing on the 23rd of the month if your house is scheduled to be demolished on the 21st (analogies ftw)... -Josh

On 29 June 2015 at 17:22, Joshua Harlow harlo...@outlook.com wrote: Is the V3 API going to be a task API like Nova desires (someday it will happen in Nova too)? If so then it seems like a natural fit for this (aka submit a request, get back a task JSON object that can be polled on; one of the polling states it reports back is 'WAITING' or 'BLOCKED' or ...) Dulko, Michal wrote: That's right, it might be painful. A V3 API implementation would also be hard, because then we would need different manager behavior for requests from V2 and V3... So maybe we need some config flag with a deprecation procedure scheduled?

*From:* Duncan Thomas [mailto:duncan.tho...@gmail.com] *Sent:* Monday, June 29, 2015 2:46 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote: There's also some similar situations when we actually don't lock on resources. For example, a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. 
We would need some kind of currently_used_to_create_snapshot/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API except Mike might kill me...

-- Duncan Thomas
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/

woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty?

This is the big question and is one of the things listed on the potential agenda for the mid-cycle: when we do the splits, do we deprecate or delete the old code? Given the high chance of us missing some of the potential issues, it seems like hashing it out some before the mid-cycle is a good idea. The two big overarching issues (that inform a lot of the details) that I'm aware of are:

* If we delete, then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc.
* If we deprecate, will people bother to use the new stuff?

I'm sure there are plenty of others. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
[openstack-dev] [ceilometer] Aodh has been imported, next steps
Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ You should add it to your review list on Gerrit I guess. I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty? -- Julien Danjou # Free Software hacker # http://julien.danjou.info
Re: [openstack-dev] [Nova] The unbearable lightness of specs
On Mon, Jun 29, 2015 at 4:32 AM, Thierry Carrez thie...@openstack.org wrote: Nikola Đipanov wrote: It's not only about education - I think Gerrit is the wrong medium to have a design discussion and do design work. Maybe you disagree, as you seem to imply that it worked well in some cases? I've recently seen in more than a few cases how a spec review can easily spiral into a collection of random comments that are hard to put together into a coherent discussion that you could call design work. If you throw in the expectation of approval into the mix, I think it basically causes the opposite of good design collaboration to happen.

On Gerrit not being the right tool for specs... Using code review tools to iterate on specs creates two issues:

* Minor comments: Line-by-line code review tools are excellent for reviewing the correctness of lines of code. When switching to specs, you retain some of that review correctness of all lines mindset and tend to spot mistakes in the details more than mistakes in the general idea. That, in turn, results in -1 votes that don't really mean the same thing.

* Extra process: Code review tools are designed to produce final versions of documents. For specs we use a template to enforce a minimal amount of detail, but that is already too much for most small features. To solve that issue, we end up having to binary-decide when something is significant enough to warrant a full spec. As with any line in the sand, the process ends up being too much for things that are just beyond the line, and too little for things that are just before it.

IMHO the ideal tool would allow you to start with a very basic description of what feature you want to push. Then a discussion can start, and the spec can be refined to answer new questions or detail the already-sketched-out answers. Simple features can be approved really quickly using a one-sentence spec, while more complex features will develop into a full-fledged detailed document before they get approved. 
One size definitely doesn't fit all. And discussion-based review (as opposed to line-by-line review) discourages nitpicking on style.

This is exactly what we realized in Neutron, and why we moved to a Request For Enhancement (RFE) process [1]. So far, it's been pretty good. A slimmed-down spec (only required if an RFE bug needs a bit more fleshing out), design documents merging in-tree with the code, and no deadlines. I've not heard any complaints so far and we're liking the new model a lot better. Thanks, Kyle [1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066217.html

You *can* do this with Gerrit: discourage detail review + encourage idea review, and start small and develop the document in future patchsets as needed. It's just not really encouraging that behavior for the job, and the overhead for simple features still means we can't track smallish features with it. As we introduce new tools we might switch the feature approval process to something else. In the mean time, my suggestion would be to use smaller templates, start small and go into details only if needed, and discourage nitpicking -1s. -- Thierry Carrez (ttx)
Re: [openstack-dev] [Neutron] [QOS] Request for Additional QoS capabilities
Adam: I agree with your point on QOS being used to address quality rather than protection, but the lines surely blur. Especially for this use case as we are trying to protect against excessive traffic. Only the volume of this traffic is harmful to other tenants and the network, not the traffic itself. For example, when QoS is used to restrict the traffic from a tenant to ensure the network doesn’t become congested, that ensure network quality AND protect against excessive traffic. That said, I am not hung up that this must be a QoS feature. We just want a way to protect the network, this tenant and other tenants. If FWaaS handles the traffic restriction in a centralized manner at a centralized point, then I don’t see how this will properly handle the use case. If a single VM on a compute host is excessive in creating connections this has the potential to degrade all VMs on that host unless the restriction is on that VM’s port. So I am not in favor of handling this via FWaaS unless the implementation will allow the granularity to be finer than the router level. If the rules and rates are configured against the router but effectively applied against all ports on the networks that connect to the router then this might be workable. I guess I should study the FWaaS plans a bit more completely. John From: Adam Lawson [mailto:alaw...@aqorn.com] Sent: Monday, June 29, 2015 12:56 AM To: OpenStack Development Mailing List (not for usage questions) Cc: lionel.zer...@huawei.com; Derek Chamorro (dechamor); Eran Gampel Subject: Re: [openstack-dev] [Neutron] [QOS] Request for Additional QoS capabilities If fwaas code is at the router level, it would seem that null routing might be one method of handling ddos, making fwaas a potentially suitable program. Qos seems to address quality issues rather than focusing on protection from unauthorized or malicious traffic whether that traffic originates from the inside or externally. That seems again, to me, as a fwaas-centric function. 
Just my two cents. ;)

On Jun 28, 2015 6:19 PM, John Joyce (joycej) joy...@cisco.com wrote: Gal: I am also slow to jump between this and other work so I think I should be the one apologizing. I think we are receptive to any of the approaches: QoS, FWaaS or Security Groups. I am not an expert on FWaaS but from a quick look it seemed like the FWaaS granularity was at the router level. We would want this per Neutron port (e.g. per VM port, although we don't want to limit the possibility for this to be per container or per bare metal port). Allowing an aggregate across all ports of the VM would be very nice, but not a strict requirement. Do you see this as an issue going the FWaaS route? Have you been making any headway getting it in there? One detailed comment after looking through the reviews: would there be any issue in adding a "reject-with-tcp-reset" option? The DDoS coming from a VM could be due to a virus within the VM or maybe just an overly aggressive tenant. I know the team that runs our cloud offering has experienced excessive connection requests coming from a VM. I can try to get the exact scenario that triggered this. The net is that all tenants on that host can be affected, especially with an OVS-based vswitch. John

From: Gal Sagie [mailto:gal.sa...@gmail.com] Sent: Tuesday, June 23, 2015 2:43 PM To: OpenStack Development Mailing List (not for usage questions) Cc: lionel.zer...@huawei.com; Derek Chamorro (dechamor); Eran Gampel Subject: Re: [openstack-dev] [Neutron] [QOS] Request for Additional QoS capabilities

Hi John, Sorry for the delayed response as I was on vacation with no internet connection (you don't know how much you miss it until you don't have it). The work in terms of coding is pretty much done for the reference implementation. 
We initially tried to push it as a security group extension, but there is a strong objection to changing the security group API, so FWaaS can be the next best candidate if we can find support or other uses for this (like your use case). (Of course, work will need to be added to support the connection limit; we tried to tackle brute force prevention, which I personally see as a more concerning attack vector internally.) Out of curiosity, can you describe scenarios of a DDoS attack coming from an internal VM? I would assume most DDoS will happen from external traffic or a combined attack from various internal VMs (and then this might no longer fit as QoS). But if you feel this belongs in QoS, this can certainly be added on top of the framework as Miguel suggested. Thanks, Gal.

On Fri, Jun 19, 2015 at 12:39 AM, John Joyce (joycej) joy...@cisco.com wrote: Gal: I had seen the brute force blueprint and noticed how close the use case was. Can you tell me the current status of the
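The mechanism John and Gal are discussing (per-port connection limits with a tcp-reset reject, plus brute-force rate limiting) can be illustrated with plain iptables rules of the kind a reference implementation might program on the compute host. This is only a sketch of the underlying primitives, not the proposed Neutron API: the tap device name and the thresholds are made up.

```shell
# Hypothetical rules on the compute node; "tap1234" stands in for the VM port.

# Cap concurrent TCP connections from the VM, answering excess SYNs with a
# TCP RST (the "reject-with-tcp-reset" behaviour John asks about).
iptables -A FORWARD -m physdev --physdev-in tap1234 -p tcp --syn \
    -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset

# Rate-limit new connection attempts (brute-force mitigation).
iptables -A FORWARD -m physdev --physdev-in tap1234 -p tcp --syn \
    -m hashlimit --hashlimit-above 20/sec --hashlimit-mode srcip \
    --hashlimit-name vm_syn_limit -j DROP
```

Because rules like these sit on the VM's own port, a single noisy VM is throttled before it can degrade other VMs on the host - the granularity John says a router-level FWaaS rule would miss.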
[openstack-dev] [mistral] Team meeting minutes - 06/29/2015
As usual, Minutes: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-06-29-16.00.html Log: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-06-29-16.00.log.html The next meeting will be on July 7 at the same time. Looking forward to seeing you all again. Renat Akhmerov @ Mirantis Inc. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Newb looking to contribute
On 06/27/2015 11:06 AM, Jeff Learman wrote: I'm an OpenStack newbie, but a seasoned programmer with decades of experience in data communications (especially IP stack lower layers) and embedded systems. I'm fluent in Python, C, C++, and Java.

Lots of other good info on the list, but I am guessing you are best looking at Neutron based on your skills. Modular L2 and so on might be better places for you to make contributions. Nova is the management of virtual machines: create, destroy, etc. As such, it tends to consume resources managed by other services. It originally owned the networking stuff, but that got split into its own project, Quantum, now renamed Neutron. While there are certainly network smarts required for Nova, much more is required in Neutron.

I'm looking for some pro-bono work to do, and am open to any suggestions, advice, or pleas for help. I'll need a bit of mentoring, mostly in terms of mentioning terms to study up on. I know about as much about OpenStack as I can learn from the Wikipedia entry. I started setting it up on Ubuntu on Cisco UCS for a project where I worked, but no longer work there. I don't have any resources other than a Windows laptop and the Internet, but I could wrestle up an x86-based Linux box if necessary (not a rack server, though -- low budget; I'd take an old tower, install a new MOBO, and go from there.)

So long as it has virtualization extensions on it, an old machine should be fine. I think you will want the resources. If you are doing networking stuff, it might make sense to have a couple to play with so you are not just dealing with kernel-level network virtualization.

I'm willing to do tedious grunt work, as long as I'm learning something in the process (at least, to begin with.) For example, if there's a desire to convert to Python 3, that'd be a great way to get involved, learn a lot, and make a contribution, with minimal deep knowledge required about OpenStack, and hopefully relatively minimal risk. Thanks!
Jeff
Re: [openstack-dev] [neutron] QoS code sprint
Today we were thinking a little bit about the patch work we have to do/have in flight, and its dependencies. http://fileshare.ajo.es/QoS/P6290039.jpg This is how it looks (probably missing stuff). Notation:
* Green are single patches (in some cases those can/should be divided)
* Red are blocks which we could more or less do in parallel
* Orange is optional stuff.

Today Ihar and I have been able to use some time on the top left block to merge stubs for the api extension and service; even if they are not final, they are still installable, so we can create the experimental jobs. First thing we could do in the morning could be to iterate over the diagram and assign a local tenant for the blocks, which could serve as a reference for remote collaborators to ask/participate via IRC :)

Vikram Choudhary wrote: Thanks for arranging this Miguel. Looking forward to contributing in the best way we can.

On 26-Jun-2015 3:33 pm, Miguel Angel Ajo mangel...@redhat.com wrote: Hi everybody, Next week, from Tue (Jun 30th) to Thu (Jul 2nd), a few of us will be physically [1] attending the neutron QoS coding sprint in Ra'anana (Israel) @ the Red Hat office. And a few others have expressed their will to join us remotely (Thanks!!) :) I guess a reasonable format for the sprint is in the form of short meetings followed by coding hours. We thought it was good to sync with others during the first day in the neutron meeting (Tue / Jun 30th, 14:00 UTC #openstack-meeting) about any progress, blockers or doubts, so we have requested a timeslot for it :) All the other days, we also plan to sync on #openstack-neutron @ ~14:00 UTC with the remote participants. TL;DR ... In an effort to collaborate I'm extending [1] to include the list of patches we're working on; please list any of them missing in there.
Since we live in different timezones, I believe we can let others address small/nit/not-big structural changes which may need discussion by the people in other timezones, and we can use a locking mechanism with the etherpad + #openstack-neutron to say who's working on a patch. By the way, I think the value of gathering is that we can all be together to discuss and iterate very quickly, so during the Israel working hours I believe we may (generally) prioritize the people physically attending the sprint to work on patches, to avoid blocking on dependent parts. Thanks everybody, and sorry for the long message :) [1] https://etherpad.openstack.org/p/neutron-liberty-qos-code-sprint
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
Excerpts from Duncan Thomas's message of 2015-06-29 07:54:27 -0700: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under it.

What I hate about it is that it doesn't actually work at all. I recently wrestled with it to try and write functional tests in shade's gate with devstack, and it was revealed to me that whatever task interface Rackspace uses, it was not what is available in glance's trunk. At the time, Glance used an executor that just spawns a greenthread on the API request that creates the task, and if it fails, that's it, game over, the task exists in limbo forever. So, I would suggest fixing that one before using it as a model for any others.
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
Clint Byrum wrote: Excerpts from Duncan Thomas's message of 2015-06-29 07:54:27 -0700: Do we know what is so hated about the glance task API? Tasks and entity queues give the required exclusion, if you accept that tasks can fail if previous tasks in the queue can cause things to be pulled out from under it.

What I hate about it is that it doesn't actually work at all. I recently wrestled with it to try and write functional tests in shade's gate with devstack, and it was revealed to me that whatever task interface Rackspace uses, it was not what is available in glance's trunk. At the time, Glance used an executor that just spawns a greenthread on the API request that creates the task, and if it fails, that's it, game over, the task exists in limbo forever. So, I would suggest fixing that one before using it as a model for any others.

Yes, that was my understanding as well; it just needs some TLC and all that to get it into shape (not that the idea itself is bad or anything)...
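Clint's complaint is essentially about fire-and-forget execution: a greenthread spawned on the API request that dies leaves its task row in 'processing' forever. A minimal sketch of the missing bookkeeping (hypothetical names, not Glance's actual code) - the point is simply that the executor must always record a terminal state:

```python
import threading


def run_task(task, work):
    """Run `work` for `task`, always leaving the task in a terminal state."""
    task["status"] = "processing"
    try:
        work()
    except Exception as exc:
        # Without this handler a crashed worker leaves the task stuck in
        # "processing" limbo forever -- the behaviour being complained about.
        task["status"] = "failure"
        task["message"] = str(exc)
    else:
        task["status"] = "success"


task = {"status": "pending"}
worker = threading.Thread(target=run_task, args=(task, lambda: 1 / 0))
worker.start()
worker.join()
# The failed task now reports "failure" instead of hanging in "processing".
```

A real fix would persist the state change to the task table, but the shape is the same: failure paths must write a terminal status.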
Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot
On Mon, Jun 29, 2015 at 03:45:56PM +0300, Duncan Thomas wrote: On 29 June 2015 at 15:23, Dulko, Michal michal.du...@intel.com wrote: There are also some similar situations when we actually don't lock on resources. For example, a cgsnapshot may get deleted while creating a consistencygroup from it. From my perspective it seems best to have atomic state changes and state-based exclusion in the API. We would need some kind of currently_used_to_create_snapshot/volumes/consistencygroups states to achieve that. Then we would also be able to return VolumeIsBusy exceptions, so retrying a request would be on the user side.

I'd agree, except that gives quite a big behaviour change in the tenant-facing API, which will break clients and scripts. Not sure how to square that circle... I'd say V3 API, except Mike might kill me...

I'd prefer not to add another item to the list of things to get HA, much less one on the scale of a new version. As far as I can see, we have 3 cases where we use or need to use locks:

1- Locking multiple writing access to a resource
2- Preventing modification of a resource being used for reading
3- Backend drivers

1- Locking multiple writing access to a resource

These locks can most likely be avoided if we implement atomic state changes (with compare-and-swap) and use the current state to prevent multiple writes on the same resource, since writes change the status of the resource. There's already a spec proposing this [1].

2- Preventing modification of a resource in read use

I only see 2 options here:
- Limit the number of readers to 1 and use Tooz's locks as a DLM. This would be implemented quite easily, although it would not be very efficient.
- Implement shared locks in Tooz or in the DB. One way to implement this in the DB would be to add a field with a counter of tasks currently using the resource for reading.
Modifications to this counter would use compare-and-swap: checking the status when increasing the counter, and doing the increase in the DB instead of in the Cinder node. Status changes would also work with compare-and-swap and, besides checking the current status for availability, would check that the counter is 0. The drawback of the DB implementation is that an aborted operation would leave the resource locked, but this could be solved if we use TaskFlow for operations and decrement the counter in the revert method. One big advantage is that we don't need heartbeats to be sent periodically to prevent locks from being released, and it's easy to pass the lock from the API to the Volume node. If we implement this in Tooz we could start by implementing it in only 1 driver and recommend only using that until the rest are available.

3- Backend drivers

Depending on the driver, they may not need locks, or they could do with file locks local to the node (since Cinder would be preventing multiple write access to the same resource), or they may need a DLM if they need, for example, to prevent simultaneous operations on the same pool from different nodes. For this case Tooz would be the best solution, since drivers should not access the DB and Tooz allows using file locks as well as distributed locking.

Cheers, Gorka

[1]: https://review.openstack.org/#/c/149894/
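Case 1 above - atomic state changes with compare-and-swap - needs no lock at all: it is a single UPDATE guarded by the expected current state, and the affected row count tells the caller whether it won the race. A toy sketch with sqlite3 (illustrative schema and names, not Cinder's actual model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO volumes VALUES ('vol-1', 'available')")


def compare_and_swap(conn, vol_id, expected, new):
    """Atomically move a volume from `expected` to `new` status.

    Returns True if this caller won the transition. A racing caller sees
    rowcount 0 and can raise VolumeIsBusy instead of waiting on a lock.
    """
    cur = conn.execute(
        "UPDATE volumes SET status = ? WHERE id = ? AND status = ?",
        (new, vol_id, expected))
    conn.commit()
    return cur.rowcount == 1


assert compare_and_swap(conn, "vol-1", "available", "deleting")       # wins
assert not compare_and_swap(conn, "vol-1", "available", "attaching")  # raced
```

The read counter of case 2 works the same way: an `UPDATE ... SET readers = readers + 1 WHERE status = 'available'` increments in the DB rather than in the Cinder node, so it stays race-free.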
Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0
Excerpts from Gregory Haynes's message of 2015-06-29 05:44:18 -0700: Hello all, DIB has come a long way and we seem to have a fairly stable interface for the elements and the image creation scripts. As such, I think it's about time we commit to a major version release. Hopefully this can give our users the (correct) impression that DIB is ready for use by folks who want some level of interface stability. AFAICT our bug list does not have any major issues that might require us to break our interface, so I don't see any harm in 'just going for it'. If anyone has input on fixes/features we should consider including before a 1.0.0 release please speak up now. If there are no objections by next week I'd like to try and cut a release then. :)

+1.0
[openstack-dev] [Ironic] weekly subteam status report
Hi, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. Bugs (dtantsur) As of Mon, 29 Jun 16:00 UTC - Open: 150 (-1). 5 new, 52 in progress (+4), 0 critical, 11 high (+1) and 9 incomplete (-1) - Nova bugs with Ironic tag: 23. 0 new, 0 critical, 0 high Bug dashboard (http://ironic-bugs.divius.net/) seriously revamped: - shows stats for nova bugs with ironic tag - shows confirmed bugs, as they ideally also require triaging - new glamour design ilo, drac and ucs were added to the official bug tags list Neutron/Ironic work (jroll) === - still looking for reviews, mostly have agreement within subteam - https://review.openstack.org/#/c/188528/ -- I haven't personally reviewed this yet but Sukhdev seems to think it's good to go - https://review.openstack.org/#/c/187829/ -- will be pushing an update today or tomorrow Nova Liaisons (jlvillal mrda) === - 25-Jun-2015: jlvillal will be on leave for the month of July - Note above that http://ironic-bugs.divius.net/ now shows Nova bugs with ironic tag as well. Suggestions are welcome on how to improve it. - Nova multi-compute-host spec: https://review.openstack.org/#/c/194453/ Testing (adam_g/jlvillal) == - 25-Jun-2015: Initial hand off meeting occurred from adam_g to jlvillal explaining functional testing work. - 25-Jun-2015: jlvillal has been spending time working on local infrastructure issues. Working on getting an Ubuntu Cloud image to run devstack, in hopes of having an easily reproducible test environment. Ran into an issue where Horizon came up but then unable to schedule a bare metal instance successfully. Have not resolved the issue yet. - 25-Jun-2015: jlvillal will be away from work for the month of July. Will resume work on functional testing in August. 
- Functional testing Etherpad: https://etherpad.openstack.org/p/IronicFunctionalTestingSubTeam

Inspector (dtantsur) === First devstack job is ready, but blocked on devstack bug https://bugs.launchpad.net/devstack/+bug/1469160

Bifrost (TheJulia) = - Bifrost's test path has been switched over to the newer dynamic inventory roles, which allows greater end-use flexibility. - Working toward breaking larger pieces up into more re-usable chunks.

Drivers == iRMC (naohirot) - https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z Status: Reactive (solicit for core team's review and approval) - iRMC Virtual Media Deploy Driver bp/irmc-virtualmedia-deploy-driver bp/non-glance-image-refs bp/automate-uefi-bios-iso-creation bp/local-boot-support-with-partition-images bp/whole-disk-image-support bp/ipa-as-default-ramdisk Status: Active (spec review is ongoing) - Enhance Power Interface for Soft Reboot and NMI bp/enhance-power-interface-for-soft-reboot-and-nmi Status: Active (code review is ongoing) - iRMC out of band inspection - bp/ironic-node-properties-discovery

Until next week, --ruby [0] https://etherpad.openstack.org/p/IronicWhiteBoard
[openstack-dev] Out Of Office
I am currently travelling and am out of the office until Thursday the 9th July, I will be picking up emails periodically.
Re: [openstack-dev] [devstack] [swift] [ceilometer] installing ceilometermiddleware
Is Swift the only project that uses the ceilometermiddleware - or just the only project that uses ceilometermiddleware that doesn't already have an oslo.config instance handy? FWIW there's a WIP patch that's trying to bring a *bit* of oslo.config love to the keystone middleware for policy.json [1]. Not sure if a similar approach could solve the broker/url/parsing issue described in that other thread. If swift is the only project that uses ceilometermiddleware currently, it seems to make sense to move the installation to lib/swift in devstack? -Clay 1. https://review.openstack.org/#/c/149930/

On Sun, Jun 28, 2015 at 5:06 AM, Chris Dent chd...@redhat.com wrote: On Sat, 27 Jun 2015, Chris Dent wrote: * What code should be calling and hosting install_ceilometermiddleware? Since it is lib/swift that is using it, that makes some sense? Especially since it already has a relatively long block of configuration instruction. I've put up a devstack review for this change: https://review.openstack.org/#/c/196378/ -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] Why doesn't Swift cache object data?
On Fri, Jun 26, 2015 at 8:16 PM, Michael Barton m...@weirdlooking.com wrote: What's the logical difference between having object data in memory on a memcache server and having it in page cache on an object server? +1 - about a syscall - i.e. not much - I think memcache does its own heap management - so it's probably all userspace - but the locality is all wrong - just do it on the object nodes [1]! ... if you want object data served from memory - just turn on keep_cache_private and crank up keep_cache_size [2] -Clay 1. concurrent GETs would help serve the warmed copies first - https://review.openstack.org/#/c/117710/ 2. mind your /proc/fs/xfs/stat graphs tho - maybe not an issue if your object data filesystem is on an SSD storage policy tho
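For anyone wanting to try Clay's suggestion: the two options he names are object-server settings, so the relevant fragment of object-server.conf looks roughly like the sketch below (the size value is an arbitrary example, not a recommendation).

```ini
[app:object-server]
use = egg:swift#object
# Keep page-cache pages even for requests to private (authenticated)
# objects; by default only public objects are left in the cache.
keep_cache_private = true
# Largest object size, in bytes, eligible to stay in page cache after a GET.
keep_cache_size = 67108864
```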
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
On 29/06/2015 11:40 AM, Chris Dent wrote: On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty?

i think we should follow up with the packagers. if i understand correctly, the location of the code is not known from a user pov; it's the packagers that build the appropriate packages for them to use. if from the packagers' pov they just need to work against Aodh, then i would lean more to removing alarming from the Ceilometer repo asap to avoid maintaining duplicate code bases and the eventual divergence of the two.

This is the big question and is one of the things listed on the potential agenda for the mid-cycle. When we do the splits, do we deprecate or delete the old code? Given the high chance of us missing some of the potential issues, it seems like hashing it out some before the mid-cycle is a good idea. The two big overarching issues (that inform a lot of the details) that I'm aware of are:
* If we delete then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc.
* If we deprecate will people bother to use the new stuff?

i would think/hope the experience from the end user doesn't actually change. ie. all the same packaged services remain. I'm sure there are plenty of others. -- gord
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
I'm afraid the user experience will change because of the API. Do we have a plan about it? Will we interact with Aodh through ceilometer-api first? Or make users go to the aodh-api service? So I agree with Gordon that code cleanup is the more preferred option, because we can't maintain two versions simultaneously. But we need to think more about end users: is it appropriate to just remove options from ceilometer-api?

On Mon, Jun 29, 2015 at 10:47 PM, gordon chung g...@live.ca wrote: On 29/06/2015 11:40 AM, Chris Dent wrote: On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty? i think we should follow up with the packagers. if i understand correctly, the location of the code is not known from a user pov; it's the packagers that build the appropriate packages for them to use. if from the packagers' pov they just need to work against Aodh, then i would lean more to removing alarming from the Ceilometer repo asap to avoid maintaining duplicate code bases and the eventual divergence of the two. This is the big question and is one of the things listed on the potential agenda for the mid-cycle. When we do the splits, do we deprecate or delete the old code? Given the high chance of us missing some of the potential issues, it seems like hashing it out some before the mid-cycle is a good idea. The two big overarching issues (that inform a lot of the details) that I'm aware of are: * If we delete then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc. * If we deprecate will people bother to use the new stuff? i would think/hope the experience from the end user doesn't actually change. ie. all the same packaged services remain.
I'm sure there are plenty of others. -- gord
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
Hi, I think removing options from the API requires a version bump. So if we plan to do this, it should be introduced in v3 as opposed to v2, which should remain the same and be maintained for two cycles (assuming that we still have this policy in OpenStack). If this is achievable by removing the old code and relying on the new repo, that would be the best; if not, then we need to figure out how to freeze the old code. Best Regards, Ildikó

-Original Message- From: Nadya Shakhat [mailto:nprival...@mirantis.com] Sent: June 29, 2015 21:52 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

I'm afraid the user experience will change because of the API. Do we have a plan about it? Will we interact with Aodh through ceilometer-api first? Or make users go to the aodh-api service? So I agree with Gordon that code cleanup is the more preferred option, because we can't maintain two versions simultaneously. But we need to think more about end users: is it appropriate to just remove options from ceilometer-api? On Mon, Jun 29, 2015 at 10:47 PM, gordon chung g...@live.ca wrote: On 29/06/2015 11:40 AM, Chris Dent wrote: On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty? i think we should follow up with the packagers. if i understand correctly, the location of the code is not known from a user pov; it's the packagers that build the appropriate packages for them to use. if from the packagers' pov they just need to work against Aodh, then i would lean more to removing alarming from the Ceilometer repo asap to avoid maintaining duplicate code bases and the eventual divergence of the two.
This is the big question and is one of the things listed on the potential agenda for the mid-cycle. When we do the splits, do we deprecate or delete the old code? Given the high chance of us missing some of the potential issues, it seems like hashing it out some before the mid-cycle is a good idea. The two big overarching issues (that inform a lot of the details) that I'm aware of are: * If we delete then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc. * If we deprecate will people bother to use the new stuff? i would think/hope the experience from the end user doesn't actually change. ie. all the same packaged services remain. I'm sure there are plenty of others. -- gord
Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates
Needing to fork templates to tweak things is a very common problem. Adding conditionals to Heat was discussed at the Summit (https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I want to say someone was going to prototype it using YAQL, but I don't remember who. Would it be reasonable to keep it if conditionals worked? Thanks, Kevin

From: Hongbin Lu [hongbin...@huawei.com] Sent: Monday, June 29, 2015 3:01 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Agree. The motivation for pulling templates out of the Magnum tree was the hope that these templates could be leveraged by a larger community and get more feedback. However, that is unlikely to be the case in practice, because different people have their own versions of templates addressing different use cases. It has proven hard to consolidate different templates even when they share a large amount of duplicated code (recall that we had to copy-and-paste the original template to add support for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates. Best regards, Hongbin

-Original Message- From: Tom Cammann [mailto:tom.camm...@hp.com] Sent: June-29-15 11:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Hello team, I've been doing work in Magnum recently to align our templates with the upstream templates from larsks/heat-kubernetes[1]. I've also been porting these changes to the stackforge/heat-coe-templates[2] repo. I'm currently not convinced that maintaining a separate repo for Magnum templates (stackforge/heat-coe-templates) is beneficial for Magnum or the community. Firstly, it is very difficult to draw a line on what should be allowed into heat-coe-templates. We are currently taking out changes[3] that introduced useful autoscaling capabilities in the templates but that didn't fit the Magnum plan.
If we are going to treat heat-coe-templates that way, then this extra repo will not allow organic development of new and old container engine templates that are not tied to Magnum. Another recent change [4] in development is smart autoscaling of bays, which introduces parameters that don't make a lot of sense outside of Magnum. There are also difficult interdependency problems between the templates and the Magnum project, such as the parameter fields. If a required parameter is added to a template, the Magnum code must also be updated in the same commit to avoid functional test failures. This can be avoided using the Depends-On: #xx feature of Gerrit, but it is additional overhead and will require some CI setup. Additionally, we would have to version the templates, which I assume would be necessary to allow for packaging. This brings with it its own problems. As far as I am aware there are no other people using heat-coe-templates beyond the Magnum team; if we want independent growth of this repo, it will need to be adopted by people other than Magnum committers. I don't see the Heat templates as a dependency of Magnum; I see them as a truly fundamental part of Magnum, which is going to be very difficult to cut out and make reusable without compromising Magnum's development process. I would propose to delete/deprecate the usage of heat-coe-templates and continue with the usage of the templates in the Magnum repo. How does the team feel about that? If we do continue with the large effort required to try and pull out the templates as a dependency, then we will need to increase the visibility of the repo and greatly increase the reviews/commits on it. We also have a fairly significant backlog of work to align the heat-coe-templates with the templates in the Magnum tree.
Thanks, Tom [1] https://github.com/larsks/heat-kubernetes [2] https://github.com/stackforge/heat-coe-templates [3] https://review.openstack.org/#/c/184687/ [4] https://review.openstack.org/#/c/196505/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities
I had originally been thinking of it like slagle describes, from the child up to the parent as well. What I like about that approach is that it achieves a more pluggable model when you think about extensions that aren't accepted in TripleO upstream or aren't applicable there. If someone comes along and adds a new ControllerConfig to your above example, they have to edit whatever environment you're talking about that defines the constraints (I'm calling it overcloud-something.yaml for now). This becomes a problem from a packaging point of view, especially when you factor in non-TripleO integrators (without revealing too much inside baseball, think partner integrations). How do I add an extra package (RPM, DEB, whatever) that provides that ControllerConfig and have it picked up as a valid option? We don't want to be editing overcloud-something.yaml, because it's owned by another package and there's the potential for conflicts if multiple extra implementations start stepping on each other. An interface/discovery sort of mechanism, which I agree is more complex, would be easier to work with in those cases. I'm effectively replying to my own e-mail here, but I've expressed these thoughts on the spec and it'd probably be better to continue this train of thought there: https://review.openstack.org/#/c/196656/
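The interface/discovery mechanism described above could be as simple as a well-known drop directory that each package owns one file in, instead of a shared overcloud-something.yaml that every package has to patch. A minimal sketch of that idea (the directory layout and the *.environment.yaml naming are hypothetical, not existing TripleO conventions):

```python
import glob
import os

def discover_environments(root):
    """Collect environment files contributed by independently packaged
    extensions.

    Each package (RPM, DEB, whatever) installs its own
    *.environment.yaml into `root`; no shared file is ever edited, so
    packages cannot conflict over ownership of one path.
    """
    return sorted(glob.glob(os.path.join(root, "*.environment.yaml")))
```

Registration then reduces to file ownership: a partner integration ships one extra file in its package, and the tooling picks it up as a valid option at load time.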
Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates
Agree. The motivation for pulling the templates out of the Magnum tree was the hope that they could be leveraged by a larger community and get more feedback. However, that is unlikely to be the case in practice, because different people have their own versions of the templates for addressing different use cases. It has proven hard to consolidate different templates even when they share a large amount of duplicated code (recall that we had to copy-and-paste the original template to add support for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates. Best regards, Hongbin -----Original Message----- From: Tom Cammann [mailto:tom.camm...@hp.com] Sent: June-29-15 11:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates ...
[openstack-dev] Juno Nova Instance building/scheduling forever without error or timeout
Hi Experts, I hope you can help me. I have a three-node Juno system built on Ubuntu 14.04. When I try to launch a Nova instance, it gets stuck in building/scheduling forever without logging an error or timing out. Based on another post I upgraded RabbitMQ to 3.5, but that did not help. I have debug logs for nova-compute, nova-api, and nova-scheduler. I don't see anything glaring other than a dynamic looping call, which I am not clear on. All of my Nova services are up, as are my Neutron agents. I have posted all of this information here: https://ask.openstack.org/en/question/69128/juno-nova-instance-stuck-in-build-scheduling-three-node-system/ I am stuck on where to begin troubleshooting this issue. Can you please assist? Regards, Gregg Marxer | Field Systems Engineer O 949.631.6733 M 732.713.1361 f5.com (https://www.f5.com/) | synthesis.f5.com (https://synthesis.f5.com/)
[openstack-dev] [Magnum] New Kubernetes version 0.19 on Fedora Atomic
Hi everyone, I haven't had much success running this latest release on our Fedora Atomic image. The current image has version 0.15, and I tried earlier with version 0.16, which worked. But with the new version, the kube-apiserver would not run. When run as a systemd service, it would terminate and restart repeatedly. When run on the command line, I would get something like:

I0625 20:35:08.870898    8386 master.go:252] Node port range unspecified. Defaulting to 30000-32767.
I0625 20:35:08.872981    8386 master.go:274] Will report 10.0.0.3 as public IP address.
E0625 20:35:08.909390    8386 reflector.go:136] Failed to list *api.ResourceQuota: Get http://0.0.0.0:8080/api/v1/resourcequotas: dial tcp 0.0.0.0:8080: connection refused
E0625 20:35:08.909769    8386 reflector.go:136] Failed to list *api.LimitRange: Get http://0.0.0.0:8080/api/v1/limitranges: dial tcp 0.0.0.0:8080: connection refused
E0625 20:35:08.931683    8386 reflector.go:136] Failed to list *api.Namespace: Get http://0.0.0.0:8080/api/v1/namespaces: dial tcp 0.0.0.0:8080: connection refused
[restful] 2015/06/25 20:35:08 log.go:30: [restful/swagger] listing is available at https://10.0.0.3:6443/swaggerapi/
[restful] 2015/06/25 20:35:08 log.go:30: [restful/swagger] https://10.0.0.3:6443/swaggerui/ is mapped to folder /swagger-ui/
E0625 20:35:09.922186    8386 reflector.go:136] Failed to list *api.ResourceQuota: Get http://0.0.0.0:8080/api/v1/resourcequotas: dial tcp 0.0.0.0:8080: connection refused

Then on the minion node, the API service could not be reached. Has anyone tried this and had any success? Thanks in advance for any pointers. Ton Ngo
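The reflector errors above are all "connection refused" against the apiserver's own insecure port, so a quick sanity check from the master node is just a TCP connect (nothing Kubernetes-specific assumed; host and port come from the log):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the failure described above, this returns False on the master,
# which is also why the minion cannot reach the API service.
print(port_open("127.0.0.1", 8080))
```

If this returns False while the apiserver process is running, the process is crashing before it binds the listener (consistent with the systemd restart loop), rather than a network/firewall problem between master and minion.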
[openstack-dev] [puppet][ceph] puppet-ceph CI status
Hi Recent changes in the puppet modules infra left stackforge/puppet-ceph CI broken. We've resolved the issues in [1][2]. However, we are short on non-involved core reviewers. I propose that we leave the patches open through Wednesday, use lazy consensus, and merge them if we don't receive any negative feedback. [1] https://review.openstack.org/#/c/179645/ [2] https://review.openstack.org/#/c/195959/ -- Andrew Woodward Mirantis Fuel Community Ambassador Ceph Community
Re: [openstack-dev] Let's get rid of tablib and cliff-tablib
On 06/29/2015 12:01 PM, Victor Stinner wrote: Hi, On 29/06/2015 11:03, Thomas Goirand wrote: cliff-tablib is used for the unit tests of things like python-neutronclient. The annoying bit is that cliff-tablib depends on tablib, which itself is a huge mess. It has loads of third-party embedded packages, and most of them aren't Python 3.x compatible. tablib includes copies of various dependencies in its tablib/packages/ directory. Some of them are for Python 2, others are for Python 3. It would be better to use dependencies (requirements in setup.py), not copies. Yes! Did you try to contact the tablib authors to ask them to remove tablib/packages/ completely? I haven't yet. Though some of the dependencies are simply not Python 3 compatible at all, full stop. So switching to them as independent packages wouldn't help me much: we'd still have to do the work of porting them to Python 3. setup.py uses a different list of packages on Python 2 and Python 3. I tried python3 setup.py install: the bytecode compilation of markup.py fails with an obvious SyntaxError, as the code is for Python 2. But there is also markup3.py, which compiles successfully. Even if the compilation of markup.py fails, python setup.py install succeeds with exit code 0. What is your problem? As I wrote to you on IRC, if the compilation of the code fails, then this breaks the world when installing a Python package (the .py files are built as .pyc at install time using pycompile -p package-name in the postinst of all Python packages). So having a python setup.py install that works isn't enough for Debian/Ubuntu. setup.py should be fixed to skip markup.py on Python 3, and to skip markup3.py on Python 2. A workaround is to remove the file manually depending on the Python major version. Then will it magically know which one to use? Also, markup{3,}.py isn't the only issue. I gave you a list of files which have compile issues. Digging more into it, it's hell...
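To illustrate the failure mode being discussed (this is not tablib's actual code, just a self-contained reproduction): byte-compiling Python 2-only syntax under Python 3 raises a SyntaxError, which is exactly what trips the Debian pycompile step at package install time.

```python
import os
import py_compile
import tempfile

def compiles_under_this_python(source):
    """Return True if `source` byte-compiles under the running interpreter.

    This mirrors what Debian's pycompile does in a package postinst:
    every shipped .py file is byte-compiled, and a SyntaxError aborts
    the installation.
    """
    f = tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False)
    f.write(source)
    f.close()
    try:
        py_compile.compile(f.name, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
    finally:
        os.unlink(f.name)

# Python 2-only syntax, like the vendored markup.py discussed above:
print(compiles_under_this_python('print "hello"\n'))   # False on Python 3
# Its Python 3 counterpart (markup3.py in the thread) compiles fine:
print(compiles_under_this_python('print("hello")\n'))  # True
```

This is why `pip install tablib` succeeding (from a wheel) does not help the Debian packaging case: the distro toolchain compiles every shipped source file, not just the ones the package imports.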
Note: pip install tablib works on Python 3 (pip uses the binary wheel package). This doesn't help me, unfortunately. It's as if you were saying it worked on devstack... cliff-tablib is used in tests. If you remove the cliff-tablib dependency, tests will obviously fail. Yes, that's the issue. What do you propose? Modify the tests to reimplement cliff-tablib? Remove the tests? Some of the above, yes. Or at least, in the short term, make the tests skipped if cliff-tablib isn't installed, so that I can completely ignore these tests and remove cliff-tablib from Debian. Long term: get rid of cliff-tablib and tablib completely if we can. On 06/29/2015 01:39 PM, Doug Hellmann wrote: tablib is managed by Kenneth Reitz, and as with his requests library he feels vendoring is the best way to distribute dependencies. For a while I've had a to-do on my list to rewrite those formatters to not use tablib, it just hasn't been a high priority. Oh! I didn't notice it was from him. Here, we see how much vendoring is hurting in a *very* bad way. And given the answers about de-vendoring requests, we can already guess that asking for any kind of fix to the situation will be a pure waste of time. I don't know much about tablib internals, I just see how badly it is packaged on PyPI. If you could get a replacement for it, it'd be awesome. Proposal: get a list of stubborn upstreams with potentially dangerous behavior, and blacklist them completely (or, worst case, fork their projects if we can afford it). I would also put Gabriel Falcao forward as a potential candidate for such a blacklist (author of sure, httpretty, steadymark, and more...), because he has the dangerous habit of breaking APIs, which has already broken the gate multiple times. Cheers, Thomas Goirand (zigo)
Re: [openstack-dev] [Nova] The unbearable lightness of specs
On 06/26/2015 08:15 PM, Tim Bell wrote: Limiting those who give input to the people who can analyse Python and determine the impacts of the change has significant risks. Many of those running OpenStack clouds can give their feedback as part of the specs process. While this may not be as fully structured as you would like, ignoring the input from those who are running clouds when proposing a change is likely to cause problems later on. The specs process was developed jointly to allow exactly this kind of early input... people writing the code wanted input from those who were using this code to deliver new functions and improvements to the end users of the cloud. No problem discussing how to improve the process, but it is important to allow all the people affected by a change to be involved in the solution and contribute, not just the ones writing the code. These are very valid points. Input from users/deployers is extremely important. One of the main points of the agile way of producing software is shortening the feedback loop by producing working code to comment on, as opposed to defining requirements fully before writing code. I think that in the case of certain problems, having as much information up front and solid feedback from the operators' community is very valuable, but I also feel that there are cases where, after a point, prototyping can give better results (partly due to the nature of the tools we use and our reviewing culture, as you mention above). N.
Re: [openstack-dev] [neutron] enriching port binding extension API dictionaries with key-values
Kevin, you're right. My use case is that the plugin enriches the VIF binding information. The value is generated from the ML2 agent's configuration files (the name of the interface to plug the port into). There's no input coming from the user via the API. Irina, thanks for the clarification. binding:vif_details is exactly what I need! Thanks, Andreas (IRC: scheuran) On Fri, 2015-06-26 at 11:18 -0600, Kevin Benton wrote: That bug is about adding things that the user can pass to the port. I think Andreas is just talking about passing data to Nova that his ML2 plugin generates. The key difference is that adding key/value pairs to the port API that the user populates would expose implementation details to users. An ML2 driver adding data to the binding information that a Nova VIF driver leverages shouldn't be a problem, because the user API interaction wouldn't change. On Fri, Jun 26, 2015 at 8:14 AM, Neil Jerram neil.jer...@metaswitch.com wrote: Hi Andreas, On 26/06/15 14:04, Andreas Scheuring wrote: Hi all, for a new ML2 plugin I would like to pass some data from Neutron to Nova on port creation and update (exploiting the port binding extension [1]). For my prototype I thought of using one of the following response dictionaries to add my information: - binding:vif_details - binding:profile The API ref describes these attributes (port create / port update - both responses) as dictionaries, but without restricting the key-value pairs or naming a defined number of them [1]. I've also seen some other ML2 plugins enriching those fields with unique data. So I assume this is not considered an API change, correct? Important: it's only about the response. The input comes from a configuration file. Thanks [1] http://developer.openstack.org/api-ref-networking-v2-ext.html I think the discussion at [1] is broadly in the same area, so you might find some relevant input there.
Neil [1] https://bugs.launchpad.net/neutron/+bug/1460222 -- Kevin Benton -- Andreas (IRC: scheuran)
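To make the shape concrete, here is a hedged sketch of the relevant slice of a port-show response under the port binding extension. `port_filter` is a commonly seen vif_details key; `interface_name` is purely illustrative, standing in for the driver-generated value Andreas describes:

```python
# Illustrative port-show response fragment (not a real API call).
port = {
    "id": "port-uuid",
    "binding:vif_type": "ovs",
    "binding:vif_details": {
        "port_filter": True,
        # hypothetical driver-generated hint, set from the ML2 driver's
        # configuration, never from user input:
        "interface_name": "eth-example0",
    },
    "binding:profile": {},
}

# A Nova VIF driver could consume the hint without any user-visible
# API change:
print(port["binding:vif_details"]["interface_name"])  # eth-example0
```

This is the distinction Kevin draws above: the dictionary is enriched on the response path only, so users never populate (or depend on) the extra keys.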
[openstack-dev] Let's get rid of tablib and cliff-tablib
Hi, cliff-tablib is used for the unit tests of things like python-neutronclient. The annoying bit is that cliff-tablib depends on tablib, which itself is a huge mess. It has loads of third-party embedded packages and most of them aren't Python 3.x compatible. I've seen that cliff-tablib was recently added to python-openstackclient. Let's do the reverse, and remove cliff-tablib whenever possible. If we really want to keep using cliff-tablib, then someone has to do the work to port tablib to Python 3 (good luck with that...). Your thoughts, anyone? Cheers, Thomas Goirand (zigo)
[openstack-dev] [Infra] Meeting Tuesday June 30th at 19:00 UTC
Hi everyone, The OpenStack Infrastructure (Infra) team is having our next weekly meeting on Tuesday June 30th, at 19:00 UTC in #openstack-meeting Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items) Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend. In case you missed it or would like a refresher, the meeting minutes and log from our last meeting are available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-23-19.02.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-23-19.02.txt Log: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-23-19.02.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2
Re: [openstack-dev] [Nova] The unbearable lightness of specs
+1. I think this is a good comment for all reviews. I have been frustrated lately with a number of reviews that I spent time on but didn't feel should be scored. I think that 0 with comments should be counted as well. On Fri, Jun 26, 2015 at 2:27 PM Tim Bell tim.b...@cern.ch wrote: -----Original Message----- From: Jeremy Stanley [mailto:fu...@yuggoth.org] Sent: 26 June 2015 16:42 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova] The unbearable lightness of specs On 2015-06-25 16:39:56 +0000 (+0000), Tim Bell wrote: [...] One of the problems that I’ve seen is with specs etiquette, where people -1 because they have a question. This is a question of education rather than a fundamental issue with the process. http://docs.openstack.org/infra/manual/developers.html#peer-review has been updated with a 7th entry addressing this in particular. Hopefully that will help realign reviewers on acceptable vs. unacceptable use of -1 for certain types of questions over time. I also feel that Stackalytics should credit people for a 0 review comment on specs. Currently, I think that only non-zero reviews are considered a contribution. My understanding of the workflow is that a 0 is in many cases the constructive way to respond and should therefore be considered a contribution. Tim -- Jeremy Stanley
Re: [openstack-dev] [puppet][ceph] puppet-ceph CI status
Ah, I don't have +2 on that repo, but they LGTM, so your original plan is fine. On Mon, Jun 29, 2015 at 5:59 PM, Matt Fischer m...@mattfischer.com wrote: I can take a look at these tonight. Maybe Clayton can also review them? Neither of us were involved in the patches, to my knowledge. On Jun 29, 2015 5:09 PM, Andrew Woodward xar...@gmail.com wrote: ...
[openstack-dev] [nova] gate is wedged
We need the g-r update to block the breaking oslo.versionedobjects release: https://review.openstack.org/#/c/194325/ But that's failing unit tests, which should be fixed in: https://review.openstack.org/#/c/196719/ That fix is itself dependent on a change whose unit tests fail because of the bad oslo.versionedobjects 0.5.0 release - so we have a circular dependency. -- Thanks, Matt Riedemann
Re: [openstack-dev] [puppet][ceph] puppet-ceph CI status
I can take a look at these tonight. Maybe Clayton can also review them? Neither of us were involved in the patches, to my knowledge. On Jun 29, 2015 5:09 PM, Andrew Woodward xar...@gmail.com wrote: ...
Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates
I agree with Tom's comment about not maintaining a separate repo for the Heat templates when they can't be reused by others. Regards, Madhuri On Tue, Jun 30, 2015 at 10:56 AM, Angus Salkeld asalk...@mirantis.com wrote: On Tue, Jun 30, 2015 at 8:23 AM, Fox, Kevin M kevin@pnnl.gov wrote: Needing to fork templates to tweak things is a very common problem. Adding conditionals to Heat was discussed at the Summit (https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I want to say someone was going to prototype it using YAQL, but I don't remember who. I was going to do that, but I would not expect it to be ready in a very short time frame. It needs some investigation and agreement from others. I'd suggest making your decision based on what we have now. -Angus Would it be reasonable to keep it if conditionals worked? Thanks, Kevin From: Hongbin Lu [hongbin...@huawei.com] Sent: Monday, June 29, 2015 3:01 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates ...
Re: [openstack-dev] [cinder] Need help from folks working on Dell, Storpool and Infortrend drivers
Vincent, the best place to start is with Sean McGinnis. Jay

On Sun, Jun 28, 2015 at 10:00 PM Sheng Bo Hou sb...@cn.ibm.com wrote: Hi everyone, For folks who are working on the Dell, Storpool and Infortrend drivers: I have a new patch at https://review.openstack.org/#/c/180873. There is a change in how the method update_migrated_volume is implemented for each driver. The code needs to return the final values for the keys _name_id and provider_location. I have changed your driver accordingly, but I am not 100% sure of the correctness, so I need your reviews of the changes to your driver. If there is any comment, tell me how to change it. You can take the implementation for Storwize as a reference.

- First, check whether the rename works the same for both 'available' and 'in-use' volumes. It is possible to handle them differently.
- Then check whether the rename is successful; it may return different _name_id and provider_location values.
- There is an explanation of update_migrated_volume in https://review.openstack.org/#/c/180873/67/cinder/volume/driver.py https://review.openstack.org/#/c/180873/66/cinder/volume/driver.py

Thank you. Best wishes, Vincent Hou (侯胜博) Staff Software Engineer, Open Standards and Open Source Team, Emerging Technology Institute, IBM China Software Development Lab Tel: 86-10-82450778 Fax: 86-10-82453660 Notes ID: Sheng Bo Hou/China/IBM@IBMCN E-mail: sb...@cn.ibm.com Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, P.R.C. 100193 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
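To make the review checklist above concrete, here is a minimal sketch of the shape such a driver method might take. Only the method name `update_migrated_volume` and the `_name_id` / `provider_location` keys come from the discussion; the `ExampleDriver` class, its fake backend, and the `_rename()` helper are hypothetical, not real Cinder code.

```python
# Hypothetical sketch of the update_migrated_volume() contract discussed
# above: after a migration the driver tries to rename the new volume back
# to the original name, and depending on success returns the final values
# for '_name_id' and 'provider_location'.

class ExampleDriver:
    def __init__(self, backend):
        # Fake backend modeled as {volume_name: provider_location};
        # a real driver would talk to the storage array instead.
        self.backend = backend

    def _rename(self, old_name, new_name):
        # Illustrative helper: returns True on success.
        if old_name in self.backend and new_name not in self.backend:
            self.backend[new_name] = self.backend.pop(old_name)
            return True
        return False

    def update_migrated_volume(self, ctxt, volume, new_volume, status):
        # 'status' may be 'available' or 'in-use'; some backends cannot
        # rename an attached volume, so the two cases may differ.
        if status == 'available' and self._rename(new_volume['name'],
                                                  volume['name']):
            # Rename worked: the volume keeps its original name, so no
            # name indirection is needed.
            return {'_name_id': None,
                    'provider_location': self.backend[volume['name']]}
        # Rename failed (or volume in use): keep pointing at the new
        # volume via _name_id and its provider_location.
        return {'_name_id': new_volume['id'],
                'provider_location': new_volume['provider_location']}
```

The two return branches mirror the reviewer checklist: one outcome when the rename succeeds, a different `_name_id`/`provider_location` pair when it does not.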
Re: [openstack-dev] [Product] Product WG considering holding a mid-cycle sprint
My apologies: I seem to have triggered a bug in Doodle. The support staff are on the case; I’ll update you when I have more information. On Jun 29, 2015, at 5:33 PM, Geoff Arnold ge...@geoffarnold.com wrote: [Note: I’m copying this to openstack-dev in case there are potential participants who have not yet joined the product-wg list.] The OpenStack Product WG is considering whether to hold a 2-day mid-cycle sprint to accelerate its work. We understand that scheduling during the summer is difficult, and we will only hold this meeting if we can get critical mass. If you are seriously interested in participating, please respond to this poll by selecting dates and indicating your preferred (or required) locations in the comments. Doodle poll link: https://doodle.com/sskuwycqfa3vhu9e Geoff Arnold ___ Product-wg mailing list product...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno
I too would prefer option 2. Would rather do the backports than remove the functionality. Jay

On Wed, Jun 24, 2015 at 9:40 AM Gorka Eguileor gegui...@redhat.com wrote: On Tue, Jun 23, 2015 at 08:49:55AM -0700, Mike Perez wrote: There was a bug raised [1] from some large deployments that Cinder client 1.2.0 and beyond is not working because of version discovery. Unfortunately it does not take into account deployments that have a proxy.

A little bit unrelated, but volume pagination in the Cinder client is also broken due to version discovery: https://bugs.launchpad.net/python-cinderclient/+bug/1453755

The Cinder client asks Keystone to find a publicURL based on a version. Keystone gathers data from the service catalog, asks Cinder for a list of the public endpoints, and compares. In the proxy cases, Cinder gives internal URLs back to the proxy, and Keystone ends up using those instead of the publicURL in the service catalog. As a result, clients usually won't be able to use the internal URL, and rightfully so. This is all correctly set up on the deployer's side; this is an issue with the server-side code of Cinder.

There is a patch that allows the deployer to specify a configuration option, public_endpoint [2], which was introduced in a patch in Kilo [3]. The problem, though, is that we can't expect people to already be running Kilo to take advantage of this, and it leaves deployers running stable releases of Juno in the dark with clients upgrading and using the latest. Two options:

1) Revert version discovery, which was introduced in Kilo for the Cinder client.
2) Grant an exception for backporting [4] a patch that helps with this problem and introduces a config option that does not change default behavior.

I'm also not sure if this should be considered for Icehouse.

+1 to option 2, and I wouldn't be totally against considering it for Icehouse. Cheers, Gorka.
[1] - https://launchpad.net/bugs/1464160
[2] - http://docs.openstack.org/kilo/config-reference/content/cinder-conf-changes-kilo.html
[3] - https://review.openstack.org/#/c/159374/
[4] - https://review.openstack.org/#/c/194719/

-- Mike Perez

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [fuel] Deprecation and backwards compatibility policy
Some recent specs have proposed changing some of the APIs by either removing parts or changing them in non-backwards-compatible ways. Additionally, there are some proposals that are light on details of their impact on already supported components.

I propose that deprecation and backwards compatibility should be maintained for at least one release before removing support for the previous implementation. This would result in a process such as:

A -> A2, B -> B
R1 -> R2 -> R3

Where:
- A is the initial implementation
- A2 is the deprecated A interface, likely a shim that converts calls to B and back to A
- B is the new feature
- R[1,2,3] are the incrementing releases

This would require that interface A is documented in the release notes of R2 as being marked for removal. The A interface can then be removed in R3. This allows a reasonable amount of time for downstream users to learn that the interface they may be using is going away, so they can adapt to the new interface before it is the only one available.

-- Andrew Woodward Mirantis Fuel Community Ambassador Ceph Community

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
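The A -> A2 -> B progression above can be sketched as a thin compatibility shim: the old entry point survives for one release (R2), emits a DeprecationWarning, and delegates to the new implementation. The function names `get_nodes` and `list_nodes` here are invented purely for illustration, not taken from Fuel.

```python
import warnings

def list_nodes(env_id, detail=False):
    """B: the new interface, with a new return shape."""
    return {'env': env_id, 'detail': detail, 'nodes': []}

def get_nodes(env_id):
    """A2: the deprecated A interface, kept for one release (R2).

    It warns, converts the call to the new B interface, and adapts the
    result back to A's old return shape so existing callers keep working.
    """
    warnings.warn('get_nodes() is deprecated and will be removed in the '
                  'next release; use list_nodes() instead',
                  DeprecationWarning, stacklevel=2)
    return list_nodes(env_id)['nodes']
```

Downstream users calling `get_nodes()` during R2 get the old behavior plus a visible warning, matching the release-notes requirement above, and the shim can be deleted in R3.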
[openstack-dev] [Product] Product WG considering holding a mid-cycle sprint
[Note: I’m copying this to openstack-dev in case there are potential participants who have not yet joined the product-wg list.] The OpenStack Product WG is considering whether to hold a 2-day mid-cycle sprint to accelerate its work. We understand that scheduling during the summer is difficult, and we will only hold this meeting if we can get critical mass. If you are seriously interested in participating, please respond to this poll by selecting dates and indicating your preferred (or required) locations in the comments. Doodle poll link: https://doodle.com/sskuwycqfa3vhu9e Geoff Arnold __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] gate is wedged
On 6/29/2015 9:46 PM, Matt Riedemann wrote: We need the g-r update to block the breaking oslo.versionedobjects release: https://review.openstack.org/#/c/194325/ But that's failing unit tests which should be fixed in: https://review.openstack.org/#/c/196719/ Which is dependent on a change that's failing unit tests b/c of the bad oslo.versionedobjects 0.5.0 release - and we have a circular dependency. This should hopefully unwedge: https://review.openstack.org/#/c/195191/ -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header
2015-06-26 4:21 GMT+09:00 Dean Troyer dtro...@gmail.com: On Thu, Jun 25, 2015 at 7:10 AM, Sean Dague s...@dague.net wrote: For someone that's extremely familiar with what they are doing, they'll understand that http://service.provider/compute is Nova, and can find their way to the Nova docs on the API. But for new folks, I can only see this adding to confusion.

Anyone using the REST API directly has already gotten an endpoint from the service catalog using the service type (I'm ignoring the deprecated 'name' field). The version header should match up directly with the type used to get the endpoint.

Yeah, I had the same thinking. Based on it, we could remove the generic name (compute, identity, etc.) from the API microversion header. But now I feel it is fine to use the generic name if the name is allocated quickly, just after a project is created, and the name is stable. JSON-Home also needs something to represent each project in a response payload, like http://docs.openstack.org/api/openstack-identity/3/rel and http://docs.openstack.org/api/openstack-compute/2.1/rel for the relationship. So even if we can remove the name from the microversion header, we still need something to represent each project. I tend to prefer the generic name (compute, identity, etc.) because it seems stable. I pushed this to the api-wg guideline[1] for cross-project use.

Thanks Ken Ohmichi --- [1]: https://review.openstack.org/#/c/196918/

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
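For illustration, here is a tiny sketch of what a server would do with a microversion header value that carries a generic service name plus a version, e.g. "compute 2.1". The exact header name and value format are precisely what the api-wg guideline above is debating, so the "&lt;service&gt; &lt;major&gt;.&lt;minor&gt;" shape assumed here is only an assumption for the sketch.

```python
def parse_api_version(header_value):
    """Split a microversion header value like 'compute 2.1' into the
    generic service name and a comparable (major, minor) tuple.

    Assumes the '<service> <major>.<minor>' format under discussion;
    tuples compare correctly, so (2, 12) > (2, 1) as expected, which a
    naive string comparison ('2.12' < '2.1') would get wrong.
    """
    service, _, version = header_value.strip().partition(' ')
    if not version:
        raise ValueError('expected "<service> <major>.<minor>"')
    major, minor = version.split('.')
    return service, (int(major), int(minor))
```

Parsing the version into an integer tuple rather than comparing strings is the main point: microversions are ordered numerically per component, not lexicographically.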
Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates
On Tue, Jun 30, 2015 at 8:23 AM, Fox, Kevin M kevin@pnnl.gov wrote: Needing to fork templates to tweak things is a very common problem. Adding conditionals to Heat was discussed at the Summit (https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I want to say someone was going to prototype it using YAQL, but I don't remember who.

I was going to do that, but I would not expect it to be ready in a very short time frame. It needs some investigation and agreement from others. I'd suggest making your decision based on what we have now. -Angus

Would it be reasonable to keep it if conditionals worked? Thanks, Kevin

From: Hongbin Lu [hongbin...@huawei.com] Sent: Monday, June 29, 2015 3:01 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Agree. The motivation for pulling the templates out of the Magnum tree was the hope that these templates could be leveraged by a larger community and get more feedback. However, that is unlikely to be the case in practice, because different people have their own versions of the templates addressing different use cases. It has proven hard to consolidate different templates even when they share a large amount of duplicated code (recall that we had to copy-and-paste the original template to add support for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates.

Best regards, Hongbin

-----Original Message-----
From: Tom Cammann [mailto:tom.camm...@hp.com] Sent: June-29-15 11:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Hello team, I've been doing work in Magnum recently to align our templates with the upstream templates from larsks/heat-kubernetes[1]. I've also been porting these changes to the stackforge/heat-coe-templates[2] repo.
I'm currently not convinced that maintaining a separate repo for Magnum templates (stackforge/heat-coe-templates) is beneficial for Magnum or the community. Firstly, it is very difficult to draw a line on what should be allowed into heat-coe-templates. We are currently taking out changes[3] that introduced useful autoscaling capabilities in the templates but that didn't fit the Magnum plan. If we are going to treat heat-coe-templates that way, then this extra repo will not allow organic development of new and old container engine templates that are not tied into Magnum. Another recent change[4] in development is smart autoscaling of bays, which introduces parameters that don't make a lot of sense outside of Magnum.

There are also difficult interdependency problems between the templates and the Magnum project, such as the parameter fields. If a required parameter is added to a template, the Magnum code must also be updated in the same commit to avoid functional test failures. This can be avoided using the Depends-On: #xx feature of Gerrit, but it is additional overhead and will require some CI setup. Additionally, we would have to version the templates, which I assume would be necessary to allow for packaging. This brings with it its own problems.

As far as I am aware, there are no other people using heat-coe-templates beyond the Magnum team; if we want independent growth of this repo, it will need to be adopted by other people rather than Magnum committers. I don't see the heat templates as a dependency of Magnum; I see them as a truly fundamental part of Magnum which is going to be very difficult to cut out and make reusable without compromising Magnum's development process. I would propose to delete/deprecate the usage of heat-coe-templates and continue with the usage of the templates in the Magnum repo. How does the team feel about that?
If we do continue with the large effort required to try and pull out the templates as a dependency, then we will need to increase the visibility of the repo and greatly increase the reviews/commits on it. We also have a fairly significant backlog of work to align the heat-coe-templates with the templates in the Magnum repo.

Thanks, Tom

[1] https://github.com/larsks/heat-kubernetes
[2] https://github.com/stackforge/heat-coe-templates
[3] https://review.openstack.org/#/c/184687/
[4] https://review.openstack.org/#/c/196505/

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] gate is wedged
On 6/29/2015 10:31 PM, Matt Riedemann wrote: On 6/29/2015 9:46 PM, Matt Riedemann wrote: We need the g-r update to block the breaking oslo.versionedobjects release: https://review.openstack.org/#/c/194325/ But that's failing unit tests which should be fixed in: https://review.openstack.org/#/c/196719/ Which is dependent on a change that's failing unit tests b/c of the bad oslo.versionedobjects 0.5.0 release - and we have a circular dependency. This should hopefully unwedge: https://review.openstack.org/#/c/195191/ Alternatively, oslo.versionedobjects 0.5.1 is cut after https://review.openstack.org/#/c/196926/ is merged and then we just need haypo's test_db_api fix for the oslo.db 2.0.0 change: https://review.openstack.org/#/c/196719/ That's probably the cleanest route. -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla
The Kolla community is pleased to announce the release of the Kolla Liberty-1 milestone. This release fixes 56 bugs and implements 14 blueprints!

Of significant note during this milestone, we defined the mission statement for our project, which is: Kolla provides production-ready containers and deployment tools for operating OpenStack clouds that are scalable, fast, reliable, and upgradable using community best practices.

During this release we dramatically improved our CI systems by building every container we develop and by introducing bashate gating. During Liberty-2 we plan to introduce container content testing where feasible. Our community developed the following notable features:

* A start at source-based containers
* Cinder containers
* Designate containers
* Galera containers (HA database support)
* Keepalived containers
* OpenvSwitch containers
* The fat Neutron container was converted to thin Neutron containers
* Continuous improvement of existing containers
* RabbitMQ with HA clustering support
* Oracle Linux base container

For full details, check out our Launchpad tracker [1]. For Liberty-2 our main focus will be the execution of the two specifications written during the Liberty-1 cycle. These blueprints are:

* Making Kolla deploy Highly Available OpenStack environments [2]
* Adding Ansible deployment tooling to Kolla [3]

We feel these high-impact activities are the best way to deliver on our mission statement. We have a really solid crew of reviewers that are not on the core team. We hope that folks interested in joining the core reviewer team will continue reviewing - we definitely appreciate the reviews! For some short statistics of our reviewer activity, check out [4]. Note that 20% of reviews come from folks with no corporate affiliation!
The software is available for immediate download from: https://github.com/stackforge/kolla/archive/liberty-1.tar.gz

Regards, - The Kolla Development Team

[1] https://launchpad.net/kolla/+milestone/liberty-1
[2] https://review.openstack.org/#/c/181983/
[3] https://review.openstack.org/#/c/189157/
[4] http://stackalytics.com/?project_type=all&module=kolla

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev