Re: [openstack-dev] [Glance] IRC logging
On 09/01/15 12:11 -0800, Joshua Harlow wrote:
> So the only comment I'll put in is one that I know not everyone agrees with, but I might as well throw it out there. http://freenode.net/channel_guidelines.shtml (this page has a bunch of useful advice IMHO). From that, something useful to look/think over at least:
>
> "If you're considering publishing channel logs, think it through. The freenode network is an interactive environment. Even on public channels, most users don't weigh their comments with the idea that they'll be enshrined in perpetuity. For that reason, few participants publish logs. If you're publishing logs on an ongoing basis, your channel topic should reflect that fact. Be sure to provide a way for users to make comments without logging, and get permission from the channel owners before you start. If you're thinking of anonymizing your logs (removing information that identifies the specific users), be aware that it's difficult to do it well—replies and general context often provide identifying information which is hard to filter. If you just want to publish a single conversation, be careful to get permission from each participant. Provide as much context as you can. Avoid the temptation to publish or distribute logs without permission in order to portray someone in a bad light. The reputation you save will most likely be your own."

(Joshua, the below is not about what you posted; I really appreciate you bringing the above into the discussion.)

FWIW, I feel that channel logging should become an OpenStack-wide thing rather than a per-project thing: log all official OpenStack channels, make that clear in the wiki/homepage/HowToContribute/whatever-you-want-to-call-it, and move on.

Nothing, absolutely nothing, prevents the above from happening right now. I have local (as in, on my ZNC server) logs of the last year and a half, and I could just make them public. Really, IRC is public by default, and (I know this is probably a personal opinion) I don't think there's a difference between a logged channel and an unlogged one. If we wanted to make a channel private, we would password-protect it, invite a few people, and make them sign a contract where they swear they won't publish logs, and whatnot.

Anyway, I think a good way to avoid these discussions for future projects is to simply enable logging on all openstack-* channels.

Cheers,
Flavio

Brian Rosmaita wrote:
> The response on the review is overwhelmingly positive (or, strictly speaking, unanimously non-negative). If anyone has an objection, could you please register it before 12:00 UTC on Monday, January 12? https://review.openstack.org/#/c/145025/
> thanks, brian

*From:* David Stanek [dsta...@dstanek.com]
*Sent:* Wednesday, January 07, 2015 4:43 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Glance] IRC logging

It's also important to remember that IRC channels are typically not private and are likely already logged by dozens of people anyway.

On Tue, Jan 6, 2015 at 1:22 PM, Christopher Aedo ca...@mirantis.com wrote:
On Tue, Jan 6, 2015 at 2:49 AM, Flavio Percoco fla...@redhat.com wrote:
> Fully agree... I don't see how enabling logging should be a limitation on freedom of thought. We've used it in Zaqar since day 0 and it's been of great help for all of us. The logging does not remove the need for meetings where decisions and the more relevant/important topics are discussed.

Wanted to second this as well. I'm strongly in favor of logging; looking through backlogs of chats on other channels has been very helpful to me in the past, and it's sure to help others in the future. I don't think there is any danger of someone pointing to a logged IRC conversation in this context as some statement of record.
-Christopher

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com

--
@flaper87
Flavio Percoco

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
Re: [openstack-dev] [Fuel] update existing plugin and add new ones
There was a discussion on whether to put all the fuel plugins into a single repository or keep them separate. The problems with keeping all the fuel plugins in a single repository are the following:

- it is impossible to branch a single plugin; a branch applies to the whole repository and all the plugins
- plugins may have different release cycles, so we would get lots of unnecessary merges/rebases
- rebasing can be a pain: if I have a branch, I have to rebase whenever some other plugin changes
- there has to be someone who manages merging to master, a person who owns all the plugins

Having all the plugins in a single repository will cause interference: if I work on one plugin, I have to consider what the developers of the other plugins are doing.

Anton

On Tue, Jan 6, 2015 at 7:00 PM, samuel.bar...@orange.com wrote:

Hello,

Actually there are two different fuel plugins git repositories, one in stackforge and the other in Mirantis:

https://github.com/stackforge/fuel-plugins
https://github.com/Mirantis/fuel-plugins

It is a little bit confusing. Why do we need two different repositories? Which one should be used in order to:
- add a new plugin
- update/correct an existing one (for example adding 7-mode and E-Series support to the cinder netapp plugin, according to https://bugs.launchpad.net/fuel/+bug/1405186)

How are versions going to be managed? There is a stable/6.0 branch in the stackforge repo but not in the Mirantis one.

Regards,
Samuel Bartel
Orange
This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you.
Re: [openstack-dev] Reply: Re: [neutron][AdvancedServices] Confusion about the solution of the service chaining!
Hi Alan,

We are now in the process of doing it :)

Thanks
Vikram

From: lv.erc...@zte.com.cn [mailto:lv.erc...@zte.com.cn]
Sent: 12 January 2015 12:57
To: Vikram Choudhary
Cc: Dongfeng (C); Dhruv Dhody; Kalyankumar Asangi; OpenStack Development Mailing List (not for usage questions); sumitnaiksa...@gmail.com
Subject: Reply: RE: [openstack-dev] Reply: Re: [neutron][AdvancedServices] Confusion about the solution of the service chaining!

Hi Vikram,

Glad to hear that. Have you implemented that?

BR
Alan

From: Vikram Choudhary vikram.choudh...@huawei.com
To: lv.erc...@zte.com.cn
Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, sumitnaiksa...@gmail.com, Dhruv Dhody dhruv.dh...@huawei.com, Dongfeng (C) albert.dongf...@huawei.com, Kalyankumar Asangi kaly...@huawei.com
Date: 2015/01/12 13:05
Subject: RE: [openstack-dev] Reply: Re: [neutron][AdvancedServices] Confusion about the solution of the service chaining!

Hi Alan,

We have also proposed an idea about SFC. For more details you can refer to https://review.openstack.org/#/c/146315/

Thanks
Vikram

-----Original Message-----
From: Sumit Naiksatam [mailto:sumitnaiksa...@gmail.com]
Sent: 09 January 2015 01:39
To: lv.erc...@zte.com.cn
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Reply: Re: [neutron][AdvancedServices] Confusion about the solution of the service chaining!
Hi Alan,

On Wed, Jan 7, 2015 at 9:54 PM, lv.erc...@zte.com.cn wrote:

Hi Sumit, thanks for your reply. One more question: if I just use 'group-based-policy-service-chaining' to develop the service chaining feature, how do we map the network services in Neutron to the GBP model? All the network services we implemented are based on the Neutron model, but 'group-based-policy-service-chaining' sets up the service chaining based on the GBP model, so how can we set up service chaining for network services based on the Neutron model using 'group-based-policy-service-chaining'?

The current model and implementation leverage the Neutron services as-is (the model is actually agnostic of the service definition/implementation). Will be happy to further discuss this; feel free to ping on #openstack-gbp.

Thanks,
~Sumit.

BR
Alan

From: Sumit Naiksatam sumitnaiksa...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 2015/01/08 10:46
Subject: Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

Hi Alan,

Responses inline...

On Wed, Jan 7, 2015 at 4:25 AM, lv.erc...@zte.com.cn wrote:

Hi, I want to confirm how the project about Neutron Services Insertion, Chaining, and Steering is going. I found that all the code implementing service insertion, service chaining and traffic steering listed in the Juno plan was abandoned:
https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

I also found that we have a new project about GBP and group-based-policy-service-chaining located at:
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

so I'm confused about the solution for service chaining.
Yes, the above two blueprints have been implemented and are available for consumption today as part of the Group-based Policy codebase and release. The GBP model uses a policy trigger to drive the service composition and can accommodate different rendering policies, like realization using NFV SFC.

We are developing the service chaining feature, so we need to know which one is Neutron's choice.

It would be great if you could provide feedback on the current implementation, and perhaps participate and contribute as well.

Are the blueprints about service insertion, service chaining and traffic steering listed in the Juno plan all abandoned?

Some aspects of this are perhaps a good fit in Neutron and others are not. We are looking forward to continuing the discussion on the areas which are potentially a good fit for Neutron (we have had this discussion before as well).

BR
Alan
Re: [openstack-dev] openstack-dev topics now work correctly
On 01/09/2015 08:12 PM, Stefano Maffulli wrote:

Dear all,

If you've tried the topics on this mailing list and haven't received emails, well... we had a problem on our side: the topics were not set up correctly.

Luigi Toscano helped isolate the problem and point at the solution [1]. He noticed that only the QA topic was working, and that it was the only one defined with a single-line regular expression, while all the others used multi-line regexps. I corrected the regexps as described in the mailman FAQ and tested that delivery works correctly. If you want to subscribe only to some topics, now you can.

Thanks again to Luigi for the help.

Cheers,
stef

[1] http://wiki.list.org/pages/viewpage.action?pageId=8683547

Hi! Is it possible to make the topic list more up-to-date with what the real in-use topics are? I would appreciate at least the topics "oslo" and "all". Thanks.
Re: [openstack-dev] [nova][neutron]VIF_VHOSTUSER
Hi Ian,

The spec for "Support vhost-user in libvirt vif driver" [1] has been approved for Kilo. We should have code available by the end of the week. We are also working on a mechanism driver for Neutron [2], and we have started working on a 3rd-party CI.

Regards,
Przemek

[1] https://review.openstack.org/#/c/138736/
[2] https://review.openstack.org/#/c/138742/

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Saturday, January 10, 2015 1:15 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova][neutron] VIF_VHOSTUSER

Once more, I'd like to revisit the VIF_VHOSTUSER discussion [1]. I still think this is worth getting into Nova's libvirt driver, specifically because there's actually no way to distribute this as an extension; since we removed the plugin mechanism for VIF drivers, it absolutely requires a code change in the libvirt driver. This means that there's no graceful way of distributing an aftermarket VHOSTUSER driver for libvirt.

The standing counterargument to adding it is that nothing in the upstream or 3rd-party CI would currently test the VIF_VHOSTUSER code. I'm not sure that's a showstopper, given the code is zero risk to anyone when it's not being used, and clearly is going to be experimental when it's enabled.

So, Nova cores, would it be possible to incorporate this without a corresponding driver in base Neutron?

Cheers,
-- Ian.

[1] https://review.openstack.org/#/c/96140/
Re: [openstack-dev] [TripleO] Switching CI back to amd64
On 01/08/2015 10:22 AM, Derek Higgins wrote:
On 07/01/15 23:41, Ben Nemec wrote:
> I don't feel like we've been all that capacity-constrained lately anyway, so as I said in my other (largely unnecessary, as it turns out) email, I'm +1 on doing this.

Correct, we're not currently constrained on capacity at all (most days we run fewer than 300 jobs), but once the other region is in use we'll be hoping to add jobs to other projects.

Does that mean we could also add some jobs from the 'wanted CI jobs' matrix? :)

https://github.com/openstack-infra/tripleo-ci/blob/master/docs/wanted_ci_jobs.csv

--
Giulio Fidente
GPG KEY: 08D733BA
Re: [openstack-dev] Vancouver Design Summit format changes
On 09/01/15 15:50 +0100, Thierry Carrez wrote:
[huge snip]
> What do you think? Could that work? If not, do you have alternate suggestions?

Love it! Thanks for the thoughtful and detailed email.

Flavio

--
@flaper87
Flavio Percoco
Re: [openstack-dev] Reply: [devstack] Openstack installation issue.
Hi all,

I am still getting the same error while installing OpenStack through devstack. If someone knows the solution, please reply.

On Fri, Jan 9, 2015 at 1:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote:

Hi Liuxinguo,

Thanks for the suggestion, I'll try and make it work.

On Fri, Jan 9, 2015 at 1:24 PM, liuxinguo liuxin...@huawei.com wrote:

Hi Abhishek,

For the error in the first line:

"mkdir: cannot create directory `/logs': Permission denied"

and the error at the end:

"ln: failed to create symbolic link `/logs/screen/screen-key.log': No such file or directory"

The stack user does not have permission on "/", so it cannot create the directory `/logs'. Please check the permission.

liu

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: January 9, 2015 15:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [devstack] Openstack installation issue.

Hi,

I'm trying to install OpenStack through devstack master on my Ubuntu 12.04 VM, but it is failing and generating the following error. If anyone can help me resolve this issue, please do reply.

--
Thanks & Regards,
Abhishek
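For what it's worth, there are two usual ways out of that permission error, both sketched here with variable names from 2015-era devstack that are worth double-checking against the devstack docs: either create /logs up front and hand it to the stack user (sudo mkdir -p /logs && sudo chown stack:stack /logs), or point devstack's log paths at a directory the stack user already owns via localrc:

```shell
# localrc fragment (illustrative): send devstack logs to a stack-owned
# directory instead of /logs. LOGFILE/SCREEN_LOGDIR are the assumed
# variable names from devstack of this era.
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
```

Either approach avoids stack.sh trying to mkdir under "/" as an unprivileged user.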
Re: [openstack-dev] [nova] serial-console *replaces* console-log file?
On Fri, Jan 09, 2015 at 09:15:39AM +0800, Lingxian Kong wrote:

There is an excellent post describing this, for your information:
http://blog.oddbit.com/2014/12/22/accessing-the-serial-console-of-your-nova-servers/

In the last section of that article he describes my issue as well: "It would be nice to have both mechanisms available -- serial console support for interactive access, *and* console logs for retroactive debugging."

Good reference. You can also get some information here: https://review.openstack.org/#/c/132269/

2015-01-07 22:38 GMT+08:00 Markus Zoeller mzoe...@de.ibm.com:

The blueprint serial-ports introduced a serial console connection to an instance via websocket. I'm wondering:
* why does enabling the serial console *replace* writing into the log file [1]?
* how is one supposed to retrieve the boot messages *before* one connects?

The good point of using the serial console is that you can build, with a few lines of Python, an interactive console to debug your virtual machine.

I really like the feature of the serial console and I don't doubt its usefulness. I'm worried about *not* persisting the OS messages into a file when I activate the serial console feature. Why not have both?

Regards,
Markus Zoeller
IRC: markus_z
Launchpad: mzoeller
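For context, the behavior being discussed is toggled by the [serial_console] section of nova.conf; as far as I recall, the Kilo-era options look roughly like the sketch below (option names should be verified against the nova configuration reference), and with enabled = True the libvirt driver attaches the guest's serial port to a TCP socket instead of the console.log file — which is exactly the trade-off Markus is questioning:

```ini
# nova.conf sketch (option names as I recall them; verify before use)
[serial_console]
enabled = True
# TCP port range on the compute host to bind guest serial consoles to
port_range = 10000:20000
# URL the API hands back to clients for the websocket proxy
base_url = ws://127.0.0.1:6083/
# address the proxy uses to reach the compute host
proxyclient_address = 127.0.0.1
```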
Re: [openstack-dev] Vancouver Design Summit format changes
Tim Bell wrote:
> Let's ask the operators' opinions too on the openstack-operators mailing list.

Sure, that's my next step. I first wanted to check that this change was fine for historic Design Summit participants. I'll follow up with operators and also discuss it at the Ops meetup.

FWIW, the proposed format is partly inspired by the recent Ops Summit (with its single general session and multiple workgroup sessions), so I don't expect the new format to be a surprise there. I agree there is a significant scheduling challenge that we still need to solve (in all cases), but the format itself should be fine.

Grouping the two events into one is also about facilitating exchanges and sharing the same space and lunches, further closing the feedback loop.

--
Thierry Carrez (ttx)
[openstack-dev] [mistral] Cancelling today's team meeting
Hi,

We decided to cancel today's team meeting because some key members of the team won't be present. The next one will be held on Jan 19.

Renat Akhmerov
@ Mirantis Inc.
[openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS
Hello. We are working on the modularization of OpenStack deployment by puppet manifests in the Fuel library [0]. Each deploy step should be post-verified with some testing framework as well. I believe the framework should:

* be shipped as a part of the Fuel library for puppet manifests instead of orchestration or Nailgun backend logic;
* allow the deployer to verify results right in place, at the node being deployed, for example with a rake tool;
* be compatible with, and easy to integrate into, the existing orchestration in Fuel, and Mistral as an option.

It looks like the test resources provided by Serverspec [1] are a good option; what do you think? What plans does the Fuel Nailgun team have for testing the results of deploy steps, aka tasks? The spec for the blueprint gives no clear answer.

[0] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
[1] http://serverspec.org/resource_types.html

--
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando
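To make the "verify results right in place" idea concrete: Serverspec lets you declare assertions like "this port is listening" or "this service is running" against the host being deployed. A hand-rolled Python stand-in for the simplest such check (purely illustrative — not part of any Fuel or Serverspec API; Serverspec's value is precisely that you don't write this by hand) might look like:

```python
# Illustrative stand-in for a Serverspec-style "port is listening"
# post-deploy check, written by hand in Python.
import socket

def port_is_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A post-deploy verification step would run a battery of such checks on the node right after its deploy task finishes and fail the task on the first unmet assertion.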
Re: [openstack-dev] Vancouver Design Summit format changes
On Fri, Jan 9, 2015 at 8:50 AM, Thierry Carrez thie...@openstack.org wrote:

Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the Design Summit format for Vancouver, changes on which we'd very much like to hear your feedback. The problems we are trying to solve are the following:

- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more work done

While some sessions benefit from large exposure, loads of feedback and large rooms, some others are just workgroup-oriented work sessions that benefit from smaller rooms, less exposure and more whiteboards. Smaller rooms are also cheaper space-wise, so they allow us to scale more easily to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at the Design Summit. Ops feedback is in my opinion part of the design of OpenStack, so the Ops Summit would become a track within the forward-looking Design Summit. Tracks may use two separate types of sessions:

* Fishbowl sessions

Those sessions are for open discussions where a lot of participation and feedback is desirable. Those would happen in large rooms (100 to 300 people, organized in fishbowl style with a projector). Those would have catchy titles and appear on the general Design Summit schedule. We would have space for 6 or 7 of those in parallel during the first 3 days of the Design Summit (we would not run them on Friday, to reproduce the successful Friday format we had in Paris).

* Working sessions

Those sessions are for a smaller group of contributors to get specific work done or prioritized. Those would happen in smaller rooms (20 to 40 people, organized in boardroom style with loads of whiteboards). Those would have a blanket title (like "infra team working session") and redirect to an etherpad for more precise and current content, which should limit out-of-team participation. Those would replace project pods. We would have space for 10 to 12 of those in parallel for the first 3 days, and 18 to 20 of those in parallel on the Friday (by reusing fishbowl rooms).

Each project track would request some mix of sessions ("We'd like 4 fishbowl sessions, 8 working sessions on Tue-Thu + half a day on Friday") and the TC would arbitrate how to allocate the limited resources. Agendas for the fishbowl sessions would need to be published in advance, but agendas for the working sessions could be decided dynamically from an etherpad.

By making larger use of smaller spaces, we expect that setup to let us accommodate the needs of more projects. By merging the two separate Ops Summit and Design Summit events, it should make the Ops feedback an integral part of the Design process rather than a second-class citizen. By creating separate working session rooms, we hope to evolve the pod concept into something where it's easier for teams to get work done (less noise, more whiteboards, clearer agenda).

What do you think? Could that work? If not, do you have alternate suggestions?

This looks great, thanks for continuing to evolve the Summit format!

Kyle

--
Thierry Carrez (ttx)
Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks
On Mon, Jan 12, 2015 at 3:33 PM, Angus Salkeld asalk...@mirantis.com wrote:
On Mon, Jan 12, 2015 at 10:17 PM, Konstantin Danilov kdani...@mirantis.com wrote:

Boris,

Moving from sync HTTP to something like websockets requires a lot of work and is not directly connected with the API issue. When the openstack api servers begin to support websockets, it will be easy to change the implementation of the monitoring thread without breaking compatibility. At the moment, periodic polling from an additional thread looks reasonable to me, and it creates the same amount of HTTP requests as any current implementation.

The BP is not about improving performance, but about providing a convenient and common API to handle background tasks.

> So we won't need to retrieve information about the object 100500 times.

As I said before, this API creates the same amount of load as any code we currently use to check a background task. It can even decrease the load, due to request aggregation in some cases (but there are points to discuss).

> As well, this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(..., sync=True)

This is a completely different pattern. It is a blocking call, which doesn't allow you to start two (or more) background tasks from the same thread and do some calculations while they run in the background.

Except if you use threads (eventlet or other) - I am still struggling to enjoy Futures/yield-based flow control; a lost battle I guess :(.

On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic bpavlo...@mirantis.com wrote:

Konstantin,

I believe it's better to work on the server side, and use some modern approach like websockets for async operations, so we won't need to retrieve information about the object 100500 times. And then use this feature in the clients.

> create_future = novaclient.servers.create_async()
> ...
> vm = create_future.result()

As well, this pattern doesn't look great. I would prefer to see something like:

vm = novaclient.servers.create(..., sync=True)

Best regards,
Boris Pavlovic

On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov kdani...@mirantis.com wrote:

Hi all.

There is a set of openstack api functions which start background actions and return preliminary results, like 'novaclient.create'. Those functions require periodically checking results and handling timeouts/errors (and often cleanup + restart helps to fix an error). The check/retry/cleanup code is duplicated across a lot of core projects - heat, tempest, rally, etc., for example - and definitely in many third-party scripts.

We have some very similar code at the moment, but we are keen to move away from it to something like making use of rpc .{start,end} notifications, to reduce the load we put on keystone and friends.

This is a nice approach for core projects, yet novaclient users typically can't use such an approach. But the nice thing about futures is that we can have different engines (websockets, sync HTTP, rpc with callbacks, etc.) behind the same API, and hide all implementation details behind it. It's even possible to use them simultaneously - a different engine would be used to handle different calls.

I propose to provide a common high-level API for such functions, which uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to represent a background task. The idea is to add to each background-task-starter function a complementary call that returns a 'future' object. E.g.

create_future = novaclient.servers.create_async()
...
vm = create_future.result()

Is that going to return on any state change, or do you pass in a list of acceptable states?

In general it should return the result if the background task completed successfully, or raise an exception if it fails. For servers I currently use 'active' as the success marker and 'error' or a timeout for the exception (https://github.com/koder-ua/os_api/blob/master/os_api/nova.py#L74), and hope that the expected states can be calculated from the api call/parameters automatically, but I'm not 100% sure. So, yes, an additional parameter might be required.

-Angus

This allows unifying (and optimizing) monitoring cycles, retries, etc. Please find the complete BP at https://github.com/koder-ua/os_api/blob/master/README.md

Thanks

--
Kostiantyn Danilov aka koder
http://koder.ua
Principal software engineer, Mirantis
skype: koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com
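For what it's worth, the pattern under discussion can be sketched in a few lines with the stdlib's concurrent.futures. Everything below is illustrative: FakeServerManager stands in for novaclient's real servers manager, and create_async is the hypothetical complementary call from the BP, not an existing novaclient API:

```python
# Sketch of a future-based wrapper around a "start background task" call.
# FakeServerManager and create_async are illustrative, not real novaclient API.
import concurrent.futures
import itertools
import time

class FakeServerManager:
    """Stand-in for novaclient.servers: create() returns immediately,
    get() eventually reports the server as ACTIVE."""
    def __init__(self):
        self._polls = itertools.count()

    def create(self, name):
        return {"name": name, "status": "BUILD"}

    def get(self, server):
        # Pretend the server becomes ACTIVE after a few polls.
        status = "ACTIVE" if next(self._polls) >= 3 else "BUILD"
        return dict(server, status=status)

def create_async(manager, name, poll_interval=0.01, timeout=5.0):
    """Start the create and return a Future that resolves once the server
    reaches ACTIVE, or raises on ERROR / timeout."""
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def wait_active():
        server = manager.create(name)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            server = manager.get(server)
            if server["status"] == "ACTIVE":
                return server
            if server["status"] == "ERROR":
                raise RuntimeError("server went to ERROR state")
            time.sleep(poll_interval)
        raise TimeoutError("server never became ACTIVE")

    future = executor.submit(wait_active)
    executor.shutdown(wait=False)  # queued task keeps running; no new submissions
    return future

create_future = create_async(FakeServerManager(), "vm1")
# ...the caller is free to start more tasks or do other work here...
vm = create_future.result()
print(vm["status"])  # ACTIVE
```

Under the hood this is just the polling loop everyone already writes, moved behind a Future; swapping the polling engine for websockets or RPC notifications later would not change the caller-facing API, which is the point of the BP.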
[openstack-dev] [neutron] Meeting tomorrow: Going over Critical and High priority Kilo-2 specs
Folks:

During tomorrow's Neutron meeting, I'd like to spend a little time going over the approved specs marked as Critical and High priority [1]. We're about 3 weeks out from Kilo-2, so I'd like to get a feel for how these are coming along.

If you are assigned one of these specs and can't make the meeting, feel free to find me on IRC, reply to the thread, or update your BP in LP with any remarks there.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
[openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)
Hello all,

in Paris (and later on, on IRC and the mailing list) I began to ask around about providing a DRBD storage driver for Nova. This is an alternative to using iSCSI for block storage access, and would be especially helpful for backends already using DRBD for replicated storage.

The spec at https://review.openstack.org/#/c/134153/ was not approved in December on the grounds that the DRBD Cinder driver https://review.openstack.org/#/c/140451/ should be merged first; because of (network) timeouts during the K-1 milestone (and then merge conflicts, rebased dependencies, etc.) it wasn't merged until recently (Jan 5th).

Now that the Cinder driver is already upstream, we'd like to ask for approval of the Nova driver - it would provide quite a performance boost over moving all block storage data via iSCSI.

Thank you for your kind consideration!

Regards,

Phil

--
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
: DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS
Hi, the Puppet OpenStack community uses Beaker for acceptance testing. I would consider it as an option [2] [2] https://github.com/puppetlabs/beaker -- Best regards, Sergii Golovatiuk, Skype #golserge IRC #holser On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya bdobre...@mirantis.com wrote: Hello. We are working on the modularization of OpenStack deployment by puppet manifests in the Fuel library [0]. Each deploy step should be post-verified with some testing framework as well. I believe the framework should: * be shipped as a part of the Fuel library for puppet manifests instead of orchestration or Nailgun backend logic; * allow the deployer to verify results right in place, at the node being deployed, for example with a rake tool; * be compatible with / easy to integrate into the existing orchestration in Fuel, and Mistral as an option? It looks like the test resources provided by Serverspec [1] are a good option, what do you think? What plans does the Fuel Nailgun team have for testing the results of deploy steps aka tasks? The spec for the blueprint gives no clear answer. [0] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization [1] http://serverspec.org/resource_types.html -- Best regards, Bogdan Dobrelya, Skype #bogdando_at_yahoo.com Irc #bogdando
Re: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for Pacemaker STONITH (HA fencing)
-- Message: 16 Date: Wed, 31 Dec 2014 17:41:10 -0800 From: Andrew Woodward xar...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for Pacemaker STONITH (HA fencing) Message-ID: CACEfbZjMZX1+v+0KsOmqf1JLCvOqgk0UMBvALO4fCy_=dpr...@mail.gmail.com Content-Type: text/plain; charset=utf-8 Bogdan, Do you think that the existing post-deployment hook is sufficient to implement this, or does additional plugin development need to be done to support it? On Dec 30, 2014 3:39 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote: Hello. Post-deployment hooks are hardcoded and a bad place to contribute code, I believe. Plugins are a framework and should be used instead in further development. If someone wants to use this plugin to configure any custom power management device type, he or she should: * make sure a corresponding fence agent script exists among the others shipped with the standard fence-agents package, * provide the required parameters and values for this agent, put them in a pcs_fencing YAML file, and apply the plugin's puppet manifest on the nodes (see the plugin dev docs) and that's it. Hello. There is a long-living blueprint [0] about HA fencing of failed nodes in a Corosync and Pacemaker cluster. Happily, in the 6.0 release we have a pluggable architecture supported in Fuel. I propose the following implementation [1] (WIP repo [2]) for this feature as a puppet plugin. It addresses the related blueprint for HA fencing in the puppet manifests of the Fuel library [3]. For the initial version, all the data definitions for power management devices should be done manually in YAML files (see the plugin's README.md file). Later it could be done in a more user-friendly way, as a part of the Fuel UI perhaps.
Note that a similar approach - YAML data structures filled in by the cloud admin and passed to the Fuel Orchestrator automatically at the PXE provision stage - could be used as well for the Power management blueprint, see the related ML thread [4]. Please also note that dev docs for Fuel plugins were merged recently [5], where you can find how to build and install this plugin. [0] https://blueprints.launchpad.net/fuel/+spec/ha-fencing [1] https://review.openstack.org/#/c/144425/ [2] https://github.com/bogdando/fuel-plugins/tree/fencing_puppet_newprovider/ha_fencing [3] https://blueprints.launchpad.net/fuel/+spec/fencing-in-puppet-manifests [4] http://lists.openstack.org/pipermail/openstack-dev/2014-November/049794.html [5] http://docs.mirantis.com/fuel/fuel-6.0/plugin-dev.html#what-is-pluggable-architecture -- Best regards, Bogdan Dobrelya, Skype #bogdando_at_yahoo.com Irc #bogdando
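To make the "YAML data definitions" idea above concrete, here is a purely illustrative sketch of the kind of per-node data a pcs_fencing file would need for an IPMI-based STONITH setup. Every key name below is made up for illustration (the authoritative schema lives in the plugin's README.md); only the fence_ipmilan agent and its ipaddr/login/passwd/lanplus options come from the real fence-agents package.

```yaml
# Hypothetical pcs_fencing data file -- the key names here are
# illustrative only; consult the plugin's README.md for the real schema.
fence_primitives:
  node-1:
    agent: fence_ipmilan        # must match a script shipped in fence-agents
    parameters:
      ipaddr: 10.20.0.101       # BMC address of the node to be fenced
      login: admin
      passwd: secret
      lanplus: true
```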
[openstack-dev] [Fuel] Dropping Python-2.6 support
Folks, as it was planned and then announced at the OpenStack summit, OpenStack services have deprecated Python 2.6 support. At the moment several services and libraries are already only compatible with Python 2.7+, and there is no sense in trying to restore compatibility with Py2.6 because OpenStack infra does not run tests for that version of Python. The point of this email is that some components of Fuel, say, Nailgun and Fuel Client, are still only tested with Python 2.6. Fuel Client, in its turn, is about to use OpenStack CI's python jobs for running unit tests. That means that in order to keep it compatible with Py2.6 there would be a need to run a separate python job in FuelCI. However, I believe that forcing things to stay compatible with 2.6 when the rest of the ecosystem has decided not to, and when Py2.7 is already available in the main CentOS repo, sounds like a battle against common sense. So my proposal is to drop 2.6 support in Fuel 6.1. - romcheg signature.asc Description: Message signed with OpenPGP using GPGMail
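For context on why the 2.7 floor creeps in on its own: a quick sketch of syntax that is valid on Python 2.7 (and 3.x) but raises SyntaxError on 2.6. The snippet is illustrative only, not taken from Fuel code.

```python
import os

# Dict and set comprehensions: added in Python 2.7
# (2.6 only has generator-based workarounds).
squares = {n: n * n for n in range(4)}
evens = {n for n in range(10) if n % 2 == 0}

# Auto-numbered str.format fields: "{}" needs an explicit index
# ("{0}") on Python 2.6.
msg = "{} squares, {} evens".format(len(squares), len(evens))

# Multiple context managers in a single with statement: 2.7+
# (2.6 needed nesting or contextlib.nested).
with open(os.devnull, "w") as a, open(os.devnull, "w") as b:
    a.write(msg)  # b is only here to demonstrate the syntax

print(msg)  # -> 4 squares, 5 evens
```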
Re: [openstack-dev] [Fuel] Image based provisioning
Hello Andrew, thank you for pointing out all the issues. I left more detailed comments inline. On Tue, Jan 6, 2015 at 3:51 AM, Andrew Woodward xar...@gmail.com wrote: Here is a list of the issues I ran into using IBP before the 23rd. 5 appears to not be merged yet and must be resolved prior to making IBP the default, as you can't restart a provisioned node. 1. A full cobbler template is generated for the IBP node; if you wanted to re-provision the node, you would have to erase the cobbler profile, bootstrap and call the node provision API. If you forced it back to netboot (which can be done with installer methods) it loads the installer instead of the bootstrap image. Sounds like we have a bug here. 2. We need to be careful when considering removing cobbler from Fuel; it's still being used in IBP to manage dnsmasq (DHCP leases for the fuelweb_admin iface) and bootp/PXE loading profiles. Yes, indeed. We're going to implement our own dnsmasq-driven service to manage everything cobbler performed for us earlier. 3. After a time all DNS names for nodes expire (ssh node-1 - Could not resolve hostname) even though they are still in cobbler (cobbler system list). Definitely a bug. 4. The fuel-agent log is not in the logs UI. Again, it's a bug. 5. Image-based nodes won't set up the network after first boot https://bugs.launchpad.net/fuel/+bug/1398207 The fix is available on the review board. I hope it'll be merged soon. 6. Image-based nodes are basically impossible to read network settings on unless you know everything about cloud-init. Sorry, I didn't get you. What did you mean? For image-based provisioning, only the interface looking at the admin network will be set up by cloud-init. All other network configuration will be done later and without cloud-init. On Wed, Dec 17, 2014 at 3:08 AM, Vladimir Kozhukalov vkozhuka...@mirantis.com wrote: In the image-based case we need either to update the image or run yum update/apt-get upgrade right after first boot (the second option partly devalues the advantages of the image-based scheme).
Besides, we are planning to re-implement the image build script so as to be able to build images on the master node (but unfortunately 6.1 is not a realistic estimate for that). Vladimir Kozhukalov On Wed, Dec 17, 2014 at 5:03 AM, Mike Scherbakov mscherba...@mirantis.com wrote: Dmitry, as part of the 6.1 roadmap, we are going to work on the patching feature. There are two types of workflow to consider: - patch an existing environment (already deployed nodes, aka target nodes) - ensure that new nodes, added to existing and already patched envs, will install updated packages too. In the case of an anaconda/preseed install, we can simply update the repo on the master node and run createrepo/etc. What do we do in the case of an image? Will we need a separate repo alongside the main one, an updates repo - and do a post-provisioning yum update to fetch all patched packages? On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin ada...@mirantis.com wrote: Adding the Mellanox team explicitly. Gil, Nurit, Aviram, can you confirm that you tested that feature? It can be enabled on every fresh ISO. You just need to enable the Experimental mode (please see the documentation for instructions). On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote: Guys, we are about to enable image-based provisioning in our master by default. I'm trying to figure out the requirements for this change. As far as I know, it was not tested in the scale lab. Is it true? Have we ever run a full system tests cycle with this option? Do we have any other pre-requirements?
-- Andrey Danin ada...@mirantis.com skype: gcon.monolake ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mike Scherbakov #mihgen ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Andrew Mirantis Ceph community ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks
On Mon, Jan 12, 2015 at 10:17 PM, Konstantin Danilov kdani...@mirantis.com wrote: Boris, Moving from sync HTTP to something like websockets requires a lot of work and is not directly connected with the API issue. When openstack API servers begin to support websockets, it would be easy to change the implementation of the monitoring thread without breaking compatibility. At the moment periodic polling from an additional thread looks reasonable to me, and it creates the same amount of HTTP requests as the current implementation. The BP is not about improving performance, but about providing a convenient and common API to handle background tasks. So we won't need to retrieve 100500 times information about object. As I said before - this API creates the same amount of load as any code which we currently use to check background tasks. It can even decrease load due to request aggregation in some cases (but there are points to discuss). As well this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(, sync=True) This is a completely different pattern. It is a blocking call, which doesn't allow you to start two (or more) background tasks from the same thread and make some calculations while they run in the background. Except if you use threads (eventlet or other) - I am still struggling to enjoy Futures/yield based flow control, lost battle I guess :(. On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic bpavlo...@mirantis.com wrote: Konstantin, I believe it's better to work on the server side, and use some modern approach like websockets for async operations. So we won't need to retrieve 100500 times information about object. And then use this feature in clients. create_future = novaclient.servers.create_async() . vm = create_future.result() As well this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(, sync=True) Best regards, Boris Pavlovic On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov kdani...@mirantis.com wrote: Hi all.
There is a set of openstack API functions which start background actions and return preliminary results - like 'novaclient.create'. Those functions require periodically checking results and handling timeouts/errors (and often cleanup + restart helps to fix an error). This check/retry/cleanup code is duplicated over a lot of core projects - as examples: heat, tempest, rally, etc., and definitely in many third-party scripts. We have some very similar code at the moment, but we are keen to move away from it to something like making use of rpc .{start,end} notifications to reduce the load we put on keystone and friends. I propose to provide a common high-level API for such functions, which uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to represent a background task. The idea is to add to each background-task-starter function a complementary call that returns a 'future' object. E.g. create_future = novaclient.servers.create_async() . vm = create_future.result() Is that going to return on any state change or do you pass in a list of acceptable states? -Angus This allows us to unify (and optimize) monitoring cycles, retries, etc.
Please find the complete BP at https://github.com/koder-ua/os_api/blob/master/README.md Thanks -- Kostiantyn Danilov aka koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com
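The future-based flow discussed in this thread can be sketched with the stdlib concurrent.futures module. Everything below is a toy stand-in: FakeServersApi and its create_async() are hypothetical illustrations of the pattern, not the actual novaclient API proposed in the BP.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a client whose create() kicks off a
# server-side background task. The real proposal would poll the
# OpenStack API from a shared monitoring thread; here a thread pool
# plays that role.
class FakeServersApi(object):
    def __init__(self):
        self._executor = ThreadPoolExecutor(max_workers=4)

    def _create_and_wait(self, name, delay):
        time.sleep(delay)  # simulate waiting for BUILD -> ACTIVE
        return {"name": name, "status": "ACTIVE"}

    def create_async(self, name, delay=0.05):
        # Return a future instead of blocking the caller.
        return self._executor.submit(self._create_and_wait, name, delay)

servers = FakeServersApi()
# Start several creates without blocking the calling thread...
futures = [servers.create_async("vm-%d" % i) for i in range(3)]
# ...do unrelated work here while the tasks run in the background...
results = [f.result() for f in futures]  # block only when needed
print(sorted(r["name"] for r in results))  # -> ['vm-0', 'vm-1', 'vm-2']
```

This is exactly the contrast Konstantin draws with a blocking `create(sync=True)`: with futures, the caller decides when (and whether) to wait.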
Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
Hi, Regarding the last issue, I fixed it by logging in and manually running pip install docutils. The image was created successfully. Now the problem is that nodepool is not able to log in to instances created from that image. I have NODEPOOL_SSH_KEY exported in the screen where nodepool is running, and I am also able to log in to the instance as user nodepool, but nodepoold gives this error: 2015-01-12 14:19:03,095 DEBUG paramiko.transport: Switch to new keys ... 2015-01-12 14:19:03,109 DEBUG paramiko.transport: Trying key c03fbf64440cd0c2ecbc07ce4ed59804 from /home/nodepool/.ssh/id_rsa 2015-01-12 14:19:03,135 DEBUG paramiko.transport: userauth is OK 2015-01-12 14:19:03,162 INFO paramiko.transport: Authentication (publickey) failed. 2015-01-12 14:19:03,185 DEBUG paramiko.transport: Trying discovered key c03fbf64440cd0c2ecbc07ce4ed59804 in /home/nodepool/.ssh/id_rsa 2015-01-12 14:19:03,187 DEBUG paramiko.transport: userauth is OK ^C2015-01-12 14:19:03,210 INFO paramiko.transport: Authentication (publickey) failed. 2015-01-12 14:19:03,253 DEBUG paramiko.transport: EOF in transport thread 2015-01-12 14:19:03,254 INFO nodepool.utils: Password auth exception. Try number 4...
echo $NODEPOOL_SSH_KEY B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v cat /home/nodepool/.ssh/id_rsa.pub ssh-rsa B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v jenkins@jenkins-cinderci ssh ubuntu@10.100.128.136 -v OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /home/nodepool/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 10.100.128.136 [10.100.128.136] port 22. debug1: Connection established. debug1: Offering RSA public key: /home/nodepool/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to 10.100.128.136 ([10.100.128.136]:22). ... I was able to login into the template instance and also am able to login into the slave instances. Also nodepoold was able to login into template instance but now it fails loging in into slave. I tried running it as either nodepol or jenkins users, same result. 
Thanks, Eduard On Mon, Jan 12, 2015 at 2:09 PM, Eduard Matei eduard.ma...@cloudfounders.com wrote: Hi, Back with another error during image creation with nodepool: 2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c: Downloading python-daemon-2.0.1.tar.gz (62kB) 2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c: Traceback (most recent call last): 2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c: File string, line 20, in module 2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c: File /tmp/pip-build-r6RJKq/python-daemon/setup.py, line 27, in module 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: import version 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: File version.py, line 51, in module 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: import docutils.core 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: ImportError: No module named docutils.core 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Complete output from command python setup.py egg_info: 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Traceback (most recent call last): 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: File string, line 20, in module 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: File /tmp/pip-build-r6RJKq/python-daemon/setup.py, line 27, in module 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: import version 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c: 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c: File version.py, line 51, in module 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c: 2015-01-12 13:05:18,026 INFO
Re: [openstack-dev] [nova][NFV][qa][Telco] Testing NUMA, CPU pinning and large pages
Hi Vladik, I added the [Telco] tag. see below.. On 12.01.2015 at 03:02, Vladik Romanovsky vladik.romanov...@enovance.com wrote: Hi everyone, Following Steve Gordon's email [1], regarding CI for NUMA, SR-IOV, and other features, I'd like to start a discussion about the NUMA testing in particular. Recently we have started work to test some of these features. The current plan is to use the functional tests, in the Nova tree, to exercise the code paths for NFV use cases. In general, these will contain tests to cover various scenarios regarding NUMA, CPU pinning and large pages, and to validate correct placement/scheduling. I think we need to determine where these patches belong. IMHO the Nova tree makes sense, but I am unsure if Tempest is the right place. I would say all tests with a general purpose can be located in Tempest, especially scenario tests. Since we are already planning to have an external CI system, it would make sense to keep them somewhere outside and use the tempest lib (when ready). Regards Marc In addition to the functional tests in Nova, we have also proposed two basic scenarios in Tempest [2][3]: one to make sure that an instance can boot with a minimal NUMA configuration (a topology that every host should have) and one that requests an impossible topology and fails with an expected exception. This work doesn't eliminate the need for testing on real hardware; however, these tests should provide coverage for the features that are currently being submitted upstream and hopefully be a good starting point for future testing. Thoughts?
Vladik [1] http://lists.openstack.org/pipermail/openstack-dev/2014-November/050306.html [2] https://review.openstack.org/143540 [3] https://review.openstack.org/143541
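For readers unfamiliar with the features under test: the scenarios discussed above drive flavors carrying the guest NUMA, CPU pinning and huge-page extra specs. A minimal sketch of such a flavor follows; the flavor name m1.numa is made up, and this assumes the Kilo-era hw:* extra specs on a cloud whose hosts actually support them.

```shell
# Illustrative only: create a 4-vCPU/4GB flavor and attach the
# NUMA/pinning extra specs exercised by the proposed tests.
nova flavor-create m1.numa auto 4096 20 4
nova flavor-key m1.numa set hw:numa_nodes=2        # spread guest over 2 NUMA nodes
nova flavor-key m1.numa set hw:cpu_policy=dedicated  # pin vCPUs to host pCPUs
nova flavor-key m1.numa set hw:mem_page_size=large   # back guest RAM with huge pages
```

Booting an instance from this flavor on a host without a matching topology is what the "impossible topology" negative test expects to fail.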
[openstack-dev] [nova] Requesting exception for resource-object-models blueprint spec
https://review.openstack.org/#/c/127609/ This is a fundamental building block for the #3 priority (Scheduler) work in Kilo. It's been through 11 revisions so far and has support from at least one nova-driver and 4 non-drivers. This work is a building block for the scheduler because it changes the way we publish and consume a set of resources managed by the resource tracker and scheduler subsystems. It also replaces the extensible resource tracker with a more robust method of adding new resource classes. Thanks, -jay __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][NFV][qa][Telco] Testing NUMA, CPU pinning and large pages
On Mon, Jan 12, 2015 at 02:47:19PM +0100, Marc Koderer wrote: Hi Vladik, I added the [Telco] tag. see below.. Am 12.01.2015 um 03:02 schrieb Vladik Romanovsky vladik.romanov...@enovance.com: Hi everyone, Following Steve Gordon's email [1], regarding CI for NUMA, SR-IOV, and other features, I'd like to start a discussion about the NUMA testing in particular. Recently we have started a work to test some of these features. The current plan is to use the functional tests, in the Nova tree, to exercise the code paths for NFV use cases. In general, these will contain tests to cover various scenarios regarding NUMA, CPU pinning, large pages and validate a correct placement/scheduling. I think we need to determine where these patches are belonging to. So IMHO Nova tree makes sense. But I am unsure if Tempest is the right place. I would say all tests with a general propose can be located in Tempest especially scenario tests. Since we are already planning to have a external CI system it would make sense to keep them somewhere outside and use the tempest lib (when ready). NUMA, huge pages and CPU pinning are all general-purpose Nova features. While NFV / Telcos will be a large user of them, they're not the only one. As such these features should be tested in a general Nova test suite, as we would for any other Nova functionality, not in a telco-specific test suite, as that just reinforces the impression that this is a niche feature only useful for a few use cases. Regards, Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [Telco] [NFV] Service function chaining
Hi Yuriy, FYI there is a project proposal on the opnfv wiki on this issue: https://wiki.opnfv.org/requirements_projects/openstack_based_vnf_forwarding_graph On Wed, Jan 7, 2015 at 12:22 AM, yuriy.babe...@telekom.de wrote: Hi all, as discussed in the last IRC meeting we prepared first thoughts on the requirements for Service Function Chaining (SFC) and the relevant use case. We have an etherpad for initial draft ideas; later on we should move it to the wiki. All comments are very welcome. https://etherpad.openstack.org/p/kKIqu2ipN6 Best, Yuriy Deutsche Telekom -- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OpenDaylight, OpenCompute aficionado
Re: [openstack-dev] Re: [devstack] OpenStack installation issue.
Since it seems a permission error, this may help: http://docs.openstack.org/developer/devstack/guides/single-machine.html#installation-shake-and-bake NB: in my environment I had to edit the sudoers file with a text editor (echoing the string didn't work). On 01/12/15 09:33, Abhishek Shrivastava wrote: Hi all, I am still getting the same error while installing OpenStack through devstack. If someone knows the solution please reply. On Fri, Jan 9, 2015 at 1:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com mailto:abhis...@cloudbyte.com wrote: Hi Liuxinguo, Thanks for the suggestion, I'll try and make it work. On Fri, Jan 9, 2015 at 1:24 PM, liuxinguo liuxin...@huawei.com mailto:liuxin...@huawei.com wrote: Hi Abhishek, For the error in the first line: "mkdir: cannot create directory `/logs': Permission denied" and the error at the end: "ln: failed to create symbolic link `/logs/screen/screen-key.log': No such file or directory" The stack user does not have permission on "/" so it cannot create the directory `/logs'. Please check the permission. liu *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com mailto:abhis...@cloudbyte.com] *Sent:* Friday, January 09, 2015 15:26 *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [devstack] OpenStack installation issue. Hi, I'm trying to install OpenStack through devstack master on my Ubuntu 12.04 VM, but it is failing and generating the following error. If anyone can help me resolve this issue please do reply.
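A sketch of the permission fix liu describes above, assuming the devstack user is named stack (adjust to your setup):

```shell
# Pre-create the log directory devstack fails on and hand it to the
# devstack user (assumed here to be "stack").
sudo mkdir -p /logs
sudo chown stack:stack /logs

# Alternatively, point devstack's log variables in local.conf at a
# directory the stack user can already write to, e.g.:
#   LOGDIR=$HOME/logs
#   SCREEN_LOGDIR=$HOME/logs/screen
```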
-- *Thanks Regards,* *Abhishek* ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- *Thanks Regards, * *Abhishek* -- *Thanks Regards, * *Abhishek* __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [devstack]Openstack installation issue.
What's the log now? On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi Samta, Thanks for the suggestion but the problem remains the same. On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare samtarang...@gmail.com wrote: Hey Abhishek, As a quick fix to this problem, edit the file devstack/lib/keystone at line 170, in the function configure_keystone: change this line to add sudo: sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF Regards Samta On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi all, I am writing this again because I have been getting the same error for the past week while installing OpenStack through devstack on Ubuntu 13.10. I am attaching the new log file; please go through it, and if anyone can provide the solution please do reply. -- *Thanks Regards,* *Abhishek*
Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
Hi, Back with another error during image creation with nodepool:

2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c: Downloading python-daemon-2.0.1.tar.gz (62kB)
2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c: Traceback (most recent call last):
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:   File "<string>", line 20, in <module>
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:   File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:     import version
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:   File "version.py", line 51, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:     import docutils.core
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: ImportError: No module named docutils.core

(pip then repeats the same traceback under "Complete output from command python setup.py egg_info" and finishes with:)

2015-01-12 13:05:18,054 INFO nodepool.image.build.local_01.d-p-c: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-r6RJKq/python-daemon

The python-daemon pip package fails to install because of the ImportError. Any ideas how to fix this? Thanks, Eduard On Fri, Jan 9, 2015 at 10:00 PM, Patrick East patrick.e...@purestorage.com wrote: Thanks for the links! After digging around in my configs I figured out the issue: I had a typo in my JENKINS_SSH_PUBLIC_KEY_NO_WHITESPACE (copy-paste cut off a character...). I managed to put the right one in the key for nova to use, so it was able to log in to set up the instance, but I didn't end up with the right thing in the NODEPOOL_SSH_KEY variable. -Patrick On Fri, Jan 9, 2015 at 9:25 AM, Asselin, Ramy ramy.asse...@hp.com wrote: Regarding SSH keys and logging into nodes, you need to set the NODEPOOL_SSH_KEY variable. 1. I documented my notes here: https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample#L48 2. This is also documented 'officially' here: https://github.com/openstack-infra/nodepool/blob/master/README.rst 3.
Also, I had an issue getting puppet to do the right thing with keys, so it gets forced here: https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L197 Ramy *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com] *Sent:* Friday, January 09, 2015 8:58 AM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Thanks Patrick, Indeed it seems the cloud provider was setting up VMs on a bridge whose eth interface was DOWN, so the VMs could not connect to the outside world and the prepare script was failing. Looking into that. Thanks, Eduard On Fri, Jan 9, 2015 at 6:44 PM, Patrick East patrick.e...@purestorage.com wrote: Ah yeah, sorry, I should have specified; I am having it run prepare_node_devstack.sh from the infra repo. I see it adding the same public key to the user specified in my nodepool.yaml. The strange part (and I need to double check... it feels like it can't be right) is that on my master node the nodepool user's id_rsa changed at some point in the process. -Patrick On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote: [...] On a related note, I am having issues with the SSH keys. Nodepool is able to log in to the node to set up the template and create an image from it, but then fails to log in to a build node. Have you run into any issues with
Re: [openstack-dev] Re: [devstack] Openstack installation issue.
I tried that also, but the error is still the same. On Mon, Jan 12, 2015 at 3:32 PM, Pasquale Porreca pasquale.porr...@dektech.com.au wrote: Since it seems to be a permission error, this may help: http://docs.openstack.org/developer/devstack/guides/single-machine.html#installation-shake-and-bake NB: in my environment I had to edit the sudoers file with a text editor (echoing the string didn't work). On 01/12/15 09:33, Abhishek Shrivastava wrote: Hi all, I am still getting the same error while installing OpenStack through devstack. If someone knows the solution, please reply. On Fri, Jan 9, 2015 at 1:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi Liuxinguo, Thanks for the suggestion, I'll try to make it work. On Fri, Jan 9, 2015 at 1:24 PM, liuxinguo liuxin...@huawei.com wrote: Hi Abhishek, For the error in the first line: "mkdir: cannot create directory `/logs': Permission denied" and the error at the end: "ln: failed to create symbolic link `/logs/screen/screen-key.log': No such file or directory" The stack user does not have permission on "/", so it cannot create the directory `/logs'. Please check the permissions. liu *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com] *Sent:* January 9, 2015 15:26 *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [devstack] Openstack installation issue. Hi, I'm trying to install *Openstack* through *devstack master* on my *Ubuntu 12.04 VM*, but it is failing and generating the following error. If anyone can help me resolve this issue, please do reply.
-- *Thanks & Regards,* *Abhishek* ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs
Hi, Can anyone explain the difference between gbp group-create and gbp policy-target-group-create? They both seem to work the same way. Thanks & Regards Sachi Gupta From: Sumit Naiksatam sumitnaiksa...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 11/26/2014 01:35 PM Subject: Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs Hi, This GBP spec is currently being worked on: https://review.openstack.org/#/c/134285/ It will be helpful if you can add [Policy][Group-based-policy] in the subject of your emails, so that the email gets categorized correctly. Thanks, ~Sumit. On Tue, Nov 25, 2014 at 4:27 AM, Sachi Gupta sachi.gu...@tcs.com wrote: Hey All, I need to understand the interaction between the OpenStack GBP and the OpenDaylight GBP projects, which will be handled by the ODL policy driver. Can someone provide me with the specs of the ODL policy driver to help my understanding of the call flow? Thanks & Regards Sachi Gupta =-=-= Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it is strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Re: [devstack] Openstack installation issue.
It's showing the same error on Ubuntu 13.10, so I don't think it's an Ubuntu version issue. On Mon, Jan 12, 2015 at 4:02 PM, Amit Das amit@cloudbyte.com wrote: Is this related to the version of Ubuntu being used? It's 12.04 as per the email. Regards, Amit *CloudByte Inc.* http://www.cloudbyte.com/ On Mon, Jan 12, 2015 at 3:55 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: I tried that also, but the error is still the same. [...]
-- *Thanks & Regards,* *Abhishek*
Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks
Konstantin, I believe it's better to work on the server side and use some modern approach like WebSockets for async operations, so we won't need to retrieve information about the object 100500 times. And then use this feature in the clients. create_future = novaclient.servers.create_async() ... vm = create_future.result() Also, this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(..., sync=True) Best regards, Boris Pavlovic On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov kdani...@mirantis.com wrote: Hi all. There is a set of OpenStack API functions which start background actions and return preliminary results - like novaclient's create. Those functions require periodically checking results and handling timeouts/errors (and often cleanup + restart helps to fix an error). Check/retry/cleanup code is duplicated across a lot of core projects - heat, tempest, rally, etc. - and certainly in many third-party scripts. I propose to provide a common high-level API for such functions, which uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to represent a background task. The idea is to add to each background-task-starter function a complementary call that returns a 'future' object. E.g. create_future = novaclient.servers.create_async() ... vm = create_future.result() This allows us to unify (and optimize) monitoring cycles, retries, etc.
Please find the complete BP at https://github.com/koder-ua/os_api/blob/master/README.md Thanks -- Kostiantyn Danilov aka koder http://koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Re: [devstack] Openstack installation issue.
Is this related to the version of Ubuntu being used? It's 12.04 as per the email. Regards, Amit *CloudByte Inc.* http://www.cloudbyte.com/ On Mon, Jan 12, 2015 at 3:55 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: I tried that also, but the error is still the same. [...]
Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers
Hi guys, Thanks for answering my questions. I have 2 points: 1 - This (removing drivers without CI) is a high-impact change to be implemented without exhaustive notification and discussion on the mailing list. I myself was in the meeting, but this decision wasn't crystal clear. There must be other driver maintainers completely unaware of this. 2 - Building a CI infrastructure and having people to maintain the CI for a new driver in a 5-week time frame. Not all companies have the knowledge and resources necessary to do this in such a short period. We should consider a grace release period, i.e. drivers entering in K have until L to implement their CIs. On Mon, Jan 12, 2015 at 4:07 AM, Asselin, Ramy ramy.asse...@hp.com wrote: Feel free to join any of the 3rd party 'mentoring' meetings on IRC Freenode #openstack-meeting to help get started, work through issues, etc. Third Party meeting for all aspects of Third Party needs: Mondays at 1500 UTC and Tuesdays at 0800 UTC. Everyone interested in any aspect of the Third Party process is encouraged to attend. [1] [1] https://wiki.openstack.org/wiki/Meetings/ThirdParty Ramy -----Original Message----- From: Mike Perez [mailto:thin...@gmail.com] Sent: Sunday, January 11, 2015 6:53 PM To: jsbry...@electronicjungle.net; OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers On 21:00 Sat 10 Jan , Jay S. Bryant wrote: I think what we discussed was that existing drivers were supposed to have something working by the end of k-2, or at least have something close to working. For new drivers, they had to have 3rd party CI working by the end of Kilo. Duncan, correct me if I am wrong. Jay On 01/10/2015 04:52 PM, Mike Perez wrote: On 14:42 Fri 09 Jan , Ivan Kolodyazhny wrote: Hi Erlon, We've got a thread on the mailing list [1] for it and some details in the wiki [2]. Anyway, we need to get confirmation from our core devs and/or Mike.
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html [2] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Testing_requirements_for_Kilo_release_and_beyond Regards, Ivan Kolodyazhny On Fri, Jan 9, 2015 at 2:26 PM, Erlon Cruz sombra...@gmail.com wrote: Hi all, hi cinder core devs, I have read IRC discussions about a deadline for driver vendors to have their CI running and voting by kilo-2, but I didn't find any post on this list to confirm this. Can anyone confirm this? Thanks, Erlon We did discuss and agree in the Cinder meeting that the deadline would be k-2, but I don't think anyone reached out to the driver maintainers about the deadline. Duncan had this action item [1], perhaps he can speak more about it. [1] - http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html That is correct [1]. However, I don't think there was any warning given to existing drivers [2]. If Duncan can confirm this is the case, I would recommend fair warning go out for the end of Kilo for existing drivers as well. [1] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines [2] - http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html -- Mike Perez __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Future-based api for openstack clients calls, that starts background tasks
Hi all. There is a set of OpenStack API functions which start background actions and return preliminary results - like novaclient's create. Those functions require periodically checking results and handling timeouts/errors (and often cleanup + restart helps to fix an error). Check/retry/cleanup code is duplicated across a lot of core projects - heat, tempest, rally, etc. - and certainly in many third-party scripts. I propose to provide a common high-level API for such functions, which uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to represent a background task. The idea is to add to each background-task-starter function a complementary call that returns a 'future' object. E.g. create_future = novaclient.servers.create_async() ... vm = create_future.result() This allows us to unify (and optimize) monitoring cycles, retries, etc. Please find the complete BP at https://github.com/koder-ua/os_api/blob/master/README.md Thanks -- Kostiantyn Danilov aka koder http://koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
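Konstantin's proposed pattern can be sketched on top of the stdlib's concurrent.futures. Note this is only an illustration of the idea, not the BP's actual implementation: the client object, its servers.create()/servers.get() methods, and the BUILD/ACTIVE/ERROR states are stand-ins for a novaclient-like API.

```python
# Sketch of the proposed *_async() pattern using stdlib futures.
# `client` is a stand-in for a novaclient-like object (illustrative).
import time
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)


def _wait_active(client, server_id, timeout=300.0, interval=1.0):
    """Poll until the server leaves its transient state or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        server = client.servers.get(server_id)
        if server.status in ("ACTIVE", "ERROR"):
            return server
        time.sleep(interval)
    raise TimeoutError("server %s still building after %ss" % (server_id, timeout))


def create_async(client, *args, **kwargs):
    """Start the create, return a future that resolves to the finished server."""
    server = client.servers.create(*args, **kwargs)
    return _executor.submit(_wait_active, client, server.id)

# Usage, mirroring the proposal:
#   create_future = create_async(nova, name="vm1", ...)
#   ...do other work...
#   vm = create_future.result()
```

The point of the complementary call is exactly this: the monitoring loop (`_wait_active`) is written once, instead of being re-implemented in heat, tempest, rally, and every third-party script.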
Re: [openstack-dev] [devstack]Openstack installation issue.
Hi Samta, Thanks for the suggestion, but the problem remains the same. On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare samtarang...@gmail.com wrote: Hey Abhishek, As a quick fix to this problem, edit devstack/lib/keystone at line 170, in the function configure_keystone(), adding sudo to this line: sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF Regards Samta On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi all, I am writing this again because I have been getting the same error for the past week while installing OpenStack through devstack on Ubuntu 13.10. I am attaching the new log file; please go through it, and if anyone can provide the solution please do reply. -- *Thanks & Regards,* *Abhishek* __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [devstack]Openstack installation issue.
Same as before. On Mon, Jan 12, 2015 at 5:04 PM, Samta Rangare samtarang...@gmail.com wrote: What's the log now? On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi Samta, Thanks for the suggestion, but the problem remains the same. [...] -- *Thanks & Regards,* *Abhishek* __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks
Boris, Moving from sync HTTP to something like WebSockets requires a lot of work and is not directly connected with the API issue. When OpenStack API servers begin to support WebSockets, it will be easy to change the implementation of the monitoring thread without breaking compatibility. At the moment, periodic polling from an additional thread looks reasonable to me, and it creates the same number of HTTP requests as current implementations. The BP is not about improving performance, but about providing a convenient and common API to handle background tasks. "So we won't need to retrieve information about the object 100500 times." As I said before, this API creates the same amount of load as any code we currently use to check background tasks. It can even decrease load thanks to request aggregation in some cases (but there are points to discuss). "Also, this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(..., sync=True)" This is a completely different pattern. It is a blocking call, which doesn't allow you to start two (or more) background tasks from the same thread and do some calculations while they run in the background. On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic bpavlo...@mirantis.com wrote: Konstantin, I believe it's better to work on the server side and use some modern approach like WebSockets for async operations, so we won't need to retrieve information about the object 100500 times. And then use this feature in the clients. create_future = novaclient.servers.create_async() ... vm = create_future.result() Also, this pattern doesn't look great. I would prefer to see something like: vm = novaclient.servers.create(..., sync=True) Best regards, Boris Pavlovic On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov kdani...@mirantis.com wrote: Hi all. There is a set of OpenStack API functions which start background actions and return preliminary results - like novaclient's create.
Those functions require periodically checking results and handling timeouts/errors (and often cleanup + restart helps to fix an error). Check/retry/cleanup code is duplicated across a lot of core projects - heat, tempest, rally, etc. - and certainly in many third-party scripts. I propose to provide a common high-level API for such functions, which uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to represent a background task. The idea is to add to each background-task-starter function a complementary call that returns a 'future' object. E.g. create_future = novaclient.servers.create_async() ... vm = create_future.result() This allows us to unify (and optimize) monitoring cycles, retries, etc. Please find the complete BP at https://github.com/koder-ua/os_api/blob/master/README.md Thanks -- Kostiantyn Danilov aka koder http://koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com -- Kostiantyn Danilov aka koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] dropping namespace packages
You rock, man. Thanks, I'll steal those. :) /Ihar On 01/11/2015 09:39 PM, Davanum Srinivas wrote: Jay, I have a hacking rule in nova already [1] and am updating that rule in the 3 reviews I have for oslo_utils, oslo_middleware and oslo_config [2] in Nova. thanks, dims [1] https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L452 [2] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:remove-oslo-namespace,n,z On Sat, Jan 10, 2015 at 9:26 PM, Jay S. Bryant jsbry...@electronicjungle.net wrote: Ihar, I agree that we should do something to enforce using the appropriate namespace so that the wrong usage doesn't sneak in. I haven't gotten any rules written yet; I have had to attend to a family commitment the last few days. I hope I can tackle the namespace changes next week. Jay On 01/08/2015 12:24 PM, Ihar Hrachyshka wrote: On 01/08/2015 07:03 PM, Doug Hellmann wrote: I'm not sure that's something we need to enforce. Liaisons should be updating projects now as we release libraries, and then we'll consider whether we can drop the namespace packages when we plan the next cycle. Without a hacking rule, there is a chance old namespace usage will sneak in, and then we'll need to go back to updating imports. I would rather avoid that and get the migration committed with enforcement. /Ihar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
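For reference, the core of such a hacking rule is just a regex over import lines. The sketch below is illustrative only, not the actual check in nova/hacking/checks.py linked above: real checks are registered with flake8 through the hacking framework, and the O301 code and message here are invented.

```python
# Minimal sketch of a hacking-style check that flags old oslo namespace
# imports ("from oslo.config import cfg") in favour of the flat package
# form ("from oslo_config import cfg"). Illustrative; the real nova rule
# lives in nova/hacking/checks.py and is registered with flake8.
import re

_OSLO_NAMESPACE_RE = re.compile(r"^\s*(?:import|from)\s+oslo\.")


def check_oslo_namespace_imports(logical_line):
    """Yield one offense per line still using the oslo.* namespace."""
    if _OSLO_NAMESPACE_RE.match(logical_line):
        yield (0, "O301: use oslo_* packages, not the oslo.* namespace")

# list(check_oslo_namespace_imports("from oslo.config import cfg"))
#   -> one offense; "from oslo_config import cfg" -> none
```

This is the enforcement Ihar is asking for: once the rule is in the gate, any patch reintroducing the old namespace fails pep8 instead of silently sneaking in.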
Re: [openstack-dev] [devstack]Openstack installation issue.
Is your root filesystem full? The log clearly shows the chown of stack /etc/keystone passing right before the copy is attempted. -Sean On 01/12/2015 06:59 AM, Abhishek Shrivastava wrote: Same as before. On Mon, Jan 12, 2015 at 5:04 PM, Samta Rangare samtarang...@gmail.com wrote: What's the log now? On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi Samta, Thanks for the suggestion, but the problem remains the same. [...]
-- Thanks & Regards, Abhishek -- Sean Dague http://dague.net __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [devstack]Openstack installation issue.
Hey Abhishek, As a quick fix to this problem, edit the file devstack/lib/keystone +170, in the function configure_keystone(), changing this line to add sudo: sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF Regards, Samta On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava abhis...@cloudbyte.com wrote: Hi all, I am writing this again because I have been getting the same error for the past week while installing OpenStack through devstack on Ubuntu 13.10. I am attaching the new log file; please go through it, and if anyone can provide a solution please do reply. -- Thanks & Regards, Abhishek __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel][client][IMPORTANT] Making Fuel Client a separate project
Hi folks! This is a status update. Right now the patch for creating a new project on Stackforge is blocked by a bug in Zuul [1], which is actually a bug in python-daemon; the patch for this is already published [2] and waiting to be approved. After the patch is merged and all projects and groups are created, I will file a request to perform the initial setup of core groups. Once they are created it will be possible to land new patches. Meanwhile, the OSCI team is working [3] on adjusting the build system to use python-fuelclient from PyPI [4]. Stay tuned for further updates. References: [1] Zuul's tests fail with dependencies error https://storyboard.openstack.org/#!/story/2000107 [2] Pin python-daemon <2.0 https://review.openstack.org/#/c/146350/ [3] Create repositories for python-fuelclient package https://bugs.launchpad.net/fuel/+bug/1409673 [4] python-fuelclient on PyPI https://pypi.python.org/pypi/python-fuelclient On Jan 9, 2015, at 15:14, Roman Prykhodchenko m...@romcheg.me wrote: Hi folks, according to the Fuel client refactoring plan [1] it's necessary to move it out to a separate repository on Stackforge. The process of doing that consists of three major steps: - Landing a patch [2] to project-config for creating a new Stackforge project - Creating an initial core group for python-fuelclient - Moving all un-merged patches from fuel-web to the python-fuelclient gerrit repo The first step of this process has already been started, so I kindly ask all fuelers to NOT MERGE any new patches to fuel-web IF THEY TOUCH the fuelclient folder. After the project is set up I will let everyone know and explain what to do next, so I encourage all interested people to check this thread once in a while. # References: 1.
Re-thinking Fuel Client https://review.openstack.org/#/c/145843 2. Add python-fuelclient to Stackforge https://review.openstack.org/#/c/145843 - romcheg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?
On Mon, Jan 12, 2015 at 06:28:53PM +0300, Dmitry Guryanov wrote: On 01/05/2015 02:30 PM, Daniel P. Berrange wrote: On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote: Hello, Libvirt can create a loop or nbd device for an LXC container and mount it by itself; for instance, you can add something like this to the xml config:

<filesystem type='file'>
  <driver type='loop' format='raw'/>
  <source file='/fedora-20-raw'/>
  <target dir='/'/>
</filesystem>

But nova mounts the filesystem for the container by itself. Is this because rhel-6 doesn't support filesystems with type='file', or are there some other reasons? The support for mounting using NBD in OpenStack pre-dated the support for doing this in Libvirt. In fact, the reason I added this feature to libvirt was precisely because OpenStack was doing this. We haven't switched Nova over to use this new syntax yet though, because that would imply a change to the min required libvirt version for LXC. That said, we should probably make such a change, because honestly no one should be using LXC without using user namespaces, otherwise their cloud is horribly insecure. This would imply making the min libvirt for LXC much, much newer than it is today. It's not very hard to replace mounting in nova with generating the proper xml config. Can we do it before the kilo release? Are there any people who use openstack with LXC in production? Looking at libvirt history, it would mean we mandate 1.0.6 as the min libvirt for use with the LXC driver. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
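For readers less familiar with the libvirt XML quoted above, here is a minimal stand-alone sketch (not Nova's actual config generator) of producing that <filesystem> element with the Python stdlib's ElementTree, the kind of change Dmitry is proposing instead of mounting in Nova:

```python
# Hypothetical sketch (not Nova's real code): build the libvirt LXC
# <filesystem> element discussed above using stdlib ElementTree.
import xml.etree.ElementTree as ET

fs = ET.Element('filesystem', type='file')
ET.SubElement(fs, 'driver', type='loop', format='raw')
ET.SubElement(fs, 'source', file='/fedora-20-raw')
ET.SubElement(fs, 'target', dir='/')

# Serialize to the XML string that would go into the domain config.
xml_str = ET.tostring(fs, encoding='unicode')
print(xml_str)
```

Nova's libvirt driver builds its domain XML through its own config classes, so this is only an illustration of the target element shape, not of how the change would actually be wired in.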
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
On 01/12/2015 10:29 AM, Tomas Sedovic wrote: Hey folks, I did a quick proof of concept for a part of the Stack Breakpoint spec[1] and I put the "does this resource have a breakpoint" flag into the metadata of the resource: https://review.openstack.org/#/c/146123/ I'm not sure where this info really belongs, though. It does sound like metadata to me (plus we don't have to change the database schema that way), but can we use it for breakpoints etc., too? Or is metadata strictly for Heat users and not for engine-specific stuff? I'd rather not store it in metadata so we don't mix user metadata with implementation-specific-and-also-subject-to-change runtime metadata. I think this is a big enough feature to warrant a schema update (and I can't think of another place I'd want to put the breakpoint info). I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec). While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint, instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached, but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit); I'll let Steve explain why that's not the right choice :-). +1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is not). For sublime end user confusion, we could use BROKEN. ;) Tomas [1]: http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] IRC logging
+1 to Flavio's proposal. Thanks, -Nikhil From: Flavio Percoco [fla...@redhat.com] Sent: Monday, January 12, 2015 3:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Glance] IRC logging [snip] -- @flaper87 Flavio Percoco
Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?
On 01/05/2015 02:30 PM, Daniel P. Berrange wrote: [snip] It's not very hard to replace mounting in nova with generating the proper xml config. Can we do it before the kilo release? Are there any people who use openstack with LXC in production? -- Dmitry Guryanov __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] IRC logging
There's really no way to _force_ official logging on all project-related channels. People who are opposed to the idea will simply move their conversations to new channels that straddle the line between looking somewhat official and being official enough to require logging. -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Dropping Python-2.6 support
Hi Roman, Indeed, we have to go forward and drop Python 2.6 support. That's how it is supposed to be, but, unfortunately, it may not be as easy as it seems at first glance. The Fuel Master node runs on top of CentOS 6.5, which doesn't have Python 2.7 at all. So we must either run the master node on CentOS 7 or build Python 2.7 for CentOS 6.5. The first option obviously requires a lot of work, while the second one does not. But I may be wrong, since I have no idea what dependencies Python 2.7 requires and what we have in our repos. - Igor On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko m...@romcheg.me wrote: Folks, as it was planned and then announced at the OpenStack summit, OpenStack services deprecated Python 2.6 support. At the moment several services and libraries are already only compatible with Python >= 2.7, and there is no sense in trying to restore compatibility with Py2.6 because OpenStack infra does not run tests for that version of Python. The point of this email is that some components of Fuel, say, Nailgun and Fuel Client, are still only tested with Python 2.6. Fuel Client in its turn is about to use OpenStack CI's python-jobs for running unit tests. That means that in order to keep it compatible with Py2.6 there is a need to run a separate python job in FuelCI. However, I believe that forcing things to be compatible with 2.6, when the rest of the ecosystem has decided not to go with it and when Py2.7 is already available in the main CentOS repo, sounds like a battle against common sense. So my proposal is to drop 2.6 support in Fuel 6.1. - romcheg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
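As a side note on the interpreter-version question above: a trivial runtime guard (purely illustrative, not Fuel's actual code) that a tool could use to fail fast when launched on CentOS 6.5's Python 2.6 rather than breaking later on 2.7-only code:

```python
# Illustrative only -- not Fuel code. A guard that reports whether the
# running interpreter meets a minimum version, so a tool can refuse to
# start on Python 2.6 instead of failing mid-run.
import sys

def meets_minimum_python(minimum=(2, 7)):
    """Return True when the running interpreter is at least `minimum`."""
    return sys.version_info[:2] >= minimum
```

A CLI entry point could call this at startup and exit with a clear message, which is friendlier than a SyntaxError from 2.7-only constructs such as dict comprehensions.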
Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI
The public link for your test logs should really be a host name instead of an IP address. That way, if you have to change it again in the future, you won't have dead links in old comments. You may already know, but all of the requirements and recommendations are here: http://git.openstack.org/cgit/openstack-infra/system-config/tree/doc/source/third_party.rst Kurt Taylor (krtaylor) On Sun, Jan 11, 2015 at 11:18 PM, yongli he yongli...@intel.com wrote: On 2015-01-08 10:31, yongli he wrote: To provide a more stable service we upgraded the networking device, so the log server address changed to a new IP address: 198.175.100.33. The sample logs change to (replace 192.55.68.190 with the new address): http://198.175.100.33/143614/6/ http://198.175.100.33/139900/4 http://198.175.100.33/143372/3/ http://198.175.100.33/141995/6/ http://198.175.100.33/137715/13/ http://198.175.100.33/133269/14/ Yongli He Hi, Intel has set up a hardware-based third-party CI. It has already been running sets of PCI test cases for several weeks (not sending out comments, just logging the results), and the log server and these test cases seem fairly stable now. To begin posting comments on the nova repository, what other necessary work needs to be addressed? Details: 1. ThirdPartySystems https://wiki.openstack.org/wiki/ThirdPartySystems Information https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI 2. A sample log: http://192.55.68.190/138795/6/ 3. Test cases on github: https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases Thanks, Yongli He __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Kilo devstack issue
On Mon, Jan 12, 2015 at 10:03 AM, Nikesh Kumar Mahalka nikeshmaha...@vedams.com wrote: Hi, We deployed a Kilo devstack on an Ubuntu 14.04 server. We successfully launched an instance from the dashboard, but we are unable to open the console for the instance from the dashboard. Also, the instance is unable to get an IP. Below is the link for local.conf: http://paste.openstack.org/show/156497/ Regards, Nikesh Correct, see this thread: http://lists.openstack.org/pipermail/openstack-dev/2015-January/054157.html __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list, to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message, following some of the links, and asking any questions that come up. The tool is called gabbi https://github.com/cdent/gabbi http://gabbi.readthedocs.org/ https://pypi.python.org/pypi/gabbi It describes itself as a tool for running HTTP tests where requests and responses are represented in a declarative form. Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP). The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'. The test file is loaded by a small amount of Python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1]:

```
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```

The loader provides either: * a host to which real over-the-network requests are made * a WSGI app which is wsgi-intercept-ed[2] If an individual TestCase is asked to be run by the test runner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module.
Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf, each YAML file can run in its own process in a concurrent test runner. The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. Response verification can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for referring to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS-compliant API. At the moment the most complete examples of how things work are: * Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/ * Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py) One obvious thing that will need to happen is a suite of concrete examples of how to use the various features. I'm hoping that feedback will help drive that. In my own experimentation with gabbi I've found it very useful. It's helped me explore and learn the ceilometer API in a way that existing test code has completely failed to do. It's also helped reveal several warts that will be very useful to fix. And it is fast. To run and to write. I hope that with some work it can be useful to you too. Thanks. [1] Getting gabbi to play well with PyUnit style tests and with infrastructure like subunit and testrepository was one of the most challenging parts of the build, but the result has been a lot of flexibility.
[2] https://pypi.python.org/pypi/wsgi_intercept [3] https://pypi.python.org/pypi/jsonpath-rw -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Keystone] Spec proposal deadline Feb 5
This is a reminder that the Keystone spec proposal deadline is Feb 5. Please work to have your specs submitted and approved by that date. The keystone team will be spending time at the midcycle next week (Jan 19, 20, 21) to discuss specs; specs proposed before the midcycle will get priority when reviewing / considering the spec for inclusion in the Kilo release. Any spec that is not approved by the deadline will need an explicit exception granted to land in Kilo. Cheers, Morgan Fainberg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?
On 09/01/15 07:06, Gregory Haynes wrote: Excerpts from Steven Hardy's message of 2015-01-08 17:37:55 +0000: Hi all, I'm trying to test a fedora-software-config image with some updated components. I need: - Install latest master os-apply-config (the commit I want isn't released) - Install the os-refresh-config fork from https://review.openstack.org/#/c/145764 I can't even get the o-a-c from master part working:

export PATH=${PWD}/dib-utils/bin:$PATH
export ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
export DIB_INSTALLTYPE_os_apply_config=source
diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
    os-collect-config os-refresh-config os-apply-config \
    heat-config-ansible \
    heat-config-cfn-init \
    heat-config-docker \
    heat-config-puppet \
    heat-config-salt \
    heat-config-script \
    ntp \
    -o fedora-software-config.qcow2

This is what I'm doing; both tools end up as pip-installed versions AFAICS, so I've had to resort to manually hacking the image post-DiB using virt-copy-in. Pretty sure there's a way to make DiB do this, but I don't know what it is - anyone able to share some clues? Do I have to hack the elements, or is there a better way? The docs are pretty sparse, so any help would be much appreciated! :) Thanks, Steve Hey Steve, source-repositories is your friend here :) (check out dib/elements/source-repositories/README). One potential gotcha is that because source-repositories is an element, it really only applies to tools used within images (and os-apply-config is used outside the image). To fix this we have a shim in tripleo-incubator/scripts/pull-tools which emulates the functionality of source-repositories. Example usage: * checkout os-apply-config to the ref you wish to use * export DIB_REPOLOCATION_os_apply_config=/path/to/oac * export DIB_REPOREF_os_refresh_config=refs/changes/64/145764/1 * start your devtesting The good news is that devstack is already set up to do this.
When HEAT_CREATE_TEST_IMAGE=True, devstack will build packages from the currently checked-out os-*-config tools, build a pip repo, and configure apache to serve it. Then the elements *should* install from these packages - we're not gating on this functionality (yet) so it's possible it has regressed, but it shouldn't be too hard to get going again. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message and following some of the links and asking any questions that come up. The tool is called gabbi https://github.com/cdent/gabbi http://gabbi.readthedocs.org/ https://pypi.python.org/pypi/gabbi It describes itself as a tool for running HTTP tests where requests and responses are represented in a declarative form. Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP). The tests are written in YAML and the simplest test file has this form: ``` tests: - name: a test url: / ``` This test will pass if the response status code is '200'. The test file is loaded by a small amount of python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1]. ``` def load_tests(loader, tests, pattern): Provide a TestSuite to the discovery process. test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=SimpleWsgi, fixture_module=sys.modules[__name__]) ``` The loader provides either: * a host to which real over-the-network requests are made * a WSGI app which is wsgi-intercept-ed[2] If an individual TestCase is asked to be run by the testrunner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module. 
Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf each YAML file can run in its own process in a concurrent test runner. The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. Response verifcation can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for refering to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS compliant API. At the moment the most complete examples of how things work are: * Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/ * Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py) One obvious thing that will need to happen is a suite of concrete examples on how to use the various features. I'm hoping that feedback will help drive that. In my own experimentation with gabbi I've found it very useful. It's helped me explore and learn the ceilometer API in a way that existing test code has completely failed to do. It's also helped reveal several warts that will be very useful to fix. And it is fast. To run and to write. I hope that with some work it can be useful to you too. Thanks for the write-up Chris, Needless to say, we're sold on the utility of this on the ceilometer side, in terms of crafting readable, self-documenting tests that reveal the core aspects of an API in a easily consumable way. 
I'd be interested in hearing the api-wg viewpoint, specifically whether that working group intends to recommend any best practices around the approach to API testing. If so, I think gabbi would be a worthy candidate for consideration. Cheers, Eoghan

Thanks.

[1] Getting gabbi to play well with PyUnit style tests and with infrastructure like subunit and testrepository was one of the most challenging parts of the build, but the result has been a lot of flexibility.
[2] https://pypi.python.org/pypi/wsgi_intercept
[3] https://pypi.python.org/pypi/jsonpath-rw

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
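The JSONPath response-body checks and regular-expression header checks described in the message above can be sketched in a few lines. This is a toy illustration, not gabbi's implementation: gabbi uses the jsonpath-rw library, while the `simple_path` helper here is a hypothetical stand-in that only handles dotted paths, and the response data is made up.

```python
import json
import re

# A response a test might receive (hypothetical data).
response_headers = {'content-type': 'application/json; charset=UTF-8'}
response_body = json.loads('{"alarm": {"name": "cpu_high", "severity": "low"}}')

def simple_path(data, path):
    # Resolve a dotted path like '$.alarm.name'; a stand-in for real JSONPath.
    node = data
    for key in path.lstrip('$.').split('.'):
        node = node[key]
    return node

# Body check via a JSONPath-style expression.
assert simple_path(response_body, '$.alarm.name') == 'cpu_high'
# Header check via a regular expression.
assert re.match(r'application/json', response_headers['content-type'])
```

A real gabbi test would express the same two checks declaratively in the YAML file rather than as Python assertions.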
Re: [openstack-dev] [Fuel] Dropping Python-2.6 support
On Jan 12, 2015, at 9:55 AM, Roman Prykhodchenko m...@romcheg.me wrote: Folks, as it was planned and then announced at the OpenStack summit, OpenStack services deprecated Python-2.6 support. At the moment several services and libraries are already only compatible with Python>=2.7, and there is no common sense in trying to get back compatibility with Py2.6 because OpenStack infra does not run tests for that version of Python. The intent was to keep 2.6 compatibility for client and Oslo libraries. Which libraries are you referring to that require at least 2.7? Doug The point of this email is that some components of Fuel, say, Nailgun and Fuel Client, are still only tested with Python-2.6. Fuel Client in its turn is about to use OpenStack CI's python-jobs for running unit tests. That means that in order to make it compatible with Py2.6 there is a need to run a separate python job in FuelCI. However, I believe that forcing things to be compatible with 2.6 when the rest of the ecosystem decided not to go with it, and when Py2.7 is already available in the main CentOS repo, sounds like a battle against common sense. So my proposal is to drop 2.6 support in Fuel-6.1. - romcheg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] request spec freeze exception for virtio-net multiqueue
Hello, I'd like to request an exception for virtio-net multiqueue feature. [1] This is an important feature that aims to increase the total network throughput in guests and not too hard to implement. Thanks, Vladik [1] https://review.openstack.org/#/c/128825 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Thanks for this Chris, I'm hoping to get my fingers dirty with it Real Soon Now. On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn egl...@redhat.com wrote: I'd be interested in hearing the api-wg viewpoint, specifically whether that working group intends to recommend any best practices around the approach to API testing. Testing recommendations haven't been part of the conversation yet, but I think it is within scope for the WG to have some opinions on REST API design and validation tools. dt -- Dean Troyer dtro...@gmail.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On 01/12/2015 03:11 PM, Dean Troyer wrote: Thanks for this Chris, I'm hoping to get my fingers dirty with it Real Soon Now. On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn egl...@redhat.com mailto:egl...@redhat.com wrote: I'd be interested in hearing the api-wg viewpoint, specifically whether that working group intends to recommend any best practices around the approach to API testing. Testing recommendations haven't been part of the conversation yet, but I think it is within scope for the WG to have some opinions on REST API design and validation tools. I definitely like the direction that gabbi seems to be headed. It feels like a much cleaner version of what nova tried to do with API samples. As long as multiple projects think this is an interesting direction, I think it's probably fine to add it to global-requirements and let them start working with it. -Sean -- Sean Dague http://dague.net __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On 01/12/2015 03:18 PM, Boris Pavlovic wrote: Chris, The Idea is brilliant. I may steal it! =) But there are some issues that will be faced: 1) Using as a base unittest: python -m subunit.run discover -f gabbi | subunit2pyunit So the rally team won't be able to reuse it for load testing (if we directly integrate it) because we will have huge overhead (of the discover stuff) 2) Load testing. Using unittest for functional testing adds a lot of troubles: 2.1) It makes things complicated: e.g. reusing fixtures via input YAML will be painful 2.2) It adds a lot of functionality that is not required 2.3) It makes it hard to integrate with other tools. Like Rally.. 3) Usage by operators is hard in the case of N projects. Operators would like to have 1 button that will say whether the cloud works or not, and they don't want to combine all gabbi files from all projects to run tests. On the other side, there should be a way to write such code in-project-tree (so new features are directly tested) and then move it to some common place that is run on every patch (without breaking gates) 4) Using the subunit format is not good for functional testing. It doesn't allow you to collect detailed information about the execution of a test. E.g. for benchmarking it will be quite interesting to collect the durations of every API call. I'm not sure how subunit causes an issue here either way. You can either put content into one of the existing subunit attachments, or could modify it to have a new one. -Sean -- Sean Dague http://dague.net __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Sean, I definitely like the direction that gabbi seems to be headed. It feels like a much cleaner version of what nova tried to do with API samples. As long as multiple projects think this is an interesting direction, I think it's probably fine to add it to global-requirements and let them start working with it. +1 more testing better code. Best regards, Boris Pavlovic On Mon, Jan 12, 2015 at 11:20 PM, Sean Dague s...@dague.net wrote: On 01/12/2015 03:11 PM, Dean Troyer wrote: Thanks for this Chris, I'm hoping to get my fingers dirty with it Real Soon Now. On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn egl...@redhat.com mailto:egl...@redhat.com wrote: I'd be interested in hearing the api-wg viewpoint, specifically whether that working group intends to recommend any best practices around the approach to API testing. Testing recommendations haven't been part of the conversation yet, but I think it is within scope for the WG to have some opinions on REST API design and validation tools. I definitely like the direction that gabbi seems to be headed. It feels like a much cleaner version of what nova tried to do with API samples. As long as multiple projects think this is an interesting direction, I think it's probably fine to add it to global-requirements and let them start working with it. -Sean -- Sean Dague http://dague.net __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Excerpts from Chris Dent's message of 2015-01-12 19:20:18 +0000: After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message and following some of the links and asking any questions that come up. The tool is called gabbi https://github.com/cdent/gabbi http://gabbi.readthedocs.org/ https://pypi.python.org/pypi/gabbi It describes itself as a tool for running HTTP tests where requests and responses are represented in a declarative form. Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP). The tests are written in YAML and the simplest test file has this form: ``` tests: - name: a test url: / ``` This test will pass if the response status code is '200'. The test file is loaded by a small amount of Python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1]. ``` def load_tests(loader, tests, pattern): """Provide a TestSuite to the discovery process.""" test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=SimpleWsgi, fixture_module=sys.modules[__name__]) ``` The loader provides either: * a host to which real over-the-network requests are made * a WSGI app which is wsgi-intercept-ed[2] If an individual TestCase is asked to be run by the testrunner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module. 
Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf each YAML file can run in its own process in a concurrent test runner. The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. Response verification can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for referring to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS-compliant API. At the moment the most complete examples of how things work are: * Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/ * Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py) One obvious thing that will need to happen is a suite of concrete examples on how to use the various features. I'm hoping that feedback will help drive that. In my own experimentation with gabbi I've found it very useful. It's helped me explore and learn the ceilometer API in a way that existing test code has completely failed to do. It's also helped reveal several warts that will be very useful to fix. And it is fast. To run and to write. I hope that with some work it can be useful to you too. Thanks. [1] Getting gabbi to play well with PyUnit style tests and with infrastructure like subunit and testrepository was one of the most challenging parts of the build, but the result has been a lot of flexibility. 
[2] https://pypi.python.org/pypi/wsgi_intercept [3] https://pypi.python.org/pypi/jsonpath-rw Awesome! I was discussing trying to add extensions to RAML[1] so we could do something like this the other day. Is there any reason you didn't use an existing modeling language like this? Cheers, Greg [1] http://raml.org/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [horizon] static files handling, bower/
On 12/18/14 6:58 AM, Radomir Dopieralski wrote: Hello, revisiting the package management for the Horizon's static files again, I would like to propose a particular solution. Hopefully it will allow us to both simplify the whole setup, and use the popular tools for the job, without losing too much of benefits of our current process. The changes we would need to make are as follows: * get rid of XStatic entirely; * add to the repository a configuration file for Bower, with all the required bower packages listed and their versions specified; I know I'm very very late to this thread but can I ask why Bower? Bower has a hard requirement on Node.js which was removed as a dependency in Havana. Why are we reintroducing this requirement? For Solaris, a requirement on Node.js is especially problematic as there is no official SPARC port and I'm not aware of anybody else working on one. I agree that XStatic isn't really the best solution here but are there any other solutions that don't involve Node.js? Thanks. -Drew __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On Mon, 12 Jan 2015, Gregory Haynes wrote: Awesome! I was discussing trying to add extensions to RAML[1] so we could do something like this the other day. Is there any reason you didn't use an existing modeling language like this? Glad you like it. I chose to go with my own model in the YAML for a few different reasons: * I had some pre-existing code[1] that had worked well (but was considerably less featureful[2]) so I used that as a starting point. * I wanted to model HTTP requests and responses _not_ APIs. RAML looks pretty interesting but it abstracts at a slightly different level for a considerably different purpose. To use it in the context I was working towards would require ignoring a lot of the syntax and (as far as a superficial read goes) adding a fair bit more. * I wanted small, simple and clean but [2] came along so now it is like most languages: small, simple and clean if you try to make it that way, noisy if you let things get out of hand. [1] https://github.com/tiddlyweb/tiddlyweb/blob/master/test/http_runner.py https://github.com/tiddlyweb/tiddlyweb/blob/master/test/httptest.yaml [2] What I found while building gabbi was that it could be useful as a TDD tool without many features. The constrained feature set would result in constrained (and thus limited in the good way) APIs because the limited expressiveness of the tests would limit ambiguity in the API. However, existing APIs were not limited from the outset and have a fair bit of ambiguity, so to test them a lot of flexibility is required in the tests. Already in conversations this evening people are asking for more features in the evaluation of response bodies in order to be able to test more flexibly. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc][python-clients] More freedom for all python clients
Hello TC, I would like to propose allowing all python-clients from stackforge (that respect global-requirements) to be added to global requirements. It doesn't cost anything and simplifies life for everybody on stackforge. P.S. We already have billions of libs in global requirements that aren't even on stackforge. Having a few more or less doesn't make any difference... Best regards, Boris Pavlovic __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Keystone] LDAP Identity Use Survey
The Keystone development team is looking for deployment feedback regarding the use of the LDAP Identity backend. The Identity backend only covers Users and Groups. We are looking to get an idea of the types of use (read-only, read-write, etc) and reasons for use of the LDAP backend. The answers to this survey will help us to prioritize updates and changes, and set direction for the LDAP Identity backend. http://goo.gl/forms/bzZT5KGqkv This survey is only meant to get information on the use of the LDAP Identity backend. Identity only contains User and Group information. Cheers, Morgan Fainberg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Hacking 0.10 released
Just a heads up for anyone else making these changes: Even though the g-r entry is >=0.10, you need to do >=0.10.0 in the project for it to pass the requirements check. On 01/10/2015 07:15 PM, Joe Gordon wrote: Hi all, I am happy to announce the release of hacking 0.10. Below is a list of what's new. Unlike most dependencies, hacking changes are not automatically pushed out by the OpenStack Proposal Bot. In order to migrate to the new release each project will need a patch like this: https://review.openstack.org/#/c/145570/ - flake8 now uses multiprocessing by default! - Remove H402: first line of docstring should end with punctuation - Remove H904: Wrap long lines in parentheses and not backslash for line continuation - Update H501, don't use locals() for formatting strings, to also check for self.__dict__ - Add H105: don't use author tags - Add H238: check for old style class declarations - Remove all git commit message rules: H801, H802, H803 - Remove complex import rules: H302, H306, H307 Dependency changes: - pep8 from 1.5.6 to 1.5.7 (https://pypi.python.org/pypi/pep8) - flake8 from 2.1.0 to 2.2.4 (https://pypi.python.org/pypi/flake8) - six from >=1.6.0 to >=1.7.0 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Chris, The Idea is brilliant. I may steal it! =) But there are some issues that will be faced: 1) Using as a base unittest: python -m subunit.run discover -f gabbi | subunit2pyunit So the rally team won't be able to reuse it for load testing (if we directly integrate it) because we will have huge overhead (of the discover stuff) 2) Load testing. Using unittest for functional testing adds a lot of troubles: 2.1) It makes things complicated: e.g. reusing fixtures via input YAML will be painful 2.2) It adds a lot of functionality that is not required 2.3) It makes it hard to integrate with other tools. Like Rally.. 3) Usage by operators is hard in the case of N projects. Operators would like to have 1 button that will say whether the cloud works or not, and they don't want to combine all gabbi files from all projects to run tests. On the other side, there should be a way to write such code in-project-tree (so new features are directly tested) and then move it to some common place that is run on every patch (without breaking gates) 4) Using the subunit format is not good for functional testing. It doesn't allow you to collect detailed information about the execution of a test. E.g. for benchmarking it will be quite interesting to collect the durations of every API call. Best regards, Boris Pavlovic On Mon, Jan 12, 2015 at 10:54 PM, Eoghan Glynn egl...@redhat.com wrote: After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message and following some of the links and asking any questions that come up. 
The tool is called gabbi https://github.com/cdent/gabbi http://gabbi.readthedocs.org/ https://pypi.python.org/pypi/gabbi It describes itself as a tool for running HTTP tests where requests and responses are represented in a declarative form. Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP). The tests are written in YAML and the simplest test file has this form: ``` tests: - name: a test url: / ``` This test will pass if the response status code is '200'. The test file is loaded by a small amount of Python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1]. ``` def load_tests(loader, tests, pattern): """Provide a TestSuite to the discovery process.""" test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=SimpleWsgi, fixture_module=sys.modules[__name__]) ``` The loader provides either: * a host to which real over-the-network requests are made * a WSGI app which is wsgi-intercept-ed[2] If an individual TestCase is asked to be run by the testrunner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module. Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf each YAML file can run in its own process in a concurrent test runner. The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. 
Response verification can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for referring to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS-compliant API. At the moment the most complete examples of how things work are: * Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/ * Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py) One obvious thing that will need to happen is a suite of concrete examples on how to use the various features. I'm hoping that feedback will help drive that. In my own
[openstack-dev] [Fuel] Lack of additional setup on 10Gbit interfaces.
Hi. I'm testing an OpenStack setup on our hardware with Fuel 6.0 and I found a problem with 10Gbit network interface configuration. Our setup uses CentOS on the deployed nodes - I didn't look at how this situation looks from the Ubuntu perspective, but looking at fuel-library - there is probably the same effect. With default settings, nodes deployed by Fuel have the 2.6.32.xxx Linux kernel, with 3.10 available and marked as experimental. In the deployment web UI, network interfaces are correctly shown as running at 10Gbit, but... The maximum transfer rates we could achieve were around 2.5Gbit/s. After some investigation I found that interfaces configured by /etc/sysconfig/network-scripts/ifcfg-* have the default MTU set, no matter whether the particular interface is 10Gbit or not. I did not check how drivers other than ixgbe behave, but this particular one, under such old kernels in a 10Gbit configuration, requires the MTU set to at least 9000 (to turn on jumbo frames - other drivers probably have a similar requirement) to work properly. Manually adding (this is only a simplification, this should be set more carefully): for f in /etc/sysconfig/network-scripts/ifcfg-* ; do echo MTU=9000 >> $f ; done partially resolves this problem (partially, because under the default 2.6.32.xxx we still do not get better than 6Gbit/s transfers in a single stream, but the situation is much better under the 3.10 kernel mentioned above - we get the full 10Gbit/s). Looking into fuel-library, l23network::l3::ifconfig has the ability to also configure the MTU, but this functionality looks unused in this situation. An end user who buys a setup with 10Gbit/s 82599-based network adapters expects that the default configuration should work as expected. From the user's perspective - the actual situation is faulty. At the moment - not only must he select an option marked as experimental at deploy time, he must also patch the deployed setup, and remember to patch every physical node added in the future in the same way. 
So, what can we do to make the end user happier? Could we do something like this in the puppet files: if interface_link == '10Gbit' and interface_driver == 'ixgbe': set_mtu(9000) interface_driver could be read from the driver link name, from /sys/class/net/<devname>/device/driver/module interface_link could be read from ethtool <devname> | grep Speed Intel Technology Poland sp. z o.o. ul. Slowackiego 173 | 80-298 Gdansk | Sad Rejonowy Gdansk Polnoc | VII Wydzial Gospodarczy Krajowego Rejestru Sadowego - KRS 101882 | NIP 957-07-52-316 | Kapital zakladowy 200.000 PLN. This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). If you are not the intended recipient, please contact the sender and delete all copies; any review or distribution by others is strictly prohibited. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
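The check proposed above could be prototyped outside Puppet as well. Below is a minimal Python sketch under the assumptions stated in the message (sysfs layout, the 9000-byte MTU, the ixgbe driver name); the function names `read_driver`, `needs_jumbo_frames` and `pick_mtu` are hypothetical, and actually applying the MTU is left out.

```python
import os

def read_driver(devname, sysfs='/sys/class/net'):
    # The kernel exposes the bound driver as a symlink at
    # /sys/class/net/<devname>/device/driver; its basename is the driver name.
    link = os.path.join(sysfs, devname, 'device', 'driver')
    return os.path.basename(os.path.realpath(link))

def needs_jumbo_frames(driver, speed_mbit):
    # Per the report above: ixgbe on a 10Gbit link wants jumbo frames.
    return driver == 'ixgbe' and speed_mbit >= 10000

def pick_mtu(driver, speed_mbit, default_mtu=1500):
    # 9000 is the minimum MTU the report says makes ixgbe perform properly.
    return 9000 if needs_jumbo_frames(driver, speed_mbit) else default_mtu
```

The link speed would still have to come from ethtool (or /sys/class/net/&lt;devname&gt;/speed) as the message suggests; a deployment tool would then write the chosen value into the ifcfg-* files.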
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
On 12/01/15 10:49, Ryan Brown wrote: On 01/12/2015 10:29 AM, Tomas Sedovic wrote: Hey folks, I did a quick proof of concept for a part of the Stack Breakpoint spec[1] and I put the "does this resource have a breakpoint" flag into the metadata of the resource: https://review.openstack.org/#/c/146123/ I'm not sure where this info really belongs, though. It does sound like metadata to me (plus we don't have to change the database schema that way), but can we use it for breakpoints etc., too? Or is metadata strictly for Heat users and not for engine-specific stuff? I'd rather not store it in metadata so we don't mix user metadata with implementation-specific-and-also-subject-to-change runtime metadata. I think this is a big enough feature to warrant a schema update (and I can't think of another place I'd want to put the breakpoint info). +1 I'm actually not convinced it should be in the template at all. Steve's suggestion of putting it in the environment might be a good one, or maybe it should even just be an extra parameter to the stack create/update APIs (like e.g. the timeout is)? I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec). While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit), I'll let Steve explain why that's not the right choice :-). +1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is not). 
I agree we need an easy way for the user to see why nothing is happening, but adding additional states to the stack is a pretty dangerous change that risks creating regressions all over the place. If we can find _any_ other way to surface the information, it would be preferable IMHO. cheers, Zane. For sublime end user confusion, we could use BROKEN. ;) Tomas [1]: http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On Tue, 13 Jan 2015, Boris Pavlovic wrote: The Idea is brilliant. I may steal it! =) Feel free. But there are some issues that will be faced: 1) Using as a base unittest: python -m subunit.run discover -f gabbi | subunit2pyunit So rally team won't be able to reuse it for load testing (if we directly integrate it) because we will have huge overhead (of discover stuff) So the use of unittest, subunit and related tools are to allow the tests to be integrated with the usual OpenStack testing handling. That is, gabbi is primarily oriented towards being a tool for developers to drive or validate their work. However we may feel about subunit, testr etc they are a de facto standard. As I said in my message at the top of the thread the vast majority of effort made in gabbi was getting it to be tests in the PyUnit view of the universe. And not just appear to be tests, but each request as an individual TestCase discoverable and addressable in the PyUnit style. In any case, can you go into more details about your concerns with discovery? In my limited exploration thus far the discovery portion is not too heavyweight: reading the YAML files. 2.3) It makes it hardly integratabtle with other tools. Like Rally.. If there's sufficient motivation and time it might make sense to separate the part of gabbi that builds TestCases from the part that runs (and evaluates) HTTP requests and responses. If that happens then integration with tools like Rally and runners is probably possible. 3) Usage by Operators is hard in case of N projects. This is not a use case that I really imagined for gabbi. I didn't want to create a tool for everyone, I was after satisfying a narrow part of the in tree functional tests need that's been discussed for the past several months. That narrow part is: legible tests of the HTTP aspects of project APIs. Operators would like to have 1 button that will say (does cloud work or not). And they don't want to combine all gabbi files from all projects and run test. 
So, while this is an interesting idea, it's not something that gabbi intends to be. It doesn't validate existing clouds. It validates code that is used to run clouds. Such a thing is probably possible (especially given the fact that you can give a real host to gabbi tests) but that's not the primary goal.

4) Using subunit format is not good for functional testing. It doesn't allow you to collect detailed information about execution of a test. Like for benchmarking it will be quite interesting to collect durations of every API call.

I think we've all got different definitions of functional testing. For example in my own personal definition I'm not too concerned about test times: I'm worried about what fails. But if you are concerned about individual test times, gabbi makes every request an individual TestCase, which means that subunit can record times for it. Here's a sample of the output from running gabbi's own tests:

$ python -m subunit.run discover gabbi | subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request [0.027512s] ... ok
[...]

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Hi Chris,

If there's sufficient motivation and time it might make sense to separate the part of gabbi that builds TestCases from the part that runs (and evaluates) HTTP requests and responses. If that happens then integration with tools like Rally and runners is probably possible.

Having a separate engine seems like a good idea. It would really simplify things.

So, while this is an interesting idea, it's not something that gabbi intends to be. It doesn't validate existing clouds. It validates code that is used to run clouds. Such a thing is probably possible (especially given the fact that you can give a real host to gabbi tests) but that's not the primary goal.

This seems like a huge duplication of effort. I mean, operators will write their own tools, developers their own... Why not just solve the more common problem: does it work or not?

But if you are concerned about individual test times, gabbi makes every request an individual TestCase, which means that subunit can record times for it. Here's a sample of the output from running gabbi's own tests:

$ python -m subunit.run discover gabbi | subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request [0.027512s] ... ok
[...]

What is test_request? Just one REST API call? Btw, the thing I am interested in is how they are all combined:

- fixtures.set
- run first REST call
- run second REST call
...
- fixtures.clean

Something like that? And where are you doing cleanup? (Like if you would like to test only creation of a resource?)

Best regards, Boris Pavlovic

On Tue, Jan 13, 2015 at 12:37 AM, Chris Dent chd...@redhat.com wrote: On Tue, 13 Jan 2015, Boris Pavlovic wrote: The Idea is brilliant. I may steal it! =) Feel free.
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
On 12/01/15 13:05, Steven Hardy wrote: I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec). While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit), I'll let Steve explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update

Did you mean the user rather than Heat for (3)? My argument is that (3) is always a stack update, either a PUT or PATCH update, e.g. we _are_ completely stopping stack creation, then a user can choose to re-start it (either with the same or a different definition).

Hmmm, ok that's interesting. I have not been thinking of it that way. I've always thought of it like this: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html (Incidentally, this suggests an implementation where the lifecycle hook is actually a resource - with its own API, naturally.) So, if it's requested, before each operation we send out a notification (hopefully via Zaqar), and if a breakpoint is set that operation is not carried out until the user makes an API call acknowledging it.

So, it _is_ really an end state, as a user might never choose to update from the stopped state, in which case *_STOPPED makes more sense.

That makes a bit more sense now. I think this is going to be really hard to implement though.
Because while one branch of the graph stops, other branches have to continue as far as they can. At what point do you change the state of the stack? Paused implies the same action as the PATCH update, only we trigger continuation of the operation from the point we reached via some sort of user signal. If we actually pause an in-progress action via the scheduler, we'd have to start worrying about stuff like token expiry, hitting timeouts, resilience to engine restarts, etc, etc. So forcing an explicit update seems simpler to me. Yes, token expiry and stack timeouts are annoying things we'd have to deal with. (Resilience to engine restarts is not affected though.) However, I'm not sure your model is simpler, and in particular it sounds much harder to implement in the convergence architecture. cheers, Zane.
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On 01/12/2015 05:00 PM, Boris Pavlovic wrote: Hi Chris, If there's sufficient motivation and time it might make sense to separate the part of gabbi that builds TestCases from the part that runs (and evaluates) HTTP requests and responses. If that happens then integration with tools like Rally and runners is probably possible. Having a separate engine seems like a good idea. It would really simplify things. So, while this is an interesting idea, it's not something that gabbi intends to be. It doesn't validate existing clouds. It validates code that is used to run clouds. Such a thing is probably possible (especially given the fact that you can give a real host to gabbi tests) but that's not the primary goal. This seems like a huge duplication of effort. I mean, operators will write their own tools, developers their own... Why not just solve the more common problem: does it work or not?

I think it's important to look at this in the narrower context: we're not testing full environments here; this is custom-crafting HTTP req/resp in a limited context to make sure components are completing a contract. "Does it work or not?" is so broad a statement as to be meaningless most of the time. It's important to be able to look at these lower-level response flows and make sure they both function, and that when they break, they do so in a debuggable way. So I'd say let's focus on that problem right now, and get some traction on this as part of functional test suites in OpenStack. Genericizing it too much just turns this back into a version of every other full-stack testing tool, which we know isn't sufficient for having quality components in OpenStack.

-Sean

-- Sean Dague http://dague.net
Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers
On 09:03 Mon 12 Jan, Erlon Cruz wrote: Hi guys, Thanks for answering my questions. I have 2 points:

1 - This (removing drivers without CI) is a very impactful change to be implemented without exhaustive notification and discussion on the mailing list. I myself was in the meeting but this decision wasn't crystal clear. There must be other driver maintainers completely unaware of this.

I agree that the mailing list has not been exhausted; however, just reaching out to the mailing list is not good enough. My instructions back on November 19th [1][2] were that we need to email individual maintainers and the openstack-dev list. That was not done. As far as I'm concerned, we can't stick to the current deadline for existing drivers. I will bring this up in the next Cinder meeting.

2 - Building a CI infrastructure and having people to maintain the CI for a new driver in a 5-week frame. Not all companies have the knowledge and resources necessary to do this in such a short period. We should consider a grace release period, i.e. drivers entering in K have until L to implement their CIs.

New driver maintainers have until March 19th. [3] That's around 17 weeks since we discussed this in November [2]. This is part of the documentation for how to contribute a driver [4], which links to the third party requirement deadline [3].

[1] - http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
[2] - http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
[4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

-- Mike Perez
Re: [openstack-dev] [api] API Definition Formats
On 1/9/15, 15:17, Everett Toews everett.to...@rackspace.com wrote: One thing that has come up in the past couple of API WG meetings [1] is just how useful a proper API definition would be for the OpenStack projects. By API definition I mean a format like Swagger, RAML, API Blueprint, etc. These formats are a machine/human readable way of describing your API. Ideally they drive the implementation of both the service and the client, rather than treating the format like documentation where it’s produced as a by-product of the implementation. I think this blog post [2] does an excellent job of summarizing the role of API definition formats. Some of the other benefits include validation of requests/responses, easier review of API design/changes, more consideration given to client design, generating some portion of your client code, generating documentation, mock testing, etc. If you have experience with an API definition format, how has it benefitted your prior projects? Do you think it would benefit your current OpenStack project? Thanks, Everett

[1] https://wiki.openstack.org/wiki/Meetings/API-WG
[2] http://apievangelist.com/2014/12/21/making-sure-the-most-important-layers-of-api-space-stay-open/

Hey Everett, As we discussed in the meeting, I have some experience with a library called Interpol [1] and using it in a massive API service. The idea behind that service was re-written as an open source case study in a project called Caravan [2]. In short, each and every endpoint used JSON Schema to validate the request and response for each version of the endpoint. (Yes, endpoints were versioned individually and that’s a topic for a different discussion.) The files used by Interpol (which is what applied the defined JSON Schema to the request/response cycle via Rack middleware) looked something like https://github.com/bendyworks/caravan/blob/master/lib/endpoint_definitions/users/user_by_id.yml.
If you read it closely, you’ll notice that path parameters are part of the schema [3] and status codes are required [4]. Each part of the schema also has the ability to be described [5]. This allows Interpol to automatically document the API for you. Also, you can define example responses [6] so you can prop up a stub application for other services/applications to use. Finally, Interpol has a way of testing the endpoint definitions (as they’re referred to) to ensure that the example data actually does follow the schema provided. As far as I know, there’s nothing similar to Interpol in Python … yet. I’m fairly confident that the middleware would take a weekend or two of sprinting to complete. Further, we could allow for more formats than YAML, but I think this could tie in well with the gabbi testing discussion taking place. The rest might take a bit longer to complete. In short, using schemas in test and in production allowed the integration/acceptance tests to remain far more succinct. If you have something enforcing your request and response formats then you can simply test that you did get a status code 200, because something else has validated the contents. If you want to validate that there are items in the array, you can skip validating the other properties because if there’s at least one, the objects inside have been validated by the middleware (so you can assert at least one came back and be confident). This worked extremely well in my experience and helped improve development time for new endpoints and new endpoint versions. The documentation was also heavily used by the multiple internal clients for that API. The company that used this ran the validation in production (as well as in testing) and had no problems with scaling or performance. The problem with building something like this /might/ be tying it in to the different frameworks used by each of the services, but on the whole that could be delegated to each service as it looks to integrate.
From my personal perspective, YAML is a nice way to document all of this data, especially since it’s a format that most any language can parse. We used these endpoint definitions to simplify how we wrote clients for the API we were developing and I suspect we could do something similar with the existing clients. It would also definitely help any new clients that people are currently writing. The biggest win for us would be having our documentation mostly auto-generated for us and having a whole suite of tests that would check that a real response matches the schema. If it doesn’t, we know the schema needs to be updated and then the docs would be automatically updated as a consequence. It’s a nice way of enforcing that the response changes are documented as they’re changed. Cheers, Ian

[1] https://github.com/seomoz/interpol
[2] https://github.com/bendyworks/caravan
[3] https://github.com/bendyworks/caravan/blob/aa05fb345ad346b85fa989e8574784912104570b/lib/endpoint_definitions/users/user_by_id.yml#L8..L12
[4]
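The schema-enforcement idea Ian describes can be sketched in miniature. The snippet below is a deliberately toy validator in plain Python, not Interpol and not a real JSON Schema implementation; the SCHEMA shape and the validate helper are illustrative assumptions only:

```python
# Toy sketch of schema-enforced response bodies (illustrative only; the
# real Interpol applies full JSON Schema via Rack middleware).
SCHEMA = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {"id": int, "name": str},
}

def validate(body, schema):
    """Return True if body matches the declared (toy) schema."""
    if schema["type"] != "object" or not isinstance(body, dict):
        return False
    if any(key not in body for key in schema["required"]):
        return False
    return all(isinstance(body[key], expected)
               for key, expected in schema["properties"].items()
               if key in body)

good = validate({"id": 1, "name": "alice"}, SCHEMA)  # well-formed response
bad = validate({"id": "1", "name": "bob"}, SCHEMA)   # wrong type for id
```

With a check like this running as middleware in both test and production, the acceptance tests can indeed assert little more than the status code, as described above, because the contents have already been validated elsewhere.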
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent chd...@redhat.com wrote: After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message and following some of the links and asking any questions that come up. The tool is called gabbi

https://github.com/cdent/gabbi
http://gabbi.readthedocs.org/
https://pypi.python.org/pypi/gabbi

It describes itself as a tool for "running HTTP tests where requests and responses are represented in a declarative form". Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP).

Hi Chris, I'm interested, sure. What did you use to write the HTTP tests, as in, what was the source of truth for what the requests and responses should be? Thanks, Anne

The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'. The test file is loaded by a small amount of python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1].

```
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```

The loader provides either:

* a host to which real over-the-network requests are made
* a WSGI app which is wsgi-intercept-ed[2]

If an individual TestCase is asked to be run by the test runner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module. Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf each YAML file can run in its own process in a concurrent test runner. The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. Response verification can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for referring to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS compliant API. At the moment the most complete examples of how things work are:

* Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/
* Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)

One obvious thing that will need to happen is a suite of concrete examples on how to use the various features. I'm hoping that feedback will help drive that. In my own experimentation with gabbi I've found it very useful.
It's helped me explore and learn the ceilometer API in a way that existing test code has completely failed to do. It's also helped reveal several warts that will be very useful to fix. And it is fast. To run and to write. I hope that with some work it can be useful to you too. Thanks.

[1] Getting gabbi to play well with PyUnit style tests and with infrastructure like subunit and testrepository was one of the most challenging parts of the build, but the result has been a lot of flexibility.
[2] https://pypi.python.org/pypi/wsgi_intercept
[3] https://pypi.python.org/pypi/jsonpath-rw

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
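The core trick Chris describes (each declarative entry becoming an individual, discoverable TestCase) can be sketched with stdlib unittest alone. Everything below is an illustrative assumption, not gabbi's actual API: the TESTS dicts stand in for parsed YAML, and fake_request stands in for a real (or wsgi-intercepted) HTTP call.

```python
import unittest

# Declarative tests as they might be parsed from a YAML file
# (hypothetical data, standing in for gabbi's format).
TESTS = [
    {"name": "root ok", "url": "/", "status": 200},
    {"name": "missing", "url": "/nope", "status": 404},
]

def fake_request(url):
    """Stand-in for a real HTTP request; returns a status code."""
    return 200 if url == "/" else 404

def make_test(spec):
    def test(self):
        self.assertEqual(spec["status"], fake_request(spec["url"]))
    return test

# Build one addressable test method per declarative entry, then a
# TestCase class holding them all, the way a load_tests hook might.
attrs = {"test_" + spec["name"].replace(" ", "_"): make_test(spec)
         for spec in TESTS}
DeclarativeHTTP = type("DeclarativeHTTP", (unittest.TestCase,), attrs)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DeclarativeHTTP)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each entry is a real test method, runners like testr and subunit can discover, address, and time them individually, which is the property the thread keeps coming back to.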
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
On Mon, Jan 12, 2015 at 05:10:47PM -0500, Zane Bitter wrote: On 12/01/15 13:05, Steven Hardy wrote: I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec). While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit), I'll let Steve explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update

Did you mean the user rather than Heat for (3)?

Oops, yes I did.

My argument is that (3) is always a stack update, either a PUT or PATCH update, e.g. we _are_ completely stopping stack creation, then a user can choose to re-start it (either with the same or a different definition).

Hmmm, ok that's interesting. I have not been thinking of it that way. I've always thought of it like this: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html (Incidentally, this suggests an implementation where the lifecycle hook is actually a resource - with its own API, naturally.) So, if it's requested, before each operation we send out a notification (hopefully via Zaqar), and if a breakpoint is set that operation is not carried out until the user makes an API call acknowledging it.

I guess I was trying to keep it initially simpler than that, given that we don't have any integration with a heat-user messaging system at present.
So, it _is_ really an end state, as a user might never choose to update from the stopped state, in which case *_STOPPED makes more sense. That makes a bit more sense now. I think this is going to be really hard to implement though. Because while one branch of the graph stops, other branches have to continue as far as they can. At what point do you change the state of the stack?

True, this is a disadvantage of specifying a single breakpoint when there may be parallel paths through the graph. However, I was thinking we could just reuse our existing error path implementation, so it needn't be hard to implement at all, e.g.:

1. Stack action started where a resource has a breakpoint set
2. Stack.stack_task.resource_action checks if resource is a breakpoint
3. If a breakpoint is set, we raise an exception.ResourceFailure subclass
4. The normal error_wait_time is respected, e.g. currently in-progress actions are given a chance to complete.

Basically, the only implementation would be raising a special new type of exception, which would enable a suitable message (and event) to be shown to the user: "Stack create aborted due to breakpoint on resource foo". Pre/post breakpoint actions/messaging could be added later via a similar method to the stack-level lifecycle plugin hooks. If folks are happy with e.g. CREATE_FAILED as a post-breakpoint state, this could simplify things a lot, as we'd not need any new state or much new code at all?

Paused implies the same action as the PATCH update, only we trigger continuation of the operation from the point we reached via some sort of user signal. If we actually pause an in-progress action via the scheduler, we'd have to start worrying about stuff like token expiry, hitting timeouts, resilience to engine restarts, etc, etc. So forcing an explicit update seems simpler to me.

Yes, token expiry and stack timeouts are annoying things we'd have to deal with. (Resilience to engine restarts is not affected though.)
However, I'm not sure your model is simpler, and in particular it sounds much harder to implement in the convergence architecture.

So you're advocating keeping the scheduler spinning, until a user sends a signal to the resource to clear the breakpoint? I don't see why we couldn't do both, have an abort_on_breakpoint flag or something, but I'd be interested in further understanding how the error-path approach outlined above would be incompatible with convergence. Thanks, Steve
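Steve's error-path idea, treating a breakpoint as a special failure raised before the resource action runs, might be sketched as below. All names here (ResourceBreakpoint, resource_action, the state strings used at module level) are illustrative assumptions for this thread's discussion, not Heat's actual code:

```python
class ResourceFailure(Exception):
    """Stand-in for Heat's existing resource-failure exception."""

class ResourceBreakpoint(ResourceFailure):
    """Raised instead of running the action when a breakpoint is set."""

def resource_action(name, breakpoints, action):
    # Steps 2-3 of the outline: check for a breakpoint before acting.
    if name in breakpoints:
        raise ResourceBreakpoint(
            "Stack create aborted due to breakpoint on resource %s" % name)
    return action()

# Without a breakpoint the action runs normally...
state = resource_action("server", set(), lambda: "CREATE_COMPLETE")

# ...with one, the existing failure path puts the stack in *_FAILED,
# which is why no new stack state would be strictly required.
try:
    resource_action("server", {"server"}, lambda: "CREATE_COMPLETE")
    stack_state = "CREATE_COMPLETE"
except ResourceFailure:
    stack_state = "CREATE_FAILED"
```

The appeal of this shape is that everything downstream (error_wait_time, events, the FAILED state) is the machinery Heat already has; the breakpoint only adds the raise.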
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On Mon, 12 Jan 2015, Anne Gentle wrote: I'm interested, sure. What did you use to write the HTTP tests, as in, what was the source of truth for what the requests and responses should be?

That is an _extremely_ good question and one I really struggled with as I started integrating gabbi with ceilometer. Initially I thought I'd just use the API docs[1] as the source of truth, but I found they were a bit incomplete on some of the nuances, so I asked around for other sources of truth, but got little in the way of response. So then I tried to use the API controller code but, not to put too fine a point on it, the combination of WSME and Pecan makes for utterly inscrutable code if you're interested in the actual structure of the HTTP requests and responses and the URIs being used. So then I tried to use the existing API unit tests and was able to extract a bit there, but it wasn't smooth sailing. So finally what I did was decide that I would do the work in phases and with collaborators: I'd get the initial framework in place and then impose upon those more familiar with the API than I to do subsequent dependent patchsets that cover the API more completely.

I have to admit that the concept of API truth is part of the reason I wanted to create this kind of testing. None of the resources I could find in the ceilometer code tree gave any clear overview that mapped URIs to methods, allowing easy discovery of how the code works. I wanted to find some kind of map[2]. Gabbi itself doesn't solve this problem (there's no map between URI and python method) but it can at least show the API, there in the code. It's a step in the right direction. I know that there are discussions in progress about formalizing APIs with tools like RAML (for example the thread Ian just extended[3]). I think these have their place, especially for declaring truth, but they aren't necessarily good learning aids for new developers or good assistants for enabling and maintaining transparency.
[1] I started at: http://docs.openstack.org/developer/ceilometer/webapi/v2.html but I think I should have used: http://developer.openstack.org/api-ref-telemetry-v2.html
[2] https://github.com/tiddlyweb/tiddlyweb/blob/master/tiddlyweb/urls.map
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054153.html

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
Sean,

So I'd say let's focus on that problem right now, and get some traction on this as part of functional test suites in OpenStack. Genericizing it too much just turns this back into a version of every other full stack testing tool, which we know isn't sufficient for having quality components in OpenStack.

Could you be more specific about which tools were tested? It would be nice to see an overview: at least which tools were tested and why they can't be used for in-tree testing.

Best regards, Boris Pavlovic

On Tue, Jan 13, 2015 at 1:37 AM, Anne Gentle a...@openstack.org wrote: On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent chd...@redhat.com wrote: After some discussion with Sean Dague and a few others it became clear that it would be a good idea to introduce a new tool I've been working on to the list to get a sense of its usefulness generally, work towards getting it into global requirements, and get the documentation fleshed out so that people can actually figure out how to use it well. tl;dr: Help me make this interesting tool useful to you and your HTTP testing by reading this message and following some of the links and asking any questions that come up. The tool is called gabbi https://github.com/cdent/gabbi http://gabbi.readthedocs.org/ https://pypi.python.org/pypi/gabbi It describes itself as a tool for "running HTTP tests where requests and responses are represented in a declarative form". Its main purpose is to allow testing of APIs where the focus of test writing (and reading!) is on the HTTP requests and responses, not on a bunch of Python (that obscures the HTTP). Hi Chris, I'm interested, sure. What did you use to write the HTTP tests, as in, what was the source of truth for what the requests and responses should be? Thanks, Anne The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'.
The test file is loaded by a small amount of Python code which transforms the file into an ordered sequence of TestCases in a TestSuite[1].

```
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```

The loader provides either:
* a host to which real over-the-network requests are made
* a WSGI app which is wsgi-intercept-ed[2]

If an individual TestCase is asked to be run by the testrunner, those tests that are prior to it in the same file are run first, as prerequisites. Each test file can declare a sequence of nested fixtures to be loaded from a configured (in the loader) module. Fixtures are context managers (they establish the fixture upon __enter__ and destroy it upon __exit__). With a proper group_regex setting in .testr.conf each YAML file can run in its own process in a concurrent test runner.

The docs contain information on the format of the test files: http://gabbi.readthedocs.org/en/latest/format.html

Each test can state request headers and bodies and evaluate both response headers and response bodies. Request bodies can be strings in the YAML, files read from disk, or JSON created from YAML structures. Response verification can use JSONPath[3] to inspect the details of response bodies. Response header validation may use regular expressions. There is limited support for referring to the previous request to construct URIs, potentially allowing traversal of a full HATEOAS-compliant API.
At the moment the most complete examples of how things work are:
* Ceilometer's pending use of gabbi: https://review.openstack.org/#/c/146187/
* Gabbi's testing of gabbi: https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept (the loader and faked WSGI app for those yaml files is in: https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)

One obvious thing that will need to happen is a suite of concrete examples on how to use the various features. I'm hoping that feedback will help drive that.

In my own experimentation with gabbi I've found it very useful. It's helped me explore and learn the ceilometer API in a way that existing test code has completely failed to do. It's also helped reveal several warts that will be very useful to fix. And it is fast. To run and to write. I hope that with some work it can be useful to you too. Thanks.

[1] Getting gabbi to play well with PyUnit style tests and with infrastructure like subunit and testrepository was one of the most challenging parts of the build, but the result has been a lot of flexibility.
[2] https://pypi.python.org/pypi/wsgi_intercept
[3] https://pypi.python.org/pypi/jsonpath-rw
--
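The YAML-to-TestCase transformation Chris describes can be sketched in plain unittest terms. This is a toy illustration with invented names (fake_app, build_suite), NOT gabbi's actual code: it only shows the pattern of building dynamically named TestCase classes from declarative test definitions.

```python
import unittest

# Toy sketch of the declarative-test idea (invented names, not gabbi's
# real API): each dict plays the role of one YAML test entry.

def fake_app(url):
    """Stand-in for a WSGI app: map a URL to (status, body)."""
    routes = {'/': (200, 'home')}
    return routes.get(url, (404, ''))

def build_suite(test_defs):
    """Create one dynamically named TestCase per declarative test."""
    suite = unittest.TestSuite()
    for defn in test_defs:
        def make_test(d):
            def test(self):
                status, _body = fake_app(d['url'])
                # Default expectation is a 200, like gabbi's simplest test.
                self.assertEqual(d.get('status', 200), status)
            return test
        name = 'test_' + defn['name'].replace(' ', '_')
        case_cls = type(name, (unittest.TestCase,), {name: make_test(defn)})
        suite.addTest(case_cls(name))
    return suite

# The YAML "tests:" sequence, already parsed into Python structures.
tests = [
    {'name': 'a test', 'url': '/'},
    {'name': 'not found', 'url': '/missing', 'status': 404},
]
result = unittest.TestResult()
build_suite(tests).run(result)
```

The dynamically built class names are what produce long dotted test identifiers like the subunit output quoted later in this thread.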
[openstack-dev] SR-IOV IRC meeting on Jan, 13th Canceled
Hi, I’m canceling the meeting since I’m traveling this week. Regards, Robert __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] API Definition Formats
On Mon, 12 Jan 2015, Ian Cordasco wrote: This worked extremely well in my experience and helped improve development time for new endpoints and new endpoint versions. The documentation was also heavily used for the multiple internal clients for that API.

This idea of definition formats seems like a reasonable one (see my response to Anne over on the gabbi thread[1]) but I worry about a few things:
* Unless you're auto generating the code from the formal definition you run into a lot of opportunities for truth to get out of sync between the definition and the implementation.
* Ugh, auto generated code. Magic. Ew. This is Python by golly!
* Specifying every single endpoint or many endpoints is just about as anti-REST as you can get if you're a HATEOAS believer. I suspect this line of concern is well-trod ground and not worth bringing back up, but all this stuff about versioning is meh and death to client diversity.
* Yes to this: The problem with building something like this /might/ be tying it in to the different frameworks used by each of the services but on the whole could be delegated to each service as it looks to integrate.

All that said, what you describe in the following would be nice if it can be made true and work well. I suspect I'm still scarred from WSDL and company but I'm not optimistic that culturally it can be made to work. Simple HTTP APIs win over SOAP, pragmatic HTTP wins over true REST, and JSON wins over XML because the former in each pair have a flavor of flexibility and easy-to-diddle quality that does not exist in the latter. The problem is social, not technical.

From my personal perspective, YAML is a nice way to document all of this data, especially since it’s a format that most any language can parse. We used these endpoint definitions to simplify how we wrote clients for the API we were developing and I suspect we could do something similar with the existing clients. It would also definitely help any new clients that people are currently writing.
The biggest win for us would be having our documentation mostly auto-generated for us and having a whole suite of tests that would check that a real response matches the schema. If it doesn’t, we know the schema needs to be updated and then the docs would be automatically updated as a consequence. It’s a nice way of enforcing that the response changes are documented as they’re changed.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054287.html

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
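The "one schema drives both validation and docs" idea can be sketched very simply. This is a hand-rolled, illustrative checker with invented field names; a real project would more likely use a library such as jsonschema.

```python
# One declared schema serves two purposes: validating real responses
# against the contract, and generating the doc fragment for that endpoint.
# Field names here are invented examples, not any OpenStack API's schema.

SCHEMA = {'name': str, 'status': str, 'size': int}

def validate(response, schema):
    """Return a list of mismatches between a response dict and the schema."""
    errors = []
    for field, ftype in schema.items():
        if field not in response:
            errors.append('missing field: %s' % field)
        elif not isinstance(response[field], ftype):
            errors.append('%s: expected %s, got %s' % (
                field, ftype.__name__, type(response[field]).__name__))
    return errors

def make_docs(schema):
    """Generate a plain-text doc fragment from the same schema."""
    return '\n'.join('%s (%s)' % (f, t.__name__)
                     for f, t in sorted(schema.items()))

good = {'name': 'image-1', 'status': 'active', 'size': 1024}
bad = {'name': 'image-1', 'size': 'big'}
```

If a live response fails `validate`, either the server or the schema is wrong; fixing the schema automatically refreshes the generated docs, which is exactly the enforcement property described above.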
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
I was also thinking of using the environment to hold the breakpoint, similarly to parameters. The CLI and API would process it just like parameters. As for the state of a stack hitting the breakpoint, leveraging the FAILED state seems to be sufficient; we just need to add enough information to differentiate between a failed resource and a resource at a breakpoint. Something like emitting an event or a message should be enough to make that distinction. Debuggers for native programs typically do the same thing, leveraging the exception handling in the OS by inserting an artificial error at the breakpoint to force a program to stop. Then the debugger just remembers the addresses of these artificial errors to decode the state of the stopped program.

As for the workflow, instead of spinning in the scheduler waiting for a signal, I was thinking of moving the stack off the engine as a failed stack. So this would be an end-state for the stack as Steve suggested, but without adding a new stack state. Again, this is similar to how a program being debugged is handled: it is moved off the ready queue and its context is preserved for examination. This seems to keep the implementation simple and we don't have to worry about timeout, performance, etc. Continuing from the breakpoint then should be similar to stack-update on a failed stack. We do need some additional handling, such as allowing in-progress resources to run to completion instead of aborting.

For the parallel paths in a template, I am thinking about these alternatives:
1. Stop after all the current in-progress resources complete, but do not start any new resources even if there is no dependency. This should be easier to implement, but the state of the stack would be non-deterministic.
2. Stop only the paths with the breakpoint, and continue all other parallel paths to completion. This seems harder to implement, but the stack would be in a deterministic state and easier for the user to reason about.
To be compatible with convergence, I had suggested to Clint earlier to add a mode where the convergence engine does not attempt to retry so the user can debug, and I believe this was added to the blueprint. Ton,

From: Steven Hardy sha...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 01/12/2015 02:40 PM Subject: Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

On Mon, Jan 12, 2015 at 05:10:47PM -0500, Zane Bitter wrote: On 12/01/15 13:05, Steven Hardy wrote: I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec). While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit), I'll let Steve explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:
1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update

Did you mean the user rather than Heat for (3)? Oops, yes I did. My argument is that (3) is always a stack update, either a PUT or PATCH update, e.g. we _are_ completely stopping stack creation, then a user can choose to re-start it (either with the same or a different definition). Hmmm, ok that's interesting. I have not been thinking of it that way.
I've always thought of it like this: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html (Incidentally, this suggests an implementation where the lifecycle hook is actually a resource - with its own API, naturally.) So, if it's requested, before each operation we send out a notification (hopefully via Zaqar), and if a breakpoint is set that operation is not carried out until the user makes an API call acknowledging it.

I guess I was trying to keep it initially simpler than that, given that we don't have any integration with a heat-user messaging system at present. So, it _is_ really an end state, as a user might never choose to update from the stopped state, in which case *_STOPPED makes more sense.

That makes a bit more sense now. I think this is going to be really hard to implement though. Because while one branch of the graph stops, other branches have to continue as far as they can. At what point do you change the state of the stack? True, this is a disadvantage of specifying a single breakpoint when there may be parallel paths through the graph.
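Ton's alternative 1 (stop launching new resources once the breakpoint is hit, reusing the FAILED state) can be modeled in a few lines. This is a toy model with invented names, not Heat code, and it ignores in-progress resources and parallel branches:

```python
# Toy model of "alternative 1" from the thread: walk resources in
# dependency order, stop launching new work at the breakpoint, and reuse
# the FAILED state to mark where the stack stopped. Invented names only.

def create_stack(ordered_resources, breakpoints):
    """Return (resources created, final stack state, breakpoint hit)."""
    created = []
    for name in ordered_resources:
        if name in breakpoints:
            # An event/message would distinguish this from a real failure.
            return created, 'CREATE_FAILED', name
        created.append(name)
    return created, 'CREATE_COMPLETE', None

created, state, stopped_at = create_stack(['net', 'subnet', 'server'],
                                          breakpoints={'server'})
```

Continuing from the breakpoint would then be like a stack-update on a failed stack: re-walk the graph, skipping resources already created.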
[openstack-dev] [devstack] Devstack plugins and gate testing
Hi, With [1] merged, we now have people working on creating external plugins for devstack. I worry about use of arbitrary external locations as plugins for gate jobs. If a plugin is hosted externally (github, bitbucket, etc) we are introducing a whole host of problems when it is used as a gate job. Lack of CI testing for proposed changes, uptime of the remote end, ability to accept contributions, lack of administrative access and consequent ability to recover from bad merges are a few. I would propose we agree that plugins used for gate testing should be hosted in stackforge unless there are very compelling reasons otherwise. To that end, I've proposed [2] as some concrete wording. If we agree, I could add some sort of lint for this to project-config testing. Thanks, -i

[1] https://review.openstack.org/#/c/142805/ (Implement devstack external plugins)
[2] https://review.openstack.org/#/c/146679/ (Document use of plugins for gate jobs)
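The "lint for project-config" idea could be as simple as an allow-list check on plugin URLs. This is a hypothetical sketch, not actual project-config code; the regex and function names are invented:

```python
import re

# Hypothetical sketch of the proposed lint: flag gate-job devstack plugin
# URLs that are not hosted on OpenStack infra (git.openstack.org).

INFRA_HOSTED = re.compile(r'^https?://git\.openstack\.org/(openstack|stackforge)/')

def non_infra_plugins(urls):
    """Return plugin URLs that would not be acceptable for gate jobs."""
    return [u for u in urls if not INFRA_HOSTED.match(u)]

flagged = non_infra_plugins([
    'https://git.openstack.org/stackforge/devstack-plugin-foo',
    'https://github.com/example/devstack-plugin-bar',
])
```

A check like this, run against proposed job definitions, would catch externally hosted plugins before they reach the gate.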
Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs
On Tue, 13 Jan 2015, Boris Pavlovic wrote: Having separated engine seems like a good idea. It will really simplify stuff

I'm not certain that's the case, but it may be worth exploration.

This seems like a huge duplication of efforts. I mean operators will write own tools developers own... Why not just resolve more common problem: Does it work or not?

Because no one tool can solve all problems well. I think it is far better to have lots of small tools that are fairly focused on doing one or a few small jobs well. It may be that there are pieces of gabbi which can be reused or extracted to more general libraries. If so, that's fantastic. But I think it is very important to try to solve one problem at a time rather than everything at once.

$ python -m subunit.run discover gabbi |subunit-trace [...] gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request [0.027512s] ... ok [...] What is test_request? Just one REST API call?

That long dotted name is the name of a dynamically created single TestCase (some metaclass mumbo jumbo magic is used to turn the YAML into TestCase classes) and within that TestCase is one single HTTP request and the evaluation of its response. It directly corresponds to a test named inheritance of defaults in a file called self.yaml. self.yaml is in a directory containing other YAML files, all of which are loaded by a Python file named test_intercept.py.

Btw, the thing I am interested in is how they are all combined?

As I said before: Each yaml file is an ordered sequence of tests, each one representing a single HTTP request. Fixtures are per yaml file. There is no cleanup phase outside of the fixtures. Each fixture is expected to do its own cleanup, if required.

And where are you doing cleanup? (like if you would like to test only creation of resource?)

In the ceilometer integration that is currently being built, the test_gabbi.py[1] file configures itself to use a mongodb database that is unique for this process.
The test harness is responsible for starting the mongodb. In a concurrency situation, each process will have a different database in the same mongo server. When the test run is done, mongo is shut down and the databases removed. In other words, the environment surrounding gabbi is responsible for doing the things it is good at, and gabbi does the HTTP tests.

A long running test cannot necessarily depend on what else might be in the datastore used by the API. It needs to test that which it knows about. I hope that clarifies things a bit.

[1] https://review.openstack.org/#/c/146187/2/ceilometer/gabbi/test_gabbi.py,cm

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
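The per-file fixture pattern described here (set up state on enter, always clean up on exit, one database per test process) can be sketched as a context manager. Names are invented for illustration, not gabbi's or ceilometer's actual code:

```python
import contextlib
import os

# Illustrative sketch of a gabbi-style fixture: a context manager that
# builds a process-unique database name on __enter__ and does its own
# cleanup on __exit__. The dict stands in for a real datastore handle.

@contextlib.contextmanager
def unique_database(prefix='gabbi'):
    db = {'name': '%s-%d' % (prefix, os.getpid()), 'up': True}
    try:
        yield db          # the YAML file's tests would run here
    finally:
        db['up'] = False  # fixture tears down its own state, always

with unique_database() as db:
    used_name = db['name']
    was_up = db['up']
```

Because the name includes the process id, concurrent test processes get distinct databases against the same server, matching the setup described above.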
Re: [openstack-dev] [api] API Definition Formats
On 1/12/15, 17:21, Chris Dent chd...@redhat.com wrote: On Mon, 12 Jan 2015, Ian Cordasco wrote: This worked extremely well in my experience and helped improve development time for new endpoints and new endpoint versions. The documentation was also heavily used for the multiple internal clients for that API.

This idea of definition formats seems like a reasonable one (see my response to Anne over on the gabbi thread[1]) but I worry about a few things:

* Unless you're auto generating the code from the formal definition you run into a lot of opportunities for truth to get out of sync between the definition and the implementation.

The /documentation/ was used by /developers/ to build the internal clients. It was also used by the front-end developers who built the user-facing interface that consumed these APIs.

* Ugh, auto generated code. Magic. Ew. This is Python by golly!

I’m not suggesting auto-generated code (although that’s always a *possibility*).

* Specifying every single endpoint or many endpoints is just about as anti-REST as you can get if you're a HATEOAS believer. I suspect this line of concern is well-trod ground and not worth bringing back up, but all this stuff about versioning is meh and death to client diversity.

Except that we don’t even try to achieve HATEOAS (or at least the OpenStack APIs I’ve seen don’t). If we’re being practical about it, then the idea that we have a contract between the API consumer (also read: user) and the server makes for a drastic simplification. The fact that the documentation is auto-generated means that writing tests with gabbi would be so much simpler for you (than waiting for people familiar with it to help you).

* Yes to this: The problem with building something like this /might/ be tying it in to the different frameworks used by each of the services but on the whole could be delegated to each service as it looks to integrate.
All that said, what you describe in the following would be nice if it can be made true and work well. I suspect I'm still scarred from WSDL and company but I'm not optimistic that culturally it can be made to work. Simple HTTP APIs win over SOAP, pragmatic HTTP wins over true REST, and JSON wins over XML because the former in each pair have a flavor of flexibility and easy-to-diddle quality that does not exist in the latter. The problem is social, not technical.

Well I’ve only seen it used with JSON, so I’m not sure where you got XML from (or SOAP for that matter). Besides, this is a tool that will help the API developers more than it will hurt them. In-tree definitions in a (fairly) human readable format that clearly states what is accepted and generated by an endpoint means that scrutinizing Pecan and WSME isn’t necessary (until you start writing the endpoint itself).

From my personal perspective, YAML is a nice way to document all of this data, especially since it’s a format that most any language can parse. We used these endpoint definitions to simplify how we wrote clients for the API we were developing and I suspect we could do something similar with the existing clients. It would also definitely help any new clients that people are currently writing.

The biggest win for us would be having our documentation mostly auto-generated for us and having a whole suite of tests that would check that a real response matches the schema. If it doesn’t, we know the schema needs to be updated and then the docs would be automatically updated as a consequence. It’s a nice way of enforcing that the response changes are documented as they’re changed.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054287.html

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
[openstack-dev] sqlalchemy-migrate 0.9.4 released
0.9.2 was blocked because of a change that broke unit tests in some projects; that is fixed in 0.9.4. What happened to 0.9.3? Problems, don't ask - fixed in 0.9.4 (thanks mordred). Changes:

mriedem@ubuntu:~/git/sqlalchemy-migrate$ git log --no-merges --oneline 0.9.2..0.9.4
b011e6c Remove svn version tag setting
938757e Ignore transaction management statements in SQL scripts
74553f4 Use native sqlalchemy 0.9 quote attribute with ibmdb2
244c6c5 Don't add warnings filter on import
30f6aea pep8: mark all pep8 checks that currently fail as ignored
7bb74f7 Fix ibmdb2 unique constraint handling for sqlalchemy 0.9

Of special note is 244c6c5 which should remove a ton of the DeprecationWarnings that show up in unit test runs for other projects, like Nova. Also thanks to clarkb for helping me do my first release, you were so gentle. :) -- Thanks, Matt Riedemann
[openstack-dev] [nova] Requesting exception for JSON-Home spec
Hi, This spec[1] is for adding the JSON-Home feature to the Nova v2.1 API. This feature will provide API resource information in a standard way which has already been implemented in Keystone. I hope this feature will encourage people to use the v2.1 API in production environments. I created a prototype[2] for this feature, and I have found it is not difficult to implement. Thanks Ken'ichi Ohmichi --- [1]: https://review.openstack.org/#/c/130715/ [2]: https://review.openstack.org/#/c/145100/
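For readers unfamiliar with JSON-Home: a document in that style advertises API resources under a top-level "resources" object, keyed by link relation, so clients discover URIs instead of hard-coding them. The relation names and paths below are invented for illustration; they are not Nova's actual resources:

```python
import json

# Rough illustration of a JSON-Home style document (per the json-home
# draft). Relation names and paths here are made up, not Nova's.

json_home = {
    "resources": {
        "rel/example_servers": {
            "href-template": "/servers/{server_id}",
            "href-vars": {"server_id": "param/server_id"},
        },
        "rel/example_flavors": {"href": "/flavors"},
    }
}

# A client discovers resources from the document rather than hard-coding
# paths; templated resources carry href-template plus href-vars.
doc = json.loads(json.dumps(json_home))
template = doc["resources"]["rel/example_servers"]["href-template"]
```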
Re: [openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface
2015-01-13 13:57 GMT+08:00 少合冯 lvmxhs...@gmail.com: Hello, I'd like to request an exception for the Attach/Detach SR-IOV interface feature. [1] This is an important feature that aims to provide better performance than a normal network interface in guests, and it is not too hard to implement. Thanks, Shao He, Feng [1] https://review.openstack.org/#/c/139910/ https://review.openstack.org/#/c/128825

Sorry, the above link is wrong. This is the right one: [1] https://review.openstack.org/#/c/139910/ Thanks.
Re: [openstack-dev] [horizon] static files handling, bower/
On 12/01/15 21:53, Drew Fisher wrote: I know I'm very very late to this thread but can I ask why Bower? Bower has a hard requirement on Node.js which was removed as a dependency in Havana. Why are we reintroducing this requirement? For Solaris, a requirement on Node.js is especially problematic as there is no official SPARC port and I'm not aware of anybody else working on one. I agree that XStatic isn't really the best solution here but are there any other solutions that don't involve Node.js?

The same is true for ARM based machines, as node.js is AFAIK x86 only. But, as far as I understand, node.js will become a development requirement (and most probably a requirement for testing), but not for deployment. Bower is just another package manager, comparable to npm, pip etc. if you use those alongside your system's package manager. The idea is to use something like dpkg or rpm to provide dependencies for installation. During development and testing, it's proposed to rely on bower to install dependencies. Matthias
[openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/13
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)
1) Remove direct nova DB/API access by Scheduler Filters - https://review.openstack.org/138444/
2) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo
3) Topics for mid-cycle meetup
-- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786
[openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface
Hello, I'd like to request an exception for the Attach/Detach SR-IOV interface feature. [1] This is an important feature that aims to provide better performance than a normal network interface in guests, and it is not too hard to implement. Thanks, Shao He, Feng [1] https://review.openstack.org/#/c/139910/
[openstack-dev] Requesting Exception/Review for Compute-Capabilities spec
Hi, Ironic needs this feature from Nova to implement firmware settings. The code has also been proposed. Spec link: https://review.openstack.org/133534 Code link: https://review.openstack.org/141010 Regards Nisha -- The Secret Of Success is learning how to use pain and pleasure, instead of having pain and pleasure use you. If you do that you are in control of your life. If you don't, life controls you.
[openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface
Hello, I'd like to request an exception for the Attach/Detach SR-IOV interface feature. [1] This is an important feature that aims to provide better performance than a normal network interface in guests, and it is not too hard to implement. Thanks, Shao He, Feng [1] https://review.openstack.org/#/c/139910/ https://review.openstack.org/#/c/128825
Re: [openstack-dev] [Fuel] Dropping Python-2.6 support
On 01/12/2015 03:55 PM, Roman Prykhodchenko wrote: Folks, as planned and then announced at the OpenStack summit, OpenStack services have deprecated Python 2.6 support. At the moment several services and libraries are already only compatible with Python >= 2.7, and there is no common sense in trying to restore compatibility with Py2.6 because OpenStack infra does not run tests for that version of Python. The point of this email is that some components of Fuel, say, Nailgun and Fuel Client, are still only tested with Python 2.6. Fuel Client in its turn is about to use OpenStack CI's python-jobs for running unit tests. That means that in order to make it compatible with Py2.6 there is a need to run a separate python job in FuelCI. However, I believe that forcing things to be compatible with 2.6 when the rest of the ecosystem decided not to go with it, and when Py2.7 is already available in the main CentOS repo, sounds like a battle against common sense. So my proposal is to drop 2.6 support in Fuel-6.1.

While I come from the lands where being bleeding edge is preferred, I ask myself (as not a programmer) one thing: what does 2.7 provide that you cannot easily achieve in 2.6? Regards, Bartłomiej
Re: [openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface
2015-01-13 13:57 GMT+08:00 少合冯 lvmxhs...@gmail.com: Hello, I'd like to request an exception for the Attach/Detach SR-IOV interface feature. [1] This is an important feature that aims to provide better performance than a normal network interface in guests, and it is not too hard to implement. Thanks, Shao He, Feng [1] https://review.openstack.org/#/c/139910/

Oops, after I clicked the link it forwarded to a wrong page, but I can open it by copying the text https://review.openstack.org/#/c/139910/ into the web browser directly. :)
[openstack-dev] [Nova] Requesting exception for add separated policy rule for each v2.1 api
https://review.openstack.org/#/c/127863/ This spec is part of the Nova REST API policy improvement, and those improvements have already received general agreement, as in this full-view devref: https://review.openstack.org/#/c/138270/ This spec is just for the Nova REST API v2.1, so I really hope it can be done before v2.1 is released; then we needn't think about the upgrade impact for deployers. Finish this simple task while it's simple. Thanks Alex
Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs
Hello, Sachi, They both work. The endpoint group has been renamed to policy target group; it is recommended to use gbp policy-target-group-create. Yapeng

From: Sachi Gupta [mailto:sachi.gu...@tcs.com] Sent: Monday, January 12, 2015 7:03 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

Hi, Can anyone explain the difference between gbp group-create and gbp policy-target-group-create? I think both of these work the same. Thanks Regards Sachi Gupta

From: Sumit Naiksatam sumitnaiksa...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 11/26/2014 01:35 PM Subject: Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

Hi, This GBP spec is currently being worked on: https://review.openstack.org/#/c/134285/ It will be helpful if you can add [Policy][Group-based-policy] in the subject of your emails, so that the email gets categorized correctly. Thanks, ~Sumit.

On Tue, Nov 25, 2014 at 4:27 AM, Sachi Gupta sachi.gu...@tcs.com wrote: Hey All, I need to understand the interaction between the OpenStack GBP and the OpenDaylight GBP project, which will be done by the ODL Policy driver. Can someone provide me with specs of the ODL Policy driver to build my understanding of the call flow. Thanks Regards Sachi Gupta
Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
You are correct to run nodepoold as nodepool user. I didn’t see any issues… Could you double check the public keys listed in .ssh/authorized_keys in the template for Ubuntu and Jenkins users match $NODEPOOL_SSH_KEY? Ramy From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com] Sent: Monday, January 12, 2015 5:30 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Regarding the last issue, i fixed it by logging in and manually pip install docutils. Image was created successfully. Now the problem is that nodepool is not able to login into instances created from that image. I have NODEPOOL_SSH_KEY exported in the screen where nodepool is running, and also i am able to login to the instance from user nodepool, but nodepoold gives error: 2015-01-12 14:19:03,095 DEBUG paramiko.transport: Switch to new keys ... 2015-01-12 14:19:03,109 DEBUG paramiko.transport: Trying key c03fbf64440cd0c2ecbc07ce4ed59804 from /home/nodepool/.ssh/id_rsa 2015-01-12 14:19:03,135 DEBUG paramiko.transport: userauth is OK 2015-01-12 14:19:03,162 INFO paramiko.transport: Authentication (publickey) failed. 2015-01-12 14:19:03,185 DEBUG paramiko.transport: Trying discovered key c03fbf64440cd0c2ecbc07ce4ed59804 in /home/nodepool/.ssh/id_rsa 2015-01-12 14:19:03,187 DEBUG paramiko.transport: userauth is OK ^C2015-01-12 14:19:03,210 INFO paramiko.transport: Authentication (publickey) failed. 2015-01-12 14:19:03,253 DEBUG paramiko.transport: EOF in transport thread 2015-01-12 14:19:03,254 INFO nodepool.utils: Password auth exception. Try number 4... 
echo $NODEPOOL_SSH_KEY
B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v

cat /home/nodepool/.ssh/id_rsa.pub
ssh-rsa B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v jenkins@jenkins-cinderci

ssh ubuntu@10.100.128.136 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/nodepool/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.100.128.136 [10.100.128.136] port 22.
debug1: Connection established.
debug1: Offering RSA public key: /home/nodepool/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to 10.100.128.136 ([10.100.128.136]:22).
...

I was able to log in to the template instance and I am also able to log in to the slave instances. nodepoold was also able to log in to the template instance, but it now fails to log in to the slave. I tried running it as either the nodepool or jenkins user, with the same result.
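One quick way to act on Ramy's suggestion is to compare the key material field by field rather than by eye. A minimal sketch (the key strings below are placeholders, not the real CI key):

```shell
# An OpenSSH public-key line has the form "<type> <base64> <comment>";
# field 2 is the material that must match $NODEPOOL_SSH_KEY exactly.
NODEPOOL_SSH_KEY="AAAAexamplekeymaterial"                             # placeholder
authorized_line="ssh-rsa AAAAexamplekeymaterial jenkins@jenkins-ci"   # placeholder
authorized_key=$(echo "$authorized_line" | awk '{print $2}')
if [ "$NODEPOOL_SSH_KEY" = "$authorized_key" ]; then
    echo "key material matches"
else
    echo "MISMATCH: rebuild the image or re-export NODEPOOL_SSH_KEY"
fi
```

Running this against the real values on the nodepool host and inside the built image would show whether the authorized_keys entry the image ships actually corresponds to the key nodepoold offers.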
Thanks,
Eduard

On Mon, Jan 12, 2015 at 2:09 PM, Eduard Matei eduard.ma...@cloudfounders.com wrote:

Hi,

Back with another error during image creation with nodepool:

2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c: Downloading python-daemon-2.0.1.tar.gz (62kB)
2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c: Traceback (most recent call last):
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:   File "<string>", line 20, in <module>
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:   File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:     import version
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:   File "version.py", line 51, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:     import docutils.core
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: ImportError: No module named docutils.core
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Complete output from command python setup.py egg_info:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Traceback (most recent call last):
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:   File "<string>", line 20, in <module>
2015-01-12 13:05:18,025 INFO
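The traceback shows that python-daemon's setup.py imports docutils.core at build time, so docutils must be importable before pip builds the package. A sketch of the kind of guard one could add to an image setup script; the use of python3 and the exact messages are illustrative, not part of the actual nodepool scripts (in 2015 the image python was likely python 2):

```shell
# Probe for docutils before the pip run; in a real setup script the
# "missing" branch would install it (e.g. pip install docutils),
# which is the manual fix Eduard applied by hand.
if python3 -c 'import docutils.core' 2>/dev/null; then
    status="present"
else
    status="missing"
fi
echo "docutils: $status"
```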
Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?
On Mon, Jan 12, 2015 at 04:29:15PM +0100, Tomas Sedovic wrote:

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint spec[1] and I put the "does this resource have a breakpoint" flag into the metadata of the resource:

https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like metadata to me (plus we don't have to change the database schema that way), but can we use it for breakpoints etc., too? Or is metadata strictly for Heat users and not for engine-specific stuff?

Metadata is supposed to be for template-defined metadata (with the notable exception of server resources, where we merge SoftwareDeployment metadata into that defined in the template). So if we're going to use the metadata template interface as a way to define the breakpoint, this is OK, but do we want to mix the definition of the stack with this flow-control data? (I personally think probably not.)

I can think of a couple of alternatives:

1. Use resource_data, which is intended for per-resource internal data, and set it based on API data passed on create/update (see Resource.data_set).

2. Store the breakpoint metadata in the environment.

I think the environment may be the best option, but we'll have to work out how best to represent a tree of nested stacks (something the spec interface description doesn't consider, AFAICS). If we use the environment, then no additional API interfaces are needed, just support for a new key in the existing data, and python-heatclient can take care of translating any CLI --breakpoint argument into environment data.

I also had a chat with Steve Hardy and he suggested adding a STOPPED state to the stack (this isn't in the spec).
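If the environment route were taken, the data might look something like the sketch below. To be clear, this is purely an illustration: the breakpoints key and the path-like syntax for nested stacks are invented here, not defined by the spec or by Heat.

```yaml
# Hypothetical Heat environment file. "parameters" is an existing
# environment section; "breakpoints" and the nested-stack path syntax
# are invented for illustration only.
parameters:
  flavor: m1.small
breakpoints:
  - my_server                     # break before creating this resource
  - nested_stack/inner_resource   # one possible way to address a nested stack
```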
While not strictly necessary to implement the spec, this would help people figure out that the stack has reached a breakpoint instead of just waiting on a resource that takes a long time to finish (the heat-engine log and event-list still show that a breakpoint was reached, but I'd like to have it in stack-list and resource-list, too). It makes more sense to me to call it PAUSED (we're not completely stopping the stack creation after all, just pausing it for a bit); I'll let Steve explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update.
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update.

My argument is that (3) is always a stack update, either a PUT or PATCH update, i.e. we _are_ completely stopping stack creation, and a user can then choose to re-start it (either with the same or a different definition). So it _is_ really an end state, as a user might never choose to update from the stopped state, in which case *_STOPPED makes more sense.

"Paused" implies the same action as the PATCH update, only we trigger continuation of the operation from the point we reached via some sort of user signal. If we actually paused an in-progress action via the scheduler, we'd have to start worrying about stuff like token expiry, hitting timeouts, resilience to engine restarts, etc. So forcing an explicit update seems simpler to me.

Steve