Re: [openstack-dev] [Horizon] Introduction of AngularJS in membership workflow
+1000! Excellent. I am really excited about having a heavily tested, proper client-side layer. This is badly needed, given that the amount of JavaScript in Horizon is rising. The hacked-together jQuery libraries that are there now are very hard to find your way around in and will be hard to maintain in the future. Not sure what the Horizon consensus will be, but I would recommend writing new libraries only in AngularJS, with proper tests. In the meantime we can practice AngularJS by rewriting the existing stuff. I am really looking forward to picking something to rewrite. :-)

Also, I am not sure how the Horizon community feels about 'syntax sugar' libraries for JavaScript and Angular. But from my experience, using CoffeeScript and Sugar.js makes programming in JavaScript and Angular a fairy tale (you know, rainbows and unicorns everywhere you look). :-D Thanks for working on this.

Ladislav

On 11/11/2013 08:21 PM, Jordan OMara wrote: Hello Horizon! On November 11th, we submitted a patch to introduce AngularJS into Horizon [1]. We believe AngularJS adds a lot of value to Horizon. First, AngularJS allows us to write HTML templates for interactive elements instead of doing jQuery-based DOM manipulation. This lets the JavaScript layer focus on business logic, provides easy-to-write JavaScript tests that each focus on a single concern (e.g. business logic, template, DOM manipulation), and eases the on-boarding of new developers working with the JavaScript libraries. Second, AngularJS is not an all-or-nothing solution and integrates with the existing Django templates. For each feature that requires JavaScript, we can write a self-contained directive to handle the DOM, a template to define our view, and a controller to contain the business logic. Then we can add this directive to the existing template. To see an example in action, look at _workflow_step_update_members.html [2].
It can also be done incrementally - this isn't an all-or-nothing approach with a massive front-end time investment, as the Angular components can be introduced over time. Finally, the initial work to bring AngularJS to Horizon provides a springboard to remove the DOM database (i.e. hidden divs) used on the membership page (and others). Instead of abusing the DOM, we can expose an API for membership data and add an AngularJS resource (i.e. a reusable representation of API entities) for that API. The data can then be loaded asynchronously, allowing the HTML to focus on expressing a semantic representation of the data to the user.

Please give our patch a try! You can find the interactions on Domains/Groups, Flavors/Access (this form does not seem to work in current master or on my patch) and Projects/UsersGroups. You should notice that it behaves... exactly the same! We look forward to your feedback.

Jordan O'Mara
Jirka Tomasek

[1] https://review.openstack.org/#/c/55901/
[2] https://github.com/jsomara/horizon/blob/angular2/horizon/templates/horizon/common/_workflow_step_update_members.html

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Bad review patterns
On Fri, Nov 8, 2013 at 4:07 AM, Pedro Roque Marques pedro.r.marq...@gmail.com wrote: Radomir, An extra issue that I don't believe you've covered so far is comment ownership. I've just read an email on the list that follows a pattern that I've heard many complaints about: -1 with a reasonable comment, submitter addresses the comment, reviewer never comes back. Reviewers do need to allocate time to come back and follow up on the answers to their comments.

This is true, but it's not necessarily easy to find those reviews that you -1'd. I don't think anyone nefariously -1's and then goes away. Gerrit could be improved in this space to assist reviewers.

-- Michael Davies mich...@the-davies.net Rackspace Cloud Builders Australia
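Michael's point about Gerrit tooling can be made somewhat concrete: Gerrit's change-search operators can already express "open changes where I currently have a -1 vote". A rough Python sketch, assuming Gerrit's documented query syntax and its anti-XSSI JSON prefix; the helper names and the canned response are made up for illustration:

```python
import json

# Gerrit prefixes REST JSON responses with )]}' to defeat XSSI attacks.
GERRIT_XSSI_PREFIX = ")]}'"


def build_stale_review_query():
    """Search string for open changes where *you* currently have a -1 vote.

    Assumes Gerrit's documented change-search operators; you would pass
    this as the q= parameter of GET /changes/.
    """
    return "status:open label:Code-Review=-1,self"


def parse_gerrit_json(raw):
    """Strip the anti-XSSI prefix and decode a Gerrit JSON response."""
    if raw.startswith(GERRIT_XSSI_PREFIX):
        raw = raw[len(GERRIT_XSSI_PREFIX):]
    return json.loads(raw)


if __name__ == "__main__":
    # A canned response mimicking Gerrit's wire format, not a live call.
    canned = ")]}'\n[{\"_number\": 55901, \"subject\": \"Introduce AngularJS\"}]"
    changes = parse_gerrit_json(canned)
    print(changes[0]["_number"])  # 55901
```

A reviewer could run such a query periodically (or bookmark it in the Gerrit UI) to find their own stale -1s.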
Re: [openstack-dev] Proposal to recognize indirect contributions to our code base
Nicolas Barcet wrote: [...] To enable this, we are proposing that the commit text of a patch may include a "Sponsored-by: sponsorname" line which could be used by various tools to report on these commits. [...]

This proposal raises several questions.

(1) Is it a good idea to allow giving credit to patch sponsors? On one hand, this encourages customers of OpenStack service companies to fund sending bugfixes and features back upstream. On the other, it (slightly) discourages them from getting involved more directly in OpenStack, and exposes company-specific information in a place where only individual contributors were exposed before. I'm not sure we really need to encourage sending bugfixes upstream. People who don't do it will lose in the end... so this is the smart move for them, and they should realize that. In summary, I see how adding this would be beneficial to the OpenStack service companies... I'm not entirely convinced of the technical benefit for the OpenStack open source projects.

(2) Is the commit message the right place to track this? Commit messages may contain anything, as long as the reviewers accept it :) I'm slightly concerned by the use of (technical) commit messages to convey company-specific credits... but I agree that would be the most convenient place to track this.

(3) Is this something the Technical Committee can actually mandate? This obviously needs buy-in from the PTLs of the various programs, and by extension their core reviewer teams. We can definitely encourage them to accept commit messages containing that information, but unless we can come up with a good reason why this would make OpenStack technically better, I don't see us being able to enforce it across the board...

-- Thierry Carrez (ttx)
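For illustration only - the thread doesn't define any tooling - a reporting tool could pull the proposed Sponsored-by: footer out of commit messages with a few lines of Python. The tag name follows the proposal in the email; everything else (bug number, sponsor name) is hypothetical:

```python
import re

# Match "Sponsored-by: <name>" footer lines, one per line, case-insensitively.
SPONSOR_RE = re.compile(r"^Sponsored-by:\s*(.+?)\s*$",
                        re.MULTILINE | re.IGNORECASE)


def sponsors_of(commit_message):
    """Return every sponsor credited in a commit message."""
    return SPONSOR_RE.findall(commit_message)


msg = """Fix token expiry race

Closes-Bug: #1234567
Sponsored-by: Example Corp
"""
print(sponsors_of(msg))  # ['Example Corp']
```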
Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?
+1 also. I spent less than half the time on my first fix (so far) understanding the problem, reproducing it, coding it and learning about the code review system. Much more than half the time was spent on reverse-engineering existing tests to be able to add new ones (which had to use features not used by the existing tests) and asking for advice even on where to add the tests. It would have been more efficient for everyone had some test examples been proposed to me.

On 12 November 2013 03:34, Ed Leafe e...@openstack.org wrote: On Nov 11, 2013, at 6:42 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: It also gives the submitter a specific example of a well-written test, which can be a faster way to learn than forcing them to get there via trial and error. +1. Implementing a policy that has as its end effect more knowledgeable contributors is a big win. -- Ed Leafe
Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?
To be clear, that was a +1 for Mark's suggestion: "In cases like that, I'd be of a mind to go +2: Awesome! Thanks for catching this! It would be great to have a unit test for this, but it's clear the current code is broken so I'm fine with merging the fix without a test." You could say it's now the reviewer's responsibility to merge a test, but if that requirement then puts reviewers off even reviewing such a patch, that doesn't help either.

On 12 November 2013 11:29, Michael Bright mjbrigh...@gmail.com wrote: +1 also. I spent less than half the time on my first fix (so far) understanding the problem, reproducing it, coding it and learning about the code review system. Much more than half the time was spent on reverse-engineering existing tests to be able to add new ones (which had to use features not used by the existing tests) and asking for advice even on where to add the tests. It would have been more efficient for everyone had some test examples been proposed to me.

On 12 November 2013 03:34, Ed Leafe e...@openstack.org wrote: On Nov 11, 2013, at 6:42 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: It also gives the submitter a specific example of a well-written test, which can be a faster way to learn than forcing them to get there via trial and error. +1. Implementing a policy that has as its end effect more knowledgeable contributors is a big win. -- Ed Leafe
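To make Vish's point about "a specific example of a well-written test" concrete, here is the kind of small, self-contained regression test a reviewer might attach alongside such a patch. The clamp() function stands in for whatever was fixed and is entirely made up:

```python
import unittest


def clamp(value, low, high):
    """The hypothetical fix under review: keep value within [low, high]."""
    return max(low, min(value, high))


class ClampRegressionTest(unittest.TestCase):
    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(99, 0, 10), 10)


# Run the two tests programmatically (an alternative to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampRegressionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

A test this small costs a reviewer a few minutes and leaves the submitter with a template for next time.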
Re: [openstack-dev] [openstack][metering] Contributing to ceilometer while participating in GSoC
Hi Ajay,

Ajay Phogat letsgot...@gmail.com wrote: Hello, all! I am a student of Computer Science and have learnt some basics of OpenStack while implementing one of my projects in college. While learning about OpenStack, I came to know about the metering project, Ceilometer. I would love to contribute to Ceilometer, which according to ohloh (http://www.ohloh.net/p?ref=homepage&q=ceilometer) has 88 contributors presently. Also, I wanted to participate in Google Summer of Code while being associated with Ceilometer. I wanted to know if contributing to Ceilometer can be a valid project for GSoC.

Thanks for your interest in OpenStack and Ceilometer, that's great! Many members of the community were away in Hong Kong for the OpenStack Summit last week, sorry for the delay.

OpenStack has never participated in GSoC before - we never quite managed to meet the organisation requirements in time for the deadline. Maybe this will change next year, but I wouldn't necessarily count on it. I still encourage you to get involved with the community even outside of a program like GSoC. There are a lot of things we could use your help with! Maybe a current Ceilometer contributor can point you toward a suitable first task; in the meantime there's a lot of information you can read to get yourself started contributing: see [0] to learn the general OpenStack contribution guidelines, [1] for finding something small to work on that you find interesting, and [2] to get an OpenStack dev environment set up in a VM. Consider also popping by on IRC [3]: #openstack-metering for Ceilometer-related questions and #openstack-101 if you encounter any hiccups getting started. There are lots of people happy to help and guide you!

Kind regards, Julie

[0] https://wiki.openstack.org/wiki/HowToContribute
[1] https://bugs.launchpad.net/ceilometer/
[2] http://devstack.org/
[3] https://wiki.openstack.org/IRC

Thanks a lot for your time and effort!
Ajay Phogat
Re: [openstack-dev] Horizon PTL candidacy
On 11/10/2013 11:53 PM, John Dickinson wrote: A random off-the-top-of-my-head use case would be to subscribe to events from creating or changing objects in a particular Swift account or container. This would allow much more efficient listings in Horizon for active containers (and may also be consumed by other listeners too). --John

Yupp. There are many, many use cases for this, and we'd get rid of polling services for status.

Sounds reasonable, but just one caveat... Notifications can either be disabled in the service config (e.g. by setting the notifier_strategy to noop in the glance config) or mis-configured (e.g. by not overriding the control_exchange name in the cinder code) such that the notifications are not seen by the consumer. We have a similar potential problem with ceilometer, and currently no good way of detecting the non-flow of notifications - i.e. the old story that absence of evidence is not evidence of absence. I'm not sure whether it would be workable for horizon to detect whether notifications are flowing for each service by probing in some way (e.g. by setting/unsetting a random property on an image and then ensuring that the corresponding image.update events are seen). If the absence of notifications were easily and reliably detectable, then obviously horizon could simply fall back to polling. Anyhoo, just some food for thought.

Cheers, Eoghan
Re: [openstack-dev] Horizon PTL candidacy
On 11/12/2013 12:09 PM, Eoghan Glynn wrote: Sounds reasonable, but just one caveat... Notifications can either be disabled in the service config (e.g. by setting the notifier_strategy to noop in the glance config) or mis-configured (e.g. by not overriding the control_exchange name in the cinder code) such that the notifications are not seen by the consumer. We have a similar potential problem with ceilometer, and currently no good way of detecting the non-flow of notifications - i.e. the old story that absence of evidence is not evidence of absence. I'm not sure whether it would be workable for horizon to detect whether notifications are flowing for each service by probing in some way (e.g. by setting/unsetting a random property on an image and then ensuring that the corresponding image.update events are seen). If the absence of notifications were easily and reliably detectable, then obviously horizon could simply fall back to polling. Anyhoo, just some food for thought.

Thank you for your input here. That is true: we'd rely on an additional service, and whether it is Marconi or some oslo service doesn't matter here in the first place. The service may not be accessible, or even not reliable; we might miss messages, and simply trusting that we get messages in the right order etc. is probably not a stable approach here. A fallback to polling is definitely an option.

Matthias
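The probe-then-fall-back idea floated in this thread could look roughly like the sketch below. It is purely illustrative: notification delivery is modelled as a plain in-process queue rather than any real Glance/Marconi API, and trigger_probe stands in for "set then unset a throwaway image property":

```python
import queue
import time


def notifications_are_flowing(event_queue, trigger_probe, timeout=5.0):
    """Fire a probe update and report whether its event was observed.

    Returns False on timeout, which a consumer like Horizon could take
    as its cue to fall back to polling the service API.
    """
    trigger_probe()  # e.g. set/unset a random property on a probe image
    deadline = time.time() + timeout
    while True:
        remaining = deadline - time.time()
        if remaining <= 0:
            return False  # absence of evidence: assume notifications are off
        try:
            event = event_queue.get(timeout=remaining)
        except queue.Empty:
            return False
        if event.get("event_type") == "image.update":
            return True  # the probe's event came through; flow confirmed
```

As Matthias notes, this only proves flow at probe time; missed or reordered messages later would still need handling.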
Re: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks
Hi Angus, that is an interesting idea. Since you mentioned the software config proposal in the beginning as a related item, I guess you are trying to solve some software-config-related issues with Mistral. So a few questions, looking at this purely from a software config perspective:

Are you thinking about doing the infrastructure orchestration (VMs, volumes, network etc.) with Heat's current capabilities and then letting the complete software orchestration be handled by Mistral tasks? I.e. bootstrap the workers on each VM and have the definition of when which agent does something defined in a flow? If yes, is there a way of passing data around - e.g. output produced by one software config step is input for another software config step?

Again, if my above assumption is true, couldn't there be problems when we have two ways of doing orchestration, where the software layer would take the Heat engine out of some processing and take away some control? Or are you thinking about using Mistral as a general mechanism for task execution in Heat, which would then probably resolve the conflict?

Regards, Thomas

Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 02:15:15: From: Angus Salkeld asalk...@redhat.com To: openstack-dev@lists.openstack.org, Date: 12.11.2013 02:18 Subject: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks

Hi all, I think some of you were at the Software Config session at summit, but I'll link the ideas that were discussed: https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config

To me the basics of it are: 1. we need an entity/resource to place the configuration in (in Heat); 2. we need a resource to install the configuration (basically a task in Mistral). A big issue to me is the conflict between Heat's taskflow and the new external one. What I mean by conflict is that it will become tricky to manage two parallel taskflow instances in one stack.
This could be solved by: 1: totally using Mistral (only use the Mistral workflow); 2: using a very simple model of just asking Mistral to run tasks (no workflow) - this allows us to use Heat's workflow but Mistral's task runner. Given that Mistral has no real implementation yet, 2 would seem reasonable to me. (I think Heat developers are open to 1 when Mistral is more mature.)

How could we use Mistral for config installation? 1. We have a resource type in Heat that creates tasks in a Mistral workflow (manual workflow). 2. Heat pre-configures the server to have a Mistral worker installed. 3. The Mistral worker pulls tasks from the workflow and passes them to an agent that can run them. (The normal security issues jump up here - giving access to the taskflow from a guest.)

To do this we need an API that can add tasks to a workflow dynamically, like this: create a simple workflow; create and run task A [run on server X]; create and run task B [run on server Y]; create and run task C [run on server X]. (Note: each task is run and completes before the next is added if there is a dependency; if tasks can be run in parallel then we add multiple tasks.) The API could be something like: CRUD mistral/workflows/ and CRUD mistral/workflows/wf/tasks.

One thing that I am not sure of is how a server (worker) would know whether a task was for it or not - perhaps we have a capability property of the task that we can use (capability[server] = server-id), or actually specify the worker we want.

I think this would be a good starting point for Mistral as it is a very simple but concrete starting point. Also, if this is not done in Mistral we will have to add it in Heat (let's rather have it where it should be). This will also give us a chance to gain confidence with Mistral before trying to do more complex workflows. If you (Heat and Mistral developers) are open to this we can discuss what needs to be done. I am willing to help with implementation.
Thanks -Angus
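The API shape Angus proposes (CRUD on workflows, CRUD on tasks within a workflow, plus a capability property for worker routing) can be sketched as an in-memory model. Every name below is hypothetical - Mistral had no real implementation at the time - and the class merely stands in for POST /workflows and POST /workflows/&lt;wf&gt;/tasks:

```python
class SimpleWorkflowService:
    """In-memory stand-in for the proposed Mistral workflow/task API."""

    def __init__(self):
        self.workflows = {}

    def create_workflow(self, name):
        """Mirrors POST mistral/workflows/ - a 'manual' workflow."""
        self.workflows[name] = []
        return name

    def add_task(self, wf, action, capability=None):
        """Mirrors POST mistral/workflows/<wf>/tasks.

        'capability' reflects the routing idea from the email,
        e.g. capability={"server": "server-id"} so only a worker on
        that server picks the task up.
        """
        task = {"action": action,
                "capability": capability or {},
                "state": "PENDING"}
        self.workflows[wf].append(task)
        return task


svc = SimpleWorkflowService()
svc.create_workflow("configure-stack")
svc.add_task("configure-stack", "run task A", {"server": "X"})
svc.add_task("configure-stack", "run task B", {"server": "Y"})
print(len(svc.workflows["configure-stack"]))  # 2
```

In the real proposal, Heat would add each task only after its dependency completed, which is why no ordering logic appears in this sketch.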
Re: [openstack-dev] [Neutron]Good example of dev doc/docstrings in code.
Welcome Joris! I would enjoy helping you get started contributing to OpenStack. Are you on IRC yet? Talking on IRC is our most efficient means of communication. On the Freenode network I suggest joining #openstack, #openstack-dev, #openstack-101, #openstack-meeting, #openstack-meeting-alt as well as #openstack-neutron. My IRC nick is anteaya, please ping me.

To answer your question, documenting code is a great way to get started. There will be a need to document prior code. This does not remove authors' responsibility to document code as we move forward - it is a good habit to get into, and it ensures that work doesn't get piled onto just one person; everyone has to document their own patches.

So what we can do is have a chat, I can find out about your areas of interest, and then we can find both some code and the people who wrote it, so that you can talk directly with code authors and have them explain in their own words what the method/function is supposed to do. Then I can walk you through submitting your first few patches until you get the hang of it. Then hopefully you will be able to teach others to do the same. I can also help you learn how to review patches, to support those people willing to embrace the docstring ethos, so you can really help out here.

Thanks Joris, I look forward to chatting on IRC in #openstack-neutron, Anita.

On 11/12/2013 03:22 AM, Joris Roovers (jroovers) wrote: Hi Anita, Is this an area where a new developer can help out? I've got a little time to spare (not a whole lot, best-effort...) and would like to help out. I'm very new to all of this though. I figure that documenting code is a good way to contribute and learn at the same time. Could anyone point me to a simple class that could benefit from this? I'll probably need some help getting it through the system (I've never submitted a patch for OpenStack before). Thanks!
Joris

-----Original Message----- From: Anita Kuno [mailto:ante...@anteaya.info] Sent: Tuesday, November 12, 2013 01:50 To: OpenStack Development Mailing List Subject: [openstack-dev] [Neutron] Good example of dev doc/docstrings in code.

Hello: I will be creating noise around testing in order to help shore up the gap between where we are in Neutron and where everybody agrees we would like to be. To that end I would like to point out a great example of docstrings in code: https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py which is used to generate dev docs. Here are the generated docs from the above docstrings in the code: http://docs.openstack.org/developer/neutron/devref/plugin-api.html

The docstrings are great for the author of the patch to ensure the purpose of each method/function is clearly understood by themselves, reviewers and users of the code. I would like to encourage current and future patches to include good docstrings/dev docs going forward. Thanks everyone for your support in helping to close the testing gap in Neutron.

Thank you, Anita Kuno anteaya
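For anyone looking for a starting template: the style in neutron_plugin_base_v2.py is Sphinx field lists (:param:/:returns:/:raises:) that the dev-doc build renders into the pages Anita links. A hedged example follows - the class, method and wording are invented for illustration, not copied from Neutron:

```python
class ExamplePluginBase(object):
    """Illustration of the docstring style used for generated dev docs."""

    def get_subnet(self, context, subnet_id, fields=None):
        """Retrieve a subnet.

        :param context: neutron api request context
        :param subnet_id: UUID representing the subnet to fetch
        :param fields: a list of strings that are valid keys in a subnet
            dictionary; only these fields will be returned
        :returns: a subnet dictionary, filtered to the requested fields
        :raises NotImplementedError: this base method must be overridden
        """
        raise NotImplementedError()


# The summary line is what shows up in generated API listings:
print(ExamplePluginBase.get_subnet.__doc__.splitlines()[0])  # Retrieve a subnet.
```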
[openstack-dev] Future of Project/release meetings, and skipping today's one
Hi everyone,

We'll be skipping the project/release status meeting today (usually happening at 21:00 UTC). Most PTLs are just back from the Design Summit week and catching up, and some are still on vacation, so there is little point in having one today. For this week they should work on filing, prioritizing and targeting blueprints, while I'll work on publicizing the final Icehouse schedule. We'll start looking at the resulting Icehouse roadmaps next week.

During the release schedule session we discussed evolving the format of the meeting so that we don't waste time on status updates and focus on addressing technical cross-project issues instead. The idea is to sync weekly with PTLs ahead of the meeting, then have a short meeting to discuss where we are in the cycle, the current identified cross-project issues, and anything that's put on the meeting agenda. I'll contact most PTLs this week so that we can set up those 10-min status sync points (ideally sometime on Tuesday before the meeting).

-- Thierry Carrez (ttx)
Re: [openstack-dev] sqlalchemy-migrate needs a new release
I don't know all of what's involved in putting out a release of sqlalchemy-migrate, but if there is a way that I can help, please let me know. I'll try to catch dripton on IRC today.

As for CI with DB2, it's in the blueprint as a work item; I just don't know enough about the infra side of things to get that going, so I'd need some help there. DB2 Express-C is the free version, which is what we plan to run the unit tests against in CI, but the only problem I see with that is that it's a trial license, and I wouldn't want to have to redo images or licenses every 3 months or however long it lasts. I would think that IBM would be able to provide a permanent license for CI, though; otherwise our alternative is running the tests in-house and reporting the results back (something like what the nova virt drivers have to do and VMware is already doing).

Thanks, Matt Riedemann

On 11/12/2013 1:50 AM, Roman Podoliaka wrote: Hey David, Thank you for undertaking this task! I agree that merging of DB2 support can be postponed for now, even if it looks totally harmless (though I see no way to test it, as we don't have DB2 instances running on Infra test nodes). Thanks, Roman

On Mon, Nov 11, 2013 at 10:54 PM, Davanum Srinivas dava...@gmail.com wrote: @dripton, @Roman Many thanks :) On Mon, Nov 11, 2013 at 3:35 PM, David Ripton drip...@redhat.com wrote: On 11/11/2013 11:37 AM, Roman Podoliaka wrote: As you may know, in our global requirements list [1] we are currently depending on SQLAlchemy 0.7.x versions (which is the 'old stable' branch and will be deprecated soon). This is mostly due to the fact that the latest release of sqlalchemy-migrate on PyPI doesn't support SQLAlchemy 0.8.x+. At the same time, distros have been providing patches fixing this incompatibility for a long time now. Moreover, those patches have been merged to sqlalchemy-migrate master too. As we are now maintaining sqlalchemy-migrate, we could make a new release of it.
This would allow us to bump the version of the SQLAlchemy release we are depending on (as soon as we fix all the bugs we have) and let distro maintainers stop carrying their own patches. This has been discussed at the design summit [2], so we basically just need a volunteer from the [3] Gerrit ACL group to make a new release.

Is sqlalchemy-migrate stable enough to make a new release? I think, yes. The commits we've merged since we adopted this library only fix a few issues with SQLAlchemy 0.8.x compatibility and enable running of tests (we are currently testing all new changes on py26/py27, SQLAlchemy 0.7.x/0.8.x, SQLite/MySQL/PostgreSQL). Who wants to help? :) Thanks, Roman

[1] https://github.com/openstack/requirements/blob/master/global-requirements.txt
[2] https://etherpad.openstack.org/p/icehouse-oslo-db-migrations
[3] https://review.openstack.org/#/admin/groups/186,members

I'll volunteer to do this release. I'll wait 24 hours from the timestamp of this email for input first. So, if anyone has opinions about the timing of this release, please speak up. (In particular, I'd like to do a release *before* Matt Riedemann's DB2 support patch https://review.openstack.org/#/c/55572/ lands, just in case it breaks anything. Of course we could do another release shortly after it gets in, to make folks who use DB2 happy.)

-- David Ripton Red Hat drip...@redhat.com

-- Davanum Srinivas :: http://davanum.wordpress.com
Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?
On 02/11/13 05:30, Clint Byrum wrote: Excerpts from Christopher Armstrong's message of 2013-11-01 11:34:56 -0700: Vijendar and I are trying to figure out if we need to set the resource_id of a resource to None when it's being deleted. This is done in a few resources, but not everywhere. To me it seems either a) redundant, since the resource is going to be deleted anyway (thus deleting the row in the DB that has the resource_id column), or b) actively harmful to debuggability, since if the resource is soft-deleted, you'll not be able to find out what physical resource it represented before it's cleaned up. Is there some specific reason we should be calling resource_id_set(None) in a check_delete_complete method?

I've often wondered why some do it and some don't. It seems to me that it should be done not inside each resource plugin but in the generic resource handling code. However, I have not given this much thought. Perhaps others can provide insight into why it has been done that way.

There was a time in the very early days of Heat development when deleting something that had already disappeared usually resulted in an error (i.e. we mostly weren't catching NotFound exceptions). I expect this habit dates from that era. I can't think of any reason we still need this, and I agree that it seems unhelpful for debugging.

cheers, Zane.
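The pattern Zane describes - tolerate an already-gone physical resource by catching NotFound, rather than defensively nulling resource_id - might be sketched like this. NotFound and the client class are stand-ins for the example, not real Heat or client-library names:

```python
class NotFound(Exception):
    """Stand-in for a backing service's 'resource does not exist' error."""


class FakeClient:
    """Toy backing service tracking which physical resources exist."""

    def __init__(self, existing):
        self._existing = set(existing)

    def delete(self, resource_id):
        if resource_id not in self._existing:
            raise NotFound(resource_id)
        self._existing.discard(resource_id)


def handle_delete(client, resource_id):
    """Delete the physical resource, treating 'already gone' as success."""
    if resource_id is None:
        return  # nothing was ever created
    try:
        client.delete(resource_id)
    except NotFound:
        # Already gone -- swallow the error; note we deliberately do NOT
        # null out resource_id, so a soft-deleted DB row still records
        # which physical resource this was, aiding debugging.
        pass
```

With this shape, calling resource_id_set(None) buys nothing: the NotFound handler already makes delete idempotent.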
Re: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks
On 12 Nov 2013, at 19:04, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: If yes, is there a way of passing data around - e.g. output produced by one software config step is input for another software config step?

Thomas, yes, we're planning to have a data flow mechanism similar to what you described here. In conjunction with using an HA transport (such as RabbitMQ) it will be a very useful feature. We'll share our design on that soon. For now you can take a look at the slides prepared for the HK summit at http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track and particularly at slide 14 about data flow. Please also feel free to ask any questions about the project and share your thoughts with us.

Regards, Thomas

Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 02:15:15: From: Angus Salkeld asalk...@redhat.com To: openstack-dev@lists.openstack.org, Date: 12.11.2013 02:18 Subject: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks

Hi all, I think some of you were at the Software Config session at summit, but I'll link the ideas that were discussed: https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config

To me the basics of it are: 1. we need an entity/resource to place the configuration in (in Heat); 2. we need a resource to install the configuration (basically a task in Mistral). A big issue to me is the conflict between Heat's taskflow and the new external one. What I mean by conflict is that it will become tricky to manage two parallel taskflow instances in one stack. This could be solved by: 1: totally using Mistral (only use the Mistral workflow); 2: using a very simple model of just asking Mistral to run tasks (no workflow) - this allows us to use Heat's workflow but Mistral's task runner. Given that Mistral has no real implementation yet, 2 would seem reasonable to me. (I think Heat developers are open to 1 when Mistral is more mature.)
How could we use Mistral for config installation? 1. We have a resource type in Heat that creates tasks in a Mistral workflow (manual workflow). 2. Heat pre-configures the server to have a Mistral worker installed. 3. The Mistral worker pulls tasks from the workflow and passes them to an agent that can run them. (The normal security issues jump up here - giving access to the taskflow from a guest.)

To do this we need an API that can add tasks to a workflow dynamically, like this: create a simple workflow; create and run task A [run on server X]; create and run task B [run on server Y]; create and run task C [run on server X]. (Note: each task is run and completes before the next is added if there is a dependency; if tasks can be run in parallel then we add multiple tasks.) The API could be something like: CRUD mistral/workflows/ and CRUD mistral/workflows/wf/tasks.

One thing that I am not sure of is how a server (worker) would know whether a task was for it or not - perhaps we have a capability property of the task that we can use (capability[server] = server-id), or actually specify the worker we want.

I think this would be a good starting point for Mistral as it is a very simple but concrete starting point. Also, if this is not done in Mistral we will have to add it in Heat (let's rather have it where it should be). This will also give us a chance to gain confidence with Mistral before trying to do more complex workflows. If you (Heat and Mistral developers) are open to this we can discuss what needs to be done. I am willing to help with implementation. Thanks -Angus
Re: [openstack-dev] [nova] Configure overcommit policy
On 11 November 2013 12:04, Alexander Kuznetsov akuznet...@mirantis.com wrote: Hi all, While studying Hadoop performance in a virtual environment, I found an interesting problem with Nova scheduling. In an OpenStack cluster we have an overcommit policy, which allows placing more VMs on a compute node than its available resources can back. While this might be suitable for general types of workload, it is definitely not the case for Hadoop clusters, which usually consume 100% of system resources. Is there any way to tell Nova to schedule specific instances (the ones which consume 100% of system resources) without overcommitting resources on the compute node? You could have a flavor with a no-overcommit extra spec, and modify the over-commit calculation in the scheduler for that case, but I don't remember seeing that in there. John ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Nova] Summary of Design Summit Session
Hi, I am attempting to extract the consensus we reached in all the design summit sessions here: https://etherpad.openstack.org/p/IcehouseNovaSummit Help verifying that I have not misrepresented things would be very gratefully received. John ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Configure overcommit policy
You can consider having a separate host aggregate for Hadoop, and use a combination of AggregateInstanceExtraSpecFilter (with a special flavor mapped to this host aggregate) and AggregateCoreFilter (overriding cpu_allocation_ratio for this host aggregate to be 1). Regards, Alex From: John Garbutt j...@johngarbutt.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 12/11/2013 04:41 PM Subject:Re: [openstack-dev] [nova] Configure overcommit policy On 11 November 2013 12:04, Alexander Kuznetsov akuznet...@mirantis.com wrote: Hi all, While studying Hadoop performance in a virtual environment, I found an interesting problem with Nova scheduling. In OpenStack cluster, we have overcommit policy, allowing to put on one compute more vms than resources available for them. While it might be suitable for general types of workload, this is definitely not the case for Hadoop clusters, which usually consume 100% of system resources. Is there any way to tell Nova to schedule specific instances (the ones which consume 100% of system resources) without overcommitting resources on compute node? You could have a flavor with no-overcommit extra spec, and modify the over-commit calculation in the scheduler on that case, but I don't remember seeing that in there. John ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Thinking in that direction: the Trove team had a design session about the current status of the agent in the project. Just take a look: https://etherpad.openstack.org/p/TroveGuestAgents With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote: Just to summarize, there was an interest expressed by the Murano, Trove, Savanna and Heat teams with regard to the implementation of this unified agent. Nothing specific was decided except the suggestion to keep pushing. I'd suggest to keep pushing this way: - create an etherpad - each team interested in having a unified agent writes detailed use cases for an agent to this etherpad - based on these use cases we can generate very specific and detailed requirements for the agent - based on these requirements we can agree on architecture and approach to implementation. Teams? Regards, Igor Marnat On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi guys, Recently we had several discussions about the guest VM agents: lots of projects have similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and maybe some other projects as well. The obvious idea is to unite the efforts and build a unified solution which may satisfy everybody's needs. We've discussed this topic before with some of the teams, and got the promising-looking idea to create a kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session at the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts etc. See you there! -- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97(cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia.
ativel...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] IPv6 sub-team?
Thanks, Sean! I am on the east coast, so Monday 20:00 UTC and Thursday 21:00 UTC work great for me. Hopefully we can find a timeslot that works for everybody! Shixiong On Nov 11, 2013, at 1:23 PM, Collins, Sean (Contractor) sean_colli...@cable.comcast.com wrote: On Mon, Nov 11, 2013 at 01:16:43PM -0500, Shixiong Shang wrote: +1. We have great interest in running OpenStack over IPv6 and would love to be a part of the discussion. Excellent - please see the thread I've made in OpenStack-Dev - we're tossing out times for an IRC meeting that works for everyone interested. -- Sean M. Collins ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Ilya, that's cool! Mind if Murano and Savanna teams join the same etherpad? Regards, Igor Marnat On Tue, Nov 12, 2013 at 6:58 PM, Ilya Sviridov isviri...@mirantis.comwrote: Thinking in that direction, the Trove team had a design session about current status of agent in project. Just take a look https://etherpad.openstack.org/p/TroveGuestAgents With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote: Just to summarize, there was an interest expressed from Murano, Trove, Savanna and Heat teams in regards with implementation of this unified agent. Nothing specific was decided expect suggestion to keep pushing. I'd suggest to keep pushing this way: - create an etherpad - each team interested in having unified agent writes there detailed use cases for an agent to this etherpad - based on these use-cases we can generate very specific and detailed requirements to the agent - based on these requirements we can agree on architecture and approach to implementation. Teams? Regards, Igor Marnat On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi guys, Recently we had several discussions about the guest VM agents: lot's of projects have the similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and may be some other projects as well. The obvious idea is to unite the efforts and have the unified solution which may satisfy everybody's needs. We've discussed this topic before with some of the teams, and got the promising-looking idea to create kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session on the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts etc. See you there! 
-- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97(cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia. ativel...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][LBaaS] LBaaS Subteam meeting
Hi neutron and lbaas folks! We have plenty of work to do for Icehouse, so I suggest we start having regular weekly meetings to track our progress. Let's meet at #neutron-lbaas on Thursday the 14th at 15:00 UTC. The agenda for the meeting is the following: 1. Blueprint list to be proposed for icehouse-1 2. QA third-party testing 3. dev resources evaluation 4. Additional features requested by users. Thanks, Eugene. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Solum Weekly Meeting
Hello, Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in #solum) Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens at 08:00 US/Pacific (starts in about 50 minutes from now) Agenda: https://wiki.openstack.org/wiki/Meetings/Solum Regards, Adrian ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Solum] Weekly Team Meeting
Hello, Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in #solum) Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens at 08:00 US/Pacific (starts in about 45 minutes from now) Agenda: https://wiki.openstack.org/wiki/Meetings/Solum Regards, Adrian ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Configure overcommit policy
FYI, by default OpenStack overcommits CPU at a 16:1 ratio, meaning a host can schedule 16 times as many vCPUs as the physical cores it possesses. As Alex mentioned, you can change this by enabling AggregateCoreFilter in nova.conf: scheduler_default_filters = your list of filters, adding AggregateCoreFilter here and modifying the overcommit ratio by adding: cpu_allocation_ratio=1.0 Just a suggestion: think of isolating the hosts for the tenant that uses Hadoop so that they will not serve other applications. You have several filters at your disposal: AggregateInstanceExtraSpecsFilter IsolatedHostsFilter AggregateMultiTenancyIsolation Best regards, Toan - Original Message - From: Alex Glikson glik...@il.ibm.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 3:54:02 PM Subject: Re: [openstack-dev] [nova] Configure overcommit policy You can consider having a separate host aggregate for Hadoop, and use a combination of AggregateInstanceExtraSpecFilter (with a special flavor mapped to this host aggregate) and AggregateCoreFilter (overriding cpu_allocation_ratio for this host aggregate to be 1). Regards, Alex From: John Garbutt j...@johngarbutt.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 12/11/2013 04:41 PM Subject: Re: [openstack-dev] [nova] Configure overcommit policy On 11 November 2013 12:04, Alexander Kuznetsov akuznet...@mirantis.com wrote: Hi all, While studying Hadoop performance in a virtual environment, I found an interesting problem with Nova scheduling. In OpenStack cluster, we have overcommit policy, allowing to put on one compute more vms than resources available for them. While it might be suitable for general types of workload, this is definitely not the case for Hadoop clusters, which usually consume 100% of system resources.
Is there any way to tell Nova to schedule specific instances (the ones which consume 100% of system resources) without overcommitting resources on compute node? You could have a flavor with no-overcommit extra spec, and modify the over-commit calculation in the scheduler on that case, but I don't remember seeing that in there. John ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
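The aggregate-based setup described in this thread can be sketched as a nova.conf fragment plus aggregate metadata. The filter names and the cpu_allocation_ratio option are real Nova scheduler knobs, but the exact filter list and the aggregate name below are illustrative and should be adapted to your deployment.

```ini
# nova.conf on the scheduler host -- a sketch of the setup described above
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateInstanceExtraSpecsFilter,AggregateCoreFilter
# global default still allows 16:1 CPU overcommit for ordinary workloads
cpu_allocation_ratio = 16.0
```

The 1:1 override for the Hadoop hosts is then set as metadata on the dedicated aggregate, e.g. `nova aggregate-set-metadata <hadoop-aggregate-id> cpu_allocation_ratio=1.0`, with the special Hadoop flavor carrying a matching extra spec so AggregateInstanceExtraSpecsFilter only places it on that aggregate.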
[openstack-dev] [Oslo] Using of oslo.config options in openstack.common modules
Hi all, Currently, many modules from the openstack.common package register oslo.config options. And this is completely OK while these modules are copied to target projects using the update.py script. But consider the situation when we decide to split a new library from oslo-incubator - oslo.spam - and this library uses the module openstack.common.eggs, just because we don't want to reinvent the wheel and this module is really useful. Let's say module eggs defines config option 'foo' and this module is also used in Nova. Now we want to use oslo.spam in Nova too. So here is the tricky part: if the version of openstack.common.eggs in oslo.spam and the version of openstack.common.eggs in Nova define config option 'foo' differently (e.g. the version in Nova is outdated and doesn't provide the help string), oslo.config will raise DuplicateOptError. There are at least two ways to solve this problem: 1) don't use openstack.common code in oslo.* libraries 2) don't register config options in openstack.common modules The former is totally doable, but it means that we will end up repeating ourselves, because we already have a set of very useful modules (e.g. lockutils) and there is little sense in rewriting them from scratch within oslo.* libraries. The latter means that we should refactor the existing code in the openstack.common package. As these modules are meant to be libraries, it's strange that they rely on config values to control their behavior instead of using the traditional approach of passing function/method/class constructor arguments. ...or I might be missing something :) Thoughts? Thanks, Roman ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
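A small sketch of option 2 above: the shared module takes its knobs as plain function arguments instead of registering oslo.config options, so two copies of the module can never trip DuplicateOptError; only the application owns and registers the option. The names (`make_spam`, `spam_flavor`) are illustrative stand-ins, not real Oslo code, and a plain dict stands in for the application's CONF object.

```python
# --- library code: would live in openstack.common, knows nothing of CONF ---
def make_spam(flavor="plain", count=1):
    """Behavior is controlled entirely by arguments, not global config."""
    return [flavor] * count

# --- application code: the application alone owns the config option ---
CONF = {"spam_flavor": "green", "spam_count": 2}   # stand-in for Nova's CONF

def application_entry_point():
    # the app reads its own registered options and forwards them explicitly
    return make_spam(CONF["spam_flavor"], CONF["spam_count"])

print(application_entry_point())   # ['green', 'green']
```

If two libraries both embed this module, each caller passes its own values, and no global option namespace is shared between them.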
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Igor, it would be better to create another one to track the requirements for such an agent framework, as this etherpad is the official result of the design session. With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 5:02 PM, Igor Marnat imar...@mirantis.com wrote: Ilya, that's cool! Mind if Murano and Savanna teams join the same etherpad? Regards, Igor Marnat On Tue, Nov 12, 2013 at 6:58 PM, Ilya Sviridov isviri...@mirantis.com wrote: Thinking in that direction, the Trove team had a design session about current status of agent in project. Just take a look https://etherpad.openstack.org/p/TroveGuestAgents With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote: Just to summarize, there was an interest expressed from Murano, Trove, Savanna and Heat teams in regards with implementation of this unified agent. Nothing specific was decided expect suggestion to keep pushing. I'd suggest to keep pushing this way: - create an etherpad - each team interested in having unified agent writes there detailed use cases for an agent to this etherpad - based on these use-cases we can generate very specific and detailed requirements to the agent - based on these requirements we can agree on architecture and approach to implementation. Teams? Regards, Igor Marnat On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi guys, Recently we had several discussions about the guest VM agents: lot's of projects have the similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and may be some other projects as well. The obvious idea is to unite the efforts and have the unified solution which may satisfy everybody's needs.
We've discussed this topic before with some of the teams, and got the promising-looking idea to create kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session on the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts etc. See you there! -- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97(cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia. ativel...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] LBaaS Subteam meeting
I agree that it would be better to hold it on a channel with a bot which keeps logs. I just found that the most convenient slots are already taken on both openstack-meeting and openstack-meeting-alt. 14:00 UTC is convenient for me, so I'd like to hear other opinions. Thanks, Eugene. On Tue, Nov 12, 2013 at 7:27 PM, Akihiro Motoki amot...@gmail.com wrote: Hi Eugene, In my opinion, it would be better if the LBaaS meeting were held on #openstack-meeting or #openstack-meeting-alt, as most OpenStack projects do. In addition, the information on https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting is not up-to-date. The time is 1400 UTC and the channel is #openstack-meeting. I saw someone ask "is there a LBaaS meeting today?" on the #openstack-meeting channel several times. Thanks, Akihiro On Wed, Nov 13, 2013 at 12:08 AM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi neutron and lbaas folks! We have a plenty of work to do for the Icehouse, so I suggest we start having regular weekly meetings to track our progress. Let's meet at #neutron-lbaas on Thursday, 14 at 15-00 UTC The agenda for the meeting is the following: 1. Blueprint list to be proposed for the icehouse-1 2. QA third-party testing 3. dev resources evaluation 4. Additional features requested by users. Thanks, Eugene. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953
On Tue, Nov 12, 2013 at 8:46 AM, Solly Ross sr...@redhat.com wrote: I'd like to get some sort of consensus on this before I start working on it. Now that people are back from Summit, what would you propose? Best Regards, Solly Ross - Original Message - From: Solly Ross sr...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 5, 2013 10:40:48 AM Subject: Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953 Also, that's still an overly complicated process for one or two VMs. The idea behind the Nova command was to minimize the steps in the image-volume-VM process for a single VM. - Original Message - From: Chris Friesen chris.frie...@windriver.com To: openstack-dev@lists.openstack.org Sent: Tuesday, November 5, 2013 9:23:39 AM Subject: Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953 Wouldn't you still need variable timeouts? I'm assuming that copying multi-gig cinder volumes might take a while, even if it's local. (Or are you assuming copy-on-write?) Chris On 11/05/2013 01:43 AM, Caitlin Bestler wrote: Replication of snapshots is one solution to this. You create a Cinder volume once and snapshot it. Then replicate it to the hosts that need it (this is the piece currently missing). Then you clone it there. I will be giving a conference session on this and other uses of snapshots in the last time slot Wednesday. On Nov 5, 2013 5:58 AM, Solly Ross sr...@redhat.com mailto:sr...@redhat.com wrote: So, There's currently an outstanding issue with regards to a Nova shortcut command that creates a volume from an image and then boots from it in one fell swoop.
The gist of the issue is that there is currently a set timeout which can time out before the volume creation has finished (it's designed to time out in case there is an error), in cases where the image download or volume creation takes an extended period of time (e.g. under a Gluster backend for Cinder with certain network conditions). The proposed solution is a modification to the Cinder API to provide more detail on what exactly is going on, so that we could programmatically tune the timeout. My initial thought is to create a new column in the Volume table called 'status_detail' to provide more detailed information about the current status. For instance, for the 'downloading' status, we could have 'status_detail' be the completion percentage or JSON containing the total size and the current amount copied. This way, at each interval we could check to see if the amount copied had changed, and trigger the timeout if it had not, instead of blindly assuming that the operation will complete within a given amount of time. What do people think? Would there be a better way to do this? Best Regards, Solly Ross ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I think the best solution here is to clean up the setting of error-status for volumes during create/download and skip the timeout altogether.
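The progress-aware timeout proposed above can be sketched in a few lines: instead of one fixed deadline, the caller polls the hypothetical `status_detail` field and only gives up when the copied byte count stops advancing for several consecutive polls. `get_volume_detail` is a stand-in for the Cinder API call; the field names are illustrative.

```python
# Sketch of a stall-detecting wait loop, assuming a status_detail field
# that exposes the number of bytes copied so far.

def wait_for_volume(get_volume_detail, poll, max_stalled_polls=3):
    """Return True when the copy finishes, False if progress stalls."""
    last_copied = -1
    stalled = 0
    while True:
        detail = get_volume_detail()    # e.g. {"status": ..., "copied": bytes}
        if detail["status"] == "available":
            return True
        if detail["copied"] > last_copied:
            last_copied = detail["copied"]
            stalled = 0                 # progress was made: reset the clock
        else:
            stalled += 1
            if stalled >= max_stalled_polls:
                return False            # no progress for N polls: time out
        poll()                          # in real code: time.sleep(interval)

# Simulated download that advances twice, then finishes:
states = iter([
    {"status": "downloading", "copied": 10},
    {"status": "downloading", "copied": 20},
    {"status": "available", "copied": 20},
])
print(wait_for_volume(lambda: next(states), poll=lambda: None))  # True
```

A slow-but-progressing Gluster-backed copy never trips this check, while a genuinely hung copy fails after a bounded number of idle polls.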
Last time I looked even this wasn't in that bad of shape (with the exception of the phantom VG doesn't exist that none of us seem to be able to recreate). I'm not a fan of complex variable time-out algorithms, and I'm even less of a fan of adding API functions to gather timeout info. I would like to hear if there's actually a solution offered by call-backs that the rest of us just aren't seeing here? I don't know how that solves the problem though. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions
On 12/11/13 14:59, Alex Heneveld wrote: One minor suggestion is to consider using a special character (eg $) rather than reserved keywords. As I understand it the keywords are only interpreted when they exactly match the value of a key in a map, so it is already unlikely to be problematic. However I think it would be more familiar and clear if we instead used the rule that any item (key or value) which _starts_ with a $ is interpreted specially. What those rules are is TBD but you could for instance write functions -- as either `$get_param('xxx')` or `$get_param: xxx` -- as well as allow accessing a parameter directly `$xxx `. This sounds like a nice idea on the surface. AWS accomplished the same thing by namespacing functions with the Fn:: prefix (except for 'Ref', bizarrely), and it works fine because the chances are if you randomly (maybe in a Metadata section) have a dict key that happens to start with Fn:: then you can probably just choose a different name. However, if for any reason you have a dict key starting with $ and we interpret that specially, then you are basically hosed since you almost certainly _needed_ it to actually start with $ for a reason. So -1. cheers, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
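A toy interpreter makes the objection concrete: once any dict key starting with `$` is treated as a function, a user has no way to express data whose key genuinely starts with `$`, whereas an unlikely namespace prefix like `Fn::` usually leaves the data representable. The `resolve` function below is purely illustrative; it is not Heat's actual template engine.

```python
# Illustrative "$-prefix" interpreter for template snippets.

def resolve(snippet, params):
    """Treat {"$get_param": name} as a function; recurse into plain data."""
    if isinstance(snippet, dict):
        if list(snippet) == ["$get_param"]:
            return params[snippet["$get_param"]]
        return {key: resolve(value, params) for key, value in snippet.items()}
    return snippet

params = {"image": "fedora-20"}

# Intended use: the inner dict is replaced by the parameter value.
print(resolve({"image": {"$get_param": "image"}}, params))
# But a Metadata dict that merely *contains* a key named "$get_param"
# (meant as plain data) is silently rewritten -- the user cannot opt out:
print(resolve({"$get_param": "image"}, params))
```

This is the "hosed" case from the message above: the second call was meant as data but is executed as a function, and renaming the key is not an option when the data format requires it.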
Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953
On 11/12/2013 8:09 AM, John Griffith wrote: On Tue, Nov 12, 2013 at 8:46 AM, Solly Ross sr...@redhat.com wrote: I'd like to get some sort of consensus on this before I start working on it. Now that people are back from Summit, what would you propose? Best Regards, Solly Ross - Original Message - From: Solly Ross sr...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 5, 2013 10:40:48 AM Subject: Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953 Also, that's still an overly complicated process for one or two VMs. The idea behind the Nova command was to minimize the steps in the image-volume-VM process for a single VM. Complexity is not an issue. Bandwidth and latency are issues. Any solution that achieves the user objectives can be managed by a taskflow. It will be simple for the user to apply. The amount of code involved is relatively low on the factors to compare. Taking extra time and consuming extra bandwidth that were not required are serious issues. My assumption is that the cinder backend will be able to employ copy-on-write when cloning volumes to at least make a thinly provisioned version available almost instantly (even if the full space is allocated and then copied asynchronously. Permanently thin clones just require that the relationship be tracked. Currently that is up to the volume driver, but we could always make these relationships legitimate by recognizing them in Cinder proper). The goal here is not to require new behaviors of backends, but to enable solutions that already exist to be deployed to the benefit of end users. Requiring synchronous multi-GB copies (locally or even worse over the network) is not a minor price that we should expect customers to endure for the sake of software uniformity.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [heat] Custom Flavor creation through Heat
Hi, In Telecom Cloud applications, the requirements for every application are different. One application might need 10 CPUs, 10GB RAM and no disk. Another application might need 1 CPU, 512MB RAM and 100GB disk. These varied requirements directly affect the flavors which need to be created for different applications (virtual instances). Customers have their own custom requirements for CPU, RAM and other hardware. So, based on the requests from customers, we believe that flavor creation should be done along with instance creation, just before the instance is created. Most of these flavors will be specific to one application and therefore will not be suitable for other instances. The obvious way is to allow users to create flavors and boot customized instances through Heat. As of now, users can launch instances through Heat with predefined Nova flavors only. We have made some changes in our setup and tested them. This change allows creation of customized Nova flavors using Heat templates. We are also using extra specs in the flavors for use in our private cloud deployment. This gives the user an option to specify custom requirements for the flavor in the Heat template directly, along with the instance details. There is one problem with Nova flavor creation using Heat templates: admin privileges are required to create Nova flavors. There should be a way to allow a normal user to create flavors. Your comments and suggestions on how to handle this problem are most welcome! Regards, Vijaykumar Kodam ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
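A hypothetical template fragment for the local change described here might look as follows. The `OS::Nova::Flavor` resource type is the proposed modification, not (at the time of writing) an upstream Heat resource, and the property and extra-spec names are illustrative only.

```yaml
heat_template_version: 2013-05-23
resources:
  app_flavor:
    # hypothetical resource type added by the local change described above
    type: OS::Nova::Flavor
    properties:
      ram: 512        # MB
      vcpus: 1
      disk: 100       # GB
      extra_specs:
        "quota:cpu_period": "20000"   # illustrative extra spec
  app_server:
    type: OS::Nova::Server
    properties:
      image: my-app-image
      flavor: { get_resource: app_flavor }
```

The admin-privilege problem raised in the message remains: for a normal user to launch such a stack, either the flavor-create policy must be relaxed or the resource must be created via a trust or admin context.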
Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?
On Tue, Nov 12, 2013 at 7:47 AM, Zane Bitter zbit...@redhat.com wrote: On 02/11/13 05:30, Clint Byrum wrote: Excerpts from Christopher Armstrong's message of 2013-11-01 11:34:56 -0700: Vijendar and I are trying to figure out if we need to set the resource_id of a resource to None when it's being deleted. This is done in a few resources, but not everywhere. To me it seems either a) redundant, since the resource is going to be deleted anyway (thus deleting the row in the DB that has the resource_id column) b) actively harmful to useful debuggability, since if the resource is soft-deleted, you'll not be able to find out what physical resource it represented before it's cleaned up. Is there some specific reason we should be calling resource_id_set(None) in a check_delete_complete method? I've often wondered why some do it, and some don't. Seems to me that it should be done not inside each resource plugin but in the generic resource handling code. However, I have not given this much thought. Perhaps others can provide insight into why it has been done that way. There was a time in the very early days of Heat development when deleting something that had already disappeared usually resulted in an error (i.e. we mostly weren't catching NotFound exceptions). I expect this habit dates from that era. I can't think of any reason we still need this, and I agree that it seems unhelpful for debugging. cheers, Zane. Thanks Zane and others who have responded. My recent patch (now already merged) won't delete the resource_id. -- IRC: radix Christopher Armstrong Rackspace ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Configuration validation
Thanks folks for the interesting suggestions on this topic! I'll be updating the BP this week with this and other info I am gathering. Please let me know if you are interested in being involved in brainstorming on this issue and I will set up an IRC meeting to discuss it further. On Nov 11, 2013, at 3:08 PM, Mark McLoughlin mar...@redhat.com wrote: Hi Nikola, On Mon, 2013-11-11 at 12:44 +0100, Nikola Đipanov wrote: Hey all, During the summit session on the VMWare driver roadmap, a topic of validating the passed configuration prior to starting services came up (see [1] for more detail on how it's connected to that specific topic). Several ideas were thrown around during the session, mostly documented in [1]. There are a few more cases when something like this could be useful (see bug [2] and related patch [3]), and I was wondering if a slightly different approach might be useful. For example, use an already existing validation hook in the service class [4] to call into a validation framework that will potentially stop the service with proper logging/notifications. The obvious benefit would be that there is no pre-run required from the user, and the danger of running a misconfigured stack is smaller. One thing worth trying would be to encode the validation rules in the config option declaration. Some rules could be straightforward, like:

    opts = [
        StrOpt('foo_url',
               validate_rule=cfg.MatchesRegexp('(git|http)://')),
    ]

but the rule you describe is more complex, e.g.

    def validate_proxy_url(conf, group, key, value):
        if not conf.vnc_enabled:
            return
        if conf.ssl_only and value.startswith('http://'):
            raise ValueError('ssl_only option detected, but ...')

    opts = [
        StrOpt('novncproxy_base_url',
               validate_rule=validate_proxy_url),
        ...
    ]

I'm not sure I love this yet, but it's worth experimenting with. Mark.
Re: [openstack-dev] [Neutron] IPv6 sub-team?
As an IPv6 engineer interested in helping Neutron get where it could be, I'd like to join in on this. I also like the Thursday 21:00 UTC slot. -Anthony

-Original Message- From: Shixiong Shang sparkofwisdom.cl...@gmail.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, November 12, 2013 10:01 To: Collins, Sean (Contractor) sean_colli...@cable.comcast.com Cc: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Neutron] IPv6 sub-team?

Thanks, Sean! I am on the east coast, so Monday 20:00 UTC and Thursday 21:00 UTC both work great for me. Hopefully we can find a timeslot that works for everybody! Shixiong

On Nov 11, 2013, at 1:23 PM, Collins, Sean (Contractor) sean_colli...@cable.comcast.com wrote: On Mon, Nov 11, 2013 at 01:16:43PM -0500, Shixiong Shang wrote: +1. We have great interest in running OpenStack over IPv6 and would love to be a part of the discussion. Excellent - please see the thread I've made on OpenStack-Dev - we're tossing out times for an IRC meeting that works for everyone interested. -- Sean M. Collins
Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions
Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800: Hi all, I have just posted the following wiki page to reflect a refined proposal for HOT software configuration based on discussions at the design summit last week. Angus also put a sample up in an etherpad last week, but we did not have enough time to go thru it in the design session. My write-up is based on Angus' sample, actually a refinement, and on discussions we had in breaks, plus it is trying to reflect all the good input from ML discussions and Steve Baker's initial proposal. https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP Please review and provide feedback. Hi Thomas, thanks for spelling this out clearly. I am still -1 on anything that specifies the place a configuration is hosted inside the configuration definition itself. Because configurations are encapsulated by servers, it makes more sense to me that the servers (or server groups) would specify their configurations. If changing to a more logical model is just too hard for TOSCA to adapt to, then I suggest this be an area that TOSCA differs from Heat. We don't need two models for communicating configurations to servers, and I'd prefer Heat stay focused on making HOT template authors' and users' lives better. I have seen an alternative approach which separates a configuration definition from a configuration deployer. This at least makes it clear that the configuration is a part of a server. 
In pseudo-HOT:

    resources:
      WebConfig:
        type: OS::Heat::ChefCookbook
        properties:
          cookbook_url: https://some.test/foo
          parameters:
            endpoint_host:
              type: string
      WebServer:
        type: OS::Nova::Server
        properties:
          image: webserver
          flavor: 100
      DeployWebConfig:
        type: OS::Heat::ConfigDeployer
        properties:
          configuration: {get_resource: WebConfig}
          on_server: {get_resource: WebServer}
          parameters:
            endpoint_host: {get_attribute: [ WebServer, first_ip ]}

I have implementation questions about both of these approaches though, as it appears they'd have to reach backward in the graph to insert their configuration, or have a generic bucket for all configuration to be inserted. IMO that would look a lot like the method I proposed, which was to just have a list of components attached directly to the server like this:

    components:
      WebConfig:
        type: Chef::Cookbook
        properties:
          cookbook_url: https://some.test/foo
          parameters:
            endpoint_host:
              type: string
    resources:
      WebServer:
        type: OS::Nova::Server
        properties:
          image: webserver
          flavor: 100
        components:
          - webconfig:
              component: {get_component: WebConfig}
              parameters:
                endpoint_host: {get_attribute: [ WebServer, first_ip ]}

Of course, the keen eye will see the circular dependency there with the WebServer trying to know its own IP. We've identified quite a few use cases for self-referencing attributes, so that is a separate problem we should solve independent of the template composition problem. Anyway, I prefer the idea that parse-time things are called components and run-time things are resources. I don't need a database entry for WebConfig above. It is in the template and entirely static, just sitting there as a reusable chunk for servers to pull in as-needed.

Anyway, I don't feel that we resolved any of these issues in the session about configuration at the summit. If we did, we did not record them in the etherpad or the blueprint. We barely got through the prepared list of requirements and only were able to spell out problems, not any solutions.
So forgive me if I missed something and want to keep on discussing this.
Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953
On Tue, Nov 12, 2013 at 10:25 AM, Caitlin Bestler caitlin.best...@nexenta.com wrote: On 11/12/2013 8:09 AM, John Griffith wrote: On Tue, Nov 12, 2013 at 8:46 AM, Solly Ross sr...@redhat.com wrote: I'd like to get some sort of consensus on this before I start working on it. Now that people are back from Summit, what would you propose? Best Regards, Solly Ross - Original Message - From: Solly Ross sr...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 5, 2013 10:40:48 AM Subject: Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953 Also, that's still an overly complicated process for one or two VMs. The idea behind the Nova command was to minimize the steps in the image-volume-VM process for a single VM. Complexity is not an issue. Bandwidth and latency are issues. Any solution that achieves the user objectives can be managed by a taskflow. It will be simple for the user to apply. The amount of code involved is relatively low on the factors to compare. Taking extra time and consuming extra bandwidth that were not required are serious issues. My assumption is that the cinder backend will be able to employ copy-on-write when cloning volumes to at least make a thinly provisioned version available almost instantly (even if the full space is allocated and then copied asynchronously. Permanently thin clones just require that the relationship be tracked. Currently that is up to the volume driver, but we could always make these relationships legitimate by recognizing them in Cinder proper). Sorry, but I'm not seeing where you're going with this in relation to the question being asked? The question is how to deal with creating a new bootable volume from nova boot command and be able to tell whether it's timed out, or errored while waiting for creation. 
Not sure I'm following your solution here; in an ideal scenario, yes, if the backend has a volume with the image already available they could utilize things like cloning or snapshot features, but that's a pretty significant pre-req and I'm not sure how it relates to the general problem that's being discussed.

The goal here is not to require new behaviors of backends, but to enable solutions that already exist to be deployed to the benefit of end users. Requiring synchronous multi-GB copies (locally, or even worse over the network) is not a minor price that we should expect customers to endure for the sake of software uniformity.
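The core question in this thread — when nova boot creates a bootable volume, how does the caller tell "still creating" from "errored" or "timed out"? — amounts to a status poll with three distinct outcomes. A minimal sketch (hypothetical names and states; a fake client stands in for the volume API):

```python
import time


class FakeCinder:
    """Stand-in for a volume API whose status moves through
    creating -> available (or -> error)."""

    def __init__(self, outcomes):
        self._outcomes = iter(outcomes)

    def get_status(self, volume_id):
        return next(self._outcomes)


def wait_for_volume(client, volume_id, timeout=5.0, poll_interval=0.01):
    """Poll until the volume is usable, distinguishing error from timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_status(volume_id)
        if status == "available":
            return "available"
        if status == "error":
            # Backend reported failure: surface it immediately rather
            # than letting the caller wait out the full timeout.
            raise RuntimeError("volume %s went into error state" % volume_id)
        time.sleep(poll_interval)
    raise TimeoutError("volume %s still not available after %ss"
                       % (volume_id, timeout))


cinder = FakeCinder(["creating", "creating", "available"])
print(wait_for_volume(cinder, "vol-1"))
```

The design point is that "error" and "timeout" are reported as different exceptions, which is exactly the distinction the original bug asks nova to make.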
Re: [openstack-dev] [Neutron][LBaaS] LBaaS Subteam meeting
Hi Eugene, The LBaaS meeting on #openstack-meeting was previously scheduled on Thursdays at 1400 UTC. And indeed it is still listed on https://wiki.openstack.org/wiki/Meetings as such, so I believe keeping it in that timeslot should be fine. - Stephen

On Tue, Nov 12, 2013 at 7:40 AM, Eugene Nikanorov enikano...@mirantis.com wrote: I agree that it would be better to hold it on a channel with a bot which keeps logs. I just found that the most convenient slots are already taken on both openstack-meeting and openstack-meeting-alt. 1400 UTC is convenient for me so I'd like to hear other opinions. Thanks, Eugene.

On Tue, Nov 12, 2013 at 7:27 PM, Akihiro Motoki amot...@gmail.com wrote: Hi Eugene, In my opinion, it is better if the LBaaS meeting is held on #openstack-meeting or #openstack-meeting-alt as most OpenStack projects do. In addition, the information on https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting is not up-to-date. The time is 1400 UTC and the channel is #openstack-meeting. I saw someone ask "is there an LBaaS meeting today?" on the #openstack-meeting channel several times. Thanks, Akihiro

On Wed, Nov 13, 2013 at 12:08 AM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi neutron and lbaas folks! We have plenty of work to do for Icehouse, so I suggest we start having regular weekly meetings to track our progress. Let's meet at #neutron-lbaas on Thursday, 14 at 15-00 UTC. The agenda for the meeting is the following: 1. Blueprint list to be proposed for icehouse-1 2. QA third-party testing 3. dev resources evaluation 4. Additional features requested by users. Thanks, Eugene.
Re: [openstack-dev] [nova][api] Is this a potential issue
On 11/11/13 at 05:27pm, Jiang, Yunhong wrote: Resend after the HK summit; hope someone can give me a hint on it. Thanks --jyh

-Original Message- From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] Sent: Thursday, November 07, 2013 5:39 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova][api] Is this a potential issue

Hi all, I'm a bit confused by the following code in ./compute/api.py, which is invoked by api/openstack/compute/servers.py, _action_revert_resize(). From the code it seems there is a small window between getting the migration object and updating migration.status. If another API request comes in during this window, two requests will try to revert the resize at the same time. Is this a potential issue? The current implementation already rolls back the reservation if something goes wrong, but I am not sure if we should update the state to 'reverting' as a transaction in get_by_instance_and_status()?

The migration shouldn't end up being set to 'reverting' twice because of the expected_task_state set and check in instance.save(expected_task_state=None). The quota reservation could happen twice, so a rollback in the case of a failure in instance.save could be good.

--jyh

    def revert_resize(self, context, instance):
        """Reverts a resize, deleting the 'new' instance in the process."""
        elevated = context.elevated()
        migration = migration_obj.Migration.get_by_instance_and_status(
            elevated, instance.uuid, 'finished')
        # Here we get the migration object

        # reverse quota reservation for increased resource usage
        deltas = self._reverse_upsize_quota_delta(context, migration)
        reservations = self._reserve_quota_delta(context, deltas)

        instance.task_state = task_states.RESIZE_REVERTING
        instance.save(expected_task_state=None)

        migration.status = 'reverting'
        # Here we update the status
    migration.save()
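The expected_task_state guard mentioned above is effectively a compare-and-swap on the persisted task state. A toy model (not Nova's actual implementation; names simplified) of why the second concurrent revert is rejected:

```python
class UnexpectedTaskStateError(Exception):
    pass


class Instance:
    """Toy model of instance.save(expected_task_state=...): the save
    succeeds only if the persisted task_state still matches what the
    caller expects, so concurrent state transitions race safely."""

    def __init__(self):
        self._db_task_state = None   # what is "persisted"
        self.task_state = None       # in-memory pending value

    def save(self, expected_task_state):
        if self._db_task_state != expected_task_state:
            raise UnexpectedTaskStateError(
                "expected %r, found %r"
                % (expected_task_state, self._db_task_state))
        self._db_task_state = self.task_state


inst = Instance()

# First revert request: persisted task_state is None as expected, so
# the save succeeds and the revert proceeds.
inst.task_state = "resize_reverting"
inst.save(expected_task_state=None)

# Second, concurrent revert request: the guard now fails, so the
# migration is never set to 'reverting' twice.
inst.task_state = "resize_reverting"
try:
    inst.save(expected_task_state=None)
except UnexpectedTaskStateError as exc:
    print("second revert rejected:", exc)
```

As Andrew notes, the guard closes the race on task state, but any quota reservation taken before the failed save would still need an explicit rollback.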
Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest
On 11/12/2013 02:33 PM, David Kranz wrote: On 11/12/2013 01:36 PM, Clint Byrum wrote: Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800: During the freeze phase of Havana we got a ton of new contributors coming on board to Tempest, which was super cool. However it meant we had this new influx of negative tests (i.e. tests which push invalid parameters looking for error codes) which made us realize that human creation and review of negative tests really doesn't scale. David Kranz is working on a generative model for this now. Are there some notes or other source material we can follow to understand this line of thinking? I don't agree or disagree with it, as I don't really understand, so it would be helpful to have the problems enumerated and the solution hypothesis stated. Thanks! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I am working on this with Marc Koderer but we only just started and are not quite ready. But since you asked now... The problem is that the current implementation of negative tests is that each case is represented as code in a method and targets a particular set of api arguments and expected result. In most (but not all) of these tests there is boilerplate code surrounding the real content which is the actual arguments being passed and the value expected. That boilerplate code has to be written correctly and reviewed. The general form of the solution has to be worked out but basically would involve expressing these tests declaratively, perhaps in a yaml file. In order to do this we will need some kind of json schema for each api. The main implementation around this is defining the yaml attributes that make it easy to express the test cases, and somehow coming up with the json schema for each api. 
In addition, we would like to support fuzz testing where arguments are, at least partially, randomly generated and the return values are only examined for 4xx vs something else. This would be possible if we had json schemas. The main work is to write a generator and methods for creating bad values including boundary conditions for types with ranges. I had thought a bit about this last year and poked around for an existing framework. I didn't find anything that seemed to make the job much easier but if any one knows of such a thing (python, hopefully) please let me know. The negative tests for each api would be some combination of declaratively specified cases and auto-generated ones. With regard to the json schema, there have been various attempts at this in the past, including some ideas of how wsme/pecan will help, and it might be helpful to have more project coordination. I can see a few options: 1. Tempest keeps its own json schema data 2. Each project keeps its own json schema in a way that supports automated extraction 3. There are several use cases for json schema like this and it gets stored in some openstacky place that is not in tempest So that is the starting point. Comments and suggestions welcome! Marc and I just started working on an etherpad https://etherpad.openstack.org/p/bp_negative_tests but any one is welcome to contribute there. We actually did this back in the good old Drizzle days- and by we, I mean Patrick Crews, who I copied here. He can refer to the research better than I can, but AIUI, generative schema-driven testing of things like this is certainly the right direction. It's about 10 years behind the actual state of the art of the research, but it's in all ways superior to making human combinations of input parameters and output behaviors. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
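The generative idea described above — deriving negative cases from an API's schema instead of hand-writing each one — can be illustrated with a toy example (all names hypothetical; fake_resize_api stands in for a real endpoint, and the schema dialect is a simplified JSON-schema-like subset):

```python
# Hedged sketch of schema-driven negative testing: from a small
# JSON-schema-like description, derive invalid inputs (wrong types,
# boundary violations) and assert the API answers with a 4xx.

def bad_values(schema):
    """Yield values that violate the given (simplified) schema."""
    if schema["type"] == "integer":
        yield "not-an-int"                      # wrong type
        if "minimum" in schema:
            yield schema["minimum"] - 1         # below lower boundary
        if "maximum" in schema:
            yield schema["maximum"] + 1         # above upper boundary
    elif schema["type"] == "string":
        yield 12345                             # wrong type
        if "maxLength" in schema:
            yield "x" * (schema["maxLength"] + 1)


def fake_resize_api(flavor_id):
    """Stand-in API: accepts integers 1..10, else answers 400."""
    if not isinstance(flavor_id, int) or not 1 <= flavor_id <= 10:
        return 400
    return 202


schema = {"type": "integer", "minimum": 1, "maximum": 10}
results = [(v, fake_resize_api(v)) for v in bad_values(schema)]
print(results)
assert all(400 <= code < 500 for _, code in results)
```

This mirrors the proposal's split: the schema is declarative data that could live in YAML, while the generator and boundary heuristics are written once and reused across APIs.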
[openstack-dev] [Glance] Meeting Reminder Thursday at 2000 UTC
Hi folks, We'll have a Glance team meeting this Thursday at 2000 UTC (don't forget that UTC applies to both the time and the date!). In your timezone that is http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131114T20&ah=1. As usual the meeting room is #openstack-meeting-alt on freenode. The agenda can be found at https://etherpad.openstack.org/p/glance-team-meeting-agenda so please feel free to suggest items. This week, I hope we can spend a good chunk of time figuring out how to improve our review responsiveness. Thanks, see you there.
Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?
Hi On Tue, Nov 12, 2013 at 4:24 PM, Mark McLoughlin mar...@redhat.com wrote: On Tue, 2013-11-12 at 13:11 -0800, Shawn Hartsock wrote: Maybe we should have some 60% rule... that is: If you change more than half of a test... you should *probably* rewrite the test in Mock. A rule needs a reasoning attached to it :) Why do we want people to use mock? Is it really for Python3? If so, I assume that means we've ruled out the python3 port of mox? (Ok by me, but would be good to hear why) And, if that's the case, then we should encourage whoever wants to port mox based tests to mock. The upstream maintainer is not going to port mox to python3 so we have a fork of mox called mox3. Ideally, we would drop the usage of mox in favour of mock so we don't have to carry a forked mox. Or maybe it has nothing to do with Python3 at all? Maybe we just like mock more? But do we like it enough to have a mixture of mock and mox across the codebase? Mark. - Original Message - From: John Garbutt j...@johngarbutt.com To: Mark McLoughlin mar...@redhat.com, OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 9:31:25 AM Subject: Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test? On 11 November 2013 23:18, Mark McLoughlin mar...@redhat.com wrote: On Mon, 2013-11-11 at 12:07 +, John Garbutt wrote: On 11 November 2013 10:27, Rosa, Andrea (HP Cloud Services) andrea.r...@hp.com wrote: Hi Generally mock is supposed to be used over mox now for python 3 support. That is my understanding too +1 But I don't think we should waste all our time re-writing all our mox and stub tests. Lets just leave this to happen organically for now as we add and refactor tests. We probably need to take the hit at some point, but that doesn't feel like we should do that right now. Hmm, I don't follow this stance. Adding Python3 support is a goal we all share. 
If we're saying that the use of mox stands in the way of that goal, but that we'd really prefer if people didn't work on porting tests from mox to mock yet ... then are we saying we don't value people working on porting to Python3? And if we plan to use a Python3-compatible version of mox [1], then isn't the Python3 argument irrelevant, and doesn't saying "use mock for new tests" just mean we'll end up with a mixture of mox and mock?

Good point, I forgot about the port of mox to python3. I liked the idea of "prefer mock", with a view that at some point in the future there is only a small amount of mox-related code left that can easily be moved to mock. I guess it's a trade-off between review capacity, risk of breaking existing tests, and risk of never reaching that end goal. We already have stubs and mox, which do tend to fight each other; adding a third does seem like a bad plan, unless there is a very good reason, which I always had in my head as python3 support. Hmm... I do prefer mock to mox, but not that strongly. John
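For readers unfamiliar with the two styles being compared in this thread, here is an illustrative side-by-side (not from the thread itself): the mox record/replay flow is sketched in comments, and the same stub is written with the stdlib unittest.mock. The Compute class and method names are hypothetical.

```python
# Illustration of the mox vs. mock style difference. The mox
# record/replay equivalent is shown only in comments (mox is not
# imported here); mock uses a patch / act / assert style instead.
from unittest import mock


class Compute:
    def _get_power_state(self, instance):
        raise NotImplementedError  # real code would talk to the hypervisor

    def is_running(self, instance):
        return self._get_power_state(instance) == "running"


# mox style (record, replay, verify), roughly:
#   m = mox.Mox()
#   m.StubOutWithMock(compute, '_get_power_state')
#   compute._get_power_state('vm1').AndReturn('running')
#   m.ReplayAll()
#   assert compute.is_running('vm1')
#   m.VerifyAll()

# mock style: patch the method, exercise the code, then assert on the
# recorded calls afterwards.
compute = Compute()
with mock.patch.object(Compute, "_get_power_state",
                       return_value="running") as mocked:
    assert compute.is_running("vm1")
mocked.assert_called_once_with("vm1")
```

The behavioral difference is when expectations are declared: mox declares them up front and fails on any unexpected call at replay time, while mock records everything and lets the test assert only on what it cares about.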
Re: [openstack-dev] [horizon] User registrations
On 13-11-11 01:31 AM, Lyle, David wrote: I think there is certainly interest. I do think it will need to be highly configurable to be useful. The problem, as Dolph points out, is that each deployment has its own workflow. Points of configuration:
- Does the local keystone deployment policy support self-registration? The default is no. So, at that point, access to self-registration should be hidden.
- How many steps are required in the registration process?
- Is payment information required? Address?
- How is the registration confirmed: email, text, ...?
- CAPTCHA?

I think the two main reasons such a facility is not present in Horizon are: 1. Until recently, determining keystone's access policy was not possible. 2. The actual implementation is highly deployment dependent.

So, if we are talking features, the one I can see being most useful for me is this: when an admin adds user accounts with the dashboard, the email subsystem notifies each user with a one-time login URL, forcing the user to set up a password. This way the admin doesn't have to deal with transmitting passwords to each user. Actually, I guess I am talking about a password reset token. -- Paul Belanger | PolyBeacon, Inc. Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode) Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger
Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?
On 2013-11-12 15:27, Shawn Hartsock wrote: Good point. I assume someone made a comparison similar to this: * http://garybernhardt.github.io/python-mock-comparison/ ... and evangelized a choice. I had assumed that Mock vs mox was not merely based on Python3 support but had something to do with Mock versus Mox. Does anyone have that context in their head? Would they mind sharing it? Not in my head, but the internet remembers everything :-) http://lists.openstack.org/pipermail/openstack-dev/2013-July/012474.html In short, the majority of respondents liked Mock better, it seems to be more widely used, and Mock is part of the stdlib in Python 3.3. -Ben ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][api] Is this a potential issue
-Original Message- From: Andrew Laski [mailto:andrew.la...@rackspace.com] Sent: Tuesday, November 12, 2013 12:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova][api] Is this a potential issue

On 11/11/13 at 05:27pm, Jiang, Yunhong wrote: Resend after the HK summit; hope someone can give me a hint on it. Thanks --jyh

-Original Message- From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] Sent: Thursday, November 07, 2013 5:39 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova][api] Is this a potential issue

Hi all, I'm a bit confused by the following code in ./compute/api.py, which is invoked by api/openstack/compute/servers.py, _action_revert_resize(). From the code it seems there is a small window between getting the migration object and updating migration.status. If another API request comes in during this window, two requests will try to revert the resize at the same time. Is this a potential issue? The current implementation already rolls back the reservation if something goes wrong, but I am not sure if we should update the state to 'reverting' as a transaction in get_by_instance_and_status()?

The migration shouldn't end up being set to 'reverting' twice because of the expected_task_state set and check in instance.save(expected_task_state=None). The quota reservation could happen twice, so a rollback in the case of a failure in instance.save could be good.

Aha, didn't notice that's a guard. It's really cool. --jyh

    def revert_resize(self, context, instance):
        """Reverts a resize, deleting the 'new' instance in the process."""
        elevated = context.elevated()
        migration = migration_obj.Migration.get_by_instance_and_status(
            elevated, instance.uuid, 'finished')
        # Here we get the migration object

        # reverse quota reservation for increased resource usage
        deltas = self._reverse_upsize_quota_delta(context, migration)
        reservations = self._reserve_quota_delta(context, deltas)

        instance.task_state = task_states.RESIZE_REVERTING
        instance.save(expected_task_state=None)

        migration.status = 'reverting'
        # Here we update the status
        migration.save()
Re: [openstack-dev] Custom Flavor creation through Heat
-Original Message- From: Shawn Hartsock [mailto:hartso...@vmware.com] Sent: Tuesday, November 12, 2013 12:56 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

My concern with proliferating custom flavors is that it might play havoc with the underlying root use-case for flavors. My understanding of flavors is that they are used to solve the resource packing problem in elastic cloud scenarios. That way you know that 256 tiny VMs fit cleanly into your hardware layout and so do 128 medium VMs and 64 large VMs. If you allow a flavor of the week, then the packing problem re-asserts itself and scheduling becomes harder.

I'm a bit surprised that the flavor is used to resolve the packing problem. I thought it should be handled by the scheduler, although it's an NP problem. As for custom flavors, I think at the least this is against the current nova assumption. Currently nova assumes flavors should only be created by admins, who know the cloud quite well. One example: a flavor may contain extra-specs, so if an extra-spec value is specified in the flavor while the corresponding scheduler filter is not enabled, then the extra-spec has no effect and may cause issues. --jyh

Do I understand this right? Given the root use-case is to help solve VM packing problems, I would think that you could allow a nonsense flavor that would say: the Image provides sizing hints beyond flavors. So you would toggle a VM to be a nonsense flavor and trigger different scheduling, packing, allocation behaviors. tl;dr - I think that breaks flavors, but I think you should allow it by allowing a cloud to escape flavors altogether if they want.
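The extra-spec concern above can be made concrete with a toy scheduler sketch (simplified, hypothetical names — not nova's actual filter scheduler): a flavor extra-spec only influences placement if a filter that reads it is in the enabled list.

```python
# Sketch (illustrative only) of why a flavor extra-spec is inert unless
# the corresponding scheduler filter is enabled: only filters in the
# active list ever look at extra_specs.

def capabilities_filter(host, flavor):
    """Pass only hosts whose capabilities satisfy every extra spec."""
    for key, wanted in flavor.get("extra_specs", {}).items():
        if host["capabilities"].get(key) != wanted:
            return False
    return True


def schedule(hosts, flavor, enabled_filters):
    """Winnow the host list through each enabled filter in turn."""
    candidates = hosts
    for f in enabled_filters:
        candidates = [h for h in candidates if f(h, flavor)]
    return candidates


hosts = [{"name": "node1", "capabilities": {"hw:cpu_policy": "dedicated"}},
         {"name": "node2", "capabilities": {}}]
flavor = {"extra_specs": {"hw:cpu_policy": "dedicated"}}

# With the filter enabled, only node1 qualifies...
print([h["name"] for h in schedule(hosts, flavor, [capabilities_filter])])
# ...without it, the extra spec has no effect and both hosts pass.
print([h["name"] for h in schedule(hosts, flavor, [])])
```

This is jyh's point in miniature: a user-created flavor carrying an extra-spec the deployment's filters never read would silently do nothing.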
# Shawn Hartsock

- Original Message - From: Steve Baker sba...@redhat.com To: openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 2:25:23 PM Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

On 11/13/2013 07:50 AM, Steven Dake wrote: On 11/12/2013 10:25 AM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo) wrote: Hi, In Telecom Cloud applications, the requirements for every application are different. One application might need 10 CPUs, 10GB RAM and no disk. Another application might need 1 CPU, 512MB RAM and 100GB disk. These varied requirements directly affect the flavors which need to be created for different applications (virtual instances). Each customer has their own custom requirements for CPU, RAM and other hardware. So, based on the requests from the customers, we believe that flavor creation should be done along with instance creation, just before the instance is created. Most of the flavors will be specific to that application and therefore will not be suitable for other instances. The obvious way is to allow users to create flavors and boot customized instances through Heat. As of now, users can launch instances through Heat with predefined nova flavors only. We have made some changes in our setup and tested them. This change allows creation of customized nova flavors using heat templates. We are also using extra-specs in the flavors for use in our private cloud deployment. This gives the user an option to state custom requirements for the flavor in the heat template directly, along with the instance details. There is one problem in nova flavor creation using heat templates: admin privileges are required to create nova flavors. There should be a way to allow a normal user to create flavors. Your comments and suggestions are most welcome on how to handle this problem!!!
Regards, Vijaykumar Kodam

Vijaykumar, I have long believed that an OS::Nova::Flavor resource would make a good addition to Heat, but as you pointed out, this type of resource requires administrative privileges. I generally also believe it is bad policy to implement resources that *require* admin privs to operate, because that results in yet more resources that require admin. We are currently solving the IAM user cases (keystone doesn't allow the creation of users without admin privs). It makes sense that cloud deployers would want to control who can create flavors, to avoid DOS attacks against their infrastructure or to prevent trusted users from creating a wacky flavor that the physical infrastructure can't support. I'm unclear if nova offers a way to reduce the permissions required for flavor creation. One option that may be possible is via the keystone trusts mechanism. Steve Hardy did most of the work integrating Heat with the new keystone trusts system - perhaps he has some input. I would be happy for you to submit your OS::Nova::Flavor resource to heat. There are a couple of nova-specific issues that will need to be addressed: *
[openstack-dev] Alembic or SA Migrate (again)
Hi Folks! Sorry to dig up a really old topic, but I'd like to know the status of ceilometer db migrations. I'd like to submit two branches to modify the Event and Trait tables. If I were to do that now, I would need to write SQLAlchemy scripts to do the database migration (background: https://bitbucket.org/zzzeek/alembic/issue/21/column-renames-not-supported-on-sqlite). Since the unit tests use db migrations to build up the db schema, there's currently no way to get the unit tests to run if your new code uses an alembic migration and needs to alter columns, which mine does :(

A couple of questions:
1) What is the progress of creating the schema from the models for unit tests?
2) What is the time frame for requiring alembic migrations?
3) Should I push these branches up now, or wait and use an alembic migration?
4) Is there anything I can do to help with 1 or 2?

Thanks! -john

Related threads:
http://lists.openstack.org/pipermail/openstack-dev/2013-August/014214.html
http://lists.openstack.org/pipermail/openstack-dev/2013-September/014593.html
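For context on the Alembic issue cited above: at the time, SQLite's ALTER TABLE could not rename a column, so a migration had to rebuild the table — create a new one, copy the rows, drop the old, and rename. A minimal sketch with the stdlib sqlite3 module (the table and column names are illustrative, not ceilometer's actual schema):

```python
# Why SQLite column renames needed special handling: the "rename" is
# really a table rebuild. Names below are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trait (id INTEGER PRIMARY KEY, t_string TEXT)")
conn.execute("INSERT INTO trait (t_string) VALUES ('example')")

# "Rename" t_string -> value by rebuilding the table and swapping it in.
conn.executescript("""
    CREATE TABLE trait_new (id INTEGER PRIMARY KEY, value TEXT);
    INSERT INTO trait_new (id, value) SELECT id, t_string FROM trait;
    DROP TABLE trait;
    ALTER TABLE trait_new RENAME TO trait;
""")

print(conn.execute("SELECT value FROM trait").fetchall())
```

This rebuild dance is exactly what a hand-written migration (or a tool doing it on your behalf) has to perform on SQLite, which is why running the unit tests against SQLite-backed migrations was the sticking point.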
Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?
On Tue, Nov 12, 2013 at 4:49 PM, Mark McLoughlin mar...@redhat.com wrote: On Tue, 2013-11-12 at 16:42 -0500, Chuck Short wrote: Hi On Tue, Nov 12, 2013 at 4:24 PM, Mark McLoughlin mar...@redhat.com wrote: On Tue, 2013-11-12 at 13:11 -0800, Shawn Hartsock wrote: Maybe we should have some 60% rule... that is: If you change more than half of a test... you should *probably* rewrite the test in Mock. A rule needs a reasoning attached to it :) Why do we want people to use mock? Is it really for Python 3? If so, I assume that means we've ruled out the python3 port of mox? (OK by me, but it would be good to hear why.) And, if that's the case, then we should encourage whoever wants to port mox-based tests to mock. The upstream maintainer is not going to port mox to python3, so we have a fork of mox called mox3. Ideally, we would drop the usage of mox in favour of mock so we don't have to carry a forked mox. Isn't that the opposite of the conclusion you came to here: http://lists.openstack.org/pipermail/openstack-dev/2013-July/012474.html i.e. that using mox3 results in less code churn? Mark. Yes, that was my original position, but I thought we agreed in the thread (further on) that we would use mox3 and then migrate to mock later. Regards chuck ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
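For anyone weighing the mox-to-mock port, the practical difference is that mock has no record/replay phase: you stub first and assert on the recorded calls afterwards. A minimal sketch (the API class and method names below are invented for illustration, not real nova code; on Python 2 the standalone `mock` package provides the same interface):

```python
from unittest import mock

class FlavorAPI(object):
    """Stand-in for some internal API; purely illustrative."""
    def get(self, flavor_id):
        raise NotImplementedError("talks to the database in real life")

def describe(api, flavor_id):
    """Code under test: formats a flavor fetched through the API."""
    flavor = api.get(flavor_id)
    return "%s: %s MB" % (flavor["name"], flavor["ram"])

# mock style: build a fake, run the code, then assert on recorded calls.
api = mock.Mock(spec=FlavorAPI)
api.get.return_value = {"name": "m1.tiny", "ram": 512}
result = describe(api, 42)
api.get.assert_called_once_with(42)
```

In mox the expectation (`api.get(42)` returning the dict) would be declared up front and verified wholesale; here the assertion comes after the fact, which tends to make converted tests shorter.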
Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions
On 12/11/13 10:32 -0800, Clint Byrum wrote: Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800: Hi all, I have just posted the following wiki page to reflect a refined proposal for HOT software configuration based on discussions at the design summit last week. Angus also put a sample up in an etherpad last week, but we did not have enough time to go through it in the design session. My write-up is based on Angus' sample (actually a refinement) and on discussions we had in breaks, plus it tries to reflect all the good input from ML discussions and Steve Baker's initial proposal. https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP Please review and provide feedback. Hi Thomas, thanks for spelling this out clearly. I am still -1 on anything that specifies the place a configuration is hosted inside the configuration definition itself. Because configurations are encapsulated by servers, it makes more sense to me that the servers (or server groups) would specify their configurations. If changing to a more logical model is just too hard for TOSCA to adapt to, then I suggest this be an area where TOSCA differs from Heat. We don't need two models for communicating configurations to servers, and I'd prefer Heat stay focused on making HOT template authors' and users' lives better. I have seen an alternative approach which separates a configuration definition from a configuration deployer. This at least makes it clear that the configuration is a part of a server. In pseudo-HOT:

resources:
  WebConfig:
    type: OS::Heat::ChefCookbook
    properties:
      cookbook_url: https://some.test/foo
      parameters:
        endpoint_host:
          type: string
  WebServer:
    type: OS::Nova::Server
    properties:
      image: webserver
      flavor: 100
  DeployWebConfig:
    type: OS::Heat::ConfigDeployer
    properties:
      configuration: {get_resource: WebConfig}
      on_server: {get_resource: WebServer}
      parameters:
        endpoint_host: {get_attribute: [WebServer, first_ip]}

This is what Thomas defined, with one optimisation.
- The webconfig is a yaml template. As you say, the component is static - if so, why even put it inline in the template? (That was my thinking; it seems like a template, not really a resource.) I have implementation questions about both of these approaches though, as it appears they'd have to reach backward in the graph to insert their configuration, or have a generic bucket for all configuration to be inserted into. Yeah, it does depend on the implementation. If we use Mistral, the agent will need to ask Mistral for the tasks that apply to the server:

$ mistral task-consume \
    --tags=instance_id=$(my_instance_id);stack_id=$(stack_id)

IMO that would look a lot like the method I proposed, which was to just have a list of components attached directly to the server, like this:

components:
  WebConfig:
    type: Chef::Cookbook
    properties:
      cookbook_url: https://some.test/foo
      parameters:
        endpoint_host:
          type: string

resources:
  WebServer:
    type: OS::Nova::Server
    properties:
      image: webserver
      flavor: 100
    components:
      - webconfig:
          component: {get_component: WebConfig}
          parameters:
            endpoint_host: {get_attribute: [WebServer, first_ip]}

I'd change this to:

    components:
      - webconfig:
          component: {get_file: ./my_configs/webconfig.yaml}
          parameters:
            endpoint_host: {get_attribute: [WebServer, first_ip]}

This *could* be a shorthand notation like the volumes property on AWS instances. Of course, the keen eye will see the circular dependency there, with the WebServer trying to know its own IP. We've identified quite a few use cases for self-referencing attributes, so that is a separate problem we should solve independently of the template composition problem. (Aside: I don't like the idea of self-reference, as it breaks the idea that references are resolved top-down. Basically we have to put in a nasty hack to produce broken behaviour - the first resolution is bogus and only the following resolutions are possibly correct. In this case, just use the deployer to break your circular dependency?)
Anyway, I prefer the idea that parse-time things are called components and run-time things are resources. I don't need a database entry for WebConfig above; it is in the template and entirely static, just sitting there as a reusable chunk for servers to pull in as needed. IMO it should just be a template/formatted file. Anyway, I don't feel that we resolved any of these issues in the session about configuration at the summit. If we did, we did not record them in the etherpad or the blueprint. We barely got through the prepared list of requirements and were only able to spell out problems, not any solutions. So forgive me if I missed something and want to keep on discussing this. ___ OpenStack-dev mailing list
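The circular-dependency concern raised above can be made concrete with a toy resolver: if references are resolved top-down, a server whose inline component reads one of the server's own attributes forms a cycle, while the separate-deployer layout does not. This is an illustrative sketch, not the Heat engine:

```python
def find_refs(node):
    """Collect get_resource / get_attribute targets inside a template node."""
    refs = set()
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "get_resource":
                refs.add(value)
            elif key == "get_attribute":
                refs.add(value[0])
            else:
                refs |= find_refs(value)
    elif isinstance(node, list):
        for item in node:
            refs |= find_refs(item)
    return refs

def resolve_order(resources):
    """Topologically sort resources; raise on a dependency cycle."""
    order, done, visiting = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError("circular dependency at %s" % name)
        visiting.add(name)
        for dep in find_refs(resources[name]):
            if dep in resources:
                visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in sorted(resources):
        visit(name)
    return order

# Deployer style: the deployer depends on the server, so order is clean.
ok = resolve_order({
    "WebServer": {"type": "OS::Nova::Server"},
    "DeployWebConfig": {
        "on_server": {"get_resource": "WebServer"},
        "parameters": {"endpoint_host":
                       {"get_attribute": ["WebServer", "first_ip"]}}},
})

# Inline-component style: the server references its own attribute -> cycle.
try:
    resolve_order({
        "WebServer": {"components": [{"parameters": {"endpoint_host":
                      {"get_attribute": ["WebServer", "first_ip"]}}}]},
    })
    cycle = False
except ValueError:
    cycle = True
```

The deployer resource, in other words, breaks the cycle structurally rather than requiring a multi-pass "first resolution is bogus" hack.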
[openstack-dev] [nova] Blueprint for Juniper OpenContrail vrouter nova vif driver support
Hi All, A blueprint has been registered to add Nova vif driver support for the Juniper vRouter. The Juniper OpenContrail Controller is a logically centralized but physically distributed Software Defined Networking (SDN) controller that is responsible for providing the management, control, and analytics functions of the virtualized network. The Juniper OpenContrail vRouter is a forwarding plane (of a distributed router) that runs in the hypervisor of a virtualized server. It extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the virtualized servers. The OpenContrail vRouter is conceptually similar to existing commercial and open source vSwitches such as Open vSwitch (OVS), but it also provides routing and higher-layer services. The OpenContrail Controller provides the logically centralized control plane and management plane of the system and orchestrates the vRouters. Blueprint: https://blueprints.launchpad.net/nova/+spec/juniper-opencontrail-vrouter-nova-vif-driver Associated contrail neutron plugin blueprint: https://blueprints.launchpad.net/neutron/+spec/juniper-plugin-with-extensions Please review the blueprint; all comments are welcome. Regards, Prasad ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ceilometer][qa]Tempest tests for Ceilometer
Hello, guys! I hope everybody has eventually got home after the summit and is feeling ok :) So it's time to proceed with thinking about integration, unit and performance testing in Ceilometer. First of all I'd like to thank you for your help in composing the etherpad https://etherpad.openstack.org/p/icehouse-summit-ceilometer-integration-tests . If you didn't participate in the design session about integration tests but have thoughts about it, please add your comments. Here is a list of Ceilometer-related change requests in Tempest (just a reminder): 1. https://review.openstack.org/#/c/39237/ 2. https://review.openstack.org/#/c/55276/ And there are even more, but they were abandoned due to reviewers' inactivity (take a look at the whiteboard): https://blueprints.launchpad.net/tempest/+spec/add-basic-ceilometer-tests . Are there any reasons why these change requests were not reviewed? I guess the first step to be done is a test plan. I've created a doc https://etherpad.openstack.org/p/ceilometer-test-plan and plan to start working on it. If you have any thoughts about the plan, you are welcome! Thanks, Nadya Hi Nadya, How about we add this as an agenda item for the next metering IRC meeting [1] this coming Thursday? I personally would like to get a feeling for who is interested in contributing to this effort, and perhaps do an initial rough division of tasks so as to avoid duplicating work? Cheers, Eoghan [1] https://wiki.openstack.org/wiki/Meetings/MeteringAgenda ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Custom Flavor creation through Heat
- Original Message - From: Yunhong Jiang yunhong.ji...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 5:39:58 PM Subject: Re: [openstack-dev] Custom Flavor creation through Heat -----Original Message----- From: Shawn Hartsock [mailto:hartso...@vmware.com] Sent: Tuesday, November 12, 2013 12:56 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat My concern with proliferating custom flavors is that it might play havoc with the underlying use case for flavors. My understanding of flavors is that they are used to solve the resource packing problem in elastic cloud scenarios. That way you know that 256 tiny VMs fit cleanly into your hardware layout, and so do 128 medium VMs and 64 large VMs. If you allow a flavor of the week, then the packing problem re-asserts itself and scheduling becomes harder. I'm a bit surprised that flavors are used to resolve the packing problem. I thought it should be handled by the scheduler, although it's an NP-hard problem. I should have said flavors help to make the packing problem simpler for the scheduler... flavors do not solve the packing problem. As for custom flavors, I think it at least goes against a current nova assumption. Currently nova assumes flavors should only be created by an admin, who knows the cloud quite well. One example is that a flavor may contain an extra-spec, so if an extra-spec value is specified in the flavor while the corresponding scheduler filter is not enabled, then the extra-spec has no effect and may cause issues. Beyond merely extra-specs, my understanding was that because you *have* flavors you can make assumptions about packing that make the problem space smaller... someone made a nice presentation showing how having restricted flavors made scheduling easier. I can't find it right now.
It was presented at an OpenStack summit IIRC ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
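The packing intuition in this thread can be sketched with a toy first-fit placement: a small set of power-of-two flavor sizes tiles hosts cleanly, while arbitrary custom sizes strand capacity. Purely illustrative - the real nova scheduler uses filters and weights, not this loop:

```python
def first_fit(hosts_ram, requests):
    """Place each requested RAM amount on the first host with room.

    Returns one host index per request, or None where nothing fits.
    """
    free = list(hosts_ram)
    placement = []
    for ram in requests:
        for i, avail in enumerate(free):
            if avail >= ram:
                free[i] -= ram
                placement.append(i)
                break
        else:
            placement.append(None)
    return placement

# Power-of-two flavors (2048, 1024, 512 MB) tile a 4096 MB host exactly...
tiled = first_fit([4096], [2048, 1024, 512, 512])

# ...while two arbitrary 3000 MB "custom" flavors strand 1096 MB and
# leave the second request homeless.
ragged = first_fit([4096], [3000, 3000])
```

This is the sense in which a restricted flavor set makes the scheduler's life easier without "solving" bin packing.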
Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling
On 12/11/13 15:13 -0800, Christopher Armstrong wrote: Given the recent discussion of scheduled autoscaling at the summit session on autoscaling, I looked into the state of scheduling-as-a-service in and around OpenStack. I found two relevant wiki pages: https://wiki.openstack.org/wiki/EventScheduler https://wiki.openstack.org/wiki/Mistral/Cloud_Cron_details The first one proposes and describes in some detail a new service and API strictly for scheduling the invocation of webhooks. The second one describes a part of Mistral (in less detail) to basically do the same, except executing taskflows directly. Here's the first question: should scalable cloud scheduling exist strictly as a feature of Mistral, or should it be a separate API that only does event scheduling? Mistral could potentially make use of the event scheduling API (or just rely on users using that API directly to get it to execute their task flows). Second question: if the proposed EventScheduler becomes a real project, which OpenStack Program should it live under? Third question: Is anyone actively working on this stuff? :) Your work mates ;) https://github.com/rackerlabs/qonos How about merging qonos into Mistral, or at least putting it into stackforge? -Angus -- IRC: radix Christopher Armstrong Rackspace ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Glance Tasks
George, Thanks for the comments, they make a lot of sense. There is a Glance team meeting on Thursday where we would like to push a bit further on this. Would you mind sending in a few more details? Perhaps a sample of what your ideal layout would be? As an example, how would you prefer that actions be handled that do not affect a currently existing resource but ultimately create a new resource (for example, the import action)? Thanks! John On 11/11/13, 8:05 PM, George Reese wrote: I was asked at the OpenStack Summit to look at the Glance Tasks, particularly as a general pattern for other asynchronous operations. If I understand Glance Tasks appropriately, different asynchronous operations get replaced by a single general-purpose API call? In general, a unified API for task tracking across all kinds of asynchronous operations is a good thing. However, assuming this understanding is correct, I have two comments: #1 A consumer of an API should not need to know a priori whether a given operation is asynchronous. The asynchronous nature of the operation should be determined through a response. Specifically, if the client gets a 202 response, then it should recognize that the action is asynchronous and expect a task in the response. If it gets something else, then the action is synchronous. This approach has the virtue of being proper HTTP and allowing the needs of the implementation to dictate the synchronous/asynchronous nature of the API call, not a fixed contract. #2 I really don't like the idea of a single endpoint (/v2/tasks) for executing all tasks for a particular OpenStack component. Changes should be made through the resource being impacted.
-George -- George Reese (george.re...@imaginary.com) t: @GeorgeReese m: +1(207)956-0217 Skype: nspollution ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
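George's point #1 - that clients should learn sync vs async from the response rather than a priori - boils down to branching on the status code. A hedged sketch (the `task_id` field is an invented placeholder, not the actual Glance v2 task schema):

```python
def classify_response(status_code, body):
    """Decide sync vs async purely from the HTTP response.

    202 means "accepted, still working": the client should expect a
    task it can poll. Any other 2xx means the operation completed
    inline and the body is the resource itself.
    """
    if status_code == 202:
        return ("async", body.get("task_id"))
    if 200 <= status_code < 300:
        return ("sync", body)
    raise RuntimeError("error response: %d" % status_code)
```

A client written this way keeps working whether a deployment chooses to implement a given operation synchronously or asynchronously, which is exactly the flexibility the fixed `/v2/tasks` contract gives up.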
Re: [openstack-dev] Custom Flavor creation through Heat
not the video I was looking for, but he kind of makes the point about planning... http://youtu.be/2E0C9zLSINE?t=42m55s # Shawn Hartsock - Original Message - From: Shawn Hartsock hartso...@vmware.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 6:38:21 PM Subject: Re: [openstack-dev] Custom Flavor creation through Heat - Original Message - From: Yunhong Jiang yunhong.ji...@intel.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, November 12, 2013 5:39:58 PM Subject: Re: [openstack-dev] Custom Flavor creation through Heat -----Original Message----- From: Shawn Hartsock [mailto:hartso...@vmware.com] Sent: Tuesday, November 12, 2013 12:56 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat My concern with proliferating custom flavors is that it might play havoc with the underlying use case for flavors. My understanding of flavors is that they are used to solve the resource packing problem in elastic cloud scenarios. That way you know that 256 tiny VMs fit cleanly into your hardware layout, and so do 128 medium VMs and 64 large VMs. If you allow a flavor of the week, then the packing problem re-asserts itself and scheduling becomes harder. I'm a bit surprised that flavors are used to resolve the packing problem. I thought it should be handled by the scheduler, although it's an NP-hard problem. I should have said flavors help to make the packing problem simpler for the scheduler... flavors do not solve the packing problem. As for custom flavors, I think it at least goes against a current nova assumption. Currently nova assumes flavors should only be created by an admin, who knows the cloud quite well.
One example is that a flavor may contain an extra-spec, so if an extra-spec value is specified in the flavor while the corresponding scheduler filter is not enabled, then the extra-spec has no effect and may cause issues. Beyond merely extra-specs, my understanding was that because you *have* flavors you can make assumptions about packing that make the problem space smaller... someone made a nice presentation showing how having restricted flavors made scheduling easier. I can't find it right now. It was presented at an OpenStack summit, IIRC. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Proposal to recognize indirect contributions to our code base
On 11/12/2013 01:58 AM, Thierry Carrez wrote: This proposal raises several questions. (1) Is it a good idea to allow giving credit to patch sponsors? On one hand, this encourages customers of OpenStack service companies to fund sending back bugfixes and features upstream. Does it? I'm trying to wrap my head around this topic since Nick mentioned it to me in Hong Kong. I am not sure that companies being serviced would have an increased incentive to upstream their code. I'd love to spend more time framing the problem as precisely as we can before heading down the path to find a solution. Why are customers of Mirantis, eNovance, Aptira, etc. not willing to pay them to contribute patches upstream? /stef -- Ask and answer questions on https://ask.openstack.org ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Help to review UBS patch
Hi stackers, From the design summit, Boris has a great idea to improve db performance, but it needs more evaluation because of memcached. For UBS, it seems we agreed to go with the current solution and not depend on Boris's idea. Can someone help to review the ground work of UBS, https://review.openstack.org/#/c/35759/? It has got 1 approval but needs 1 more. Again, for the other 2 patches depending on it, https://review.openstack.org/#/c/35764/ (already got 2 approvals) and https://review.openstack.org/#/c/35765/, we will rebase to the latest master after the ground work is merged. Thanks - due to the time zone difference with China, I chose to send a mail to the list instead of pinging an empty IRC channel, sorry for that ;-) Thanks. -- Shane ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest
I agree that parametric testing, with input generators, is the way to go for the API testing - both positive and negative. I've looked at a number of frameworks in the past, and the one that until recently was highest on my list is Robot: http://code.google.com/p/robotframework/ I had looked at it in the past for doing parametric testing for APIs. It doesn't seem to have the generators, but it has a fair amount of infrastructure. But in my search in preparation for responding to this email, I stumbled upon a test framework I had not seen before that looks promising: http://www.squashtest.org/index.php/en/squash-ta/squash-ta-overview It does the data generation separately from the test code, the setup and the tear-down. It actually looks quite interesting, and it is open source. It might not pan out, but it's worth a look. Another page by the same group covers the data generators: http://www.squashtest.org/index.php/en/what-is-squash/tools-and-functionalities/squash-data I'm not sure just how much of the project is open source, but I suspect enough for our purposes. The other question is whether the licensing is acceptable for OpenStack.org. I'm willing to jump in and help on this, as this sort of stuff is my bailiwick. A subgroup maybe? I also want to get some of the QA/test lore written down so newbies can come up to speed sooner and we reduce some of the vagueness that causes reviews to thrash a bit. I started a blueprint: https://blueprints.launchpad.net/tempest/+spec/test-developer-documentation and, being pretty much a newbie myself, wasn't sure how to start (I have only limited access to IRC), but realized I should start an Etherpad with strawman sections and let people edit there. Hope this is useful.
--Rocky -----Original Message----- From: pcrews [mailto:glee...@gmail.com] Sent: Tuesday, November 12, 2013 2:03 PM To: Monty Taylor; openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest On 11/12/2013 12:20 PM, Monty Taylor wrote: On 11/12/2013 02:33 PM, David Kranz wrote: On 11/12/2013 01:36 PM, Clint Byrum wrote: Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800: During the freeze phase of Havana we got a ton of new contributors coming on board to Tempest, which was super cool. However, it meant we had this new influx of negative tests (i.e. tests which push invalid parameters looking for error codes), which made us realize that human creation and review of negative tests really doesn't scale. David Kranz is working on a generative model for this now. Are there some notes or other source material we can follow to understand this line of thinking? I don't agree or disagree with it, as I don't really understand, so it would be helpful to have the problems enumerated and the solution hypothesis stated. Thanks! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I am working on this with Marc Koderer, but we only just started and are not quite ready. But since you asked now... The problem with the current implementation of negative tests is that each case is represented as code in a method and targets a particular set of api arguments and an expected result. In most (but not all) of these tests there is boilerplate code surrounding the real content, which is the actual arguments being passed and the value expected. That boilerplate code has to be written correctly and reviewed. The general form of the solution still has to be worked out, but it would basically involve expressing these tests declaratively, perhaps in a yaml file. In order to do this we will need some kind of json schema for each api.
The main implementation work around this is defining the yaml attributes that make it easy to express the test cases, and somehow coming up with the json schema for each api. In addition, we would like to support fuzz testing where arguments are, at least partially, randomly generated and the return values are only examined for 4xx vs something else. This would be possible if we had json schemas. The main work is to write a generator and methods for creating bad values, including boundary conditions for types with ranges. I had thought a bit about this last year and poked around for an existing framework. I didn't find anything that seemed to make the job much easier, but if anyone knows of such a thing (python, hopefully) please let me know. The negative tests for each api would be some combination of declaratively specified cases and auto-generated ones. With regard to the json schema, there have been various attempts at this in the past, including some ideas of how wsme/pecan will help, and it might be helpful to have more project coordination. I can see a few options:

1. Tempest keeps its own json schema data
2. Each project keeps its own json schema
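The schema-driven generation David describes might look something like this: walk each property's json schema and emit argument sets that violate one constraint at a time, each of which should draw a 4xx from the API. All names here (the schema shape, `bad_values`, `generate_cases`) are invented for illustration, not an agreed Tempest design:

```python
def bad_values(prop):
    """Yield values that should be rejected for one property schema."""
    if prop["type"] == "integer":
        yield "not-an-int"                      # wrong type
        if "minimum" in prop:
            yield prop["minimum"] - 1           # below the boundary
        if "maximum" in prop:
            yield prop["maximum"] + 1           # above the boundary
    elif prop["type"] == "string":
        yield 12345                             # wrong type
        if "maxLength" in prop:
            yield "x" * (prop["maxLength"] + 1) # one char too long

def generate_cases(schema, good_args):
    """One negative case per bad value: valid args with one field broken."""
    for name, prop in schema["properties"].items():
        for bad in bad_values(prop):
            args = dict(good_args)
            args[name] = bad
            yield args

# A toy schema for a flavor-create-like call.
flavor_schema = {
    "properties": {
        "ram": {"type": "integer", "minimum": 1},
        "name": {"type": "string", "maxLength": 255},
    }
}
cases = list(generate_cases(flavor_schema, {"ram": 512, "name": "tiny"}))
```

Each generated case would be fed to the API under test with a single shared boilerplate that only asserts the response code is 4xx, which is exactly the part that today has to be hand-written per test.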
Re: [openstack-dev] Horizon Issue
Hi, On 13 November 2013 12:54, K S khyat...@gmail.com wrote: Hi I'm a newbie to horizon and I'm working on a trove-related issue. My use case is that on successful execution of the workflow in trove, I need to redirect the workflow to a view. Additionally, I want to pass a couple of parameters from the workflow to the chained view. I have tried the following approaches till now: - set the success_url in the workflow. This redirects me to the view; however, I am unable to pass parameters. You can define a get_success_url() method on your workflow and return the URL you want. There's an example here: https://github.com/openstack/horizon/blob/7a51bc7ddd57b39f4389e84ac77bd2dc98c9a3cd/openstack_dashboard/dashboards/project/networks/subnets/workflows.py#L69-L71

class CreateSubnet(network_workflows.CreateNetwork):
    # <snip>
    def get_success_url(self):
        return reverse("horizon:project:networks:detail",
                       args=(self.context.get('network_id'),))

Hope that helps, Kieran - 'return redirect()' in the handle method of the workflow. This does not help, as the expected output of the handle method is a boolean. Please help me if I am missing some concepts here, or if this is a limitation of the workflow. Can I redirect a workflow (passing parameters) to a view? I know we can easily achieve this behavior with forms, but I am not sure if this is possible with workflows in horizon. Your help will be highly appreciated. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] disable/enable services and agent tests
Hi all, In Tempest, we have some test cases that disable/enable services or agents, e.g. https://review.openstack.org/#/c/55271/1/tempest/api/compute/admin/test_services.py (test_service_enable_disable and test_disable_service_with_disable_reason). Since Tempest now runs in parallel, I'm afraid these test cases may have an impact on other tests. Does anyone have ideas about this? Should we remove such test cases? Regards, Zhi Kun ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Blueprint for Juniper OpenContrail vrouter nova vif driver support
On Nov 12, 2013, at 5:26 PM, Prasad Miriyala pmiriy...@juniper.net wrote: Hi All, A blueprint has been registered to add Nova vif driver support for Juniper vrouter. The Juniper OpenContrail Controller is a logically centralized but physically distributed Software Defined Networking (SDN) controller that is responsible for providing the management, control, and analytics functions of the virtualized network. The Juniper OpenContrail vRouter is a forwarding plane (of a distributed router) that runs in the hypervisor of a virtualized server. It extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the virtualized servers. The OpenContrail vRouter is conceptually similar to existing commercial and open source vSwitches such as for example the Open vSwitch (OVS) but it also provides routing and higher layer services. The OpenContrail Controller provides the logically centralized control plane and management plane of the system and orchestrates the vRouters. Blueprint https://blueprints.launchpad.net/nova/+spec/juniper-opencontrail-vrouter-nova-vif-driver Associated contrail neutron plugin blueprint https://blueprints.launchpad.net/neutron/+spec/juniper-plugin-with-extensions Please review blueprint, all comments are welcome. Regards, Prasad Hi Prasad: We have seen the BP and review. The problem that we Neutron core are currently looking at is something which was discussed at the Summit in Hong Kong last week: The requirement of having Smokestack/Tempest tests for plugins. As a core team, we haven't decided yet if new plugins will require these tests before being added to the tree. All plugins will require these tests by Icehouse-2, so IMHO requiring new plugins to have them before they are submitted makes sense. I suspect we will cover this next week at the weekly Neutron IRC meeting, so stay tuned. 
Thanks, Kyle ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] disable/enable services and agent tests
I think we should remove these tests, even if Tempest ran in serial, because we can't assume the enable-service action will be successful. (On the other hand, if one test fails, the Tempest gate will fail; we would need to find that out and fix it. :-) ) On 2013-11-13 10:23, Zhi Kun Liu wrote: Hi all, In Tempest, we have some test cases that disable/enable services or agents, e.g. https://review.openstack.org/#/c/55271/1/tempest/api/compute/admin/test_services.py (test_service_enable_disable and test_disable_service_with_disable_reason). Since Tempest now runs in parallel, I'm afraid these test cases may have an impact on other tests. Does anyone have ideas about this? Should we remove such test cases? Regards, Zhi Kun ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Blueprint for Juniper OpenContrail vrouter nova vif driver support
Kyle, This requirement should also apply to existing third-party plugins. Will you allow new patches to existing plugins without this requirement? I hope we don't end up creating multiple classes of citizens. Regards -Harshad On Nov 12, 2013, at 7:08 PM, Kyle Mestery (kmestery) kmest...@cisco.com wrote: On Nov 12, 2013, at 5:26 PM, Prasad Miriyala pmiriy...@juniper.net wrote: Hi All, A blueprint has been registered to add Nova vif driver support for the Juniper vRouter. The Juniper OpenContrail Controller is a logically centralized but physically distributed Software Defined Networking (SDN) controller that is responsible for providing the management, control, and analytics functions of the virtualized network. The Juniper OpenContrail vRouter is a forwarding plane (of a distributed router) that runs in the hypervisor of a virtualized server. It extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the virtualized servers. The OpenContrail vRouter is conceptually similar to existing commercial and open source vSwitches such as Open vSwitch (OVS), but it also provides routing and higher-layer services. The OpenContrail Controller provides the logically centralized control plane and management plane of the system and orchestrates the vRouters. Blueprint https://blueprints.launchpad.net/nova/+spec/juniper-opencontrail-vrouter-nova-vif-driver Associated contrail neutron plugin blueprint https://blueprints.launchpad.net/neutron/+spec/juniper-plugin-with-extensions Please review the blueprint; all comments are welcome. Regards, Prasad Hi Prasad: We have seen the BP and review. The problem that we Neutron cores are currently looking at is something which was discussed at the Summit in Hong Kong last week: the requirement of having Smokestack/Tempest tests for plugins. As a core team, we haven't decided yet if new plugins will require these tests before being added to the tree.
All plugins will require these tests by Icehouse-2, so IMHO requiring new plugins to have them before they are submitted makes sense. I suspect we will cover this next week at the weekly Neutron IRC meeting, so stay tuned.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [horizon] User registrations
Garry Chen - iPhone

On Nov 13, 2013, at 5:44 AM, Paul Belanger paul.belan...@polybeacon.com wrote:

On 13-11-11 01:31 AM, Lyle, David wrote:

I think there is certainly interest. I do think it will need to be highly configurable to be useful. The problem, as Dolph points out, is that each deployment has its own workflow.

Points of configuration:
- Does the local Keystone deployment policy support self-registration? The default is no, so in that case access to self-registration should be hidden.
- How many steps are required in the registration process?
- Is payment information required? Address?
- How is the registration confirmed: email, text, ...?
- CAPTCHA?

I think the two main reasons such a facility is not present in Horizon are:
1. Until recently, determining Keystone's access policy was not possible.
2. The actual implementation is highly deployment-dependent.

So, if we are talking features, the one I can see being most useful is this: when an admin adds user accounts with the dashboard, the email subsystem notifies each user with a one-time login URL, forcing the user to set up a password. This way the admin doesn't have to deal with transmitting passwords to each user. Actually, I guess I am talking about a password reset token.

--
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
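To make Paul's idea concrete, here is a minimal sketch of a signed one-time password-setup token. This is not Horizon code and no such feature exists there; the secret, TTL, and function names are all hypothetical, and a real deployment would tie the token to a server-side record so it can only be used once.

```python
# Hedged sketch of a one-time password-setup token (HMAC-signed,
# time-limited). All names here are illustrative, not Horizon APIs.
import hashlib
import hmac
import time

SECRET = b"deployment-secret"   # hypothetical: would come from deployment config
TOKEN_TTL = 3600                # token valid for one hour

def make_reset_token(user_id, now=None):
    """Create a token of the form 'user:timestamp:signature'."""
    ts = int(now if now is not None else time.time())
    msg = b"%s:%d" % (user_id.encode(), ts)
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "%s:%d:%s" % (user_id, ts, sig)

def check_reset_token(token, now=None):
    """Verify the signature and reject tokens older than TOKEN_TTL."""
    # rsplit tolerates user ids containing ':'; timestamp and sig cannot.
    user_id, ts, sig = token.rsplit(":", 2)
    msg = b"%s:%s" % (user_id.encode(), ts.encode())
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    age = (int(now if now is not None else time.time())) - int(ts)
    # compare_digest avoids leaking the signature via timing differences.
    return age <= TOKEN_TTL and hmac.compare_digest(sig, expected)
```

The one-time login URL the email would carry is then just a route that embeds this token and, on a valid check, shows a set-password form.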
Re: [openstack-dev] [nova] Blueprint for Juniper OpenContrail vrouter nova vif driver support
On Nov 12, 2013, at 10:02 PM, Harshad Nakil hna...@contrailsystems.com wrote:

Kyle, these requirements should also apply to the existing third-party plugins. Will you allow new patches to existing plugins without this requirement? I hope we don't end up creating multiple classes of citizens.

Regards,
-Harshad

All plugins will require the Smokestack/Tempest tests in order to be claimed as supported. This will be required by Icehouse-2. The only thing I am bringing up here is whether or not we allow new plugins into the tree without the tests, given we're already in the Icehouse development cycle. That's what I suspect we need more input on from the rest of the Neutron core team.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [mistral] Roadmap, TaskFlow Mistral, Questions to answer before start Mistral implementation
Hi everyone,

We've created several etherpads to start discussing everything related to further Mistral development. Here they are:

https://etherpad.openstack.org/p/TaskFlowAndMistral
https://etherpad.openstack.org/p/MistralQuestionsBeforeImplementation
https://etherpad.openstack.org/p/MistralRoadmap

And repeating the one that was the initial DSL/API specification draft:

https://etherpad.openstack.org/p/TaskServiceDesign

I'm hereby inviting everyone interested to start collaborating on Mistral design and development plans using these etherpads. One of the important questions to answer formally is "When do we use TaskFlow and when Mistral?". I think answering it will give us a better understanding of what we need from Mistral.

Renat Akhmerov @ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova][swift] Use of 'httpplus' instead of 'httplib' in swift plugin to support 'Expect: 100-continue' header
Hi,

This is in relation to the patch at https://review.openstack.org/#/c/55517/, which adds support for the 'Expect: 100-continue' header in the swift client during a PUT request. This lets the client obtain an interim response before actually uploading the chunks, acting as a fast fail for 401s caused by auth-token expiration, in which case the token can be refetched and the request retried. The concern here is that the 'httplib' library currently used in the swift client does not support this header. So the idea is to switch to, maybe, httpplus (https://code.google.com/p/httpplus/source/browse/httpplus/__init__.py#212), which seems to handle this header adequately. Thoughts?

--
Thanks and Regards,
Amala Basha
+91-7760972008

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
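For readers unfamiliar with the handshake, here is a minimal sketch of what 'Expect: 100-continue' looks like at the wire level. This is not the actual swiftclient patch; the endpoint and helper names are hypothetical. The point is that the client sends only the headers first, reads the server's interim status line, and uploads the body only on '100 Continue', so an expired token fails fast with no wasted upload.

```python
# Hedged sketch of the Expect: 100-continue handshake (illustrative only;
# not the swiftclient or httpplus implementation).

def build_put_headers(path, token, content_length):
    """Build PUT request headers asking for an interim response before the body."""
    lines = [
        "PUT %s HTTP/1.1" % path,
        "Host: swift.example.com",       # hypothetical endpoint
        "X-Auth-Token: %s" % token,
        "Content-Length: %d" % content_length,
        "Expect: 100-continue",          # server replies before body upload
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

def should_send_body(interim_response):
    """Parse the interim status line; upload the body only on '100 Continue'.

    A 401 here lets the client refetch the auth token and retry the
    request without having uploaded any chunks.
    """
    status_line = interim_response.splitlines()[0]
    parts = status_line.split(None, 2)   # e.g. ['HTTP/1.1', '100', 'Continue']
    return len(parts) >= 2 and parts[1] == "100"
```

httplib never waits for this interim response before writing the body, which is exactly the gap the proposed switch is meant to close.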