Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization Questions
There has been some talk in cinder meetings about making cinder-glance interactions more efficient. They are already optimised in some deployments, e.g. ceph glance and ceph cinder, and some backends cache glance images so that creating many volumes from the same image becomes very efficient. (Search the meeting logs or channel logs for 'public snapshot' to get some entry points into the discussions.) I'd like to see more work done on this, and perhaps re-examine using a cinder backend for glance. This would give some of what you're suggesting (particularly fast, low-traffic unshelve), and there is more that can be done to improve that performance, particularly if we can find a better-performing generic CoW technology than QCOW2.

As suggested in the review, in the short term you might be better off experimenting with moving to boot-from-volume instances if you have a suitable cinder deployed, since that gives you some of the performance improvements already.

On 16 February 2015 at 12:10, Kekane, Abhishek abhishek.kek...@nttdata.com wrote:

Hi Devs,

Problem Statement: Performance and storage efficiency of shelving/unshelving an instance booted from image is far worse than for an instance booted from volume. When you unshelve hundreds of instances at the same time, instance spawning time varies, and it mainly depends on the size of the instance snapshot and the network speed between the glance and nova servers.

If you have configured the file store (shared storage) as a backend in Glance for storing images/snapshots, then it's possible to improve the performance of unshelve dramatically by configuring nova.image.download.FileTransfer in nova. In this case, it simply copies the instance snapshot as if it were stored on the local filesystem of the compute node. But then again, in this case it has been observed that the network traffic between the shared storage servers and nova increases enormously, resulting in slow spawning of the instances.
I would like to gather some thoughts about how we can improve the performance of the unshelve API (booted from image only) in terms of downloading large instance snapshots from glance. I have proposed a nova-spec [1] to address this performance issue. Please take a look at it.

During the last nova mid-cycle meetup, Michael Still suggested alternative solutions to tackle this issue. Storage solutions like ceph (software-based) and NetApp (hardware-based) support exposing images from glance to nova-compute and cinder-volume with a copy-on-write feature. This way there is no need to download the instance snapshot at all, and the unshelve API will be much faster than getting it from glance.

Do you think the above performance issue should be handled in the OpenStack software as described in the nova-spec [1], or should storage solutions like ceph/NetApp be used in production environments? Apart from ceph/NetApp, are there any other options available on the market?

[1] https://review.openstack.org/#/c/135387/

Thank You,
Abhishek Kekane

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Duncan Thomas
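To make the shared-storage point concrete, here is a rough, illustrative sketch of what a file-transfer download handler does conceptually: copy the snapshot from a glance filesystem store mounted on the compute node instead of streaming it over HTTP from glance-api. This is not the actual nova.image.download.FileTransfer code; the class shape, method names, and paths below are invented for illustration only.

```python
import os
import shutil
import tempfile

# Illustrative sketch only: the real handler lives in nova's image
# download plugin machinery and integrates with glance location metadata.
class FileTransferSketch:
    """Copy an image from a shared-filesystem glance store.

    Instead of streaming the snapshot over HTTP from the glance API,
    the compute node copies it directly from the mounted store, which
    is what makes unshelve dramatically faster on shared storage
    (at the cost of load on the shared-storage servers).
    """

    def __init__(self, glance_store_mount):
        self.glance_store_mount = glance_store_mount

    def download(self, image_id, dst_path):
        src = os.path.join(self.glance_store_mount, image_id)
        if not os.path.exists(src):
            raise IOError("image %s not found in filesystem store" % image_id)
        shutil.copy(src, dst_path)  # plain local copy, no glance-api traffic
        return dst_path

# usage sketch with a temporary directory standing in for the NFS mount
store = tempfile.mkdtemp()
with open(os.path.join(store, "abc123"), "wb") as f:
    f.write(b"snapshot-bytes")
handler = FileTransferSketch(store)
out = handler.download("abc123", os.path.join(store, "instance-disk"))
```

The trade-off discussed in the thread is visible here: the copy avoids glance-api entirely, but hundreds of simultaneous unshelves all hammer the same shared-storage mount.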
[openstack-dev] [nova] test fixtures not returning memory
I was attempting to build some convenience fixtures for starting the API service inside the Nova functional tests - https://review.openstack.org/#/c/155902/

However, as soon as I refactored the code it fails on the OOM killer. Running locally there is a remarkable difference in running the code with and without that patch. Locally, without that patch each test process gets to a high-water mark of about 380 MB. With that patch the test processes quickly climb above 1 GB (which explains why they die in the gate).

Is there something fundamental about fixtures that doesn't return memory on cleanup? Is there a way we can fix that? I'd really like to not have to write our own in-tree alternative to avoid the memory leak.

-Sean

--
Sean Dague
http://dague.net
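One way to answer the "does cleanup actually return memory?" question empirically is to take a weak reference to the heavy object a fixture holds and check whether it is collectable after cleanup. The sketch below uses a minimal hand-rolled stand-in rather than the real testtools/fixtures classes (all names here are hypothetical); the point it demonstrates is that cleanups must drop every strong reference to the heavy object, or each test run pins its memory.

```python
import gc
import weakref

class FakeService:
    """Stand-in for a heavy API service object."""
    def __init__(self):
        self.buf = bytearray(10 * 1024 * 1024)  # simulate ~10 MB of state

class ServiceFixture:
    """Minimal stand-in for a testtools-style fixture.

    The key point: the cleanup drops the fixture's strong reference to
    the service, so nothing keeps it alive after cleanUp() runs.
    """
    def __init__(self):
        self._cleanups = []

    def setUp(self):
        self.service = FakeService()
        # Drop the reference on cleanup so the service can be collected.
        self._cleanups.append(lambda: setattr(self, "service", None))

    def cleanUp(self):
        while self._cleanups:
            self._cleanups.pop()()

# usage: verify the service is collectable once the fixture is cleaned up
fx = ServiceFixture()
fx.setUp()
ref = weakref.ref(fx.service)
fx.cleanUp()
gc.collect()
released = ref() is None  # True only if cleanup returned the memory
```

Pointing the same weak-reference check at a suspect real fixture (or at the test case holding it) would distinguish "fixtures leaks" from "my code keeps a reference", which is the distinction this thread turns on.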
[openstack-dev] [Fuel] fake threads in tests
Hello,

This somehow relates to [1]: in integration tests we have a class called FakeThread. It is responsible for spawning threads to simulate asynchronous tasks in the fake environment. In the BaseIntegrationTest class we have a method called _wait_for_threads that waits for all fake threads to terminate. In my understanding, what these things actually do is simulate Astute's responses.

I'm wondering whether this could be replaced by a better solution; I just want to start a discussion on the topic. My suggestion is to get rid of all this stuff and implement a predictable alternative: something along the lines of promises or coroutines that would execute synchronously. With either promises or coroutines we could simulate task responses any way we want, without having to wait using unpredictable mechanisms like sleeping and threading. No need for waiting on or killing threads. It would hopefully make our tests easier to debug and get rid of the random errors that sometimes get into our master branch.

P.

[1] https://bugs.launchpad.net/fuel/+bug/1421599
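A minimal sketch of what the proposed synchronous-promise approach could look like (all names here are invented for illustration; this is not Fuel code): the test resolves the "task" immediately with whatever response it wants to simulate, so there are no threads to spawn, wait for, or kill, and results are fully deterministic.

```python
class FakePromise:
    """Synchronous stand-in for an asynchronous Astute task.

    Instead of spawning a FakeThread and polling or sleeping until it
    finishes, the test resolves the task inline with a canned response.
    Callbacks registered before resolution fire at resolve time;
    callbacks registered after resolution fire immediately.
    """
    def __init__(self):
        self._callbacks = []
        self.result = None
        self.resolved = False

    def then(self, callback):
        if self.resolved:
            callback(self.result)
        else:
            self._callbacks.append(callback)
        return self

    def resolve(self, result):
        self.resolved = True
        self.result = result
        for cb in self._callbacks:
            cb(result)

# usage: simulate a deployment-task response without threads or waits
received = []
task = FakePromise()
task.then(received.append)
task.resolve({"status": "ready", "progress": 100})
```

Because everything runs on the test's own thread, there is nothing equivalent to _wait_for_threads: by the time resolve() returns, every callback has already run, which is exactly the predictability the proposal is after.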
Re: [openstack-dev] [nova] Feature Freeze Exception for Add config drive support for PCS containers
Hi,

There are some questions about the FFE policy that are not obvious to me as a newbie. The last version of my change was submitted on February 11th and has not been changed since that date. That change has +2 from Daniel, +1 from Garry Kotton and +1 from Pavel Kholkin. The only thing I needed to merge this code was the last +2. Today I got a comment from Michael Still saying that the merge deadline has expired.

Can you please clarify: does this mean that this FFE was not accepted? Or should I wait for further core review? Anyway, is there any chance to merge this code in the Kilo release? The change is ready and does not have any objections except the expired-deadline note from Michael.

Looking forward to your comments.

On 02/12/2015 04:26 PM, Daniel P. Berrange wrote:

On Wed, Feb 11, 2015 at 03:28:49PM +0300, aburluka wrote:

Hello, I'd like to request a feature freeze exception for the change [1]. This change implements configuration drive support in Parallels containers. It does not change existing Nova behaviour. It's the last patch in the parallels series that implements blueprint pcs-support [2]. Previous patches of that blueprint were merged, so it's the last one to implement initial Parallels Cloud Server support in the Nova compute driver. This change was reviewed by Daniel Berrange and Garry Kotton. I am looking forward to your decision about considering this change for a feature freeze exception.

I'm happy to sponsor this, given that it lets us complete the intended level of support for parallels in Kilo. It does touch a bit of shared code, but the changes are straightforward and should not cause regressions in other drivers.

Regards,
Daniel

--
Regards,
Alexander Burluka
Re: [openstack-dev] [nova] Feature Freeze Exception for Add config drive support for PCS containers
On Mon, Feb 16, 2015 at 02:53:08PM +0300, aburluka wrote:

Hi, there are some questions about the FFE policy that are not obvious to me as a newbie. The last version of my change was submitted on February 11th and has not been changed since that date. That change has +2 from Daniel, +1 from Garry Kotton and +1 from Pavel Kholkin. The only thing I needed to merge this code was the last +2. Today I got a comment from Michael Still saying that the merge deadline has expired. Can you please clarify: does this mean that this FFE was not accepted? Or should I wait for further core review? Anyway, is there any chance to merge this code in the Kilo release? The change is ready and does not have any objections except the expired-deadline note from Michael.

It would be better if the Nova core reviewers all replied to the FFE mails to say whether they support them or not, but it seems most have been ignoring all the emailed FFE requests :-( Nonetheless, there is a meeting today to finalize which FFEs will be approved.

Regards,
Daniel

--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [nova] test fixtures not returning memory
On 02/16/2015 06:43 AM, Sean Dague wrote:

I was attempting to build some convenience fixtures for starting the API service inside the Nova functional tests - https://review.openstack.org/#/c/155902/ However, as soon as I refactored the code it fails on the OOM killer. Running locally there is a remarkable difference in running the code with and without that patch. Locally, without that patch each test process gets to a high-water mark of about 380 MB. With that patch the test processes quickly climb above 1 GB (which explains why they die in the gate). Is there something fundamental about fixtures that doesn't return memory on cleanup? Is there a way we can fix that? I'd really like to not have to write our own in-tree alternative to avoid the memory leak.

Sigh, don't you just hate it when you look at a patch after the second cup of coffee and realize you forgot one thing. Turns out I failed to fully remove the API startup path in my extract, so services were starting up unbounded. Fixtures seems to be doing the right thing with its copy. Sorry for the noise, folks.

-Sean

--
Sean Dague
http://dague.net
Re: [openstack-dev] [nova] Feature Freeze Exception for Add config drive support for PCS containers
Thank you so much for your help, Daniel.

On 02/16/2015 02:57 PM, Daniel P. Berrange wrote:

It would be better if the Nova core reviewers all replied to the FFE mails to say whether they support them or not, but it seems most have been ignoring all the emailed FFE requests :-( Nonetheless, there is a meeting today to finalize which FFEs will be approved.

Regards,
Daniel

--
Regards,
Alexander Burluka