Hi Piotr,

https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration
 

So yes, since CEPH is considered "clustered storage", live migration works - but 
in the case of QCOW2 on NFS it doesn't actually work.
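
For reference, the cache mode ends up as the "cache" attribute of the <driver> 
element in the libvirt domain XML, so you can check what a running guest is 
actually using (the VM name below is just an example):

  virsh dumpxml i-2-10-VM | grep "<driver"
  # e.g.:  <driver name='qemu' type='raw' cache='writeback'/>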

BTW, as for CEPH, you would probably also want to check the RBD client-side 
write-back cache (versus/instead of qemu cache=writeback) - i.e. the 32 MB 
write-back cache in librbd, per volume, etc.
I believe I did test one caching scheme versus the other (I was operating a 
CEPH-backed CloudStack installation myself a while ago) - afaik there were no 
visible performance/latency differences between RBD write-back caching and qemu 
write-back caching (though with both active at once there were performance issues).
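
If you want to experiment with the librbd side yourself, it's toggled on the 
KVM host in ceph.conf - the values below are just the upstream defaults, not a 
recommendation:

  # /etc/ceph/ceph.conf on the hypervisor (client side)
  [client]
  rbd cache = true
  rbd cache size = 33554432                   # 32 MB per volume
  rbd cache max dirty = 25165824              # dirty bytes allowed before writeback
  rbd cache writethrough until flush = true   # stays writethrough until guest flushes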

Kind regards,
Andrija

andrija.pa...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DP, UK
@shapeblue
  
 


-----Original Message-----
From: Piotr Pisz <pi...@piszki.pl> 
Sent: 08 March 2019 12:44
To: users@cloudstack.apache.org
Subject: RE: downloaded template vs disk service offering

Hey Andrija,

Thank you for the explanation - now I finally understand how it works :-) As for 
live migration, migrating such machines (with cache=writeback) on Ceph RBD 
(CentOS 7, KVM) works without any problem.

Regards,
Piotr


-----Original Message-----
From: Andrija Panic <andrija.pa...@shapeblue.com>
Sent: Friday, March 8, 2019 9:22 AM
To: users@cloudstack.apache.org
Subject: RE: downloaded template vs disk service offering

Hi Piotr,

It's true that setting the cache mode for a Disk Offering via the GUI doesn't 
get written to the DB (does the API work fine - did you test it? If so, please 
raise a GitHub issue with a description).
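
Something along these lines with CloudMonkey should exercise the API path (the 
"cachemode" parameter name is from the API docs as I remember them - please 
verify against your ACS version):

  cloudmonkey create diskoffering name=wb-test displaytext="writeback test" \
      disksize=20 cachemode=writeback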

In general, you can initially set the cache mode for a disk only on the Disk 
Offering (possibly also on the Compute Offering for the root disk).
When you make a new template from an existing disk, the new template will have 
the source_template_id field in the vm_templates table (on its row) set to the 
original template from which you created the volume (template --> disk --> new 
template).
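
You can see that chain in the DB with something like this (note: on the schemas 
I've seen the table is actually named vm_template, singular - check yours):

  mysql -u cloud -p cloud -e \
      "SELECT id, name, source_template_id FROM vm_template WHERE removed IS NULL;"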

Also worth noting - all volumes inherit this cache mode setting "on the fly" 
(when you start the VM) from their template (all volumes have a "template_id" 
field in the "volumes" table).

So if you set cache_mode (via the DB) for a specific template, it will affect 
ALL VMs created from that template (once you stop and start those VMs, 
obviously) - i.e. when you deploy a new VM, some column values are copied over 
to the actual volume row, but others, like this cache_mode, are just read on 
the fly.
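
Purely as an illustration of that DB tweak (the cache_mode column on the 
template row and the template id 209 are assumptions here - inspect your schema 
first and take a DB backup before touching anything):

  mysql -u cloud -p cloud -e \
      "UPDATE vm_template SET cache_mode='writeback' WHERE id=209;"
  # then stop/start (not reboot) the affected VMs to pick the setting up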

Nevertheless, I would strongly discourage using write-back cache for disks, 
since:

- it can be severely risky: in case of power loss, kernel panic, etc., you can 
end up with corrupted volumes.
- VMs can NOT be live migrated (at least with KVM) with the cache set to 
anything other than none (google it yourself; rough example below) - happy to 
learn whether this limitation exists on other hypervisors as well.
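
For KVM it's libvirt that blocks it; from memory the failure looks roughly like 
this (exact wording varies by libvirt version):

  virsh migrate --live i-2-10-VM qemu+tcp://dest-host/system
  # error: Unsafe migration: Migration may lead to data corruption if disks
  #        use cache != none
  # libvirt can be forced with --unsafe, but that is exactly the corruption
  # risk described above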

Fine to play with, but I would skip it in production.

Kind regards,
Andrija

andrija.pa...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DP, UK
@shapeblue
  
 


-----Original Message-----
From: Piotr Pisz <pi...@piszki.pl>
Sent: 08 March 2019 08:32
To: users@cloudstack.apache.org
Subject: downloaded template vs disk service offering

Hi Users :-)

I have a question.
If I make a template from a disk for which the cache=writeback parameter was 
set, all new machines have cache=writeback. And that's OK.
If I load a template from outside, the volume has cache=none. I have not found 
a place in the DB where I could change this parameter.
Do you know where we can set the template cache?

PS: A disk offering made with the GUI does not set the cache parameter in the DB...
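
(For reference, this is roughly how I checked it - standard schema names, 
verify on your install:

  mysql -u cloud -p cloud -e \
      "SELECT id, name, cache_mode FROM disk_offering ORDER BY id DESC LIMIT 5;"

cache_mode stays NULL for the offerings I created via the GUI.)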

Regards,
Piotr

