I can also assume that "cachemode" is not supported as an API parameter, since
creating a data disk offering via the GUI also doesn't set it in the DB table.

CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared disksize=1024 cachemode=writeback

This also does not set cachemode in the table, so my guess is that it's not
implemented in the API.
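
A quick way to double-check is to look at the column directly (a sketch,
assuming the standard "cloud" database and that the column is named
cache_mode; adjust credentials/names for your install):

    # DB/user/column names are assumptions, verify against your setup
    mysql -u cloud -p cloud -e "SELECT id, name, cache_mode FROM disk_offering ORDER BY id DESC LIMIT 5;"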

Let me know if I can help with any testing here.

Cheers

On 20 February 2018 at 13:09, Andrija Panic <andrija.pa...@gmail.com> wrote:

> Hi Paul,
>
> This doesn't directly answer your question, but here are some observations
> and a "warning" in case clients are using write-back cache at the KVM level.
>
>
> I tested performance (a long time ago) in 3 combinations (this was not
> really thorough testing, just a brief run with FIO and random write IO):
>
> - just CEPH rbd cache (on KVM side)
>            i.e. [client]
>                  rbd cache = true
>                  rbd cache writethrough until flush = true
>                  # (this is the default 32MB cache per volume, afaik)
>
> - just KVM write-back cache (I had to manually edit the disk_offering table
> to activate the cache mode, since when creating a new disk offering via the
> GUI, the disk_offering table was NOT populated with the "write-back"
> setting/value! See the SQL sketch below this list.)
>
> - both CEPH and KVM write-back cache active
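>
> For reference, the manual edit mentioned above was roughly the following
> (a sketch: it assumes the column is named cache_mode and uses a made-up
> offering id):
>
>     -- id 42 is a placeholder; look up the real offering id first
>     UPDATE disk_offering SET cache_mode = 'writeback' WHERE id = 42;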
>
> My observations were as follows, but it would be good for someone else to
> confirm them:
>
> - same performance with only CEPH caching or with only KVM caching
> - a bit worse performance with both CEPH and KVM caching active (nonsense
> combination, I know...)
>
>
> Please keep in mind that some ACS functionality is affected: KVM live
> migrations on shared storage (NFS/CEPH) are NOT supported when you use KVM
> write-back cache, since this is considered an "unsafe" migration. More info
> here:
> https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration
>
> or in short:
> "
> The libvirt management layer includes checks for migration compatibility
> based on several factors. If the guest storage is hosted on a clustered
> file system, is read-only or is marked shareable, then the cache mode is
> ignored when determining if migration can be allowed. Otherwise libvirt
> will not allow migration unless the cache mode is set to none. However,
> this restriction can be overridden with the “unsafe” option to the
> migration APIs, which is also supported by virsh, as for example in
>
> virsh migrate --live --unsafe
> "
>
> Cheers
> Andrija
>
>
> On 20 February 2018 at 11:24, Paul Angus <paul.an...@shapeblue.com> wrote:
>
>> Hi Wido,
>>
>> This is for KVM (with a Ceph backend, as it happens). The API documentation
>> is out of sync with the UI capabilities, so I'm trying to figure out whether
>> we *should* be able to set cacheMode for root disks. It seems to make quite
>> a difference to performance.
>>
>>
>> -----Original Message-----
>> From: Wido den Hollander [mailto:w...@widodh.nl]
>> Sent: 20 February 2018 09:03
>> To: dev@cloudstack.apache.org
>> Subject: Re: Caching modes
>>
>>
>>
>> On 02/20/2018 09:46 AM, Paul Angus wrote:
>> > Hey guys,
>> >
>> > Can anyone shed any light on write caching in CloudStack? cacheMode is
>> > available through the UI for data disks (but not root disks), but it is
>> > not documented as an API option for data or root disks (although it is
>> > documented as a response field for data disks).
>> >
>>
>> What hypervisor?
>>
>> In the case of KVM it's passed down to the domain XML, which is handed to
>> Qemu/KVM, which then handles the caching.
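>>
>> As an illustration (not taken from this thread), the cache mode ends up as
>> the cache attribute on the disk's driver element in the domain XML, roughly
>> like:
>>
>>   <!-- example values only; type/source depend on the storage backend -->
>>   <driver name='qemu' type='raw' cache='writeback'/>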
>>
>> The implementation varies per hypervisor, so that should be the question.
>>
>> Wido
>>
>>
>> > #huh?
>> >
>> > thanks
>> >
>> >
>>
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić
