>>> Alexandre DERUMIER <[email protected]> wrote on Wednesday, 31 August 2016 at 16:10:
>>>Meanwhile I tried to update the viostor driver within the vm (a W2k8)
>>>but that results in a bluescreen.
>
>>>When booting via the recovery console and loading the new driver from a
>>>current qemu driver iso the disks are all in writeback mode.
>>>So maybe the cache mode depends on the iodriver within the machine.
>>>
>>>I'll see how to upgrade the driver without having a bluescreen
>>>afterwards
>>>(by having another reason to avoid that windows crap).
>
> Very old virtio drivers (I don't remember exactly, but it was some years ago)
> didn't support flush/fua correctly.
> https://bugzilla.redhat.com/show_bug.cgi?id=837324
>
> So, it's quite possible that rbd_cache_writethrough_until_flush forces
> writethrough in this case.
That makes sense. The timeframe of the bug report matches driver version
61.63.103.3000 from 03.07.2012, which is distributed with virtio-win-0.1-30.iso
from Fedora.
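
For reference, this is roughly what the client-side settings look like in
ceph.conf on the KVM hosts (a minimal sketch; the values are just the defaults
as I understand them, not something we changed):

    [client]
        rbd cache = true
        # librbd stays in writethrough mode until the guest sends its first
        # flush; old viostor drivers without working flush/fua therefore
        # never get promoted to writeback
        rbd cache writethrough until flush = true

Disabling the second option would force writeback even for such guests, but
since their cache would then never be flushed safely, fixing the driver seems
the better way.
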
Thank you.
Regards
>
> ----- Original Message -----
> From: "Steffen Weißgerber" <[email protected]>
> To: "ceph-users" <[email protected]>, [email protected]
> Sent: Wednesday, 31 August 2016 15:43:17
> Subject: [ceph-users] Antw: Re: rbd cache mode with qemu
>
>>>> Loris Cuoghi <[email protected]> wrote on Tuesday, 30 August 2016 at 16:34:
>> Hello,
>>
>
> Hi Loris,
>
> thank you for your answer.
>
>> On 30/08/2016 at 14:08, Steffen Weißgerber wrote:
>>> Hello,
>>>
>>> after correcting the configuration for different qemu vm's with rbd disks
>>> (we removed the cache=writethrough option to have the default
>>> writeback mode) we have a strange behaviour after restarting the vm's.
>>>
>>> For most of them the cache mode is now writeback as expected. But some
>>> nevertheless use the disks in writethrough mode (at least an 'info block'
>>> reports that on the qemu monitor). This also does not change when
>>> configuring cache=writeback explicitly.
>>>
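
Side note, for the record: this is how we read the effective mode on the KVM
hosts. The domain, pool and image names below are placeholders, and going
through virsh rather than a raw monitor socket is just our habit:

    # ask the running qemu instance for its block devices and cache modes
    virsh qemu-monitor-command --hmp <vm-name> 'info block'

    # setting the mode explicitly on a plain qemu command line would look like
    -drive file=rbd:rbd/vm-disk-1,format=raw,if=virtio,cache=writeback
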
>>> On our 6 node KVM cluster we have the same behaviour for the problematic
>>> vm's on all hosts, which are configured identically with qemu 2.5.1,
>>> ceph 0.94.7 and kernel 4.4.6.
>>>
>>> The ceph cluster has version 0.94.6.
>>>
>>> For me it seems to be a problem specific to the rbd's. Is there a way to
>>> check the cache behaviour of a single rbd?
>>
>> To my knowledge, there is no such thing as an RBD's cache mode.
>>
>> The librbd cache exists in the client's memory, not on the Ceph
>> cluster's hosts. Its configuration is to be put in the ceph
>> configuration file, on each client host.
>>
>
> Yes, that's what I think too, but my guess was that the client cache
> behaviour is somehow controlled by the communication between librbd on the
> client and the ceph cluster.
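
For completeness, the relevant part of our libvirt disk definition looks
roughly like this (pool, image and monitor names are made up here):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

As far as I understand it, qemu hands the cache mode down to librbd
(cache=writeback enables the rbd client cache), so what you actually end up
with also depends on the guest driver's flush behaviour together with
rbd_cache_writethrough_until_flush.
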
>
>> Setting QEMU's disk cache mode to "writeback" informs the guest's OS
>> that it needs to explicitly flush dirty data to persistent storage when
>> needed.
>
> Meanwhile I tried to update the viostor driver within the vm (a W2k8)
> but that results in a bluescreen.
>
> When booting via the recovery console and loading the new driver from a
> current qemu driver iso the disks are all in writeback mode.
> So maybe the cache mode depends on the iodriver within the machine.
>
> I'll see how to upgrade the driver without having a bluescreen afterwards
> (by having another reason to avoid that windows crap).
>
>>
>>>
>>> Regards
>>>
>>> Steffen
>>>
>>>
>
> Regards
>
>>>
--
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Geschaeftsfuehrerin: Gudrun Kappich
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com