On Wed, Nov 30, 2016 at 9:48 AM, Martin Polednik <mpoled...@redhat.com>
wrote:

> On 29/11/16 22:01 +0200, Yaniv Kaul wrote:
>
>> It appears that VDSM changes the following params:
>> vm.dirty_ratio = 5
>> vm.dirty_background_ratio = 2
>>
>> Any idea why? Because we use cache=none it's irrelevant anyway?
>>
>
> It's not really irrelevant, the host still uses disk cache. Anyway,
> there is BZ[1] with a presentation[2] that (imho reasonably) states:
>
> "Reduce dirty page limits in KVM host to allow
> direct I/O writer VMs to compete successfully
> with buffered writer processes for storage
> access"
>

Thanks, but it really doesn't make sense to me. In most cases the direct
I/O from the VMs goes to different storage than what the host writes to:
the host writes to its local disk, while the VMs write to shared storage
over NFS or a block protocol. Moreover, the VMs' I/O is not buffered, and
generally there is very little I/O coming from the host itself (I hope
so!).
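For reference, the limits in question can be inspected directly via procfs; a minimal sketch, assuming a plain Linux host (the `/etc/sysctl.d` path in the comment is the usual persistence mechanism, not necessarily what VDSM does):

```shell
# Show the current dirty page limits the thread is discussing.
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# To pin the VDSM-style values persistently, one could (as root) drop a
# sysctl.d fragment, e.g.:
#   printf 'vm.dirty_ratio = 5\nvm.dirty_background_ratio = 2\n' \
#     > /etc/sysctl.d/90-dirty-limits.conf
#   sysctl -p /etc/sysctl.d/90-dirty-limits.conf
```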

Partially unrelated - the trend today is actually to use the noop I/O
scheduler in the VMs. Deadline is fairly meaningless there, since the
host's scheduler will reorder the I/O as it sees fit anyway.
The host is most likely running deadline as well (though it could be noop
too, e.g. on an all-flash array). Therefore there is no reason for
anything but simple noop inside the VMs themselves.
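To see which scheduler a guest is actually using, a quick sketch (device names like `vda` are guest-specific assumptions; the active scheduler is shown in brackets):

```shell
# List the active I/O scheduler for every block device in the guest.
for sched in /sys/block/*/queue/scheduler; do
  [ -e "$sched" ] || continue            # skip if the glob matched nothing
  dev=${sched#/sys/block/}
  printf '%s: %s\n' "${dev%%/*}" "$(cat "$sched")"
done

# Switching a device to noop at runtime requires root, e.g.:
#   echo noop > /sys/block/vda/queue/scheduler
# or persistently via the kernel command line: elevator=noop
```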

In short, I think it's an outdated decision that perhaps should be
revisited. Not urgent, though.
Y.


> I wonder why virtual-host tuned profile doesn't contain these values:
>
> $ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
> vm.dirty_background_ratio = 5
>
> [1]https://bugzilla.redhat.com/show_bug.cgi?id=740887
> [2]http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
>
>> TIA,
>> Y.
>>
>
> _______________________________________________
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>