This is needed to prevent any inconsistencies stemming from buffered
writes or cached file data during live VM migration.
Besides, for Gluster to truly honor direct-io behavior in qemu's
'cache=none' mode (which is what oVirt uses),
one needs to turn on performance.strict-o-direct and disable network.remote-dio.
Hi,
I can confirm that after setting these two options, I haven't encountered
disk corruptions anymore.
The downside is that, at least for me, it had a pretty big impact on
performance: the IOPS really went down when running fio tests inside the VM.
On Wed, Mar 27, 2019, 07:03 Krutika Dhananjay wrote:
Could you enable strict-o-direct and disable remote-dio on the src volume
as well, restart the vms on "old" and retry migration?
# gluster volume set <VOLNAME> performance.strict-o-direct on
# gluster volume set <VOLNAME> network.remote-dio off
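After applying the two settings above, the effective values can be read back with `gluster volume get` before restarting the VMs. A minimal sketch (the volume name "data" is a placeholder, not from this thread):

```shell
# "data" is a hypothetical volume name -- substitute your source volume.
# Confirm both options took effect before restarting the VMs:
gluster volume get data performance.strict-o-direct
gluster volume get data network.remote-dio
```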
-Krutika
On Tue, Mar 26, 2019 at 10:32 PM Sander Hoentjen wrote:
>
On 26-03-19 14:23, Sahina Bose wrote:
> +Krutika Dhananjay and gluster ml
>
> On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>> Hello,
>>
>> tl;dr We have disk corruption when doing live storage migration on oVirt
>> 4.2 with gluster 3.12.15. Any idea why?
>>
>> We have a 3-node oVirt cluster that is both compute and gluster-storage.
+Krutika Dhananjay and gluster ml
On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>
> Hello,
>
> tl;dr We have disk corruption when doing live storage migration on oVirt
> 4.2 with gluster 3.12.15. Any idea why?
>
> We have a 3-node oVirt cluster that is both compute and gluster-storage.
>