Hi Steven,
On 29-08-2017 11:45, Steven Whitehouse wrote:
> Yes, there is some additional overhead due to the clustering. You can
> however usually organise things so that the overheads are minimised as
> you mentioned above by being careful about the workload.
> No. You want to use the default data=ordered for the most part. It is
> less a question of data loss and more a question of whether, in case
> of a power outage, it is possible for a file being written to to end
> up with incorrect content. That can happen in the data=writeback case
> (where block allocation has succeeded, but the new data has not yet
> been written to disk) but not in the data=ordered case.
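
For context, these journaling modes are chosen per filesystem at mount
time; a minimal sketch, with a hypothetical device and mount point:

    # default on GFS2: data blocks are written out before the metadata commit
    mount -t gfs2 -o data=ordered /dev/clustervg/gfs2lv /mnt/gfs2

    # faster, but after a power loss a recently written file may
    # contain stale blocks
    mount -t gfs2 -o data=writeback /dev/clustervg/gfs2lv /mnt/gfs2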
I think there is a misunderstanding: I am not talking about filesystem
mount options (data=ordered vs data=writeback), but about the QEMU
virtual disk caching mode. Red Hat documentation suggests setting the
QEMU vdisk to cache=none; however, cache=writeback has significant
performance advantages in a number of situations. Since QEMU with
cache=writeback has supported barrier passing for at least five years,
and is therefore safe to use, I wondered why Red Hat officially
suggests avoiding it on GFS2. I suspect it is related to the
performance degradation caused by cache coherence traffic between the
two hosts, but I would like to be certain it is not related to
inherently unsafe operation on GFS2.
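
To make the distinction concrete, the two modes would be selected like
this on a plain QEMU command line (the image path and drive options
are hypothetical examples):

    # what Red Hat suggests: O_DIRECT I/O, bypassing the host page cache
    qemu-system-x86_64 -drive file=/images/guest.img,format=raw,if=virtio,cache=none

    # uses the host page cache; safety depends on guest flushes
    # (barriers) being passed down to the storage
    qemu-system-x86_64 -drive file=/images/guest.img,format=raw,if=virtio,cache=writeback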
> Yes, it works well. The size limit was based on fsck time, rather than
> any reliability issues. It will work reliably at much larger sizes,
> but it will take longer and use more memory.
Great. Any advice on how much time a full fsck needs on an 8+ TB
volume?
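
For reference, I would run something like the following (hypothetical
device name), after unmounting the filesystem on every node:

    # fsck.gfs2 needs the filesystem offline cluster-wide; -y answers
    # yes to all repair questions
    fsck.gfs2 -y /dev/clustervg/gfs2lv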
> I hope that answers a few more of your questions,
> Steve.
Absolutely great info. Thank you very much Steve.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8