On 2020-09-21 15:45, Andreas Gruenbacher wrote:
On Mon, Sep 21, 2020 at 3:40 PM Pierre-Philipp Braun
wrote:
In case it matters, I am using vanilla Linux 4.18.20 and not the RHEL
nor CentOS with patches.
It seems that 4.18.20 is missing the relevant parts of the following
fixes:
a27a0c
On 2020-09-19 18:57, Pierre-Philipp Braun wrote:
On 19.09.2020 14:14, Gionatan Danti wrote:
cp --sparse=never
Hello neighbor (ciao from France).
I am not sure what you mean, as **both** the original and the copied
file checksums were in my message.
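For reference, the difference between the two copy modes can be demonstrated on any local file system; file names and sizes here are just an example:

```shell
# Create a 100 MiB file that is entirely a hole (no blocks allocated).
truncate -s 100M sparse.img

# --sparse=always (like the default --sparse=auto here) preserves holes;
# --sparse=never allocates real blocks for the whole file.
cp --sparse=always sparse.img copy-sparse.img
cp --sparse=never  sparse.img copy-full.img

# Apparent size is identical, but the allocated size differs:
du -k sparse.img copy-sparse.img copy-full.img

# The contents (and therefore the checksums) are still the same:
md5sum sparse.img copy-sparse.img copy-full.img
```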
Sorry, I somewhat managed to miss
On 2020-09-19 13:21, Pierre-Philipp Braun wrote:
Hello #linux-cluster
I am attempting to host thin-provisioned virtual disks on GFS2. In
that regard, I am experiencing a weird and unexpected issue: when
copying (or packing/extracting) a sparse file with a file system on
it, and which lives and g
On 29-08-2017 13:28, Steven Whitehouse wrote:
There is no significant overhead when reading the same file on
multiple nodes. The overhead mostly applies when writes are involved
in some form, whether mixed with other writes or reads. GFS2 does
ensure cache coherency, but in order to do that
On 29-08-2017 13:13, Steven Whitehouse wrote:
Whatever kind of storage is being used with GFS2, it needs to act as
if there were no cache, or as if there is a common cache shared
between all nodes - what we want to avoid is caches which are specific
to each node. Using individual node caching will sti
On 29-08-2017 12:59, Steven Whitehouse wrote:
Yes, it definitely needs to be set to cache=none mode. Barrier passing
is only one issue, and as you say it is down to the cache coherency,
since the block layer is not aware of the caching requirements of the
upper layers in this case.
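For KVM guests, that typically means passing cache=none for each drive; a minimal sketch, assuming a raw image on the GFS2 mount (image path and memory size are hypothetical):

```shell
# Direct I/O, bypassing the host page cache -- required so that writes
# are visible coherently across GFS2 nodes.
qemu-system-x86_64 -m 2048 \
    -drive file=/mnt/gfs2/vm1.img,format=raw,cache=none,aio=native
```

With libvirt the equivalent is cache='none' in the disk's &lt;driver&gt; element.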
Ok. So
Hi Steven,
On 29-08-2017 11:45, Steven Whitehouse wrote:
Yes, there is some additional overhead due to the clustering. You can
however usually organise things so that the overheads are minimised as
you mentioned above by being careful about the workload.
No. You want to use the default data
On 26-08-2017 11:34, Kristián Feldsam wrote:
Hello, according to the Red Hat documentation, "smaller is better". I
personally use 1 TB volumes with a 256 MB journal
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Global_File_System_2/index.html#s1-formatting-gfs2
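For example, a two-node file system with 256 MB journals could be created as follows; the cluster name, lock-table name and device are hypothetical, and -J takes the journal size in megabytes:

```shell
# One journal per node (-j 2), 256 MB each (-J 256), DLM locking.
mkfs.gfs2 -p lock_dlm -t mycluster:vmstore -j 2 -J 256 /dev/drbd0
```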
Hi list,
I am evaluating how to refresh my "standard" cluster configuration,
and GFS2 is clearly on the table ;)
GOAL: to have a 2-node HA cluster running DRBD (active/active), GFS2 (to
store disk images) and KVM (as hypervisor). The cluster has to support
live migration, but manual failover is
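For active/active DRBD under GFS2, dual-primary has to be enabled explicitly; a sketch of the relevant resource-file fragment in DRBD 8.x syntax (resource name is hypothetical):

```
resource r0 {
    net {
        protocol C;                # synchronous replication, required for dual-primary
        allow-two-primaries yes;   # both nodes may promote to Primary
    }
    startup {
        become-primary-on both;
    }
}
```

With both nodes Primary, fencing must be configured in the cluster as well, since a split brain with two writers would otherwise corrupt the shared file system.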