Krzysztof Strasburger wrote:
If I define on host1:
volume replicated
type cluster/replicate
subvolumes sub1 sub2
end-volume
and on host2:
volume replicated
type cluster/replicate
subvolumes sub2 sub1
end-volume
then the following (positive) side effects should occur:
1. After a crash, ls -R would correctly self-heal the volume on either host1
or host2 (whichever has the newer subvolume first on its list).
2. This is probably almost invisible, but the directory-read workload should
be distributed more evenly between sub1 and sub2.
Is this the right workaround?
This is not a workaround. Shuffling the order of subvolumes can have
disastrous consequences. Replicate uses the first subvolume as the lock
server, and if you shuffle the order, the two clients will use different
subvolumes as lock servers. This can cause the data to become inconsistent.
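Concretely, the safe configuration is to list the subvolumes in the same order on every client, so that both clients agree on the same lock server. A sketch reusing the volume names from the quoted configs:

```
# identical on host1 AND host2, so both clients
# use sub1 as the lock server
volume replicated
type cluster/replicate
subvolumes sub1 sub2
end-volume
```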
We plan to fix this known issue in one of the 3.x releases. If you need a
workaround in the meantime, the correct thing to do is to generate a list of
all the files on the second subvolume, like this:
[r...@backend2] # find /export/directory/ > filelist.txt
Then trigger self-heal on all the files from the mountpoint:
[r...@mountpoint] # cat filelist.txt | xargs stat
Stat-ing each file from the mountpoint triggers replicate's self-heal, which
will recreate any missing copies.
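The find | xargs pipeline above breaks on filenames containing spaces or newlines. A hedged sketch of the same idea, hardened with null-delimited output; walk_and_stat is a hypothetical helper name, not a Gluster command, and the directory argument stands in for your backend path or mountpoint:

```shell
# Hypothetical helper: stat every file under a directory, null-delimited
# so that names with spaces or newlines survive the pipeline intact.
# Run against the mountpoint, the stat() calls trigger replicate's self-heal.
walk_and_stat() {
    find "$1" -print0 | xargs -0 stat > /dev/null
}

# Usage example against a throwaway directory (stand-in for the mountpoint):
demo=$(mktemp -d)
touch "$demo/plain" "$demo/name with spaces"
walk_and_stat "$demo"
```

The single-command form also avoids the intermediate filelist.txt; if you still want the file list for auditing, `find ... -print0 > filelist.txt` pairs with `xargs -0 stat < filelist.txt`.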
Vikas
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users