On 04/24/2017 05:03 PM, Sven Achtelik wrote:
my oVirt setup is 3 hosts with Gluster and replica 3. I always try to
stay on the current version and apply updates/upgrades whenever there
are any. For this I put a host into maintenance and also use the "Stop
Gluster Service" checkbox. After it's done updating, I set it back
to active, wait until the engine sees all bricks again, and then
go on to the next host.
This worked fine for me over the last months, but now that I have more
and more VMs running, the amount of changes written to the gluster
volume while a host is in maintenance has grown a lot, and it takes
pretty long for the healing to complete. What I don't understand is
that I don't really see much network usage in the GUI during that
time, and it feels quite slow. The network for Gluster is 10G and I'm
quite happy with its performance; it's just the healing that takes
long. I noticed this because I couldn't update the third host due to
unsynced gluster volumes.
Is there any limiting variable that slows down traffic during healing
and needs to be configured? Or should I maybe change my update
process somehow to avoid queuing up so many changes?
Users mailing list
Do you have granular entry heal enabled on the volume? If not, there
is a feature called granular entry self-heal which should be enabled
on sharded volumes to get its benefits. Then, when a brick goes down
and, say, only 1 of a million entries is created or deleted, self-heal
is done only for that file; it won't crawl the entire directory.
You can run the "gluster volume set VOLNAME cluster.granular-entry-heal
enable / disable" command only if the volume is in the Created state.
If the volume is in any state other than Created, for example Started
or Stopped, execute "gluster volume heal VOLNAME granular-entry-heal
enable / disable" to enable or disable the granular-entry-heal option.