On 04/24/2017 05:36 PM, Sven Achtelik wrote:

Hi Kasturi,

I’ll try that. Will this be persistent across a reboot of a host, or even a stop of the complete cluster?

Thank you

Hi Sven,

This is a volume-level option (it has nothing to do with reboots), and it will remain set on the volume until you reset it manually using the 'gluster volume reset' command. You just need to execute 'gluster volume heal <volname> granular-entry-heal enable' and this will do the right thing for you.
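To make this concrete, here is a minimal sketch of the commands involved (VOLNAME stands in for your actual volume name):

```shell
# Enable granular entry self-heal on a running volume
gluster volume heal VOLNAME granular-entry-heal enable

# Verify the option is now set in the volume configuration
gluster volume get VOLNAME cluster.granular-entry-heal

# To go back to the default, reset the option explicitly
gluster volume reset VOLNAME cluster.granular-entry-heal
```

Because the setting lives in the volume's configuration rather than on any individual host, it survives host reboots and full cluster restarts.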


*Von:*knarra [mailto:kna...@redhat.com]
*Gesendet:* Montag, 24. April 2017 13:44
*An:* Sven Achtelik <sven.achte...@eps.aero>; users@ovirt.org
*Betreff:* Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:

    Hi All,

    my oVirt setup is 3 hosts with gluster and replica 3. I always
    try to stay on the current version and I’m applying
    updates/upgrades if there are any. For this I put a host in
    maintenance and also use the “Stop Gluster Service” checkbox.
    After it’s done updating I set it back to active and wait until
    the engine sees all bricks again, and then I go for the next host.

    This worked fine for me over the last months, but now that I have
    more and more VMs running, the amount of changes written to the
    gluster volume while a host is in maintenance has grown a lot,
    and it takes pretty long for the healing to complete. What I
    don’t understand is that I don’t really see much network usage in
    the GUI during that time, and it feels quite slow. The network
    for gluster is 10G and I’m quite happy with its performance in
    general; it’s just the healing that takes long. I noticed this
    because I couldn’t update the third host due to unsynced gluster
    volumes.

    Is there any limiting variable that throttles traffic during
    healing and needs to be configured? Or should I maybe change my
    update process somehow to avoid having so many changes in the queue?

    Thank you,



    Users mailing list

    Users@ovirt.org <mailto:Users@ovirt.org>


Hi Sven,

Do you have granular entry heal enabled on the volume? If not: there is a feature called granular entry self-heal which should be enabled on sharded volumes to get its benefits. With it enabled, when a brick goes down and, say, only 1 out of a million entries is created or deleted, self-heal is done only for that file; it won't crawl the entire directory.
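As a side note, while a heal is in progress you can watch the backlog shrink with the standard heal commands (assuming a reasonably recent GlusterFS; VOLNAME is a placeholder):

```shell
# List the entries still pending heal on each brick
gluster volume heal VOLNAME info

# Only the per-brick pending counts, cheaper on large volumes
gluster volume heal VOLNAME statistics heal-count
```

This is also a quick way to see why the engine still reports unsynced volumes before updating the next host.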

You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal enable/disable' command only if the volume is in the Created state. If the volume is in any state other than Created, for example Started or Stopped, execute 'gluster volume heal VOLNAME granular-entry-heal enable/disable' to enable or disable the granular-entry-heal option.
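Put together, a short sketch of the two paths (VOLNAME is a placeholder for your volume name):

```shell
# Check the volume's current state first
gluster volume info VOLNAME | grep Status

# If the volume is still in the Created state (never started):
gluster volume set VOLNAME cluster.granular-entry-heal enable

# If the volume is in any other state (Started, Stopped, ...):
gluster volume heal VOLNAME granular-entry-heal enable
```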


