Re: [Gluster-users] Continual heals happening on cluster

2016-04-17 Thread Lindsay Mathieson
I shut down all the VMs, waited for heals to finish, then stopped all gluster processes. Set the following to off: performance.write-behind datastore4, performance.flush-behind datastore4, cluster.data-self-heal, and restarted everything - same issue - assuming it actually is an
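The toggling described above maps to `gluster volume set` commands; a sketch assuming the volume is named datastore4 as in the message (3.7-era CLI syntax):

```shell
# Turn off the caching/heal options mentioned above for volume "datastore4".
gluster volume set datastore4 performance.write-behind off
gluster volume set datastore4 performance.flush-behind off
gluster volume set datastore4 cluster.data-self-heal off

# "volume info" lists any options reconfigured away from their defaults.
gluster volume info datastore4
```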

[Gluster-users] Kudos to All

2016-04-17 Thread Lindsay Mathieson
Hi developers, didn't want to be whiny all the time re possible issues :) and wanted to congratulate you on the 3.7.x release - it's come a long way since the 3.5 range I initially looked at, and I really appreciate the attention to detail for VM hosting environments. The chunking really works and

Re: [Gluster-users] Freezing during heal

2016-04-17 Thread Kevin Lemonnier
I believe Proxmox is just an interface to KVM that uses the lib, so if I'm not mistaken there aren't client logs? It's not the first time I've had the issue; it happens on every heal on the 2 clusters I have. I did let the heal finish that night and the VMs are working now, but it is pretty

Re: [Gluster-users] Number of shards in .shard directory

2016-04-17 Thread Krutika Dhananjay
Hmm I don't have an answer to that question right now because I'm no on-disk fs expert. As far as gluster is concerned, I don't actually see any issue except for the following: as of today, gluster's entry self-heal algorithm crawls the whole parent directory and compares the source and sink to

Re: [Gluster-users] Freezing during heal

2016-04-17 Thread Krutika Dhananjay
Could you share the client logs and information about the approx time/day when you saw this issue? -Krutika On Sat, Apr 16, 2016 at 12:57 AM, Kevin Lemonnier wrote: > Hi, > > We have a small glusterFS 3.7.6 cluster with 3 nodes running with proxmox > VMs on it. I did set

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Krutika Dhananjay
I just checked the code. All `statistics heal-count` does is count the number of indices (in other words, the number of files to be healed) present per brick in the .glusterfs/indices/xattrop directory - which is where we hold empty files representing inodes that need heal - and print them. I
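The counting described above can be sketched as a one-liner run directly against a brick's backend; a rough illustration, assuming shell access to the brick host (the function name and example path are hypothetical, and the `xattrop-*` exclusion assumes the index xlator's base-file naming):

```shell
# Count pending-heal index entries on one brick: roughly what
# `gluster volume heal <vol> statistics heal-count` tallies per brick.
# Each entry is an empty file named after the gfid of an inode needing
# heal; the xattrop-* base file itself is excluded from the count.
heal_count() {
    find "$1/.glusterfs/indices/xattrop" -type f ! -name 'xattrop-*' | wc -l
}

# Usage (brick path is an example): heal_count /data/brick1/datastore4
```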

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Kevin Lemonnier
> - heal info immediately starts showing an ever-changing list of shards > being healed on all three nodes. > - heal statistics heal-count shows nothing > Interestingly, I have the exact opposite on my cluster: heal info shows nothing but I constantly have between 3 and ~50 in statistics

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Joe Julian
All the steps necessary to mark something as needing to be healed are performed for every write (or metadata change). If you happen to check in the middle of that transaction, you'll see a heal pending. On 04/17/2016 12:32 PM, Lindsay Mathieson wrote: On 18/04/2016 12:23 AM, Krutika Dhananjay

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 6:55 AM, Joe Julian wrote: All the steps necessary to mark something as needing to be healed are performed for every write (or metadata change). If you happen to check in the middle of that transaction, you'll see a heal pending. So seeing ongoing heals on a busy volume is normal?

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 5:49 AM, Kevin Lemonnier wrote: Interestingly, I have the exact opposite on my cluster: heal info shows nothing but I constantly have between 3 and ~50 in statistics heal-count. I assumed it was just syncing the writes of the different VMs, that it was expected. I was hoping

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 12:23 AM, Krutika Dhananjay wrote: I just checked the code. All `statistics heal-count` does is count the number of indices (in other words, the number of files to be healed) present per brick in the .glusterfs/indices/xattrop directory - which is where we hold empty files

Re: [Gluster-users] Number of shards in .shard directory

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 12:42 AM, Krutika Dhananjay wrote: But with granular entry self-heal feature which is going to be introduced in 3.8, this will also be a non-issue since there AFR will record the exact shards that were created/deleted while a brick is down and only sync them to the sink brick once
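Based on the roadmap mentioned, the granular entry self-heal feature in 3.8 will presumably be opt-in per volume; a hedged sketch (the option name is an assumption and may differ in the final release):

```shell
# Enable granular entry self-heal on a volume (3.8+; assumed option name).
# AFR then records exactly which shards changed while a brick was down,
# instead of crawling the whole parent directory during heal.
gluster volume set datastore4 cluster.granular-entry-heal on
```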

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 5:32 AM, Lindsay Mathieson wrote: - shutdown the VMs - waited until all heals completed - restarted the VMs - heal info immediately starts showing an ever-changing list of shards being healed on all three nodes. - heal statistics heal-count shows nothing I missed a step, I

[Gluster-users] Multiple volumes on the same disk

2016-04-17 Thread Lindsay Mathieson
As per the subject - my underlying file system is ZFS RAID10. Is there any problem with creating multiple volumes with bricks on the same ZFS pool? Is there cooperation on reads/writes? - Thinking of separating out the various groups (Dev, Support, Testing, Office) into their own volumes. But if
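One common layout for this is a ZFS dataset per brick, all carved from the shared pool; a sketch with hypothetical pool, dataset, and host names:

```shell
# One dataset per brick on the shared pool "tank"
# (datasets mount at /tank/<name> by default).
zfs create tank/gv-dev
zfs create tank/gv-support

# A replica-3 volume per group, with bricks on those datasets.
gluster volume create dev replica 3 \
    node1:/tank/gv-dev/brick node2:/tank/gv-dev/brick node3:/tank/gv-dev/brick
```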

Re: [Gluster-users] Number of shards in .shard directory

2016-04-17 Thread Krutika Dhananjay
As per https://github.com/gluster/glusterweb/blob/master/source/community/roadmap/3.8/index.md , it would be around end-of-May/June. -Krutika On Mon, Apr 18, 2016 at 1:34 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 18/04/2016 12:42 AM, Krutika Dhananjay wrote: > >> But with

Re: [Gluster-users] How accurate is heal statistics heal-count?

2016-04-17 Thread Lindsay Mathieson
On 18/04/2016 6:55 AM, Joe Julian wrote: All the steps necessary to marking something as needing healed are performed for every write (or metadata change). If you happen to check in the middle of that transaction, you'll see a heal pending. Ok, have been off walking the dogs and had time to

Re: [Gluster-users] Multiple volumes on the same disk

2016-04-17 Thread Gmail
I’ve tried more than one volume on the same Zpool, but with a separate ZFS share for every volume. I didn’t find any performance issues compared to XFS on LVM. PS: ZFS by default stores the extended attributes in a hidden directory instead of extending the file inode size like XFS does!
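The xattr behaviour mentioned is tunable on ZFS-on-Linux, and since gluster reads and writes xattrs constantly, the system-attribute mode is often recommended; a sketch with a hypothetical dataset name:

```shell
# Store xattrs in the inode (system-attribute based) rather than in a
# hidden directory, avoiding an extra lookup on every xattr access.
zfs set xattr=sa tank/gv-dev

# Confirm the property took effect.
zfs get xattr tank/gv-dev
```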

Re: [Gluster-users] Multiple volumes on the same disk

2016-04-17 Thread Lindsay Mathieson
On 18 April 2016 at 13:35, Gmail wrote: > I’ve tried more than one volume on the same Zpool, but with separate ZFS > share for every volume. I didn’t find any performance issues compared to XFS > on LVM. > Thanks Bishoy, good to know > > PS: ZFS by default stores the

[Gluster-users] Self heal files

2016-04-17 Thread jayakrishnan mm
Hi, The self-heal daemon refers to the .glusterfs/indices/xattrop directory to see which files are to be healed, and this dir should contain the gfids of those files. I also see some other IDs, prefixed with xattrop- , for example :
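As far as I know, the xattrop- prefixed entry is the base index file, and the per-gfid entries are hard links to it; an illustrative listing (the brick path and names are examples, not from the original message):

```shell
# -i prints inode numbers: the gfid-named entries should share the inode
# number of the xattrop-<uuid> base file, since they are hard links to it.
#
# Expected shape of the directory:
#   xattrop-<uuid>     <- base file
#   <gfid-1>           <- pending-heal entry, hard link to the base file
#   <gfid-2>
ls -li /data/brick1/datastore4/.glusterfs/indices/xattrop
```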