I shut down all the VMs, waited for heals to finish, then stopped all
gluster processes.
Set the following to off on the datastore4 volume:
  performance.write-behind
  performance.flush-behind
  cluster.data-self-heal
and restarted everything - same issue - assuming it actually is an issue.
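For reference, the CLI steps behind that (just a sketch - datastore4 is the
volume name from the settings above):

    gluster volume set datastore4 performance.write-behind off
    gluster volume set datastore4 performance.flush-behind off
    gluster volume set datastore4 cluster.data-self-heal off

and the same commands with `on` to put things back afterwards.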
Hi developers, I didn't want to be whiny all the time re possible issues
:) so let me congratulate you on the 3.7.x release - it's come a long way
since the 3.5 range I initially looked at, and I really appreciate the
attention to detail for VM hosting environments.
The chunking really works and ...
I believe Proxmox is just an interface to KVM that uses the library, so if
I'm not mistaken there aren't any client logs?
It's not the first time I've had the issue; it happens on every heal on the
2 clusters I have.
I did let the heal finish that night and the VMs are working now, but it is
pretty ...
Hmm I don't have an answer to that question right now because I'm no
on-disk fs expert.
As far as gluster is concerned, I don't actually see any issue except for
the following: as of today, gluster's entry self-heal algorithm crawls the
whole parent directory and compares the source and sink to figure out which
entries need to be created or deleted on the sink.
Could you share the client logs and information about the approx time/day
when you saw this issue?
-Krutika
On Sat, Apr 16, 2016 at 12:57 AM, Kevin Lemonnier wrote:
> Hi,
>
> We have a small GlusterFS 3.7.6 cluster with 3 nodes running with Proxmox
> VMs on it. I did set ...
I just checked the code. All `statistics heal-count` does is count the
number of indices (in other words, the number of files to be healed) present
per brick in the .glusterfs/indices/xattrop directory - which is where we
hold empty files representing inodes that need heal - and print them.
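If anyone wants to sanity-check that on their own setup, a rough comparison
(the brick path /data/brick and volume name datastore4 are placeholders):

    # what the CLI reports
    gluster volume heal datastore4 statistics heal-count
    # raw index entries on one brick - should roughly match that brick's count
    ls /data/brick/.glusterfs/indices/xattrop | wc -l

The raw listing can also include the xattrop-* base file and entries for
in-flight writes, so small differences under load are expected.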
> - heal info immediately starts showing an ever-changing list of shards
> being healed on all three nodes.
> - heal statistics heal-count shows nothing
>
Interestingly, I have the exact opposite on my cluster: heal info shows
nothing but I constantly have between 3 and ~50 in statistics heal-count.
All the steps necessary to mark something as needing heal are performed
for every write (or metadata change). If you happen to check in the middle
of that transaction, you'll see a heal pending.
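That transient state is easy to see on a busy volume, e.g. (the volume name
here is a placeholder):

    watch -n 1 'gluster volume heal datastore4 info'

Entries that flicker in and out within a second or two are just in-flight
write transactions, not real heals.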
On 04/17/2016 12:32 PM, Lindsay Mathieson wrote:
On 18/04/2016 12:23 AM, Krutika Dhananjay wrote:
On 18/04/2016 6:55 AM, Joe Julian wrote:
All the steps necessary to mark something as needing heal are performed
for every write (or metadata change). If you happen to check in the middle
of that transaction, you'll see a heal pending.
So seeing ongoing heals on a busy volume is normal?
On 18/04/2016 5:49 AM, Kevin Lemonnier wrote:
Interestingly, I have the exact opposite on my cluster: heal info shows
nothing but I constantly have between 3 and ~50 in statistics heal-count. I
assumed it was just syncing the writes of the different VMs, and that it
was expected.
I was hoping ...
On 18/04/2016 12:23 AM, Krutika Dhananjay wrote:
I just checked the code. All `statistics heal-count` does is count
the number of indices (in other words, the number of files to be
healed) present per brick in the .glusterfs/indices/xattrop directory
- which is where we hold empty files ...
On 18/04/2016 12:42 AM, Krutika Dhananjay wrote:
But with the granular entry self-heal feature, which is going to be
introduced in 3.8, this will also be a non-issue, since there AFR will
record the exact shards that were created/deleted while a brick is down
and only sync those to the sink brick once it is back up.
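A note for when 3.8 lands: the expectation is that this would be toggled
per volume with something like

    gluster volume set <volname> cluster.granular-entry-heal on

- treat the exact option name as tentative until the release notes confirm
it.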
On 18/04/2016 5:32 AM, Lindsay Mathieson wrote:
- shut down the VMs
- waited until all heals completed
- restarted the VMs
- heal info immediately starts showing an ever-changing list of
shards being healed on all three nodes.
- heal statistics heal-count shows nothing
I missed a step, I ...
As per the subject - my underlying file system is ZFS RAID10. Is there
any problem with creating multiple volumes with bricks on the same ZFS
pool? Is there cooperation on reads/writes?
- Thinking of separating out the various groups (Dev, Support,
Testing, Office) into their own volumes (sketch below).
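For concreteness, the layout I have in mind is one dataset per brick on the
shared pool, roughly (pool/dataset/host names made up):

    zfs create tank/gluster
    zfs create tank/gluster/dev
    zfs create tank/gluster/office
    # one gluster volume per dataset, a brick on each node:
    gluster volume create dev replica 3 \
        node1:/tank/gluster/dev/brick \
        node2:/tank/gluster/dev/brick \
        node3:/tank/gluster/dev/brick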
But if ...
As per
https://github.com/gluster/glusterweb/blob/master/source/community/roadmap/3.8/index.md,
it would be around end of May/June.
-Krutika
On Mon, Apr 18, 2016 at 1:34 AM, Lindsay Mathieson
<lindsay.mathie...@gmail.com> wrote:
> On 18/04/2016 12:42 AM, Krutika Dhananjay wrote:
>
>> But with ...
On 18/04/2016 6:55 AM, Joe Julian wrote:
All the steps necessary to mark something as needing heal are performed
for every write (or metadata change). If you happen to check in the middle
of that transaction, you'll see a heal pending.
OK, have been off walking the dogs and had time to ...
I've tried more than one volume on the same zpool, but with a separate ZFS
dataset for every volume. I didn't find any performance issues compared to
XFS on LVM.
PS: ZFS by default stores the extended attributes in a hidden directory
instead of extending the file inode size like XFS does!
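The usual mitigation there, as far as I know (the dataset name is a
placeholder - check the caveats for your ZFS-on-Linux version before
applying):

    # keep xattrs in the inode (system attributes) instead of hidden dirs
    zfs set xattr=sa tank/gluster
    # commonly paired with POSIX ACL support
    zfs set acltype=posixacl tank/gluster

Gluster leans heavily on xattrs, so the directory-based default can cost a
lot of extra metadata I/O.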
On 18 April 2016 at 13:35, Gmail wrote:
> I've tried more than one volume on the same zpool, but with a separate
> ZFS dataset for every volume. I didn't find any performance issues
> compared to XFS on LVM.
>
Thanks Bishoy, good to know
>
> PS: ZFS by default stores the ...
Hi,
The self-heal daemon refers to the .glusterfs/indices/xattrop directory to
see which files are to be healed, and this directory should contain the
gfids of those files.
I also see some other IDs which are prefixed with xattrop-, for
example: ...
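For context, this is how I'm looking at them on one brick (the brick path
is anonymized), and my guess is the gfid entries are hard links to that
xattrop-* base file - the link counts should confirm:

    ls -l /data/brick/.glusterfs/indices/xattrop
    # same inode => hard links; link count of xattrop-* = entries + 1
    stat -c '%h %i %n' /data/brick/.glusterfs/indices/xattrop/*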