> [...] David Gossage <dgoss...@carouselchecks.com> wrote:
>> On Tue, Aug 30, 2016 at 7:18 [...] wrote:
>>> On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <dgoss...@carouselchecks.com> wrote:
does heals also in the
>>>>>>>>>>>>>>>>> same order.
>>>>>>>>>>>>>>>>> But if it gets interrupted in the middle (say because
>>>>>>>>>>>>>>>>> self-heal-daemo
that node. From my
>>>>>>>>>>>>>>> experience
>>>>>>>>>>>>>>> other day telling it to heal full again did nothing regardless
>>>>>>>>>>>>>>> of nod
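For reference, re-triggering and then watching a full heal is usually just the following (assuming the volume is named GLUSTER1 as in the log lines further down; adjust to your own volume name):

    # start another full self-heal crawl from any one node
    gluster volume heal GLUSTER1 full

    # list entries still pending heal, per brick
    gluster volume heal GLUSTER1 info

    # per-brick counts only, quicker to watch than the full listing
    gluster volume heal GLUSTER1 statistics heal-count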
>>>> [...] where it is absent, and marks the fact that the file or directory might need data/entry and metadata heal too (this also means that an index is [...]
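As a rough illustration of that index (the default location under the brick root and the brick path shown later in this thread are assumptions, not taken from the poster's setup):

    # entries here (other than the base xattrop-<uuid> file) are gfids that the
    # self-heal daemon still has marked as possibly needing heal
    ls /gluster2/brick1/1/.glusterfs/indices/xattrop | head

    # with granular entry self-heal enabled there is a separate entry-changes index
    ls /gluster2/brick1/1/.glusterfs/indices/entry-changes 2>/dev/null | head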
>>>>> [...] shards for reads.
>>>>>
>>>>> Also attached are the glustershd logs from the 3 nodes, along with the test node I tried yesterday with same results.
>>>>> [...] 0-glustershard-replicate-0: starting full sweep on subvol glustershard-client-1
>>>>> [2016-08-30 13:56:25.224616] I [MSGID: 108026] [afr-self-heald.c:646:afr_shd_full_healer] 0-glustershard-replicate-0: [...]
>>>>> My suspicion is that this is what happened on your setup. Could you confirm if that was the case?
>>>> [...] subvol glustershard-client-0
>>>>
>>>> While when looking at past few days of the 3 prod nodes I only found that on my 2nd node:
>>>> [2016-08-27 01:26:42.638772] I [MSGID: 108026] [afr-self-heald.c:646:afr_shd_full_healer] 0-GLUSTER1-replicate-0: finished full sweep on subvol GLUSTER1-client-1
>>>> [2016-08-27 20:03:42.560188] I [MSGID: 108026] [afr-self-heald.c:646:afr_shd_full_healer] 0-GLUSTER1-replicate-0: starting full sweep on subvol [...]
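Those sweep messages come from the self-heal daemon log, so a quick way to compare all three nodes is something along these lines (default log path assumed):

    # run on each node: shows when full sweeps started/finished on each subvolume
    grep -E 'afr_shd_full_healer.* (starting|finished) full sweep' /var/log/glusterfs/glustershd.log | tail -20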
>>> OK. How did you figure it was not adding any new files? I need to know what places you were monitoring to come to this conclusion.
>> [...]s on shard directory. Directories visible from mount healed quickly. This was with one VM so it has only 800 shards as well. After hours at work it had added a total of 33 shards to be healed. I [...]
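One way to watch that kind of progress is to compare the shard count on the replaced brick against a healthy one, and against the heal queue; a small sketch, assuming the brick paths and the "glustershard" test volume name suggested by the logs above:

    # number of shard files actually present on this brick (run on each node)
    find /gluster2/brick1/1/.shard -maxdepth 1 -type f | wc -l

    # how many entries each brick still thinks need healing
    gluster volume heal glustershard statistics heal-count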
>> On [...] AM, David Gossage <dgoss...@carouselchecks.com> wrote:
>>> Attached brick and client logs from test machine where same behavior occurred; not sure if anything new is there. It's still on 3.8.
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.71.10:/gluster2/brick1/1
>>> Brick2: 192.168.71.11:/gluster2/brick2/1
>>> Brick3: 192.[...]
>>> [...]
>>> features.shard: on
>>> server.allow-insecure: on
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> network.remote-dio: on
>>> cluster.eager-lock: enable
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.quick-read: off
>>> cluster.self-heal-window-size: [...]
>>> nfs.addr-namelookup: off
>>> nfs.disable: on
>>> performance.read-ahead: off
>>> performance.readdir-ahead: on
>>> cluster.granular-entry-heal: on
>>> On Mon, Aug 29, 2016 at 2:20 PM, David Gossage <dgoss...@carouselchecks.com> wrote:
>>> [...] at 7:01 AM, Anuradha Talur <ata...@redhat.com> wrote:
>>>
>>>> - Original Message -
>>>> > From: "David Gossage" <dgoss...@carouselchecks.com>
>>>> > To: "Anuradha Talur" <ata...@redhat.com>
>>>> > Cc: "gluster-users@gluster.org List" <Gluster-users@gluster.org>, "Krutika Dhananjay" <kdhan...@redhat.com>
On Mon, Aug 29, 2016 at 11:25 PM, Darrell Budic
wrote:
> I noticed that my new brick (replacement disk) did not have a .shard
> directory created on the brick, if that helps.
>
> I removed the affected brick from the volume and then wiped the disk, did
> an add-brick, and everything healed right up.
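For anyone else hitting this, the remove/wipe/re-add cycle described above would look roughly like the following on a replica 3 volume (host and brick names here are made up for illustration, not the actual ones used):

    # drop the bad brick, temporarily reducing the replica count
    gluster volume remove-brick GLUSTER1 replica 2 server3:/gluster2/brick3/1 force

    # wipe / re-create the brick directory on server3, then add it back
    gluster volume add-brick GLUSTER1 replica 3 server3:/gluster2/brick3/1

    # kick off a full heal so the fresh brick gets repopulated
    gluster volume heal GLUSTER1 full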
Ignore. I just realised you're on 3.7.14. So then the problem may not be
with granular entry self-heal feature.
-Krutika
On Tue, Aug 30, 2016 at 10:14 AM, Krutika Dhananjay
wrote:
> OK. Do you also have granular-entry-heal on - just so that I can isolate
> the problem area.
OK. Do you also have granular-entry-heal on - just so that I can isolate
the problem area.
-Krutika
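A quick way to check that (a sketch, assuming the volume is named GLUSTER1 as in the logs above):

    # reconfigured options show up in volume info; look for cluster.granular-entry-heal
    gluster volume info GLUSTER1 | grep -i granular

    # it can be toggled with a normal volume set if needed
    gluster volume set GLUSTER1 cluster.granular-entry-heal on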
On Tue, Aug 30, 2016 at 9:55 AM, Darrell Budic
wrote:
> I noticed that my new brick (replacement disk) did not have a .shard
> directory created on the brick, if that helps.
I noticed that my new brick (replacement disk) did not have a .shard directory
created on the brick, if that helps.
I removed the affected brick from the volume and then wiped the disk, did an
add-brick, and everything healed right up. I didn’t try and set any attrs or
anything else, just
> > Cc: "gluster-users@gluster.org List" <Gluster-users@gluster.org>, "Krutika Dhananjay" <kdhan...@redhat.com>
> > Sent: Monday, August 29, 2016 5:12:42 PM
> > Subject: Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow
> >
> > On Mon, Aug 29 [...]
Just to let you know I’m seeing the same issue under 3.7.14 on CentOS 7. Some
content was healed correctly, now all the shards are queued up in a heal list,
but nothing is healing. Got similar brick errors logged to the ones David was
getting on the brick that isn’t healing:
[2016-08-29
Got it. Thanks.
I tried the same test and shd crashed with SIGABRT (well, that's because I
compiled from src with -DDEBUG).
In any case, this error would prevent full heal from proceeding further.
I'm debugging the crash now. Will let you know when I have the RC.
-Krutika
On Mon, Aug 29, 2016 at 7:14 AM, David Gossage
wrote:
> On Mon, Aug 29, 2016 at 5:25 AM, Krutika Dhananjay
> wrote:
>
>> Could you attach both client and brick logs? Meanwhile I will try these
>> steps out on my machines and see if it is easily recreatable.
> > From: "Krutika Dhananjay" <kdhan...@redhat.com>
> > To: "David Gossage" <dgoss...@carouselchecks.com>
> > Cc: "gluster-users@gluster.org List" <Gluster-users@gluster.org>
> > Sent: Monday, August 29, 2016 3:55:04 PM
> > Subject: Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow
> >
> > Could you attach both client and brick logs? Meanwhile I will try [...]
Response inline.
- Original Message -
> From: "Krutika Dhananjay" <kdhan...@redhat.com>
> To: "David Gossage" <dgoss...@carouselchecks.com>
> Cc: "gluster-users@gluster.org List" <Gluster-users@gluster.org>
> Sent: Monday, August 29, 2016 3:55:04 PM
Could you attach both client and brick logs? Meanwhile I will try these
steps out on my machines and see if it is easily recreatable.
-Krutika
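For what it's worth, all of those logs normally live under /var/log/glusterfs/ on each node; a rough way to bundle them up (default paths assumed):

    # grabs the fuse client log, glustershd.log and the per-brick logs from one node;
    # the fuse client log is named after the mount point with slashes turned into dashes
    tar czf gluster-logs-$(hostname).tar.gz \
        /var/log/glusterfs/*.log \
        /var/log/glusterfs/bricks/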
On Mon, Aug 29, 2016 at 2:31 PM, David Gossage
wrote:
> CentOS 7 Gluster 3.8.3
>
> Brick1: