Re: [Gluster-users] self-heal trouble after changing arbiter brick

2018-02-09 Thread Karthik Subrahmanya
On 09-Feb-2018 7:07 PM, "Seva Gluschenko" wrote:

Hi Karthik,


Thank you very much; that puts me much more at ease. Below is the getfattr
output for a file from all the bricks:

root@gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack

getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-6=0x00010001
trusted.bit-rot.version=0x02005a0d2f650005bf97
trusted.gfid=0xe46e9a655128456bba0d98568d432717

root@gv3 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack

getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-6=0x00010001
trusted.bit-rot.version=0x02005a0d2f6900076620
trusted.gfid=0xe46e9a655128456bba0d98568d432717

root@gv1 ~ # getfattr -d -e hex -m . /data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack

getfattr: Removing leading '/' from absolute path names
# file: data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack
trusted.gfid=0xe46e9a655128456bba0d98568d432717

Is it okay that only gfid info is available on the arbiter brick?

Yes, it is fine. On both the data bricks you have the good copy, and the
entry has also been created on the arbiter brick. The arbiter brick is
being blamed by both the data bricks, so the heal is happening in the
right direction and it should complete in some time.
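
For reference when reading these xattrs: a trusted.afr.* value is 12 bytes,
i.e. three big-endian 32-bit counters for pending data, metadata and entry
operations, in that order (at fewer than 24 hex digits, the values above
look truncated in the archive). A minimal bash sketch to decode one, using
an illustrative full-length value:

# Decode a trusted.afr xattr (hex string, 0x prefix stripped) into its
# three big-endian 32-bit counters: data, metadata, entry.
v=000000010000000000000000   # illustrative value, not one of the above
echo "data=$((16#${v:0:8})) metadata=$((16#${v:8:8})) entry=$((16#${v:16:8}))"
# prints: data=1 metadata=0 entry=0

A non-zero counter in brick X's copy of trusted.afr.<volname>-client-N
means X blames the brick that client-N refers to, which is how the heal
direction is read off above.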

Best Regards,
Karthik



--
Best Regards,

Seva Gluschenko
CTO @ http://webkontrol.ru


February 9, 2018 2:01 PM, "Karthik Subrahmanya" wrote:

On Fri, Feb 9, 2018 at 3:23 PM, Seva Gluschenko wrote:

Hi Karthik,

Thank you for your reply. The heal is still in progress, as
/var/log/glusterfs/glustershd.log keeps growing, and there are a lot of
pending entries in the heal info.

The gluster version is 3.10.9 and 3.10.10 (a version update is in
progress). It doesn't have info summary [yet?], and the heal info is way
too long to attach here. (It takes more than 20 minutes just to collect
it, but the truth is, the cluster is quite heavily loaded; it handles
roughly 8 million reads and 100k writes daily.)

Since you have a huge number of files inside nested directories and a high
load on the cluster, it might take some time for the heal to complete. You
don't need to worry about the gfids you are seeing in the heal info output.
Heal info summary is supported from version 3.13.
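
Since the full heal info takes that long to collect, a cheaper way to
watch progress is to poll the per-brick pending counters instead; a rough
sketch, using the statistics command that appears later in this thread:

# Print per-brick pending-heal counts every 10 minutes; much lighter
# than collecting the full heal info on a loaded cluster.
while true; do
    date
    gluster volume heal myvol statistics heal-count | grep -E '^(Brick|Number)'
    sleep 600
done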


The heal info output is full of lines like this:

...

Brick gv2:/data/glusterfs
<gfid:...>
<gfid:...>
<gfid:...>
...

And so forth. Out of 80k+ lines, fewer than 200 are not gfid entries (and
yes, the number of gfids is well beyond 64999):

# grep -c gfid heal-info.fpack
80578

# grep -v gfid heal-info.myvol
Brick gv0:/data/glusterfs
Status: Connected
Number of entries: 0

Brick gv1:/data/glusterfs
Status: Connected
Number of entries: 0

Brick gv4:/data/gv01-arbiter
Status: Connected
Number of entries: 0

Brick gv2:/data/glusterfs
/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
/testset/05c - Possibly undergoing heal
/testset/b99 - Possibly undergoing heal
/testset/dd7 - Possibly undergoing heal
/testset/0b8 - Possibly undergoing heal
/testset/f21 - Possibly undergoing heal

...
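
Should you need to map one of those gfid entries back to a path: on each
brick, regular files are hardlinked under .glusterfs by gfid (directories
are symlinks there). A sketch, assuming shell access to a brick node; the
gfid used is the one from the sample file below:

# The hardlink lives at .glusterfs/<first 2 hex chars>/<next 2>/<gfid>;
# -samefile then finds the other link, i.e. the real path on the brick.
BRICK=/data/glusterfs
GFID=b42d966b-7715-4de9-90ec-d092201714fd
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
     -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print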

And here is the getfattr output for a sample file:

# getfattr -d -e hex -m . /data/glusterfs/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-6=0x0001
trusted.bit-rot.version=0x02005a0d2f650005bf97
trusted.gfid=0xb42d966b77154de990ecd092201714fd

I tried several files, and the output is pretty much the same; the gfid is
the only difference.

Is there anything else I could provide to shed some light on this?

I wanted to check the getfattr output of a file and a directory belonging
to the second replica subvolume, from all 3 bricks
Brick4: gv2:/data/glusterfs
Brick5: gv3:/data/glusterfs
Brick6: gv1:/data/gv23-arbiter (arbiter)
to see the direction in which the pending markers are set.
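
For anyone reproducing this, those per-brick outputs can be collected in
one pass, assuming ssh access to the brick hosts (hostnames and brick
paths as elsewhere in this thread):

# Fetch the xattrs of one pending file from both data bricks and the
# arbiter of the second replica subvolume.
F=testset/306/30677af808ad578916f54783904e6342.pack
ssh gv2 "getfattr -d -e hex -m . /data/glusterfs/$F"
ssh gv3 "getfattr -d -e hex -m . /data/glusterfs/$F"
ssh gv1 "getfattr -d -e hex -m . /data/gv23-arbiter/$F"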
Regards,
Karthik


--
Best Regards,

Seva Gluschenko
CTO @ http://webkontrol.ru

February 9, 2018 9:16 AM, "Karthik Subrahmanya" wrote:

Hey,
Has the heal completed, or do you still have some entries pending heal?
If yes, can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <file-path-on-brick> output of any one of the
files which is pending heal, from all the bricks
Regards,
Karthik

Re: [Gluster-users] self-heal trouble after changing arbiter brick

2018-02-08 Thread Karthik Subrahmanya
Hey,

Has the heal completed, or do you still have some entries pending heal?
If yes, can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <file-path-on-brick> output of any one of the
files which is pending heal, from all the bricks

Regards,
Karthik

On Thu, Feb 8, 2018 at 12:48 PM, Seva Gluschenko wrote:

> Hi folks,
>
> I've run into trouble moving an arbiter brick to another server because
> of I/O load issues. My setup is as follows:
>
> # gluster volume info
>
> Volume Name: myvol
> Type: Distributed-Replicate
> Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gv0:/data/glusterfs
> Brick2: gv1:/data/glusterfs
> Brick3: gv4:/data/gv01-arbiter (arbiter)
> Brick4: gv2:/data/glusterfs
> Brick5: gv3:/data/glusterfs
> Brick6: gv1:/data/gv23-arbiter (arbiter)
> Brick7: gv4:/data/glusterfs
> Brick8: gv5:/data/glusterfs
> Brick9: pluto:/var/gv45-arbiter (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-gid: 1000
> storage.owner-uid: 1000
> cluster.self-heal-daemon: enable
>
> The gv23-arbiter is the brick that was recently moved from another server
> (chronos) using the following command:
>
> # gluster volume replace-brick myvol chronos:/mnt/gv23-arbiter
> gv1:/data/gv23-arbiter commit force
> volume replace-brick: success: replace-brick commit force operation
> successful
>
> It's not the first time I've moved an arbiter brick, and the heal-count
> was zero for all the bricks before the change, so I didn't expect much
> trouble. What was probably wrong is that I then forced chronos out of the
> cluster with the gluster peer detach command. Ever since then, over the
> course of the last 3 days, I have seen this:
>
> # gluster volume heal myvol statistics heal-count
> Gathering count of entries to be healed on volume myvol has been successful
>
> Brick gv0:/data/glusterfs
> Number of entries: 0
>
> Brick gv1:/data/glusterfs
> Number of entries: 0
>
> Brick gv4:/data/gv01-arbiter
> Number of entries: 0
>
> Brick gv2:/data/glusterfs
> Number of entries: 64999
>
> Brick gv3:/data/glusterfs
> Number of entries: 64999
>
> Brick gv1:/data/gv23-arbiter
> Number of entries: 0
>
> Brick gv4:/data/glusterfs
> Number of entries: 0
>
> Brick gv5:/data/glusterfs
> Number of entries: 0
>
> Brick pluto:/var/gv45-arbiter
> Number of entries: 0
>
> According to /var/log/glusterfs/glustershd.log, the self-healing is in
> progress, so it might be worth just sitting and waiting, but I'm wondering
> why this 64999 heal-count persists (a limitation of the counter? In fact,
> the gv2 and gv3 bricks contain roughly 30 million files), and I feel
> bothered by the following output:
>
> # gluster volume heal myvol info heal-failed
> Gathering list of heal failed entries on volume myvol has been
> unsuccessful on bricks that are down. Please check if all brick processes
> are running.
>
> I attached the chronos server back to the cluster, with no noticeable
> effect. Any comments and suggestions would be much appreciated.
>
> --
> Best Regards,
>
> Seva Gluschenko
> CTO @ http://webkontrol.ru
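
One more knob worth knowing in this situation: if the index heal ever
seems stuck after a brick replacement, a full crawl of the volume can be
triggered explicitly. A sketch with the volume name from this thread; note
that a full heal walks the entire namespace, so it adds real I/O load on a
cluster this busy:

# Trigger a full self-heal crawl instead of waiting on the index heal.
gluster volume heal myvol full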
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users