Re: [Gluster-users] Proper command for replace-brick on distribute–replicate?

2023-07-18 Thread Alan Orth
Hello from the future!

Today I was doing a replace-brick and found that the docs are still out of
date. Then I found this very relevant thread of mine from four years ago.
Now I've successfully replaced a brick on a distribute-replicate volume
(with no manual shenanigans like killing bricks, setting xattrs, etc). I
have sent a pull request to update Gluster Docs:
https://github.com/gluster/glusterdocs/pull/782
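
For anyone who finds this from a search, the short version on recent Gluster
releases should just be the following (volume name and brick paths here are
placeholders, not from my setup):

  # gluster volume replace-brick <VOLNAME> <host>:/path/to/old-brick \
        <host>:/path/to/new-brick commit force
  # gluster volume heal <VOLNAME> info   # wait until no entries remain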

Cheers!

On Wed, Jun 12, 2019 at 2:00 PM Ravishankar N wrote:

>
> On 12/06/19 1:38 PM, Alan Orth wrote:
>
> Dear Ravi,
>
> Thanks for the confirmation—I replaced a brick in a volume last night and
> by the morning I see that Gluster has replicated data there, though I don't
> have any indication of its progress. The output of `gluster v heal volume
> info` and `gluster v heal volume info split-brain` both look good, so I
> guess that's enough of an indication.
>
> Yes, right now, heal info showing no files is the indication. A new
> command for pending heal time estimation is being worked on. See
> https://github.com/gluster/glusterfs/issues/643
>
>
> One question, though. Immediately after I replaced the brick I checked
> `gluster v status volume` and I saw the following:
>
> Task Status of Volume volume
> ------------------------------------------------------------
> Task                 : Rebalance
> ID                   : a890e99c-5715-4bc1-80ee-c28490612135
> Status               : not started
>
> I did not initiate a rebalance, so it's strange to see it there. Is
> Gluster hinting that I should start a rebalance? If a rebalance is "not
> started" shouldn't Gluster just not show it at all?
>
> `replace-brick` should not show rebalance status. Not sure why you're
> seeing it. Adding Nithya for help.
>
>
> Regarding the patch to the documentation: absolutely! Let me just get my
> Gluster back in order after my confusing upgrade last month. :P
>
> Great. Please send the PR for the https://github.com/gluster/glusterdocs/
> project. I think docs/Administrator Guide/Managing Volumes.md is the file
> that needs to be updated.
>
> -Ravi
>
>
> Thanks,
>
> On Tue, Jun 11, 2019 at 7:32 PM Ravishankar N wrote:
>
>>
>> On 11/06/19 9:11 PM, Alan Orth wrote:
>>
>> Dear list,
>>
>> In a recent discussion on this list Ravi suggested that the documentation
>> for replace-brick¹ was out of date. For a distribute–replicate volume the
>> documentation currently says that we need to kill the old brick's PID,
>> create a temporary empty directory on the FUSE mount, check the xattrs,
>> replace-brick with commit force.
>>
>> Is all this still necessary? I'm running Gluster 5.6 on CentOS 7 with a
>> distribute–replicate volume.
>>
>> No, none of that is necessary anymore: all of those steps were codified
>> into the `replace-brick ... commit force` command via
>> https://review.gluster.org/#/c/glusterfs/+/10076/ and
>> https://review.gluster.org/#/c/glusterfs/+/10448/. See the commit
>> messages of these two patches for more details.
>>
>> You can play around with most of these commands in a 1 node setup if you
>> want to convince yourself that they work. There is no need to form a
>> cluster.
>> [root@tuxpad glusterfs]# gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force
>> [root@tuxpad glusterfs]# gluster v start testvol
>> [root@tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt/
>> [root@tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
>> [root@tuxpad glusterfs]# ll /home/ravi/bricks/brick*/FILE
>> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick1/FILE
>> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick2/FILE
>> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3/FILE
>>
>> [root@tuxpad glusterfs]# gluster v replace-brick testvol 127.0.0.2:/home/ravi/bricks/brick3 127.0.0.2:/home/ravi/bricks/brick3_new commit force
>> volume replace-brick: success: replace-brick commit force operation successful
>> [root@tuxpad glusterfs]# ll /home/ravi/bricks/brick3_new/FILE
>>
>> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3_new/FILE
>> Why don't you send a patch to update the doc for replace-brick? I'd be
>> happy to review it. ;-)
>> HTH,
>> Ravi
>>
>>
>> Thank you,
>>
>> ¹ https://docs.gluster.org/en/latest/Administrator Guide/Managing Volumes/
>> --
>> Alan Orth
>> alan.o...@gmail.com
>> https://picturingjordan.com
>> https://englishbulgaria.net
>> https://mjanja.ch
>> "In heaven all the interesting people are missing." ―Friedrich Nietzsche
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
> Alan Orth
> alan.o...@gmail.com
> https://picturingjordan.com
> https://englishbulgaria.net
> https://mjanja.ch
> "In heaven all the interesting people are missing." ―Friedrich Nietzsche
>
>

-- 
Alan Orth
alan.o...@gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich Nietzsche

Re: [Gluster-users] Proper command for replace-brick on distribute–replicate?

2019-06-12 Thread Ravishankar N


On 12/06/19 1:38 PM, Alan Orth wrote:

Dear Ravi,

Thanks for the confirmation—I replaced a brick in a volume last night 
and by the morning I see that Gluster has replicated data there, 
though I don't have any indication of its progress. The output of
`gluster v heal volume info` and `gluster v heal volume info split-brain`
both look good, so I guess that's enough of an indication.
Yes, right now, heal info showing no files is the indication. A new
command for pending heal time estimation is being worked on. See
https://github.com/gluster/glusterfs/issues/643
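
In the meantime, progress can be watched by hand with the existing
commands (the volume name below is a placeholder):

gluster volume heal <VOLNAME> info                   # lists entries still pending heal
gluster volume heal <VOLNAME> statistics heal-count  # pending entry count per brick

An empty `heal info` listing means the new brick has caught up.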


One question, though. Immediately after I replaced the brick I checked 
`gluster v status volume` and I saw the following:


Task Status of Volume volume
------------------------------------------------------------
Task                 : Rebalance
ID                   : a890e99c-5715-4bc1-80ee-c28490612135
Status               : not started

I did not initiate a rebalance, so it's strange to see it there. Is 
Gluster hinting that I should start a rebalance? If a rebalance is 
"not started" shouldn't Gluster just not show it at all?


`replace-brick` should not show rebalance status. Not sure why you're 
seeing it. Adding Nithya for help.




Regarding the patch to the documentation: absolutely! Let me just get 
my Gluster back in order after my confusing upgrade last month. :P


Great. Please send the PR for the 
https://github.com/gluster/glusterdocs/ project. I think 
docs/Administrator Guide/Managing Volumes.md is the file that needs to 
be updated.


-Ravi



Thanks,

On Tue, Jun 11, 2019 at 7:32 PM Ravishankar N wrote:



On 11/06/19 9:11 PM, Alan Orth wrote:

Dear list,

In a recent discussion on this list Ravi suggested that the
documentation for replace-brick¹ was out of date. For a
distribute–replicate volume the documentation currently says that
we need to kill the old brick's PID, create a temporary empty
directory on the FUSE mount, check the xattrs, replace-brick with
commit force.

Is all this still necessary? I'm running Gluster 5.6 on CentOS 7
with a distribute–replicate volume.

No, none of that is necessary anymore: all of those steps were
codified into the `replace-brick ... commit force` command via
https://review.gluster.org/#/c/glusterfs/+/10076/ and
https://review.gluster.org/#/c/glusterfs/+/10448/. See the commit
messages of these two patches for more details.

You can play around with most of these commands in a 1 node setup
if you want to convince yourself that they work. There is no need
to form a cluster.
[root@tuxpad glusterfs]# gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force
[root@tuxpad glusterfs]# gluster v start testvol
[root@tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt/
[root@tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick*/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick1/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick2/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3/FILE

[root@tuxpad glusterfs]# gluster v replace-brick testvol 127.0.0.2:/home/ravi/bricks/brick3 127.0.0.2:/home/ravi/bricks/brick3_new commit force
volume replace-brick: success: replace-brick commit force operation successful
[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick3_new/FILE

-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3_new/FILE

Why don't you send a patch to update the doc for replace-brick?
I'd be happy to review it. ;-)
HTH,
Ravi


Thank you,

¹ https://docs.gluster.org/en/latest/Administrator Guide/Managing Volumes/
-- 
Alan Orth

alan.o...@gmail.com 
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich
Nietzsche

___
Gluster-users mailing list
Gluster-users@gluster.org  
https://lists.gluster.org/mailman/listinfo/gluster-users




--
Alan Orth
alan.o...@gmail.com 
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich Nietzsche
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proper command for replace-brick on distribute–replicate?

2019-06-12 Thread Alan Orth
Dear Ravi,

Thanks for the confirmation—I replaced a brick in a volume last night and
by the morning I see that Gluster has replicated data there, though I don't
have any indication of its progress. The output of `gluster v heal volume
info` and `gluster v heal volume info split-brain` both look good, so I
guess that's enough of an indication.

One question, though. Immediately after I replaced the brick I checked
`gluster v status volume` and I saw the following:

Task Status of Volume volume
------------------------------------------------------------
Task                 : Rebalance
ID                   : a890e99c-5715-4bc1-80ee-c28490612135
Status               : not started

I did not initiate a rebalance, so it's strange to see it there. Is Gluster
hinting that I should start a rebalance? If a rebalance is "not started"
shouldn't Gluster just not show it at all?
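
(If anyone else hits this, the task can at least be inspected with the
usual rebalance command, where <VOLNAME> is a placeholder for your volume
name:

  # gluster volume rebalance <VOLNAME> status

I never actually started one, so I've left it alone.)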

Regarding the patch to the documentation: absolutely! Let me just get my
Gluster back in order after my confusing upgrade last month. :P

Thanks,

On Tue, Jun 11, 2019 at 7:32 PM Ravishankar N wrote:

>
> On 11/06/19 9:11 PM, Alan Orth wrote:
>
> Dear list,
>
> In a recent discussion on this list Ravi suggested that the documentation
> for replace-brick¹ was out of date. For a distribute–replicate volume the
> documentation currently says that we need to kill the old brick's PID,
> create a temporary empty directory on the FUSE mount, check the xattrs,
> replace-brick with commit force.
>
> Is all this still necessary? I'm running Gluster 5.6 on CentOS 7 with a
> distribute–replicate volume.
>
> No, none of that is necessary anymore: all of those steps were codified
> into the `replace-brick ... commit force` command via
> https://review.gluster.org/#/c/glusterfs/+/10076/ and
> https://review.gluster.org/#/c/glusterfs/+/10448/. See the commit
> messages of these two patches for more details.
>
> You can play around with most of these commands in a 1 node setup if you
> want to convince yourself that they work. There is no need to form a
> cluster.
> [root@tuxpad glusterfs]# gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force
> [root@tuxpad glusterfs]# gluster v start testvol
> [root@tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt/
> [root@tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
> [root@tuxpad glusterfs]# ll /home/ravi/bricks/brick*/FILE
> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick1/FILE
> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick2/FILE
> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3/FILE
>
> [root@tuxpad glusterfs]# gluster v replace-brick testvol 127.0.0.2:/home/ravi/bricks/brick3 127.0.0.2:/home/ravi/bricks/brick3_new commit force
> volume replace-brick: success: replace-brick commit force operation successful
> [root@tuxpad glusterfs]# ll /home/ravi/bricks/brick3_new/FILE
>
> -rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3_new/FILE
> Why don't you send a patch to update the doc for replace-brick? I'd be
> happy to review it. ;-)
> HTH,
> Ravi
>
>
> Thank you,
>
> ¹ https://docs.gluster.org/en/latest/Administrator Guide/Managing Volumes/
> --
> Alan Orth
> alan.o...@gmail.com
> https://picturingjordan.com
> https://englishbulgaria.net
> https://mjanja.ch
> "In heaven all the interesting people are missing." ―Friedrich Nietzsche
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>

-- 
Alan Orth
alan.o...@gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich Nietzsche
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proper command for replace-brick on distribute–replicate?

2019-06-11 Thread Ravishankar N


On 11/06/19 9:11 PM, Alan Orth wrote:

Dear list,

In a recent discussion on this list Ravi suggested that the 
documentation for replace-brick¹ was out of date. For a 
distribute–replicate volume the documentation currently says that we 
need to kill the old brick's PID, create a temporary empty directory 
on the FUSE mount, check the xattrs, replace-brick with commit force.


Is all this still necessary? I'm running Gluster 5.6 on CentOS 7 with 
a distribute–replicate volume.
No, none of that is necessary anymore: all of those steps were
codified into the `replace-brick ... commit force` command via
https://review.gluster.org/#/c/glusterfs/+/10076/ and
https://review.gluster.org/#/c/glusterfs/+/10448/. See the commit
messages of these two patches for more details.


You can play around with most of these commands in a 1 node setup if you 
want to convince yourself that they work. There is no need to form a 
cluster.
[root@tuxpad glusterfs]# gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force

[root@tuxpad glusterfs]# gluster v start testvol
[root@tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt/
[root@tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick*/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick1/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick2/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3/FILE

[root@tuxpad glusterfs]# gluster v replace-brick testvol 127.0.0.2:/home/ravi/bricks/brick3 127.0.0.2:/home/ravi/bricks/brick3_new commit force
volume replace-brick: success: replace-brick commit force operation successful

[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick3_new/FILE

-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3_new/FILE
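
On a real volume with data on it, you would also wait for self-heal to
finish populating the new brick; something like this should do (an empty
entry list means healing is complete):

[root@tuxpad glusterfs]# gluster v heal testvol info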

Why don't you send a patch to update the doc for replace-brick? I'd be 
happy to review it. ;-)

HTH,
Ravi


Thank you,

¹ https://docs.gluster.org/en/latest/Administrator Guide/Managing Volumes/
--
Alan Orth
alan.o...@gmail.com 
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich Nietzsche

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users