Hi Ravi,

Thanks for the clarification. For one reason or another I thought there was no 
longer a way to shrink a gluster filesystem, but from testing it appears you are 
correct. I generated 100 test files on a two-node, four-EBS-brick distributed 
gluster volume, removed one brick, committed, and the files from that brick were 
successfully migrated to the remaining three bricks.
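For reference, the test data was nothing fancier than 100 empty files created 
through a client mount of the volume, roughly like this (/mnt/volume1 is just a 
placeholder for wherever the volume is mounted):

mount -t glusterfs gluster1.local:/volume1 /mnt/volume1   # illustrative mount point
for i in $(seq 1 100); do touch /mnt/volume1/$i; done     # files named 1..100
ls /mnt/volume1 | wc -l                                   # sanity check: 100

The starting layout and per-brick spread looked like this: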

Volume Name: volume1
Type: Distribute
Volume ID: 1180f5ec-af82-4322-8fc2-1add01161442
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gluster1.local:/export/xvdf/brick
Brick2: gluster1.local:/export/xvdg/brick
Brick3: gluster2.local:/export/xvdf/brick
Brick4: gluster2.local:/export/xvdg/brick
[root@ip-172-31-19-77 ~]#

Gluster1

[root@ip-172-31-19-77 ~]# ls /export/xvdf/brick/
1  13  24  26  33  47  48  5  51  53  54  58  61  62  64  65  70  72  73  8  9  
91  94  96  97  98
[root@ip-172-31-19-77 ~]# ls /export/xvdg/brick/
11  18  19  20  23  27  28  32  35  38  44  46  49  63  68  69  7  78  79  82  
85  86  88  93  99
[root@ip-172-31-19-77 ~]# ls /export/xvdf/brick/ | wc -l
26
[root@ip-172-31-19-77 ~]# ls /export/xvdg/brick/ | wc -l
25

Gluster2

[ec2-user@ip-172-31-19-78 ~]$ ls /export/xvdf/brick/
16  22  29  30  34  37  4  41  43  56  57  60  71  74  75  76  80  83  89  90  
92
[ec2-user@ip-172-31-19-78 ~]$ ls /export/xvdg/brick/
10   12  15  2   25  31  39  42  50  55  6   67  81  87
100  14  17  21  3   36  40  45  52  59  66  77  84  95
[ec2-user@ip-172-31-19-78 ~]$ ls /export/xvdg/brick/ | wc -l
28
[ec2-user@ip-172-31-19-78 ~]$ ls /export/xvdf/brick/ | wc -l
21
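(Counting across all four bricks before the removal: 26 + 25 + 21 + 28 = 100, so 
every test file is accounted for.)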

Remove Brick Operation

[root@ip-172-31-19-77 ~]# gluster volume remove-brick volume1 
gluster1.local:/export/xvdf/brick start
volume remove-brick start: success
ID: b0a6decd-9130-4773-97f9-af108ff0bd77
[root@ip-172-31-19-77 ~]# gluster volume remove-brick volume1 
gluster1.local:/export/xvdf/brick status
Node         Rebalanced-files   size     scanned   failures   skipped   status      run time in secs
---------    ----------------   ------   -------   --------   -------   ---------   ----------------
localhost    41                 0Bytes   141       0          0         completed   1.00

[root@ip-172-31-19-77 ~]# gluster volume remove-brick volume1 
gluster1.local:/export/xvdf/brick commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success

[root@ip-172-31-19-77 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distribute
Volume ID: 1180f5ec-af82-4322-8fc2-1add01161442
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster1.local:/export/xvdg/brick
Brick2: gluster2.local:/export/xvdf/brick
Brick3: gluster2.local:/export/xvdg/brick


I checked the brick mount points and the overall gluster mount point and confirmed 
that the data from the removed brick was indeed migrated to the others; all 100 
files were accounted for.
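Concretely, the check amounted to the following (again, /mnt/volume1 stands in for 
the client mount point):

ls /mnt/volume1 | wc -l          # all 100 files still visible through the volume

# on gluster1: the remaining brick, plus the removed brick, which should now be
# empty apart from gluster's internal .glusterfs directory
ls /export/xvdg/brick | wc -l
ls /export/xvdf/brick

# on gluster2: the three remaining per-brick counts should sum to 100
ls /export/xvdf/brick | wc -l
ls /export/xvdg/brick | wc -l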

I checked the most recent documentation on GitHub 
(https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#shrinking-volumes) 
and found only examples for 'remove-brick force' (which obviously doesn't 
migrate any data) and 'replace-brick' (which is now deprecated but isn't noted 
as such in the docs). I'm left wondering why there is no non-forceful 
remove-brick operation documented for people who simply want to remove a brick 
while migrating its data to the remaining bricks.
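For contrast, the documented form versus the one I actually ran above:

# documented: drops the brick immediately, no data migration
gluster volume remove-brick volume1 gluster1.local:/export/xvdf/brick force

# undocumented (as far as I can tell) but working: migrate the data first, then commit
gluster volume remove-brick volume1 gluster1.local:/export/xvdf/brick start
gluster volume remove-brick volume1 gluster1.local:/export/xvdf/brick status
gluster volume remove-brick volume1 gluster1.local:/export/xvdf/brick commit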

Am I missing something, or is this common use case simply undocumented?

John

From: Ravishankar N
Date: Wednesday, April 1, 2015 at 9:38 AM
To: John Lilley, [email protected]
Subject: Re: [Gluster-users] Shrinking gluster filesystem in 3.6.2



On 03/31/2015 05:12 AM, Lilley, John F. wrote:
Hi,

I'd like to shrink my AWS/EBS-based *distribute-only* gluster file system by 
migrating the data to other already existing, active, and partially utilized 
bricks, but I found that the 'replace-brick start' mentioned in the documentation 
is now deprecated. I see that there has been some back and forth on the mailing 
list about migrating data using self-heal on a replicated system, but not so 
much on a distribute-only file system. Can anyone tell me the blessed way of 
doing this in 3.6.2? Is there one?

To be clear, all of the EBS-based bricks are partially utilized at this point, 
so I'd need a method that migrates the data first.


If I understand you correctly, you want to replace a brick in a distribute 
volume with one of lesser capacity. You could first add a new brick and then 
remove the existing brick with the remove-brick start/status/commit sequence. 
Something like this:
------------------------------------------------
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3
[root@tuxpad ~]#
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume add-brick testvol 
127.0.0.2:/home/ravi/bricks/brick{4..6}
volume add-brick: success
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3
Brick4: 127.0.0.2:/home/ravi/bricks/brick4
Brick5: 127.0.0.2:/home/ravi/bricks/brick5
Brick6: 127.0.0.2:/home/ravi/bricks/brick6
[root@tuxpad ~]#
[root@tuxpad ~]#
[root@tuxpad ~]#
[root@tuxpad ~]# gluster v remove-brick testvol 
127.0.0.2:/home/ravi/bricks/brick{1..3} start
volume remove-brick start: success
ID: d535675e-8362-4a44-a291-1e567a77531e
[root@tuxpad ~]# gluster v remove-brick testvol 
127.0.0.2:/home/ravi/bricks/brick{1..3} status
Node         Rebalanced-files   size     scanned   failures   skipped   status      run time in secs
---------    ----------------   ------   -------   --------   -------   ---------   ----------------
localhost    10                 0Bytes   20        0          0         completed   0.00
[root@tuxpad ~]#
[root@tuxpad ~]# gluster v remove-brick testvol 
127.0.0.2:/home/ravi/bricks/brick{1..3} commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount 
point before re-purposing the removed brick.
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick4
Brick2: 127.0.0.2:/home/ravi/bricks/brick5
Brick3: 127.0.0.2:/home/ravi/bricks/brick6
[root@tuxpad ~]#
------------------------------------------------
Hope this helps.
Ravi

Thank You,
John



_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
