Hi all, and thank you.
We left it alone and the rebalance finished; it seems to be working. Thanks
again for your help.
J. Sanchez
-
Jose Sanchez
Systems/Network Analyst
Center for Advanced Research Computing
1601 Central Ave.
MSC 01 1190
Albuquerque, NM
Hi,
Removing data to speed up the rebalance is not recommended.
A rebalance can be stopped, but if it is started again it will start from the
beginning (it will have to check and skip the files that were already moved).
The rebalance will take a while; it is better to let it run. It doesn't cause
any downtime.
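For reference, stopping and restarting look like this (a sketch, assuming the
volume name "scratch" used elsewhere in this thread):

  # stopping is possible, but a restart rescans from the beginning
  gluster volume rebalance scratch stop
  gluster volume rebalance scratch start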
Hi all,
We were able to get all 4 bricks distributed, and we can see the right amount
of space. But we have been rebalancing 16TB of data for 4 days now and it has
only moved 8TB. Is there a way to speed it up? There is also data we could
remove from the volume to speed things up, but what is the best procedure?
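(For what it's worth, progress can be watched per node; assuming the volume is
named "scratch":

  gluster volume rebalance scratch status

This reports rebalanced files, size moved, files scanned and failures for each
node.)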
Hi Jose,
Why are all the bricks visible in volume info if the pre-validation
for add-brick failed? I suspect that the remove-brick wasn't done
properly.
Could you provide cmd_history.log to verify this? It would be better to get
the other log messages as well.
Also, I need to know which bricks were
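For reference, those checks would look something like this (a sketch, assuming
default log locations):

  # the layout glusterd currently believes in
  gluster volume info scratch

  # history of CLI commands run on this node
  less /var/log/glusterfs/cmd_history.log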
Looking at the logs, it seems that it is trying to add the brick using the
same port that was assigned for gluster01ib:
Any ideas?
Jose
[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25
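One way to see which port glusterd has assigned to each brick (again assuming
the volume name "scratch"):

  gluster volume status scratch

The port column shows the TCP port each brick process is listening on.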
Hello Karthik,
I'm having trouble adding the two bricks back online. Any help is appreciated.
Thanks.
When I try the add-brick command, this is what I get:
[root@gluster01 ~]# gluster volume add-brick scratch
gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed
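When pre-validation fails, the reason is usually logged on one of the peers.
A minimal check, assuming default log paths (the glusterd log file name varies
by version):

  # confirm all peers are connected
  gluster peer status

  # look for the pre-validation error on each node
  tail -n 100 /var/log/glusterfs/glusterd.log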
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez wrote:
> Hi Karthik
>
> Looking at the information you have provided me, I would like to make sure
> that I’m running the right commands.
>
> 1. gluster volume heal scratch info
>
If the count is non-zero, trigger the heal:
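A sketch of that sequence, assuming the volume name "scratch":

  # count pending heals per brick
  gluster volume heal scratch info

  # trigger the heal if any counts are non-zero
  gluster volume heal scratch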
Hi Jose,
Thanks for providing the volume info. You have 2 subvolumes, and data is
replicated within the bricks of each subvolume.
The first consists of Node A's brick1 and Node B's brick1, and the second
consists of Node A's brick2 and Node B's brick2.
You don't have the same data on all the 4 bricks.
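As an illustration, a 2 x 2 distributed-replicate layout shows up in volume
info like this (the brick paths here are made up for the example):

  Number of Bricks: 2 x 2 = 4
  Bricks:
  Brick1: nodeA:/gdata/brick1/scratch
  Brick2: nodeB:/gdata/brick1/scratch
  Brick3: nodeA:/gdata/brick2/scratch
  Brick4: nodeB:/gdata/brick2/scratch

Consecutive bricks form the replica pairs: Brick1/Brick2 are the first
subvolume and Brick3/Brick4 the second.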
Hi Jose,
By switching to a pure distribute volume you will lose availability if
something goes bad.
I am guessing you have an nx2 volume.
If you want to preserve one copy of the data in all the distribute subvolumes,
you can do that by decreasing the replica count in the remove-brick operation,
as sketched below.
If you have
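For reference, a remove-brick that drops the replica count from 2 to 1 looks
like this (a sketch only; the brick paths are hypothetical and the exact
bricks to remove depend on your layout):

  # keep one copy per subvolume by removing one brick from each replica pair
  gluster volume remove-brick scratch replica 1 \
      nodeB:/gdata/brick1/scratch nodeB:/gdata/brick2/scratch force

Removing the wrong pair would drop both copies of one half of the data, so
confirm the replica pairs with gluster volume info first.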