If you use the same block device for the arbiter, I would recommend running
'mkfs' on it again.
For example, an XFS brick would be created via 'mkfs.xfs -f -i size=512 /dev/DEVICE'.
Reusing a brick without recreating the FS is error-prone.
Also, don't forget to create your brick dir once the device is mounted.
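A rough sketch of the whole sequence, assuming /dev/DEVICE and a mount point of
/srv/gluster/myvol (adapt the paths to your setup):

mkfs.xfs -f -i size=512 /dev/DEVICE
mount /dev/DEVICE /srv/gluster/myvol   # or better, add it to /etc/fstab and run 'mount -a'
mkdir -p /srv/gluster/myvol/brick      # the brick dir itself, one level below the mount point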
First, to answer your question how this first happened: I ran into that issue
by simply rebooting my arbiter node yesterday morning in order to do
some maintenance, which I do on a regular basis and was never a problem before
GlusterFS 7.8.
I have now removed the arbiter brick from all of
On 27/10/20 07:40, mabi wrote:
> First, to answer your question how this first happened: I ran into that issue
> by simply rebooting my arbiter node yesterday morning in order to do
> some maintenance, which I do on a regular basis and was never a problem before
> GlusterFS 7.8.
In my
You need to fix that "reject" issue before trying anything else.
Have you tried to "detach" the arbiter and then "probe" it again?
I have no idea what you did to reach that state - can you provide the details?
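If you do try that, something along these lines should do it (ARBITER_HOST is a
placeholder, and the arbiter must not hold any bricks at that point):

gluster peer detach ARBITER_HOST
gluster peer probe ARBITER_HOST
gluster peer status    # all peers should show 'Peer in Cluster (Connected)'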
Best Regards,
Strahil Nikolov
On Monday, October 26, 2020, at 20:38:38
OK, I see. I won't go down that path of disabling quota.
I was now able to remove the arbiter brick from my volume which has the quota issue, so
it is now a simple 2-node replica with 1 brick per node.
Now I would like to add the brick back, but I get the following error:
volume add-brick: failed: Host
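For context, the add-brick command looks roughly like this (volume name, host
and brick path are placeholders for my real ones):

gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/srv/gluster/myvol/brick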
Detaching the arbiter is pointless...
Quota accounting is stored in extended file attributes, and thus disabling and re-enabling
quota on a volume with millions of files will take a lot of time and a lot of IOPS. I
would leave it as a last resort.
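For illustration, the accounting lives in trusted.glusterfs.quota.* extended
attributes directly on the brick directories, so you can inspect it with
something like (brick path is a placeholder):

getfattr -d -m . -e hex /srv/gluster/myvol/brick/some/dir   # look for trusted.glusterfs.quota.size etc.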
Also, there was a mention on the list of the following script that
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 3:39 PM, Diego Zuccato wrote:
> Memory does not serve me well (there are 28 disks, not 26!), but bash
> history does :)
Yes, I also too often rely on history ;)
> gluster volume remove-brick BigVol replica 2
>
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 2:56 PM, Diego Zuccato wrote:
> The volume is built from 26 10TB disks w/ genetic data. I currently don't
> have exact numbers, but it's still at the beginning, so there is a bit
> less than 10TB actually used.
> But you're only
On Monday, October 26, 2020 11:34 AM, Diego Zuccato wrote:
> IIRC it's the same issue I had some time ago.
> I solved it by "degrading" the volume to replica 2, then cleared the
> arbiter bricks and upgraded again to replica 3 arbiter 1.
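If I understand correctly, that would correspond to a sequence roughly like
this (host and brick path are placeholders):

gluster volume remove-brick BigVol replica 2 arbiter-host:/path/to/arbiter-brick force
rm -rf /path/to/arbiter-brick && mkdir -p /path/to/arbiter-brick   # wipe the old arbiter brick
gluster volume add-brick BigVol replica 3 arbiter 1 arbiter-host:/path/to/arbiter-brick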
Thanks Diego for pointing out this workaround. How much data do you have on
that volume in terms of TB and files?
On 26/10/20 15:09, mabi wrote:
> Right, seen like that this sounds reasonable. Do you actually remember the
> exact command you ran in order to remove the brick? I was thinking this
> should be it:
> gluster volume remove-brick force
> but should I use "force" or "start"?
Memory does
On 26/10/20 14:46, mabi wrote:
>> I solved it by "degrading" the volume to replica 2, then cleared the
>> arbiter bricks and upgraded again to replica 3 arbiter 1.
> Thanks Diego for pointing out this workaround. How much data do you have on
> that volume in terms of TB and files? Because I
Dear all,
Thanks to this fix I could successfully upgrade from GlusterFS 6.9 to 7.8, but
now, one week after the upgrade, I have rebooted my third node (the arbiter
node) and unfortunately the bricks do not want to come up on that node. I get
the same error message as before:
[2020-10-26
On 26/10/20 07:40, mabi wrote:
> Thanks to this fix I could successfully upgrade from GlusterFS 6.9 to
> 7.8, but now, one week after the upgrade, I have rebooted my third
> node (the arbiter node) and unfortunately the bricks do not want to come up
> on that node. I get the same error message
Hi,
issue https://github.com/gluster/glusterfs/issues/1332 is fixed now with
https://github.com/gluster/glusterfs/commit/865cca1190e233381f975ff36118f46e29477dcf
It will be backported to release-7 and release-8 branches soon.
On Mon, Sep 7, 2020 at 1:14 AM Strahil Nikolov wrote:
> Your
Your e-mail ended up in the spam folder...
If you haven't fixed the issue yet, check Hari's post about quota issues (based on
the error message you provided):
https://medium.com/@harigowtham/glusterfs-quota-fix-accounting-840df33fcd3a
Most probably there is a quota issue and you need to fix it.
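As a first check, you can see what the quota currently reports with (volume
name is a placeholder):

gluster volume quota myvol list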
Best
Dear Nikhil,
Thank you for your answer. So does this mean that all my FUSE clients where I
have the volume mounted will not lose their connection at any time during the
whole upgrade procedure of all 3 nodes?
I am asking because, if I understand correctly, there will be an overlap of time
Hello Mabi
You don't need to follow the offline upgrade procedure. Please follow the
online upgrade procedure only. Upgrade the nodes one by one; you will
notice the `Peer Rejected` state after upgrading one node or so, but once
all the nodes are upgraded it will be back to `Peer in Cluster`.
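Very roughly, the per-node steps look like this (package commands depend on
your distro, the volume name is a placeholder; see the official upgrade guide
for the exact procedure):

systemctl stop glusterd
killall glusterfs glusterfsd       # make sure brick and client processes are down as well
# upgrade the glusterfs packages with your package manager
systemctl start glusterd
gluster peer status                # may show 'Peer Rejected' until all nodes run the same version
gluster volume heal myvol info     # wait for pending heals before moving on to the next node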
Hello,
So to be precise, I am having exactly the following issue:
https://github.com/gluster/glusterfs/issues/1332
I could not wait any longer for workarounds or quick fixes, so I
decided to downgrade my rejected node from 7.7 back to 6.9, which worked.
I would be really glad if someone
Hello,
I just started an upgrade of my 3-node replica (incl. arbiter) GlusterFS setup
from 6.9 to 7.7, but unfortunately after upgrading the first node, that node
gets rejected due to the following error:
[2020-08-22 17:43:00.240990] E [MSGID: 106012]