Usually there is only one "master", but when you power off one of the 2
nodes the geo-replication should handle that and the second node should take over the job.
How long did you wait after gluster1 was rebooted?
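You can check which node currently holds the active geo-rep worker with the
status command, something like this (the slave host/volume here is just a
placeholder for whatever you used when creating the session):
gluster volume geo-replication DATA <slavehost>::<slavevol> status detail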
Best Regards,
Strahil Nikolov
On Monday, October 26, 2020, 22:46:21
You need to fix that "reject" issue before trying anything else.
Have you tried to "detach" the arbiter and then "probe" it again?
I have no idea what you did to reach that state - can you provide the details ?
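In case it helps, the detach/probe cycle would be roughly the following (the
hostname is just an example here):
gluster peer detach <arbiter-host>
gluster peer probe <arbiter-host>
Keep in mind that gluster refuses the detach while that node still hosts bricks
of a volume.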
Best Regards,
Strahil Nikolov
On Monday, October 26, 2020, 20:38:38
Hello
How do you keep track of the health status of your Gluster volumes? When a
brick goes down (crash, failure, shutdown), a node fails, there is a peering
issue, or healing is ongoing?
Gluster Tendrl is complex and sometimes broken, the Prometheus exporter is
still lacking, and gstatus is basic.
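Beyond those, I only know of the plain CLI checks, along the lines of:
gluster peer status
gluster volume status
gluster volume heal <volname> info summary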
Currently, to
Well, I did not reboot the host. I shut down the host. Then after 15 min I gave
up.
I don't know why that happened.
I will try it later.
---
Gilberto Nunes Ferreira
On Mon, Oct 26, 2020 at 21:31, Strahil Nikolov wrote:
> Usually there is only one "master", but when you power
OK, I see. I won't go down that path of disabling quota.
I could now remove the arbiter brick of my volume which has the quota issue, so
it is now a simple 2-node replica with 1 brick per node.
Now I would like to add the brick back but I get the following error:
volume add-brick: failed: Host
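The command I am trying is along these lines (volume name, host and brick path
are just placeholders here):
gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter-host>:/path/to/brick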
Detaching the arbiter is pointless...
Quota is an extended file attribute, and thus disabling and re-enabling quota on
a volume with millions of files will take a lot of time and a lot of IOPS. I
would leave it as a last resort.
Also, it was mentioned in the list about the following script that
I was able to solve the issue by restarting all servers.
Now I have another issue!
I just powered off the gluster01 server and then the geo-replication
entered a Faulty state.
I tried to stop and start the gluster geo-replication like this:
gluster volume geo-replication DATA
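i.e. something like the following (the slave side shown here is only a
placeholder for what I used when creating the session):
gluster volume geo-replication DATA <slavehost>::<slavevol> stop
gluster volume geo-replication DATA <slavehost>::<slavevol> start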
Hi there...
I created a 2-node gluster volume and one more gluster server acting as a backup
server, using geo-replication.
So on gluster01 I issued the commands:
gluster peer probe gluster02;gluster peer probe gluster03
gluster vol create DATA replica 2 gluster01:/DATA/master01-data
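and then, roughly, the geo-replication session towards the backup server (the
slave volume name here is only an example):
gluster volume geo-replication DATA gluster03::<backup-vol> create push-pem
gluster volume geo-replication DATA gluster03::<backup-vol> start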
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 3:39 PM, Diego Zuccato wrote:
> Memory does not serve me well (there are 28 disks, not 26!), but bash
> history does :)
Yes, I also rely on history too often ;)
> gluster volume remove-brick BigVol replica 2
>
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 2:56 PM, Diego Zuccato wrote:
> The volume is built from 26 10TB disks with genetic data. I currently don't
> have exact numbers, but it's still at the beginning, so there are a bit
> less than 10TB actually used.
> But you're only
On Monday, October 26, 2020 11:34 AM, Diego Zuccato wrote:
> IIRC it's the same issue I had some time ago.
> I solved it by "degrading" the volume to replica 2, then cleared the
> arbiter bricks and upgraded again to replica 3 arbiter 1.
Thanks Diego for pointing out this workaround. How much
On 26/10/20 15:09, mabi wrote:
> Right, seen like that this sounds reasonable. Do you actually remember the
> exact command you ran in order to remove the brick? I was thinking this
> should be it:
> gluster volume remove-brick force
> but should I use "force" or "start"?
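> (i.e. something along the lines of, with the brick path just as a placeholder:
> gluster volume remove-brick <volname> replica 2 <arbiter-host>:/path/to/brick force
> My understanding is that "start" is meant for migrating data off a brick before
> removing it, while "force" drops the brick right away, which should be fine for
> an arbiter brick since it holds no file data - but I'd rather be sure.)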
Memory does
On 26/10/20 14:46, mabi wrote:
>> I solved it by "degrading" the volume to replica 2, then cleared the
>> arbiter bricks and upgraded again to replica 3 arbiter 1.
> Thanks Diego for pointing out this workaround. How much data do you have on
> that volume in terms of TB and files? Because I
Hi Strahil,
Thanks for your feedback.
I had already received your feedback, which seems to be very useful.
You had pointed to the /var/lib/glusterd/groups/db-workload profile, which
includes recommended gluster volume settings for such workloads (including
direct IO).
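If I understood correctly, applying that profile is just a matter of something
like this (with <volname> being the volume in question):
gluster volume set <volname> group db-workload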
I will be testing this setup though
Hi Strahil, thanks for your reply,
I had one node with 13 clients, the rest with 14. I've just restarted the
services on that node, now I have 14, let's see what happens.
Regarding the samba repos, I wasn't aware of that; I was using the CentOS main
repo. I'll check them out.
Best Regards,
Martin
On
Dear all,
Thanks to this fix I could successfully upgrade from GlusterFS 6.9 to 7.8, but
now, one week after the upgrade, I have rebooted my third node (arbiter
node) and unfortunately the bricks do not want to come up on that node. I get
the same error message as before:
[2020-10-26
On 26/10/20 07:40, mabi wrote:
> Thanks to this fix I could successfully upgrade from GlusterFS 6.9 to
> 7.8 but now, one week after the upgrade, I have rebooted my third
> node (arbiter node) and unfortunately the bricks do not want to come up
> on that node. I get the same following