I can't find anything suspicious in the brick logs other than authentication
refused to clients trying to mount a directory that does not exist on the arb_n,
because the self-heal isn't working.
I tried to add another node and replace-brick the faulty arbiter, however this
new arbiter sees the same
Hi,
I think that the best way is to go through the logs on the affected arbiter
brick (maybe even temporarily increase the log level).
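As a hedged illustration of raising the log level (the volume name "myvol" is a placeholder, not from the thread), the brick-side verbosity can be increased temporarily via the diagnostics options:

```shell
# Raise brick log verbosity on the affected volume ("myvol" is a placeholder).
gluster volume set myvol diagnostics.brick-log-level DEBUG
# ... reproduce the problem and inspect /var/log/glusterfs/bricks/*.log ...
# Restore the default afterwards so the logs don't grow unbounded.
gluster volume set myvol diagnostics.brick-log-level INFO
```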
What is the output of:
find /var/brick/arb_0/brick -not -user 36 -print
find /var/brick/arb_0/brick -not -group 36 -print
Maybe there are some files/dirs that are
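If those find commands do list entries not owned by UID/GID 36 (commonly the vdsm:kvm pair on oVirt hosts), one possible sketch for correcting them, assuming an ownership mismatch really is the cause, would be:

```shell
# Sketch only: re-own anything under the arbiter brick that is not UID/GID 36.
# Verify the hit list with plain -print first before running the -exec variants.
find /var/brick/arb_0/brick -not -user 36 -exec chown 36 {} +
find /var/brick/arb_0/brick -not -group 36 -exec chgrp 36 {} +
```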
For arb_0 I see only 8 clients, while there should be 12 clients:
Brick : 192.168.0.40:/var/bricks/0/brick
Clients connected : 12
Brick : 192.168.0.41:/var/bricks/0/brick
Clients connected : 12
Brick : 192.168.0.80:/var/bricks/arb_0/brick
Clients connected : 8
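For reference, per-brick client counts like the ones above can be listed with the status command (the volume name is a placeholder):

```shell
# List the clients connected to each brick of the volume.
gluster volume status myvol clients
```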
Can you try to reconnect them? The
Ok, will do.
working arbiter:
ls -ln /var/bricks/arb_0/ >>> drwxr-xr-x 13 33 33 146 May 29 22:38 brick
ls -lna /var/bricks/arb_0/brick >>> drw--- 262 0 0 8192 May 29 22:38 .glusterfs
+ all data-brick dirs ...
affected arbiter:
ls -ln /var/bricks/arb_0/ >>> drwxr-xr-x 3 0 0 24 May 30
I would avoid shrinking the volume. An oVirt user reported issues after volume
shrinking.
Did you try to format the arbiter brick and 'replace-brick' ?
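A hedged sketch of that sequence (the volume name, hosts, and brick paths are placeholders, not the thread's real values):

```shell
# After wiping/recreating a fresh arbiter brick directory, swap it in one step.
gluster volume replace-brick myvol \
    192.168.0.80:/var/bricks/arb_0/brick \
    192.168.0.80:/var/bricks/arb_0_new/brick \
    commit force
# Then watch the self-heal progress:
gluster volume heal myvol info
```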
Best Regards,
Strahil Nikolov
Hi Aravinda,
Thank you very much - we will give that a try.
On Mon, 31 May 2021 at 20:29, Aravinda VK wrote:
> Hi David,
>
> On 31-May-2021, at 10:37 AM, David Cunningham
> wrote:
>
> Hello,
>
> We have a GlusterFS configuration with mirrored nodes on the master side
> geo-replicating to
Hm,
I tried format and reset-brick on node2 - no success.
I tried a new brick on new node3 and replace-brick - no success, as the new
arbiter is created wrongly and self-heal does not work.
I also restarted all nodes one by one without any improvement.
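For completeness, the reset-brick cycle that was attempted typically looks like this (the volume name, host, and path are placeholders for the real ones):

```shell
# Take the brick offline so its directory can be reformatted.
gluster volume reset-brick myvol 192.168.0.81:/var/bricks/arb_0/brick start
# ... reformat / clean the brick directory on node2 ...
# Bring the same brick path back and trigger the heal.
gluster volume reset-brick myvol 192.168.0.81:/var/bricks/arb_0/brick \
    192.168.0.81:/var/bricks/arb_0/brick commit force
```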
If shrinking the volume is not
Srijan
no problem at all -- thanks for your help. If you need any additional
information please let me know.
Regards,
Marco
On Thu, 27 May 2021 at 18:39, Srijan Sivakumar wrote:
> Hi Marco,
>
> Thank you for opening the issue. I'll check the log contents and get back
> to you.
>
> On Thu,
Thanks Strahil,
unfortunately I cannot connect because the mount is denied, as shown in the
provided mount.log. IPs > n.n.n.100 are clients and simply cannot mount the
volume. When killing
the arb pids on node2 new clients can mount the volume. When bringing them up
again I experience the same problem.
I
Hi David,
> On 31-May-2021, at 10:37 AM, David Cunningham
> wrote:
>
> Hello,
>
> We have a GlusterFS configuration with mirrored nodes on the master side
> geo-replicating to mirrored nodes on the secondary side.
>
> When geo-replication is initially created it seems to automatically add