Thank you for the clarification
Regards
David Spisla


________________________________

David Spisla
Software Engineer
[email protected]
+49 761 59034852
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg
Germany
iTernity GmbH. Managing Director: Ralf Steinemann.
Registered at the District Court of Freiburg: HRB no. 701332.
VAT ID DE242664311. [v01.023]
From: Atin Mukherjee <[email protected]>
Sent: Friday, June 14, 2019 7:03:05 PM
To: David Spisla
Cc: [email protected] List
Subject: Re: [Gluster-users] Duplicated brick processes after restart of glusterd

Please see https://bugzilla.redhat.com/show_bug.cgi?id=1696147, which is fixed
in 5.6. Although it is a race, I believe you're hitting this. The bug's title
refers to the shd + brick multiplexing combination, but it applies to bricks
too.

On Fri, Jun 14, 2019 at 2:07 PM David Spisla <[email protected]> wrote:
Dear Gluster Community,

this morning I made an interesting observation. On my 2-node Gluster v5.5
system with three Replica 1 volumes (volume1, volume2, test), I had duplicated
brick processes for each of the volumes (see the output of ps aux in the
attached file duplicate_bricks.txt). Additionally, there is an fs-ss volume
which I use instead of gluster_shared_storage, but this volume was not affected.

After doing some research I found a hint in glusterd.log. It seems that after
a restart glusterd couldn't find the pid files for the freshly created brick
processes and therefore spawned new ones. The brick logs show that for each of
the volumes two brick processes were started one right after the other.
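One quick way to spot such duplicates in the ps aux output is to group the
glusterfsd command lines by their --volfile-id and flag any id that appears
more than once. The sketch below runs against a made-up sample file; the
process lines, pids, and brick ids are hypothetical, for illustration only:

```shell
# Hypothetical sample mimicking "ps aux | grep glusterfsd" output.
cat <<'EOF' > /tmp/ps_sample.txt
root 1234 /usr/sbin/glusterfsd -s node1 --volfile-id volume1.node1.gluster-brick1
root 5678 /usr/sbin/glusterfsd -s node1 --volfile-id volume1.node1.gluster-brick1
root 9012 /usr/sbin/glusterfsd -s node1 --volfile-id volume2.node1.gluster-brick2
EOF
# Extract each --volfile-id, count occurrences, and report any id seen twice.
awk '{for(i=1;i<=NF;i++) if($i=="--volfile-id") print $(i+1)}' /tmp/ps_sample.txt \
  | sort | uniq -c | awk '$1 > 1 {print "duplicate brick:", $2}'
```

On the sample above this prints "duplicate brick: volume1.node1.gluster-brick1",
i.e. two glusterfsd processes serving the same brick.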

Result: two brick processes for each of the volumes volume1, volume2 and test.
"gluster vol status" shows that the pid number was mapped to the wrong port
number for hydmedia and impax.

Apart from that, the volumes were working correctly. I resolved the issue with
a workaround: kill all brick processes and restart glusterd. After that
everything was fine.
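The workaround could be scripted roughly as below. This is a hedged sketch,
not an official procedure: it is a dry run that only echoes each command,
since killing all bricks interrupts client I/O; replace the run() body with
"$@" to actually apply it on an affected node.

```shell
# Dry-run sketch of the workaround: echoes each command instead of executing it.
run() { echo "+ $*"; }
run pkill -f glusterfsd         # kill all (duplicated) brick processes
run systemctl restart glusterd  # glusterd respawns one brick per volume
run gluster volume status       # verify a single pid/port pair per brick
```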

Is this a bug in glusterd? You can find all relevant information attached below.

Regards
David Spisla
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
