Isn't this just brick multiplexing?
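
If brick multiplexing is on, several bricks intentionally share one glusterfsd process and TCP port. A quick way to check (a sketch, assuming the `gluster` CLI and `ss` are available on the node; 49154 is the port from the status output below):

```shell
PORT=49154
# When brick multiplexing is "on", bricks sharing a TCP port is
# expected behaviour, not a bug.
if command -v gluster >/dev/null 2>&1; then
    gluster volume get all cluster.brick-multiplex
fi
# Show which process(es) actually hold the suspect port; with
# multiplexing enabled you would see a single glusterfsd PID here.
if command -v ss >/dev/null 2>&1; then
    ss -tlnp "sport = :${PORT}"
fi
```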

On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <[email protected]> wrote:
>On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <[email protected]> wrote:
>
>> Hi, all
>>
>>
>>
>> I found two of my bricks from different volumes are using the same
>port
>> 49154 on the same glusterfs server node, is this normal?
>>
>
>No, it's not.
>
>Can you please help me with the following information:
>
>1. gluster --version
>2. glusterd log & cmd_history logs from both the nodes
>3. If you are running the latest gluster release (3.11), a glusterd
>statedump, obtained by executing
>    # kill -SIGUSR1 $(pidof glusterd)
>    The dump file will be written to /var/run/gluster.
>
>
>>
>> Status of volume: home-rabbitmq-qa
>> Gluster process                                          TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 10.10.1.100:/glusterfsvolumes/home/home-rabbitmq-qa/brick    49154  0  Y  1538
>> Brick 10.10.1.101:/glusterfsvolumes/home/home-rabbitmq-qa/brick    49154  0  Y  1584
>> Self-heal Daemon on localhost                            N/A       N/A        Y  4624
>> Self-heal Daemon on devshglus02.acslocal.honeywell.com   N/A       N/A        Y  2218
>>
>> Task Status of Volume home-rabbitmq-qa
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: paas-ota-qa
>> Gluster process                                          TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 10.10.1.100:/glusterfsvolumes/paas/paas-ota-qa/brick         49154  0  Y  10320
>> Brick 10.10.1.101:/glusterfsvolumes/paas/paas-ota-qa/brick         49154  0  Y  987
>> Self-heal Daemon on localhost                            N/A       N/A        Y  4624
>> Self-heal Daemon on devshglus02.acslocal.honeywell.com   N/A       N/A        Y  2218
>>
>> Task Status of Volume paas-ota-qa
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
