Hi Strahil,
The name of the device is not a problem here. Can you please check the glusterd
log and see whether it contains any useful information about the failure? Also,
please provide the output of `lvscan` and `lvs --noheadings -o pool_lv` from
all nodes.
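One way to gather that output from every peer in one pass is a small ssh loop; a sketch is below. The node names `node1`..`node3` are placeholders (replace them with your actual peers), and passwordless ssh to each node is assumed.

```shell
#!/bin/sh
# Collect LV state from each peer; node names are placeholders.
for node in node1 node2 node3; do
    printf '=== %s ===\n' "$node"
    ssh "$node" 'lvscan; lvs --noheadings -o pool_lv' || echo "(unreachable)"
done
```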
Regards
Rafi KC
- Original Message -
From: "Atin Mukherjee"
To: "Rafi Kavungal Chundattu Parambil" , "Riccardo Murri"
Cc: gluster-users@gluster.org
Sent: Wednesday, March 27, 2019 4:07:42 PM
Subject: Re: [Gluster-users] cannot add server back to cluster after reins
Hi David Spisla,
gluster_shared_storage works just like any other volume (there is some special
handling related to authentication). So bringing down a glusterd node shouldn't
have led to the volume being unmounted.
Can you please share the mount log and the brick logs from the time when you stopped the node?
Let me try to reproduce it. In the meantime, can you take a statedump of the
client process, the snapd process, and the snapshot brick process? Please refer
to the documentation [1] in case you have any trouble performing the statedump
operation.
[1] :
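A rough sketch of how those statedumps are usually triggered is below. The volume name `myvol` is a placeholder, and the `pgrep` patterns are assumptions about how the processes appear on your node, so verify the PIDs before sending the signal.

```shell
# Where statedumps will be written (often /var/run/gluster).
gluster --print-statedumpdir

# Brick processes of the volume ("myvol" is a placeholder).
gluster volume statedump myvol

# Client and snapd processes: SIGUSR1 makes a glusterfs process dump its state.
# Check the PIDs first; these pgrep patterns are only a guess at your setup.
kill -USR1 "$(pgrep -f 'glusterfs.*snapd')"
kill -USR1 "$(pgrep -x glusterfs | head -1)"
```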
Can you try mounting the snapshot just like we mount a regular volume?
Syntax: mount -t glusterfs host1:/snaps/<snapname>/<volname> <mountpoint>
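For example, assuming a snapshot named `snap1` of a volume named `myvol` (both names are placeholders), the mount would look like:

```shell
# Hypothetical names: snapshot "snap1" of volume "myvol".
mkdir -p /mnt/snap1
mount -t glusterfs host1:/snaps/snap1/myvol /mnt/snap1
```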
Regards
Rafi KC
- Original Message -
From: "Riccardo Murri"
To: gluster-users@gluster.org
Sent: Wednesday, July 18, 2018 3:58:34 PM
Subject: [Gluster-users] Cannot list
You have to figure out the difference in the volinfo between the peers and
rectify it. Alternatively, reducing the version in the volinfo by one on node3
and restarting glusterd should solve the problem.
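The version decrement above can be sketched as follows. On a real node the volinfo usually lives at /var/lib/glusterd/vols/<volname>/info (verify the path on your build, and keep a backup); the demo below edits a throwaway copy so nothing real is touched.

```shell
# Demonstration on a temporary file so nothing under /var/lib/glusterd
# is modified; substitute the real volinfo path on an actual node.
INFO=$(mktemp)
printf 'type=2\nversion=7\nstatus=1\n' > "$INFO"      # toy volinfo contents
cp "$INFO" "$INFO.bak"                                # always keep a backup
awk -F= 'BEGIN{OFS="="} $1=="version"{$2=$2-1} {print}' "$INFO.bak" > "$INFO"
grep '^version=' "$INFO"                              # prints: version=6
# On a real node you would now restart glusterd, e.g. systemctl restart glusterd
```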
But I would be more interested in figuring out why glusterd crashed.
1) Can you paste the backtrace of the