Hello,
I am stuck: the heal on my GlusterFS 2x replica volume keeps failing, with
messages like this in glustershd.log:
[2018-11-21 05:28:07.813003] E [MSGID: 114031]
[client-rpc-fops.c:1646:client3_3_entrylk_cbk] 0-gv1-client-0: remote
operation failed [Transport endpoint is not connected]
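A quick way to see how widespread errors like this are is to count occurrences per MSGID in glustershd.log. A minimal sketch, using a hypothetical sample line for illustration (in practice you would pipe the real log file, typically /var/log/glusterfs/glustershd.log, through the same filter):

```shell
# Hypothetical sample line modeled on the error above; substitute the
# real log file on your system.
sample='[2018-11-21 05:28:07.813003] E [MSGID: 114031] [client-rpc-fops.c:1646:client3_3_entrylk_cbk] 0-gv1-client-0: remote operation failed [Transport endpoint is not connected]'

# Extract the message ID; on a full log, append "| sort | uniq -c | sort -rn"
# to count how often each MSGID occurs.
printf '%s\n' "$sample" | grep -o 'MSGID: [0-9]*'
```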
When the log hits
Hi Marcus,
/var/log/glusterfs/snaps/urd-gds-volume/snapd.log is the log file of the
snapview daemon that is mainly used for user serviceable snapshots. Are you
using that feature? i.e. are you accessing the snapshots of your volume
from the main volume's mount point?
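If user serviceable snapshots are not being used on purpose, the feature behind snapd can be inspected and toggled through the volume option features.uss. A sketch, assuming the volume name from the log path above (verify the current value before changing anything):

```shell
# Check whether user-serviceable snapshots (the snapview daemon) are
# enabled for this volume; the volume name is taken from the log path
# in the message above.
gluster volume get urd-gds-volume features.uss

# If the feature is not needed, it can be disabled, which stops snapd
# for this volume:
# gluster volume set urd-gds-volume features.uss disable
```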
A few other pieces of information that
We'll be in #gluster-meeting on freenode at 15:00 UTC on Wednesday, Nov
21st.
https://bit.ly/gluster-community-meetings has the agenda, feel free to add!
- amye
--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
reply inline.
On Tue, Nov 20, 2018 at 3:53 PM Gudrun Mareike Amedick
wrote:
Hi,
I think I know what happened. According to the logs, the crawlers received a
SIGTERM (signal 15). They seem to have died before finishing, probably because
there was too much to do simultaneously. I have disabled and re-enabled quota and will set
the quotas again with more time.
Is there a way to restart a
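For reference, the disable/re-enable cycle described above maps onto the gluster quota CLI roughly as follows. This is a sketch; the volume name, path, and limit are placeholders, not values from this thread:

```shell
# Re-initialize quota on a volume; the quota crawler restarts on enable.
# VOL, /projects, and 500GB are placeholders.
VOL=myvol
gluster volume quota "$VOL" disable
gluster volume quota "$VOL" enable

# Re-apply limits afterwards, giving the crawler time to finish:
gluster volume quota "$VOL" limit-usage /projects 500GB
```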
Hello Ravi,
I am using Gluster v4.1.5 and have a replica 4 volume. This is the info:
Volume Name: testv1
Type: Replicate
Volume ID: a5b2d650-4e93-4334-94bb-3105acb112d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: