[Gluster-users] quotad error log warnings repeated

2019-02-06 Thread mabi
Hello, I am running a 3-node GlusterFS 4.1.6 cluster (with arbiter) with one replicated volume on which I have quotas enabled. Checking the quotad.log file on one of the nodes, I see the following warning message repeated many times: The message "W [MSGID: 101016]
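For reference, a minimal sketch of the setup and log check being described, assuming a hypothetical volume name "myvol" and the default log location:

  # enable quotas on the replicated volume and list the configured limits
  gluster volume quota myvol enable
  gluster volume quota myvol list

  # follow the quota daemon log on one node and filter for the repeated warning
  tail -f /var/log/glusterfs/quotad.log | grep 'MSGID: 101016'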

Re: [Gluster-users] Getting timed-out error while rebalancing

2019-02-06 Thread deepu srinivasan
Hi Nithya, We have a test Gluster setup and are testing the rebalance option of Gluster. We started a volume with a 1x3 brick layout and some data on it. Command: gluster volume create test-volume replica 3 192.168.xxx.xx1:/home/data/repl 192.168.xxx.xx2:/home/data/repl
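As a rough sketch of the rebalance workflow being tested (the brick addresses beyond the truncated command are placeholders, not taken from the original mail):

  # expand the replica-3 volume by one more replica set, then rebalance
  gluster volume add-brick test-volume \
    192.168.xxx.xx4:/home/data/repl 192.168.xxx.xx5:/home/data/repl 192.168.xxx.xx6:/home/data/repl
  gluster volume rebalance test-volume start
  gluster volume rebalance test-volume status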

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-06 Thread Artem Russakovskii
Hi Nithya, Indeed, I upgraded from 4.1 to 5.3, at which point I started seeing crashes, and no further releases have been made yet. volume info: Type: Replicate, Volume ID: SNIP, Status: Started, Snapshot Count: 0, Number of Bricks: 1 x 4 = 4, Transport-type: tcp, Bricks: Brick1: SNIP

Re: [Gluster-users] Corrupted File readable via FUSE?

2019-02-06 Thread David Spisla
Hello Raghavendra, I cannot give you the output of the gluster commands because I have already repaired the system. Apart from that, these errors occur randomly. I am sure that only one copy of the file was corrupted, because this is part of a test in which I corrupt one copy of the file manually on brick
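The checks that would normally provide that output (and that can no longer be run here, since the system was repaired) look roughly like this, with "myvol" as a placeholder volume name:

  # inspect self-heal state after one brick copy has been corrupted
  gluster volume heal myvol info
  gluster volume heal myvol info split-brain

  # if bitrot detection is enabled on the volume, the scrubber status can also be checked
  gluster volume bitrot myvol scrub status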

[Gluster-users] gluster client 4.1 memory leak

2019-02-06 Thread Richard Neuboeck
Hi Gluster-Group, I've stumbled upon a memory leak in the gluster client 4.1. It manifests itself the same way the last one [1] did in 3.12: memory consumption of the glusterfs process climbs until the system is out of memory and the process gets killed. Excerpt from the system log: kernel: Out of
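A generic way to record the climbing memory usage of the client process over time (not part of the original report, just a sketch):

  # log RSS/VSZ of the glusterfs client process once a minute
  while true; do
    date
    ps -C glusterfs -o pid,rss,vsz,cmd
    sleep 60
  done >> /var/tmp/glusterfs-mem.log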

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
On Wed, 6 Feb 2019 at 14:34, Hu Bert wrote: > Hi there, > > just curious - from man mount.glusterfs: > >lru-limit=N > Set fuse module's limit for number of inodes kept in LRU > list to N [default: 0] > Sorry, that is a bug in the man page and we will fix that. The current

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Hu Bert
Hi there, just curious - from man mount.glusterfs: lru-limit=N Set fuse module's limit for number of inodes kept in LRU list to N [default: 0] This seems to be the default already? Set it explicitly? Regards, Hubert On Wed., 6 Feb 2019 at 09:26, Nithya

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-02-06 Thread Hu Bert
Hey there, just a little update... This week we switched from our 3 "old" gluster servers to 3 new ones, and with that we threw some hardware at the problem... old: 3 servers, each has 4 * 10 TB disks; each disk is used as a brick -> 4 x 3 = 12 distribute-replicate. new: 3 servers, each has 10 *
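As an illustration of the old "each disk is a brick" layout, a 4 x 3 distribute-replicate volume is created by listing the bricks so that each consecutive group of three (one per server) forms a replica set; hostnames and paths here are placeholders:

  # hypothetical 4 x 3 = 12 brick distribute-replicate layout, one brick per disk
  gluster volume create bigvol replica 3 \
    srv1:/bricks/disk1 srv2:/bricks/disk1 srv3:/bricks/disk1 \
    srv1:/bricks/disk2 srv2:/bricks/disk2 srv3:/bricks/disk2 \
    srv1:/bricks/disk3 srv2:/bricks/disk3 srv3:/bricks/disk3 \
    srv1:/bricks/disk4 srv2:/bricks/disk4 srv3:/bricks/disk4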

Re: [Gluster-users] Getting timed-out error while rebalancing

2019-02-06 Thread Atin Mukherjee
On Tue, Feb 5, 2019 at 8:43 PM Nithya Balachandran wrote: > > > On Tue, 5 Feb 2019 at 17:26, deepu srinivasan wrote: > >> HI Nithya >> We have a test gluster setup.We are testing the rebalancing option of >> gluster. So we started the volume which have 1x3 brick with some data on it >> . >>

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
Hi, The client logs indicate that the mount process has crashed. Please try mounting the volume with the mount option lru-limit=0 and see if it still crashes. Thanks, Nithya On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote: > Good morning, > > we currently transfer some data to a new glusterfs
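For reference, mounting with that option could look like this (server, volume and mount point are placeholders):

  # FUSE mount with the suggested lru-limit option
  mount -t glusterfs -o lru-limit=0 server1:/myvol /mnt/myvol

  # or the equivalent /etc/fstab entry
  # server1:/myvol  /mnt/myvol  glusterfs  defaults,lru-limit=0  0 0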

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-06 Thread Nithya Balachandran
Hi Artem, Do you still see the crashes with 5.3? If yes, please try mounting the volume using the mount option lru-limit=0 and see if that helps. We are looking into the crashes and will update when we have a fix. Also, please provide the gluster volume info for the volume in question. Regards,
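The requested information can be gathered with the standard CLI, for example (volume name is a placeholder):

  # print the configuration of the affected volume
  gluster volume info myvol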