Hello,
I am running a 3-node GlusterFS 4.1.6 cluster (with arbiter) with one
replicated volume on which quotas are enabled.
I checked the quotad.log file on one of the nodes and see the following
warning message repeated very frequently:
The message "W [MSGID: 101016]
HI Nithya
We have a test Gluster setup and are testing Gluster's rebalancing option.
We started a volume with a 1x3 brick layout and put some data on it.
command: gluster volume create test-volume replica 3
192.168.xxx.xx1:/home/data/repl 192.168.xxx.xx2:/home/data/repl
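The rest of the test then follows the usual add-brick/rebalance sequence,
roughly like this (the extra host names and brick paths are only placeholders;
bricks are added in multiples of the replica count, which turns the 1x3 volume
into a 2x3 distribute-replicate):
  # add one more replica set of three bricks to the volume
  gluster volume add-brick test-volume replica 3 192.168.xxx.xx4:/home/data/repl 192.168.xxx.xx5:/home/data/repl 192.168.xxx.xx6:/home/data/repl
  # spread the existing data across both replica sets
  gluster volume rebalance test-volume start
  # check progress
  gluster volume rebalance test-volume status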
Hi Nithya,
Indeed, I upgraded from 4.1 to 5.3, at which point I started seeing
crashes, and no further releases have been made yet.
volume info:
Type: Replicate
Volume ID: SNIP
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: SNIP
Hello Raghavendra,
I cannot give you the output of the gluster commands because I have already
repaired the system. Apart from that, these errors occur randomly. I am
sure that only one copy of the file was corrupted, because this is part of a
test and I corrupted one copy of the file manually on brick
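To see whether Gluster notices the bad copy, I go through the usual self-heal
commands, roughly like this (the volume name is a placeholder; a file changed
directly on the brick may only be picked up after a full heal crawl):
  # list files/entries that still need healing
  gluster volume heal myvolume info
  # force a full self-heal crawl over the whole volume
  gluster volume heal myvolume full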
Hi Gluster-Group,
I've stumbled upon a memory leak in the gluster client 4.1. It
manifests itself the same way the last one [1] did in 3.12. Memory
consumption of the glusterfs process climbs until the system is out
of memory and the process gets killed.
Excerpt from the system log:
kernel: Out of
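For what it is worth, the way I have been watching the growth and collecting
data is to note the client's resident memory and trigger statedumps via
SIGUSR1, roughly like this (the PID is a placeholder; the dump is written to
the directory printed by the first command):
  # directory where statedumps are written (usually /var/run/gluster)
  gluster --print-statedumpdir
  # find the glusterfs fuse client process for the mount
  pgrep -af glusterfs
  # note its memory usage
  ps -o pid,vsz,rss,cmd -p <PID>
  # ask the client to write a statedump
  kill -USR1 <PID>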
On Wed, 6 Feb 2019 at 14:34, Hu Bert wrote:
> Hi there,
>
> just curious - from man mount.glusterfs:
>
> lru-limit=N
> Set fuse module's limit for number of inodes kept in LRU
> list to N [default: 0]
>
Sorry, that is a bug in the man page and we will fix that. The current
Hi there,
just curious - from man mount.glusterfs:
lru-limit=N
Set fuse module's limit for number of inodes kept in LRU
list to N [default: 0]
This seems to be the default already? Set it explicitly?
Regards,
Hubert
On Wed, 6 Feb 2019 at 09:26, Nithya wrote:
Hey there,
just a little update...
This week we switched from our 3 "old" gluster servers to 3 new ones,
and with that we threw some hardware at the problem...
old: 3 servers, each has 4 * 10 TB disks; each disk is used as a brick
-> 4 x 3 = 12 distribute-replicate
new: 3 servers, each has 10 *
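For anyone wondering what the 4 x 3 = 12 layout looks like on the command
line, the create command is roughly the following (volume name, host names and
brick paths are made up; consecutive groups of three bricks form the replica
sets):
  gluster volume create sharevol replica 3 \
    srv1:/gluster/brick1 srv2:/gluster/brick1 srv3:/gluster/brick1 \
    srv1:/gluster/brick2 srv2:/gluster/brick2 srv3:/gluster/brick2 \
    srv1:/gluster/brick3 srv2:/gluster/brick3 srv3:/gluster/brick3 \
    srv1:/gluster/brick4 srv2:/gluster/brick4 srv3:/gluster/brick4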
On Tue, Feb 5, 2019 at 8:43 PM Nithya Balachandran wrote:
>
>
> On Tue, 5 Feb 2019 at 17:26, deepu srinivasan wrote:
>
>> HI Nithya
>> We have a test Gluster setup and are testing Gluster's rebalancing option.
>> We started a volume with a 1x3 brick layout and put some data on it.
>>
Hi,
The client log indicates that the mount process has crashed.
Please try mounting the volume with the mount option lru-limit=0 and see
if it still crashes.
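Something along these lines should do it (server name, volume name and mount
point are only placeholders):
  # mount with the lru-limit option set explicitly
  mount -t glusterfs -o lru-limit=0 server1:/myvolume /mnt/myvolume
  # or, for a permanent mount, an fstab entry such as:
  # server1:/myvolume  /mnt/myvolume  glusterfs  defaults,lru-limit=0  0 0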
Thanks,
Nithya
On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote:
> Good morning,
>
> we currently transfer some data to a new glusterfs
Hi Artem,
Do you still see the crashes with 5.3? If yes, please try mounting the volume
using the mount option lru-limit=0 and see if that helps. We are looking
into the crashes and will update when we have a fix.
Also, please provide the gluster volume info for the volume in question.
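If it helps, that output can be collected with (the volume name is a
placeholder):
  gluster volume info myvolume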
regards,