Re: [Gluster-users] gluster 3.12.8 fuse consume huge memory

2018-08-31 Thread huting3
If I just mount the client, the memory will not rise. But when I read and write a lot of files (billions), the client consumes a huge amount of memory. You can check it this way.
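
One way to see the growth is to watch the resident memory of the fuse client while such a small-file workload runs, and to capture a statedump for the developers; a minimal sketch, assuming a glusterfs fuse mount on the test machine:

    # resident memory of the fuse client process(es)
    ps -o pid,rss,cmd -C glusterfs
    # trigger a statedump of the client (written under /var/run/gluster/ by default)
    kill -USR1 <client PID>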

Re: [Gluster-users] Transport endpoint is not connected : issue

2018-08-31 Thread Atin Mukherjee
Can you please pass along all the gluster log files from the server where the “transport endpoint is not connected” error is reported? As restarting glusterd didn’t solve this issue, I believe this isn’t a stale port problem but something else. Also please provide the output of ‘gluster v info ’ (@cc Ravi,
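
For reference, a minimal way to gather what is being asked for here, with VOLNAME as a placeholder, could be:

    gluster volume info VOLNAME
    gluster volume status VOLNAME
    # bundle the logs from the affected server
    tar czf gluster-logs.tar.gz /var/log/glusterfs/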

[Gluster-users] Transport endpoint is not connected : issue

2018-08-31 Thread Johnson, Tim
Hello all, We have gluster replicate (with arbiter) volumes that report “Transport endpoint is not connected” on a rotating basis from each of the two file servers and from a third host that holds the arbiter bricks. This is happening when trying to run a heal on all the
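
The heal invocation that triggers the error is presumably something along these lines, iterated over all volumes:

    for vol in $(gluster volume list); do
        gluster volume heal "$vol"
        gluster volume heal "$vol" info
    done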

[Gluster-users] Bug with hardlink limitation in 3.12.13 ?

2018-08-31 Thread Reiner Keller
Hello, On 31.08.2018 at 13:59, Shyam Ranganathan wrote: > I suspect you have hit this: > https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c5 > > I further suspect your older setup was 3.10 based and not 3.12 based. > > There is an additional feature added in 3.12 that stores GFID to path >
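
If the referenced bug is indeed the cause, the GFID-to-path feature mentioned above can be inspected and, as a possible workaround, disabled; a sketch assuming the 3.12 option name storage.gfid2path and VOLNAME as a placeholder:

    gluster volume get VOLNAME storage.gfid2path
    # only affects newly created links; existing xattrs remain on the bricks
    gluster volume set VOLNAME storage.gfid2path off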

Re: [Gluster-users] gluster 3.12.8 fuse consume huge memory

2018-08-31 Thread Darrell Budic
I’m not seeing any leaks myself; I’ve been on 3.12.13 for about 38 hours now and memory is still small. Did you restart that node, or at least put it into maintenance (if it’s oVirt), to be sure the glusterfs processes were restarted after updating? That’s a lot of run time unless it’s really busy, so figured I’d
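
For completeness, one way to make sure every gluster process on an updated node really picked up the new binaries (assuming it is acceptable to take that node's bricks offline briefly):

    systemctl stop glusterd
    pkill glusterfs            # self-heal daemon and any local fuse mounts
    pkill glusterfsd           # brick processes
    systemctl start glusterd   # respawns bricks and the self-heal daemon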

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-31 Thread Davide Obbi
It didn't make a difference. I will try to re-configure with a 2x3 config. On Fri, Aug 31, 2018 at 1:48 PM Raghavendra Gowdappa wrote: > another relevant option is setting cluster.lookup-optimize on. > > On Fri, Aug 31, 2018 at 3:22 PM, Davide Obbi > wrote: > >> #gluster vol set VOLNAME group
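
The option suggested in the quoted reply, with VOLNAME as a placeholder:

    gluster volume set VOLNAME cluster.lookup-optimize on
    gluster volume get VOLNAME cluster.lookup-optimize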

Re: [Gluster-users] Bug with hardlink limitation in 3.12.13 ?

2018-08-31 Thread Shyam Ranganathan
On 08/31/2018 07:15 AM, Reiner Keller wrote: > Hello, > > Yesterday I got an unexpected "No space left on device" error on my new > gluster volume, caused by too many hardlinks. > This happened while I was doing an "rsync --aAHXxv ..." replication from the old > gluster to the new gluster servers - each running

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-31 Thread Raghavendra Gowdappa
Another relevant option is setting cluster.lookup-optimize on. On Fri, Aug 31, 2018 at 3:22 PM, Davide Obbi wrote: > #gluster vol set VOLNAME group nl-cache --> didn't know there are groups > of options; after this command I got the following set: > performance.nl-cache-timeout: 600 >

[Gluster-users] Bug with hardlink limitation in 3.12.13 ?

2018-08-31 Thread Reiner Keller
Hello, Yesterday I got an unexpected "No space left on device" error on my new gluster volume, caused by too many hardlinks. This happened while I was doing an "rsync --aAHXxv ..." replication from the old gluster to the new gluster servers - each running the latest version 3.12.13 (for changing volume schema from 2x2
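
To see how much extended-attribute metadata each hardlink carries on the backend (relevant to the ENOSPC above), the xattrs of an affected file can be dumped directly on a brick; a sketch, assuming a hypothetical brick path /bricks/brick1:

    # run as root on a server, against the brick copy of the file, not the fuse mount
    getfattr -d -m . -e hex /bricks/brick1/path/to/hardlinked-file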

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-31 Thread Davide Obbi
#gluster vol set VOLNAME group nl-cache --> didn't know there are groups of options; after this command I got the following set: performance.nl-cache-timeout: 600 performance.nl-cache: on performance.parallel-readdir: on performance.io-thread-count: 64 network.inode-lru-limit: 20 to note that
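
A quick way to confirm which options the group profile actually applied is to filter the volume's effective settings (VOLNAME is a placeholder):

    gluster volume get VOLNAME all | grep -E 'nl-cache|parallel-readdir|io-thread|inode-lru'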

Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty

2018-08-31 Thread Kotresh Hiremath Ravishankar
Hi Marcus, Could you attach the full logs? Is the same traceback happening repeatedly? It will be helpful if you attach the corresponding mount log as well. What rsync version are you using? Thanks, Kotresh HR On Fri, Aug 31, 2018 at 12:16 PM, Marcus Pedersén wrote: > Hi all, > > I had
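
The pieces asked for here can be collected roughly like this on the faulty master node (the exact log directory names depend on the volume and slave names):

    rsync --version
    # geo-replication worker logs and the corresponding mount logs on the master side
    ls /var/log/glusterfs/geo-replication/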

Re: [Gluster-users] Geo-Replication Faulty.

2018-08-31 Thread Tiemen Ruiten
I had the same issue a few weeks ago and found this thread, which helped me resolve it: https://lists.gluster.org/pipermail/gluster-users/2018-July/034465.html On 28 August 2018 at 08:50, Krishna Verma wrote: > Hi All, > > > > I need help setting up geo-replication as it's going faulty. > > > >

[Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty

2018-08-31 Thread Marcus Pedersén
Hi all, I had problems with sync stopping after the upgrade to 4.1.2. I upgraded to 4.1.3 and it ran fine for one day, but now one of the master nodes shows Faulty. Most of the sync jobs have return code 23; how do I resolve this? I see messages like: _GMaster: Sucessfully fixed all entry ops
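
rsync exit code 23 means "partial transfer due to error"; a per-worker view of which node is Faulty can be obtained with (volume and slave names below are placeholders):

    gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail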

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-31 Thread Hu Bert
Hi Pranith, I just wanted to ask if you were able to get any feedback from your colleagues :-) BTW: we migrated some stuff (static resources, small files) to an NFS server that we actually wanted to replace with glusterfs. Load and CPU usage have gone down a bit, but are still asymmetric on the 3
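
To check whether a heal backlog itself is what skews the load, the pending-heal counters per brick can be compared; a sketch with VOLNAME as a placeholder ("info summary" needs a recent enough release):

    gluster volume heal VOLNAME statistics heal-count
    gluster volume heal VOLNAME info summary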

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-31 Thread Krishna Verma
Hi Kotresh, I have tested geo-replication over distributed volumes with a 2x2 gluster setup. [root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-31 Thread Raghavendra Gowdappa
On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck wrote: > On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote: > > +Mohit. +Milind > > > > @Mohit/Milind, > > > > Can you check the logs and see whether you can find anything relevant? > > From glancing at the system logs, nothing out of the ordinary >
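
When hunting for the interruption on the client side, the fuse mount log is usually more telling than the system log; a sketch, assuming a mount at /mnt/gluster (the log file name is the mount path with slashes turned into dashes):

    grep -iE 'disconnect|ping timer|timed out' /var/log/glusterfs/mnt-gluster.log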