If I just mount the client, the memory does not rise. But when I read and write a lot of files (billions), the client consumes a huge amount of memory. You can check it this way.
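A generic way to watch this on a Linux host is to sample the client's resident set size from /proc (a minimal sketch; the helper name is made up, and in practice you would pass the PID of the glusterfs fuse process, e.g. from `pgrep -f 'glusterfs.*fuse'`):

```shell
#!/bin/sh
# Minimal sketch (assumes Linux /proc): print a process's resident memory in kB.
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Demonstrated here on the current shell; run it repeatedly against the
# glusterfs client PID while reading/writing files to see whether RSS grows.
rss_kb $$
```

Sampling this in a loop while the workload runs makes a steady climb (versus a plateau) easy to spot.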
Can you please pass along all the gluster log files from the server where the
"transport endpoint not connected" error is reported? As restarting glusterd
didn't solve this issue, I believe this isn't a stale port problem but
something else. Also, please provide the output of 'gluster v info'.
(@cc Ravi,
Hello all,
We have gluster replicate (with arbiter) volumes that are getting
"Transport endpoint is not connected" errors on a rotating basis from each of
the two file servers and from a third host that holds the arbiter bricks.
This is happening when trying to run a heal on all the
Hello,
On 31.08.2018 at 13:59, Shyam Ranganathan wrote:
> I suspect you have hit this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c5
>
> I further suspect your older setup was 3.10 based and not 3.12 based.
>
> There is an additional feature added in 3.12 that stores GFID to path
>
I'm not seeing any leaks myself; I've been on 3.12.13 for about 38 hours now, and it's still
small.
You did restart that node, or at least put it into maintenance (if it's oVirt),
to be sure you restarted the glusterfs processes after updating? That's a lot
of run time unless it's really busy, so I figured I'd
it didn't make a difference. I will try to re-configure with a 2x3 config
On Fri, Aug 31, 2018 at 1:48 PM Raghavendra Gowdappa
wrote:
> another relevant option is setting cluster.lookup-optimize on.
>
> On Fri, Aug 31, 2018 at 3:22 PM, Davide Obbi
> wrote:
>
>> #gluster vol set VOLNAME group
On 08/31/2018 07:15 AM, Reiner Keller wrote:
> Hello,
>
> Yesterday I got an unexpected error, "No space left on device", on my new
> gluster volume, caused by too many hardlinks.
> This happened while I was doing "rsync -aAHXxv ..." replication from the old
> gluster to the new gluster servers - each running
another relevant option is setting cluster.lookup-optimize on.
On Fri, Aug 31, 2018 at 3:22 PM, Davide Obbi
wrote:
> #gluster vol set VOLNAME group nl-cache --> I didn't know there are groups
> of options; after this command the following got set:
> performance.nl-cache-timeout: 600
>
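For reference, the option named above is set per volume; a sketch of the commands (VOLNAME is a placeholder, and this is a cluster configuration fragment, not something runnable outside a gluster pool):

```shell
# Enable lookup-optimize on a volume (run on any server in the trusted pool).
gluster volume set VOLNAME cluster.lookup-optimize on
# Verify the effective value:
gluster volume get VOLNAME cluster.lookup-optimize
```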
Hello,
Yesterday I got an unexpected error, "No space left on device", on my new
gluster volume, caused by too many hardlinks.
This happened while I was doing "rsync -aAHXxv ..." replication from the old
gluster to the new gluster servers - each running the latest version, 3.12.13
(for changing the volume schema from 2x2
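One generic thing worth checking when ENOSPC appears despite free bytes (a hedged suggestion, not necessarily the cause in this thread) is inode exhaustion on the brick filesystem:

```shell
# Compare inode usage against block usage on the brick mount point
# (/ is used here as a stand-in path; substitute your brick directory).
df -i /
df -h /
```

If `df -i` shows IUse% at 100 while `df -h` still shows free space, the filesystem has run out of inodes rather than bytes.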
#gluster vol set VOLNAME group nl-cache --> I didn't know there are groups of
options; after this command the following got set:
performance.nl-cache-timeout: 600
performance.nl-cache: on
performance.parallel-readdir: on
performance.io-thread-count: 64
network.inode-lru-limit: 20
to note that
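A hedged sketch of the group command quoted above, plus how to inspect what it set (VOLNAME is a placeholder; this is a cluster configuration fragment, not runnable outside a gluster pool):

```shell
# Apply the predefined nl-cache option group to a volume in one command.
gluster volume set VOLNAME group nl-cache
# Inspect individual options the group touched:
gluster volume get VOLNAME performance.nl-cache
gluster volume get VOLNAME performance.nl-cache-timeout
gluster volume get VOLNAME network.inode-lru-limit
```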
Hi Marcus,
Could you attach the full logs? Is the same traceback happening repeatedly? It
would be helpful if you attach the corresponding mount log as well.
What's the rsync version you are using?
Thanks,
Kotresh HR
On Fri, Aug 31, 2018 at 12:16 PM, Marcus Pedersén
wrote:
> Hi all,
>
> I had
I had the same issue a few weeks ago and found this thread which helped me
resolve it:
https://lists.gluster.org/pipermail/gluster-users/2018-July/034465.html
On 28 August 2018 at 08:50, Krishna Verma wrote:
> Hi All,
>
>
>
> I need help setting up geo-replication as it keeps going faulty.
>
>
>
>
Hi all,
I had problems with sync stopping after the upgrade to 4.1.2.
I upgraded to 4.1.3 and it ran fine for one day, but now one of the master
nodes shows faulty.
Most of the sync jobs have return code 23; how do I resolve this?
I see messages like:
_GMaster: Sucessfully fixed all entry ops
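For context on "return code 23": rsync documents it as a partial transfer due to error, so some files failed to sync. A small sketch (the helper name is made up) mapping the common codes per rsync(1):

```shell
#!/bin/sh
# Map common rsync exit codes to their documented meanings
# (rsync(1), EXIT VALUES section).
rsync_code_meaning() {
  case "$1" in
    0)  echo "success" ;;
    23) echo "partial transfer due to error" ;;
    24) echo "partial transfer due to vanished source files" ;;
    30) echo "timeout in data send/receive" ;;
    *)  echo "see rsync(1) EXIT VALUES" ;;
  esac
}

rsync_code_meaning 23
```

With code 23, the per-file errors in the worker's rsync output (or the geo-replication logs) usually point at the specific paths that failed.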
Hi Pranith,
I just wanted to ask if you were able to get any feedback from your
colleagues :-)
BTW: we migrated some stuff (static resources, small files) to an NFS
server that we actually wanted to replace with glusterfs. Load and CPU
usage have gone down a bit, but are still asymmetric on the 3
Hi Kotresh,
I have tested geo-replication over distributed volumes with a 2x2 gluster
setup.
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist
gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE
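The status command that produces the (truncated) output above; `status detail` adds per-worker counters. The volume names are taken from the quoted session, and this is a cluster command fragment rather than something runnable standalone:

```shell
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status detail
```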
On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck
wrote:
> On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote:
> > +Mohit. +Milind
> >
> > @Mohit/Milind,
> >
> > Can you check logs and see whether you can find anything relevant?
>
> From glancing at the system logs, nothing out of the ordinary
>