Re: [Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-12-01 Thread Andrea Tartaglia
Raised it on Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1169331 - I'll keep track of it. Thanks, Andrea. Venky Shankar (yknev.shan...@gmail.com) wrote on 30 November 2014 01:43: The secondary node (passive replica) collects changelogs in .processing as the primary node (first replica)

Re: [Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-11-29 Thread Venky Shankar
The secondary node (passive replica) collects changelogs in .processing while the primary node (first replica) performs the synchronization. On a replica failover, the passive replica (now active) starts where the primary left off. This _overload_ of changelog backlog in the passive node is the cause of
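As a purely illustrative aside (not GlusterFS code, and the class, directory, and file names are made up), the behaviour Venky describes could be sketched roughly like this: while passive, the replica only accumulates changelogs under .processing, so files and inodes pile up; on failover it resumes from the point where the primary left off and drains the backlog.

    import os

    class PassiveReplica:
        """Toy model: accumulate changelogs while passive, drain on failover."""

        def __init__(self, working_dir):
            self.processing_dir = os.path.join(working_dir, ".processing")
            os.makedirs(self.processing_dir, exist_ok=True)

        def collect(self, changelog_name, payload):
            # While passive: only store the changelog. Nothing consumes these
            # files, which is why the backlog (and inode usage) keeps growing.
            with open(os.path.join(self.processing_dir, changelog_name), "w") as f:
                f.write(payload)

        def failover(self, last_synced):
            # On becoming active: replay only changelogs newer than the one the
            # primary is known to have synced, then remove them. Changelog names
            # are assumed to sort chronologically for this sketch.
            for name in sorted(os.listdir(self.processing_dir)):
                if name <= last_synced:
                    continue  # primary already synced this changelog
                path = os.path.join(self.processing_dir, name)
                # ... replay the changelog against the slave here ...
                os.remove(path)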

[Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-11-12 Thread Andrea Tartaglia
Hi guys, I've got a geo-rep setup which copies data across 3 DCs. It works fine, but I've just spotted that both the master and slave servers (the main ones) ran out of inodes. Looking through the directories which have lots of files, the
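(A generic diagnostic, not part of the original mail: one way to confirm which directories are eating the inodes, e.g. the geo-rep working directories discussed later in the thread, is a quick per-directory file count. The root path passed in is whatever directory you suspect; nothing here is GlusterFS-specific.)

    import os
    import sys

    def count_entries(root):
        """Count files under each subdirectory of root to spot inode hogs."""
        totals = {}
        for dirpath, dirnames, filenames in os.walk(root):
            totals[dirpath] = len(filenames)
        return totals

    if __name__ == "__main__":
        totals = count_entries(sys.argv[1])
        print("total files (approx. inodes): %d" % sum(totals.values()))
        for path, n in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
            print("%8d  %s" % (n, path))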

Re: [Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-11-12 Thread Venky Shankar
It's safe to purge everything under .processed. That's what geo-rep has already replicated, so it's OK to delete it. Also, consider purging these entries periodically, as geo-rep doesn't purge them on its own (at least for now). Venky On Thu, Nov 13, 2014 at 3:27 AM, Andrea Tartaglia
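(Not from the original thread: a minimal sketch of how one might act on Venky's suggestion and purge .processed periodically, e.g. from cron. The path and the retention window are placeholders; point it at the session's .processed directory and adjust keep_days to taste.)

    import os
    import sys
    import time

    def purge_processed(processed_dir, keep_days=7):
        """Delete already-replicated changelog files older than keep_days."""
        cutoff = time.time() - keep_days * 86400
        removed = 0
        for name in os.listdir(processed_dir):
            path = os.path.join(processed_dir, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed += 1
        return removed

    if __name__ == "__main__":
        # sys.argv[1] is a placeholder: the geo-rep session's .processed directory.
        print("purged %d processed changelogs" % purge_processed(sys.argv[1]))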