All '.processed' directories (under working_dir and working_dir/.history)
contain changelogs that have already been processed and are no longer
required by geo-replication, except for debugging purposes. Those
directories can be cleaned up if they're consuming too much space.
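If it helps, here is a minimal cleanup sketch in Python; the
working_dir path is whatever your geo-rep session is configured with,
passed here as an argument:

    import os
    import shutil
    import sys

    # Pass the geo-replication working_dir as the first argument;
    # the actual path depends on your session configuration.
    working_dir = sys.argv[1]

    for root, dirs, _files in os.walk(working_dir):
        if ".processed" in dirs:
            target = os.path.join(root, ".processed")
            print("removing", target)
            shutil.rmtree(target)
            dirs.remove(".processed")  # don't descend into the removed tree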
On Wed, Feb 12, 2020 at 11:36 PM Sunny Kumar
Thanks for the update, Erik! Yes, an update on fixed issues really
helps others who hit the same problem.
-Amar
On Thu, Feb 13, 2020 at 11:54 PM Erik Jacobson wrote:
While it's still early, our testing shows this issue fixed in
glusterfs 7.2 (we were at 416).
Closing the loop in case people search for this.
Erik
On Sun, Jan 26, 2020 at 12:04:00PM -0600, Erik Jacobson wrote:
> > One last reply to myself.
>
> One of the test cases my test scripts
Replication would be better, yes, but HA isn't a hard requirement, and
the most likely cause of losing a brick would be a power failure. In
that case we could stop the entire file system and then bring the brick
back up, should users complain about poor I/O performance.
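That contingency is only a couple of CLI calls; a minimal sketch in
Python, assuming a hypothetical volume name "archive" (gluster volume
stop/start are the standard commands):

    import subprocess

    VOLUME = "archive"  # hypothetical volume name

    # Stop the whole volume while the failed brick's server is down
    # (--mode=script skips the interactive y/n confirmation)...
    subprocess.run(["gluster", "--mode=script", "volume", "stop", VOLUME],
                   check=True)

    # ...then start it again once power is restored and the brick is back.
    subprocess.run(["gluster", "volume", "start", VOLUME], check=True)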
Could you share more about your
I was using an EC configuration of 16+4 with 40 servers; each server
had 68x10TB JBOD disks.
GlusterFS was mounted on 1000 Hadoop datanodes; we were using it as a
Hadoop archive.
When a disk failed we did not lose write speed, but read speed slowed
down far too much. I used glusterfs at the edge and it
Do not use EC with small files. You cannot tolerate losing a 300TB
brick; reconstruction will take ages. When I was using glusterfs, the
EC reconstruction speed was 10-15MB/sec. If you do not lose bricks
you will be ok.
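To put "ages" in numbers, a back-of-the-envelope estimate from the
figures above (Python):

    # Rough rebuild time for a lost 300TB brick at the observed
    # EC reconstruction speed of 10-15 MB/s.
    brick_bytes = 300e12
    for speed_mb_s in (10, 15):
        seconds = brick_bytes / (speed_mb_s * 1e6)
        print(f"{speed_mb_s} MB/s -> {seconds / 86400:.0f} days")
    # 10 MB/s -> 347 days; 15 MB/s -> 231 days

At those speeds a single lost 300TB brick takes the better part of a
year to reconstruct.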
On Thu, Feb 13, 2020 at 7:38 PM Douglas Duckworth wrote:
Hello
I am thinking of building a Gluster file system for archival data.
Initially it will start as a 6-brick dispersed volume, then expand to
distributed-dispersed as we increase capacity.
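A minimal sketch of that growth path in Python; the hostnames, brick
paths, and volume name "archive" are hypothetical, and 4+2 is just one
possible split for a 6-brick dispersed volume:

    import subprocess

    # Hypothetical hosts and brick paths; 4 data + 2 redundancy.
    bricks = [f"server{i}:/data/brick1" for i in range(1, 7)]
    subprocess.run(["gluster", "volume", "create", "archive",
                    "disperse", "6", "redundancy", "2"] + bricks,
                   check=True)
    subprocess.run(["gluster", "volume", "start", "archive"], check=True)

    # Later: adding bricks in multiples of the disperse count (6 here)
    # turns the volume into a distributed-dispersed one.
    new_bricks = [f"server{i}:/data/brick1" for i in range(7, 13)]
    subprocess.run(["gluster", "volume", "add-brick", "archive"]
                   + new_bricks, check=True)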
Since metadata in Gluster isn't centralized, it will eventually not
perform well at scale. So I am
Hi Renaud,
Thanks for your reply.
We are not using NFS here. We restarted all Gluster servers (one by
one) and most of the clients, but the problem persisted. Fortunately,
we were able to copy the data and carry on.
Cheers,
On Tue, 11 Feb 2020 at 13:58, Renaud Fortier wrote:
> Hi Alberto,