Not historically, but we are using bonding for replication between the
servers. It's been stable for at least 6 months, but it's possible that
one of the links in the bond is failing or something.
Would this type of restart be triggered by a loss of communication between
bricks in a replica set?
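One quick way to test the "failing link in the bond" theory is to look at the per-slave link-failure counters the kernel bonding driver exposes. The sketch below parses a sample of that format; on a live server you would point it at /proc/net/bonding/bond0 (bond0 is a hypothetical interface name, and count_link_failures is just an illustrative helper):

```shell
# Sketch, assuming the standard /proc/net/bonding/<iface> status format.
# count_link_failures is a hypothetical helper, not a standard tool.
count_link_failures() {
  # Sum the "Link Failure Count" values across all slave interfaces.
  grep 'Link Failure Count' "$1" | awk '{ sum += $NF } END { print sum + 0 }'
}

# Sample of the bonding status format, for illustration only;
# on a real host, read /proc/net/bonding/bond0 instead.
cat > /tmp/bond_sample.txt <<'EOF'
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 3
EOF

count_link_failures /tmp/bond_sample.txt
```

A total that keeps climbing between checks would point at a flapping link rather than a Gluster-side problem.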
> We're running a fairly large 2-replica volume across two servers. The
> volume is approximately 20TB of small 1K-4MB files. The volume is exported
> via NFS, and mounted remotely by two clients.
> For the past few weeks the Gluster brick processes have been randomly
> restarting. Luckily they've been d

Initially I suspected server-quorum to be the culprit, but that is not the
case. By any chance, is your network flaky?
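If the network is the trigger, the brick logs should show disconnect messages around the restart times. The sketch below just counts disconnect-looking lines in a sample log; the log path and exact message wording vary by Gluster version, so treat both as assumptions (on a real server the logs typically live under /var/log/glusterfs/bricks/):

```shell
# Sketch: count disconnect-related lines in a brick log.
# find_disconnects is a hypothetical helper; the sample log lines below
# only approximate real Gluster log formatting.
find_disconnects() {
  grep -icE 'disconnect|connection reset' "$1"
}

cat > /tmp/brick_sample.log <<'EOF'
[2016-02-01 10:31:02] I [socket.c] connection established
[2016-02-01 10:32:45] W [socket.c] connection reset by peer
[2016-02-01 10:32:46] E [client.c] disconnected from data02-client-1
EOF

find_disconnects /tmp/brick_sample.log
```

If disconnects cluster right before each brick restart, that supports the flaky-network theory.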
On 02/01/2016 10:33 PM, Logan Barfield wrote:
> Volume Name: data02
> Type: Replicate
> Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
> Status: Started
> Number of Bricks: 1 x 2 = 2
Volume Name: data02
Type: Replicate
Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-stor01:/export/data/brick02 <-- 10.1.1.10
Brick2: gluster-stor02:/export/data/brick02 <-- 10.1.1.11
Options Reconfigured:
Could you paste the output of gluster volume info?
~Atin
On 01/29/2016 11:59 PM, Logan Barfield wrote:
> We're running a fairly large 2-replica volume across two servers. The
> volume is approximately 20TB of small 1K-4MB files. The volume is
> exported via NFS, and mounted remotely by two clients.
We're running a fairly large 2-replica volume across two servers. The
volume is approximately 20TB of small 1K-4MB files. The volume is exported
via NFS, and mounted remotely by two clients.
For the past few weeks the Gluster brick processes have been randomly
restarting. Luckily they've been d
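Since the restarts are intermittent, it can help to snapshot the brick PIDs periodically so a later diff reveals exactly when a brick process was replaced. The sketch below parses a sample mimicking 'gluster volume status' output; the column layout is an assumption and may differ slightly between Gluster versions, and extract_brick_pids is just an illustrative helper:

```shell
# Sketch: pull brick PIDs out of a 'gluster volume status'-style dump.
# extract_brick_pids is a hypothetical helper; the sample text only
# approximates the real command's output format.
extract_brick_pids() {
  # Brick lines end with the process PID in the last column.
  awk '/^Brick/ { print $2, $NF }' "$1"
}

# Sample output for illustration; on a real server you would run
# 'gluster volume status data02' instead.
cat > /tmp/status_sample.txt <<'EOF'
Status of volume: data02
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster-stor01:/export/data/brick02   49152   Y       12345
Brick gluster-stor02:/export/data/brick02   49152   Y       23456
EOF

extract_brick_pids /tmp/status_sample.txt
```

Logging this from cron and diffing successive snapshots gives precise restart timestamps to correlate against network events.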