ite performance problems. Can you try with 3.7.9?
>
> On Sat, Mar 26, 2016 at 7:52 PM, Andreas Mather wrote:
> > Hello!
> >
> > I experience very slow performance on gluster 3.7.8 on CentOS 7.2 when
> using
> > small block sizes with dd.
> >
Hello!
I experience very slow performance on gluster 3.7.8 on CentOS 7.2 when
using small block sizes with dd.
I run 2 gluster servers with 2 bricks each (each brick on a separate SSD)
and 2 gluster clients (which will run virtual machines). It's a completely
fresh setup, so there is no load on the system.
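For illustration, the kind of comparison I ran looks roughly like this (the
mount point /mnt/gluster and the file name are placeholders, not the actual
paths from my setup):

    # small block size: 1 GiB written in 4 KiB chunks (the slow case)
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=4k count=262144 conv=fsync
    # same amount of data written in 1 MiB chunks, for comparison
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync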
open handle on the file), which turned out not to be the case.
I ended up shutting down all VMs and restarting the server. Afterwards,
healing worked as expected.
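For anyone hitting the same thing, the checks boiled down to something like
this (the volume name vol1 and the brick path /bricks/brick1 are
placeholders, not my actual names):

    # show files the self-heal daemon still considers pending
    gluster volume heal vol1 info
    # verify no process on the server still holds the file open on the brick
    lsof /bricks/brick1/images/vm.img
    # kick off a heal of everything that needs it
    gluster volume heal vol1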
- Andreas
On Mon, Oct 5, 2015 at 1:01 PM, Anuradha Talur wrote:
>
>
> ----- Original Message -----
> > From: "
at 3:18 PM, Anuradha Talur wrote:
>
>
> ----- Original Message -----
> > From: "Andreas Mather"
> > To: "Gluster-users@gluster.org List"
> > Sent: Thursday, September 24, 2015 1:24:12 PM
> > Subject: [Gluster-users] gluster 3.7.3 - volume heal
Hi!
Our provider had network maintenance last night, so 2 of our 4 servers got
disconnected and reconnected. Since we knew this was coming, we shifted all
workload off the affected servers. This morning, most of the cluster seems
fine, but for one volume, no heal info can be retrieved, so we basi
ing 3.7.
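A minimal sketch of what "no heal info" looks like in practice, assuming the
affected volume is called vol2 (the name is a placeholder):

    # hangs or returns nothing for the affected volume
    gluster volume heal vol2 info
    # check whether the self-heal daemon is actually running on each node
    gluster volume status vol2
    # the glustershd log on each server usually says why gathering fails
    tail -n 50 /var/log/glusterfs/glustershd.log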
>
> -Alastair
>
>
> On 27 August 2015 at 10:04, Andreas Mather wrote:
>
>> Hi Humble!
>>
>> Thanks for the reply. The docs do not mention anything related to the
>> 3.6->3.7 upgrade that applies to my case.
>>
>>
thedocs.org/en/latest/Upgrade-Guide/README/
>
>
>
> --Humble
>
>
> On Thu, Aug 27, 2015 at 4:57 PM, Andreas Mather wrote:
>
>> Hi All!
>>
>> I wanted to do a rolling upgrade of gluster from 3.6.3 to 3.7.3, but
>> after the upgrade, the updated node won't connect.
Hi All!
I wanted to do a rolling upgrade of gluster from 3.6.3 to 3.7.3, but after
the upgrade, the updated node won't connect.
The cluster has 4 nodes (vhost[1-4]) and 4 volumes (vol[1-4]) with 2
replicas each:
vol1: vhost1/brick1, vhost2/brick2
vol2: vhost2/brick1, vhost1/brick2
vol3: vhost3/br
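A sketch of the basic checks on the upgraded node (the log and state file
paths are the CentOS defaults; adjust as needed):

    # is glusterd up, and does the upgraded node see its peers?
    systemctl status glusterd
    gluster peer status
    # an op-version mismatch between 3.6 and 3.7 nodes can cause peer
    # rejection; the current value is recorded in glusterd's state file
    grep operating-version /var/lib/glusterd/glusterd.info
    # the glusterd log normally names the rejecting peer and the reason
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log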
Hello!
I'm setting up a qemu/KVM cluster with glusterfs as shared storage. While
testing the cluster, I constantly hit a split-brain situation on VM image
files which I cannot explain.
Setup:
2 bare metal servers running glusterfs (replica 2), having the volume
mounted and one virtual machine whi
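A sketch of how such split-brain reports can be inspected, assuming the
volume is named vmstore and the brick lives under /bricks/brick1 (both
placeholders, not from the original message):

    # list the image files gluster currently reports as split-brain
    gluster volume heal vmstore info split-brain
    # dump the AFR changelog xattrs of a copy directly on a brick;
    # non-zero pending counters on both replicas indicate conflicting writes
    getfattr -d -m . -e hex /bricks/brick1/images/vm1.img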