On Sat, 30 Mar 2019 at 08:06, Vijay Bellur wrote:
>
> On Fri, Mar 29, 2019 at 6:42 AM Atin Mukherjee
> wrote:
>
>> All,
>>
>> As many of you already know, the design logic with which GlusterD
>> (hereafter referred to as GD1) was implemented has some fundamental
>> scalability bottlenecks at the design level, especially around its way
>> of handshaking
Hello,
Yes, I did find some hits on this in the following logs. We started seeing
failures after upgrading to 5.3 from 4.6. If you want me to check for
anything else, let me know. Thank you to everyone on the Gluster team for
finding and fixing that problem, whatever it was!
[root@lonbaknode3
Hello Nithya,
I removed several options that, I admit, I didn't quite understand and had
added from Google searches. It was unwise of me to add them in the first
place without understanding them.
One of these options was apparently causing directory listings to take about
7 seconds versus when I cut
On Fri, Mar 29, 2019, 10:03 PM Jim Kinney wrote:
> Currently running 3.12 on CentOS 7.6. Doing cleanup on split-brain and
> out-of-sync files that need healing.
>
> We need to migrate the three replica servers to Gluster v5 or v6. We will
> also need to upgrade about 80 clients. Given that a complete removal of
> gluster will not touch the 200+TB of data
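For readers doing similar split-brain cleanup on a replica volume, the heal subcommands of the gluster CLI cover the common cases. A minimal sketch; the volume name `gv0` and the file path are placeholders, not taken from the message above:

```shell
# List files currently in split-brain on the replica volume
gluster volume heal gv0 info split-brain

# Resolve one file by keeping the copy with the larger size
# (other policies: source-brick, latest-mtime)
gluster volume heal gv0 split-brain bigger-file /path/to/file

# Trigger a full self-heal sweep afterwards
gluster volume heal gv0 full
```

Resolving per file keeps the choice of "good" copy explicit; a blanket policy can silently discard the replica you wanted.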
All,
As many of you already know, the design logic with which GlusterD (hereafter
referred to as GD1) was implemented has some fundamental scalability
bottlenecks at the design level, especially around its way of handshaking
configuration metadata and replicating it across all the peers.
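To make the replication cost concrete: in the standard glusterd layout, every peer keeps a full local copy of the cluster configuration, so each change must be handshaked out to all N nodes. A sketch of where to see this on any one peer (paths assume the default install prefix):

```shell
# Every peer holds the complete cluster configuration locally
ls /var/lib/glusterd/vols/     # one directory per volume's config
ls /var/lib/glusterd/peers/    # one file per known peer

# The peer's view of the cluster; checksum mismatches reported here
# indicate a configuration handshake that did not converge
gluster peer status
gluster volume info
```

This per-peer full copy is the design choice the scalability discussion above is about.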
Hi,
I have added a few more pieces of info that were missed earlier.
The disconnect issue, being minor, is something we are working on at a lower
priority. But yes, it will be fixed soon.
The bug to track this is: https://bugzilla.redhat.com/show_bug.cgi?id=1694010
The workaround to get over this, if it happens, is to
On Fri, Mar 29, 2019 at 12:47 PM Krutika Dhananjay
wrote:
> Questions/comments inline ...
>
> On Thu, Mar 28, 2019 at 10:18 PM wrote:
>
>> Dear All,
>>
>> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
>> previous upgrades from 4.1 to 4.2 etc. went rather smoothly, this one
>> was a different experience. After first trying a test upgrade on a 3
Hi Raghavendra,
I'll try to gather the information you need, hopefully this weekend.
One thing I've done this week: deactivate performance.quick-read
(https://bugzilla.redhat.com/show_bug.cgi?id=1673058), which
(according to Munin) resulted in a massive drop in network traffic and a
slightly lower
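For anyone wanting to try the same mitigation, quick-read is a per-volume option toggled with `gluster volume set`. A sketch; the volume name `gv0` is a placeholder:

```shell
# Disable the quick-read translator implicated in
# https://bugzilla.redhat.com/show_bug.cgi?id=1673058
gluster volume set gv0 performance.quick-read off

# Confirm the option took effect
gluster volume get gv0 performance.quick-read

# Re-enable once running a release with the fix
gluster volume set gv0 performance.quick-read on
```

The option takes effect on clients without remounting, though already-cached reads may be served until the cache cycles.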
Hello Gluster users,
As you are all aware, glusterfs-6 is out. We would like to inform you that
we have spent a significant amount of time testing glusterfs-6 in upgrade
scenarios. We have done upgrade testing to glusterfs-6 from various
releases, including 3.12, 4.1 and 5.3.
As glusterfs-6 has