Weren't we supposed to get a new 4.1.x this month on the 20th?
In particular I am interested in the client memory leak fix, as I have a
VM cluster I need to put into production and don't want to immediately
turn around and do the upgrade.
Any word on that?
-wk
Personally, I'd like to see the glusterd service replaced by a k8s native
controller (named "kluster").
I'm hoping to use this vacation I'm currently on to write up a design doc.
On August 23, 2018 12:58:03 PM PDT, Michael Adam wrote:
>On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
>> Hi
On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
> Hi all,
Hi Vijay,
Thanks for announcing this to the public and making everyone
more aware of Gluster's focus on container storage!
I would like to add an additional perspective to this,
giving some background about the history and origins:
Hello,
did anyone ever manage to achieve reasonable waiting times while performing
metadata-intensive operations such as git clone, untar, etc.? Is this a
feasible workload, or will it never be in scope for glusterfs?
I'd also like to know, if possible, which options would affect such a
volume.
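Not an authoritative answer, but as a hedged sketch: the tuning knobs usually discussed for metadata-heavy (small-file) workloads are the md-cache and readdir options. The volume name `myvol` is a placeholder, and the exact option names and defaults should be verified against your Gluster version with `gluster volume set help`:

```shell
# Hedged sketch -- volume options commonly suggested for small-file /
# metadata-heavy workloads (git clone, untar). "myvol" is a placeholder;
# confirm each option exists in your release via `gluster volume set help`.
gluster volume set myvol features.cache-invalidation on
gluster volume set myvol features.cache-invalidation-timeout 600
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol performance.cache-invalidation on
gluster volume set myvol performance.md-cache-timeout 600
gluster volume set myvol network.inode-lru-limit 200000
gluster volume set myvol performance.parallel-readdir on
```

The general idea behind these settings is to cache stat/xattr metadata on the client and invalidate it via upcalls, rather than paying a network round trip per file; whether that is enough for git-clone-like workloads depends on the volume layout and network latency.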
On Wed, Aug 22, 2018 at 12:01 PM Hu Bert wrote:
> Just an addition: in general there are no log messages in
> /var/log/glusterfs/ (if you don't call 'gluster volume ...'), but on
> the node with the lowest load I see in cli.log.1:
>
> [2018-08-22 06:20:43.291055] I