> performance.read-ahead-page-count 16
> performance.stat-prefetch on
> server.event-threads 8 (default?)
> client.event-threads 8
>
> Any help given is appreciated!
>
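These options are normally applied per volume with the gluster CLI. A minimal sketch, assuming a volume named myvol (the volume name and the final check are assumptions; the option names and values are the ones quoted above):

gluster volume set myvol performance.read-ahead-page-count 16
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol server.event-threads 8
gluster volume set myvol client.event-threads 8
# read the values back to confirm
gluster volume get myvol all | grep -E 'event-threads|read-ahead-page-count|stat-prefetch'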
6 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
> Interesting table, Karan!
> Could you please tell us how you did the benchmark? fio or iozone
> or similar?
>
> thanks
> Arman.
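For what it's worth, a small-file benchmark of this kind is often run with fio against a FUSE mount of the volume; the sketch below uses an assumed mount point, block size, file size, and job count, not the parameters behind the table above:

fio --name=smallfile --directory=/mnt/glustervol --rw=randwrite \
    --bs=4k --size=64m --numjobs=8 --ioengine=libaio --group_reporting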
>
> On Wed, Sep 27, 2017 at 1:20 PM, Karan Sandha <ksan...@redhat.com> wrote:
>
>
Is this sufficient?
>
> I really don't want to do the geo-replication in its current form.
>
> Thanks
>
> CC
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster
Adding Rahul and Kothresh, who are SMEs on geo-replication.
Thanks & Regards
Karan Sandha
On Sat, Jul 29, 2017 at 3:37 PM, mabi <m...@protonmail.ch> wrote:
> Hello
>
> To my two-node replica volume I have added an arbiter node for safety
> purposes. On that volume I als
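For reference, converting an existing replica 2 volume to an arbiter setup is usually done with an add-brick of this shape (volume name, host, and brick path here are illustrative assumptions):

gluster volume add-brick myvol replica 3 arbiter 1 arbiterhost:/gluster/arbiter/brick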
gluster volume set <vol-name> client.event-threads 4
gluster volume start <vol-name>
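To confirm the change took effect, the option can be read back (a sketch; <vol-name> is again a placeholder):

gluster volume get <vol-name> client.event-threads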
Thanks & regards
Karan Sandha
On 05/09/2017 03:03 PM, Chiku wrote:
Hello,
I'm testing GlusterFS for a Windows client.
I created 2 servers for GlusterFS (3.10.1, replica 2) on CentOS 7.3.
Right now, I just use def
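For context, a two-server replica 2 volume like the one described is typically created along these lines (host names, volume name, and brick paths are assumptions for illustration):

gluster volume create winvol replica 2 transport tcp \
    server1:/gluster/brick1/data server2:/gluster/brick1/data
gluster volume start winvol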
gluster volume set <vol-name> server.event-threads 4 (default=2)
gluster volume set <vol-name> client.event-threads 4 (default=2)
and do a rebalance on the volume and then check the performance; we
generally see a performance bump when these parameters are turned on.
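One way to check the performance before and after is GlusterFS's built-in profiler; a minimal sketch, with <vol-name> again a placeholder:

gluster volume profile <vol-name> start
# run the workload, then inspect per-brick latency and call counts
gluster volume profile <vol-name> info
gluster volume profile <vol-name> stop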
Thanks & regards
Karan Sandha
On 03/08/2017 02:21 AM, Deepak Naidu wrote:
gluster volume set <vol-name> client.event-threads 4
Start the volume and do a rebalance on the volume using gluster volume
rebalance <vol-name> start; check the status using gluster volume
rebalance <vol-name> status. You will see a bump in performance for
small-file workloads.
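Put together as commands, the sequence is roughly (a sketch; <vol-name> is a placeholder for the actual volume name):

gluster volume start <vol-name>
gluster volume rebalance <vol-name> start
gluster volume rebalance <vol-name> status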
Thanks & Regards
Karan Sandha
On 02/25/2017 02:46 PM, Va
gluster volume create gluster1 transport tcp
cyan:/gluster/ssd1/brick1new green:/gluster/ssd1/brick2new
red:/gluster/ssd1/brick3new pink:/gluster/ssd1/brick4new
For the time being, this will solve your problem.
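Once created, the volume would typically be started and checked along these lines (using the volume name from the command above):

gluster volume start gluster1
gluster volume info gluster1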
Thanks & Regards
Karan Sandha
On 01/05/2017 05:53 AM, Zack Boll wrote: