On 05/22/17 10:27, mabi wrote:
Sorry for posting again, but I was wondering whether it is somehow possible to tune gluster to make better use of all my cores (see below for the details). I suspect that this is the reason for the sporadic high context switches I have been experiencing.

Cheers!


In theory, more clients and more diverse filesets.

The only way to know would be for you to analyze the traffic pattern and/or profile gluster on your server. There's never a magic "tune software X to operate more efficiently" setting, or else it would be the default (except for the "turbo" button back in the early PC clone days).
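
If you do want to profile it, the built-in volume profiler is the usual
place to start. A minimal sketch, assuming your volume is called "myvol"
(substitute your own volume name):

    # start collecting per-brick latency and FOP statistics
    gluster volume profile myvol start

    # ... wait for (or reproduce) the busy period ...

    # dump the accumulated statistics, then stop collecting
    gluster volume profile myvol info
    gluster volume profile myvol stop

Profiling adds a little overhead while it is running, so stop it once you
have captured what you need.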



-------- Original Message --------
Subject: Re: [Gluster-users] 120k context switches on GlusterFS nodes
Local Time: May 18, 2017 8:43 PM
UTC Time: May 18, 2017 6:43 PM
From: [email protected]
To: Ravishankar N <[email protected]>, Pranith Kumar Karampuri <[email protected]>, Gluster Users <[email protected]>, Gluster Devel <[email protected]>

I have a single Intel Xeon CPU E5-2620 v3 @ 2.40GHz in each node; it has 6 cores and 12 threads. I thought this would be enough for GlusterFS. When I check my CPU graphs, everything is pretty much idle and there are hardly any peaks at all on the CPU. During the very high context switch period my CPU graphs show the following:

1 thread was 100% busy in CPU user
1 thread was 100% busy in CPU system

leaving the other 10 of the 12 threads unused...

Are there any performance tuning parameters I need to configure in order to make better use of my CPU cores or threads?
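
For what it's worth, the only options I have come across so far that seem
related to thread usage are the ones below. This is just a sketch of what
I would try; I have not verified that they help with this workload, and
"myvol" and the values are only examples:

    # number of epoll/event threads on the client and brick side (default 2)
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4

    # number of worker threads in the io-threads translator on the bricks
    gluster volume set myvol performance.io-thread-count 32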




-------- Original Message --------
Subject: Re: [Gluster-users] 120k context switches on GlusterFS nodes
Local Time: May 18, 2017 7:03 AM
UTC Time: May 18, 2017 5:03 AM
From: [email protected]
To: Pranith Kumar Karampuri <[email protected]>, mabi <[email protected]>, Gluster Users <[email protected]>, Gluster Devel <[email protected]>


On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
+ gluster-devel

On Wed, May 17, 2017 at 10:50 PM, mabi <[email protected]> wrote:

    I don't know exactly what kind of context switches they were, but
    what I do know is that it is the "cs" number under "system" when
    you run vmstat.
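
    For reference, this is the column I mean in the vmstat output;
    only the header is shown here, no real numbers:

        procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st

    i.e. the "cs" column under the "-system--" heading (the "in"
    column next to it is the interrupt count).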

Okay, that could be due to the syscalls themselves, or to pre-emptive multitasking if there aren't enough CPU cores. I think the spike in numbers is due to more users accessing the files at the same time, as you observed, which translates into more syscalls. You can try capturing the gluster volume profile info the next time it occurs and correlate it with the cs count. If you don't see any negative performance impact, I think you don't need to be bothered much by the numbers.
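
Something along these lines would give you timestamps for both views so
you can line the spikes up afterwards (the volume name is just an
example, and you may want a different interval):

    # one vmstat sample per second; -t adds a timestamp column
    # (drop -t if your vmstat does not support it)
    vmstat -t 1 | tee vmstat.log

    # in another terminal: dump the volume profile once a minute
    while true; do
        date
        gluster volume profile myvol info
        sleep 60
    done | tee profile.log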

HTH,
Ravi


    Also, I use the Percona Linux monitoring template for Cacti
    (https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html),
    which monitors context switches too. If that is of any use,
    interrupts were also quite high during that time, with peaks of up
    to 50k interrupts.



    -------- Original Message --------
    Subject: Re: [Gluster-users] 120k context switches on
    GlusterFS nodes
    Local Time: May 17, 2017 2:37 AM
    UTC Time: May 17, 2017 12:37 AM
    From: [email protected] <mailto:[email protected]>
    To: mabi <[email protected] <mailto:[email protected]>>,
    Gluster Users <[email protected]
    <mailto:[email protected]>>


    On 05/16/2017 11:13 PM, mabi wrote:
    Today I even saw up to 400k context switches for around 30
    minutes on my two-node replica... Does anyone else see such
    high context switch counts on their GlusterFS nodes?

    I am wondering what is "normal" and whether I should be worried...




    -------- Original Message --------
    Subject: 120k context switches on GlusterFS nodes
    Local Time: May 11, 2017 9:18 PM
    UTC Time: May 11, 2017 7:18 PM
    From: [email protected] <mailto:[email protected]>
    To: Gluster Users <[email protected]>
    <mailto:[email protected]>

    Hi,

    Today I noticed that for around 50 minutes my two GlusterFS
    3.8.11 nodes had a very high number of context switches,
    around 120k. Usually the average is more like 1k-2k. So I
    checked what was happening, and there were simply more users
    accessing (downloading) their files at the same time. These
    are directories with typical cloud files, which means files
    of all sizes, ranging from a few kB to several MB, and of
    course a lot of them.

    Now, I have never seen such a high number of context switches
    in my entire life, so I wanted to ask whether this is normal
    or to be expected. I do not find any signs of errors or
    warnings in any log files.


    Which context switches are you referring to (syscall context
    switches on the bricks)? How did you measure them?
    -Ravi

    My volume is a replicated volume on two nodes with ZFS as the
    backing filesystem, and it is mounted using FUSE on the client
    (the cloud server). On that cloud server the glusterfs process
    was using quite a lot of system CPU, but that server (a VM)
    only has 2 vCPUs, so maybe I should increase the number of
    vCPUs...
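
    For context, a setup like this is typically created along the
    following lines; the hostnames and brick paths here are just
    placeholders, not my real ones:

        gluster volume create myvol replica 2 node1:/zpool/brick node2:/zpool/brick
        gluster volume start myvol

    and on the cloud server the volume is mounted via the FUSE client:

        mount -t glusterfs node1:/myvol /mnt/myvol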

    Any ideas or recommendations?



    Regards,
    M.



--
Pranith

_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
