[ https://issues.apache.org/jira/browse/TS-4719?focusedWorklogId=26487&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-26487 ]

ASF GitHub Bot logged work on TS-4719:
--------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Aug/16 03:14
            Start Date: 16/Aug/16 03:14
    Worklog Time Spent: 10m 
      Work Description: GitHub user PSUdaemon opened a pull request:

    https://github.com/apache/trafficserver/pull/868

    TS-4719: Make AIO threads interleave memory across NUMA nodes.

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/PSUdaemon/trafficserver ts-4719

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/trafficserver/pull/868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #868
    
----
commit e5261f4d5edf557fe03571b46eacc61d430425e3
Author: Phil Sorber <[email protected]>
Date:   2016-08-16T03:13:26Z

    TS-4719: Make AIO threads interleave memory across NUMA nodes.

----
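
For context, a minimal sketch (not the actual patch; see the pull request above for that) of how a thread can interleave its own memory allocations across all NUMA nodes with libnuma. The thread function, thread count, and build command are illustrative assumptions, not Traffic Server code:

{code}
// Hedged sketch, not the committed change: each AIO-style thread sets an
// interleaved memory policy at startup so its page allocations are spread
// evenly across all NUMA nodes.
// Build (Linux): g++ sketch.cc -lnuma -pthread
#include <numa.h>
#include <thread>
#include <vector>

static void aio_thread_main()
{
    if (numa_available() >= 0) {
        // Interleave this thread's future allocations across every node.
        numa_set_interleave_mask(numa_all_nodes_ptr);
    }
    // ... disk I/O service loop would run here ...
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 8; ++i)   // thread count is illustrative
        pool.emplace_back(aio_thread_main);
    for (auto &t : pool)
        t.join();
    return 0;
}
{code}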


Issue Time Tracking
-------------------

            Worklog Id:     (was: 26487)
            Time Spent: 10m
    Remaining Estimate: 0h

> Should have NUMA affinity for disk threads (AIO threads)
> --------------------------------------------------------
>
>                 Key: TS-4719
>                 URL: https://issues.apache.org/jira/browse/TS-4719
>             Project: Traffic Server
>          Issue Type: Improvement
>          Components: Cache, Core
>            Reporter: Leif Hedstrom
>            Assignee: Phil Sorber
>             Fix For: 7.0.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should interleave AIO threads (disk threads) evenly across the NUMA 
> nodes, and give them affinity on those nodes. That would allow for better 
> memory distribution across NUMA nodes. We've noticed pretty uneven 
> allocations on some boxes, which I attribute to this problem (no strong 
> evidence, but it does make sense).
> {code}
> Per-node process memory usage (in MBs) for PID 33471 ([ET_NET 0])
>                            Node 0          Node 1           Total
>                   --------------- --------------- ---------------
> Huge                         0.00            0.00            0.00
> Heap                         0.00            0.00            0.00
> Stack                        1.38            0.64            2.02
> Private                 188993.75        59142.80       248136.55
> ----------------  --------------- --------------- ---------------
> Total                   188995.13        59143.44       248138.57
> {code}
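
The description above asks for both even interleaving and per-node affinity. As an illustration only (assumed names and thread count; not the committed change), distributing disk threads round-robin across NUMA nodes with libnuma might look like:

{code}
// Hedged sketch of round-robin NUMA affinity for disk threads: pin each
// thread to one node's CPUs and prefer that node's memory, cycling through
// nodes so load is spread evenly.
// Build (Linux): g++ affinity.cc -lnuma -pthread
#include <numa.h>
#include <thread>
#include <vector>

static void aio_worker(int idx)
{
    if (numa_available() >= 0) {
        int num_nodes = numa_max_node() + 1;  // node ids run 0..numa_max_node()
        int node      = idx % num_nodes;      // round-robin assignment
        numa_run_on_node(node);               // bind this thread to the node's CPUs
        numa_set_preferred(node);             // allocate from that node when possible
    }
    // ... service disk I/O requests here ...
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 8; ++i)   // 8 threads is an arbitrary example
        pool.emplace_back(aio_worker, i);
    for (auto &t : pool)
        t.join();
    return 0;
}
{code}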


