[ https://issues.apache.org/jira/browse/TS-4719?focusedWorklogId=31441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-31441 ]
ASF GitHub Bot logged work on TS-4719:
--------------------------------------
Author: ASF GitHub Bot
Created on: 02/Nov/16 17:16
Start Date: 02/Nov/16 17:16
Worklog Time Spent: 10m
Work Description: GitHub user PSUdaemon opened a pull request:
https://github.com/apache/trafficserver/pull/1170
TS-4719: Make AIO threads interleave memory across NUMA nodes.
(cherry picked from commit e5261f4d5edf557fe03571b46eacc61d430425e3)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/PSUdaemon/trafficserver numa_interleave_back_port
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/trafficserver/pull/1170.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1170
----
commit 047d8d81649617779a5213a527210a173585e919
Author: Phil Sorber <[email protected]>
Date: 2016-08-16T03:13:26Z
TS-4719: Make AIO threads interleave memory across NUMA nodes.
(cherry picked from commit e5261f4d5edf557fe03571b46eacc61d430425e3)
----
Issue Time Tracking
-------------------
Worklog Id: (was: 31441)
Time Spent: 1h 10m (was: 1h)
> Should have NUMA affinity for disk threads (AIO threads)
> --------------------------------------------------------
>
> Key: TS-4719
> URL: https://issues.apache.org/jira/browse/TS-4719
> Project: Traffic Server
> Issue Type: Improvement
> Components: Cache, Core
> Reporter: Leif Hedstrom
> Assignee: Phil Sorber
> Fix For: 7.0.0
>
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> We should interleave AIO threads (disk threads) evenly across the
> NUMA nodes, and give them affinity to those nodes. That would allow for
> better memory distribution across NUMA nodes. We've noticed pretty uneven
> allocations on some boxes, which I attribute to this problem (no hard
> evidence, but it does make sense).
> {code}
> Per-node process memory usage (in MBs) for PID 33471 ([ET_NET 0])
>                   Node 0          Node 1           Total
>          --------------- --------------- ---------------
> Huge                0.00            0.00            0.00
> Heap                0.00            0.00            0.00
> Stack               1.38            0.64            2.02
> Private        188993.75        59142.80       248136.55
> -------- --------------- --------------- ---------------
> Total          188995.13        59143.44       248138.57
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)