[
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813375#comment-15813375
]
Wangda Tan commented on YARN-5764:
----------------------------------
Thanks [~devaraj.k] for updating the design doc and patch. Some questions/comments:
1) What is the benefit of manually specifying the NUMA node? Since this is
potentially complex for end users to get right, I think it's better to read the
topology directly from the OS (a rough sketch follows below).
2) Do the changes work on platforms other than Linux?
3) I'm not sure whether the following can happen: with this patch, YARN launches
processes one by one on each NUMA node and binds their memory/CPU. Is it
possible that another process (outside of YARN) uses memory on a NUMA node,
causing the processes launched by YARN to fail to bind or to run?
4) This patch uses hard binding (get the allocated resources on the specified
node or fail); would it be better to offer soft binding (prefer the specified
node but also accept other nodes)? I think soft binding should be the default
behavior for NUMA support (see the second sketch below).
Thoughts?
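
To make (1) concrete, here is a minimal sketch of reading the node topology from
the Linux sysfs layout (/sys/devices/system/node/node<N>) instead of asking the
user to configure it; the class name and parsing details are illustrative
assumptions, not part of the patch:
{code:java}
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Sketch only: discover NUMA nodes from Linux sysfs rather than from
// user configuration. Class/method names are made up for illustration.
public class NumaTopologyReader {

  public static Map<Integer, String> readNodeCpuLists() throws IOException {
    Map<Integer, String> cpusByNode = new TreeMap<>();
    Path nodeRoot = Paths.get("/sys/devices/system/node");
    try (DirectoryStream<Path> nodes =
             Files.newDirectoryStream(nodeRoot, "node[0-9]*")) {
      for (Path node : nodes) {
        int nodeId = Integer.parseInt(
            node.getFileName().toString().substring("node".length()));
        // cpulist holds a CPU range string such as "0-7,16-23"
        String cpuList =
            new String(Files.readAllBytes(node.resolve("cpulist"))).trim();
        cpusByNode.put(nodeId, cpuList);
      }
    }
    return cpusByNode;
  }

  public static void main(String[] args) throws IOException {
    readNodeCpuLists().forEach((id, cpus) ->
        System.out.println("NUMA node " + id + " -> CPUs " + cpus));
  }
}
{code}
The same directory also exposes per-node memory information
(node<N>/meminfo), so the NodeManager could build the whole NUMA picture
without any extra configuration.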
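
And for (4), a minimal sketch of what the hard vs. soft binding distinction
looks like in terms of numactl flags; the helper names are hypothetical:
{code:java}
import java.util.Arrays;
import java.util.List;

// Sketch only: command prefixes a container launcher could prepend.
public class NumaBindingPrefix {

  // Hard binding: memory may only come from the given node; allocation
  // fails (or the process is killed) when that node has no free memory.
  public static List<String> hardBind(int node) {
    return Arrays.asList("numactl",
        "--cpunodebind=" + node,
        "--membind=" + node);
  }

  // Soft binding: prefer the given node for memory but fall back to other
  // nodes when it is full; CPUs are still restricted to the node.
  public static List<String> softBind(int node) {
    return Arrays.asList("numactl",
        "--cpunodebind=" + node,
        "--preferred=" + node);
  }
}
{code}
With --preferred the container keeps running under memory pressure on its node,
at the cost of some remote accesses, which seems like the safer default.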
> NUMA awareness support for launching containers
> -----------------------------------------------
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
> Issue Type: New Feature
> Components: nodemanager, yarn
> Reporter: Olasoji
> Assignee: Devaraj K
> Attachments: NUMA Awareness for YARN Containers.pdf,
> YARN-5764-v0.patch, YARN-5764-v1.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing
> costly remote memory accesses on non-SMP systems. YARN containers, on launch,
> will be pinned to a specific NUMA node and all subsequent memory allocations
> will be served by the same node, reducing remote memory accesses. The current
> default behavior is to spread memory across all NUMA nodes.