[ https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891220#comment-13891220 ]
Daryn Sharp commented on HADOOP-10278:
--------------------------------------
I need to look at the latest patch, but here are comments on the comments:
bq. The performance hit will probably increase with more handler threads too
Aside: With my background cycles (which are about nil), HADOOP-10300 should
hopefully be able to reduce the number of handlers yet improve performance.
Fewer threads also mean less contention on the call queue and, hopefully, even
better performance. I've got a POC but haven't stressed or benchmarked it yet.
bq. Even though the server CPU time increased, the throughput wasn't really
affected
An impact to throughput would be a concern, but CPU utilization isn't (yet)
much of one. I'm trying to get the NN to actually use more than a few cores
under heavy load.
bq. You can get a small performance gain using volatile instead of AtomicRef
I'm curious whether that's true. I'd expect a warmed-up JVM to have effectively
inlined the call. Personally I prefer AtomicReference to volatile, if for no
other reason than that it's explicit to a dev that something about the ref is
"magical", but if the impact is measurable I would be swayed.
> Refactor to make CallQueue pluggable
> ------------------------------------
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: ipc
> Reporter: Chris Li
> Attachments: HADOOP-10278-atomicref.patch,
> HADOOP-10278-atomicref.patch, HADOOP-10278.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize
> coupling (a rough sketch of this shape follows below)
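
To make the two bullets above concrete, here is a rough sketch of the shape
they imply; the interface, class, and config-key names are illustrative
guesses, not what the attached patches actually use:
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical interface extracted from the current hard-coded queue.
public interface PluggableCallQueue<E> {
  void put(E call) throws InterruptedException;
  E take() throws InterruptedException;
  int size();
}

// Default implementation that matches today's behavior: a plain FIFO
// LinkedBlockingQueue hidden behind the interface.
class DefaultCallQueue<E> implements PluggableCallQueue<E> {
  private final BlockingQueue<E> queue;

  DefaultCallQueue(int capacity) {
    this.queue = new LinkedBlockingQueue<E>(capacity);
  }

  @Override public void put(E call) throws InterruptedException { queue.put(call); }
  @Override public E take() throws InterruptedException { return queue.take(); }
  @Override public int size() { return queue.size(); }
}

// The concrete impl class would then be chosen per server port from
// configuration, e.g. a key along the lines of "ipc.<port>.callqueue.impl"
// (key name illustrative), so each RPC server can pick its own queue without
// coupling to the others.
{code}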
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)