[ https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826876#comment-16826876 ]
Steve Loughran commented on HADOOP-16266:
-----------------------------------------
See:
http://steveloughran.blogspot.com/2015/09/time-on-multi-core-multi-socket-servers.html
Issues:
* If the time source is per-core, then on a multicore system a thread that is
rescheduled onto another core can get different/invalid answers.
* If it is per-die (as it is on newer parts), then on a two-socket system you
can still see inconsistencies after rescheduling, though at much lower risk.
* The same issues apply to execution on VMs and containers: you don't know
what is happening underneath, so a negative elapsed value is not unexpected.
Overall, then: it's a low-cost way of microbenchmarking. It's not a silver
bullet, and once you start yielding the CPU, the risk of invalid answers
increases.
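To make the negative-value case concrete, here is a minimal sketch (purely
illustrative; the class and method names are mine, not from the patch) of
clamping a nanoTime() delta so a rescheduled thread never reports a negative
processing time:
{code:java}
// Illustrative only: a tiny timer that clamps negative nanoTime() deltas.
// The class/method names here are hypothetical, not taken from the patch.
public final class ElapsedTimer {

  private final long startNanos = System.nanoTime();

  /** Elapsed nanoseconds since construction, clamped to be non-negative. */
  public long elapsedNanos() {
    long delta = System.nanoTime() - startNanos;
    // On some multi-core/multi-socket/virtualised setups the delta can come
    // out negative; don't let that propagate into the metrics.
    return delta < 0 ? 0 : delta;
  }
}
{code}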
Regarding the patch (BTW, GitHub PRs are now a good way of doing reviews): I
haven't looked in enough detail to say anything other than: please use SLF4J,
not commons-logging. I'll let people who know the RPC code look at things in
more detail.
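For reference, the SLF4J pattern being asked for looks something like the
sketch below; the class here is hypothetical and not taken from the patch:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical example class, just to show the SLF4J idiom.
public class RpcTimingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(RpcTimingExample.class);

  void record(long processingNanos) {
    // Parameterized logging: the message is only formatted if DEBUG is enabled.
    LOG.debug("processing time: {} ns", processingNanos);
  }
}
{code}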
If you want low-cost, fine-grained metrics of CPU performance, nanoTime() is
probably a good fit. If you are trying to measure IPC times, the measurement
inconsistencies across cores/dies make it riskier.
> Add more fine-grained processing time metrics to the RPC layer
> --------------------------------------------------------------
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc
> Reporter: Christopher Gregorian
> Assignee: Christopher Gregorian
> Priority: Minor
> Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch,
> HADOOP-16266.003.patch, HADOOP-16266.004.patch, HADOOP-16266.005.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more
> fine-grained measuring of how a call's processing time is split up.