[
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590805#comment-15590805
]
SammiChen commented on HDFS-8410:
---------------------------------
It takes about 2 milliseconds on my desktop to decode a stripe group with 6
blocks, each block being 64 KB. The decoding time depends mainly on how fast the
CPU is. Mine is an "Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz" with 4 cores, not a
leading-edge CPU model. Given that CPUs are becoming more and more powerful, I
think it is not safe to use millisecond granularity to record a single decoding
time. We can choose between nanoseconds and microseconds. I would prefer
nanoseconds for one reason: the value can be obtained directly from
{{System.nanoTime()}}. If microseconds were used, there would be one extra
division by 1000, which is not good from a performance point of view. And a
{{long}} can hold nanoseconds spanning hundreds of years, so it is not going to
overflow any time soon.
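
For illustration, a minimal sketch of how a decode duration could be captured
directly in nanoseconds and accumulated in a {{long}}; the class, field, and
method names here are placeholders, not the actual ECWorker code:

{code:java}
// Sketch only (hypothetical names, not the ECWorker API): time one decode call
// with System.nanoTime() and accumulate the raw nanosecond value in a long.
public class DecodeTimingSketch {
  private long totalDecodeNanos; // a long holds roughly 292 years of nanoseconds

  public void timedDecode(Runnable decodeTask) {
    long start = System.nanoTime();
    decodeTask.run();                               // stands in for the real decode work
    totalDecodeNanos += System.nanoTime() - start;  // no division, unlike microseconds
  }

  public long getTotalDecodeNanos() {
    return totalDecodeNanos;
  }
}
{code}

Keeping the raw {{System.nanoTime()}} difference avoids the extra division by
1000 that a microsecond unit would require, and the accumulated {{long}} will
not overflow for hundreds of years of decode time.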
> Add computation time metrics to datanode for ECWorker
> -----------------------------------------------------
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Li Bo
> Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch,
> HDFS-8410-003.patch, HDFS-8410-004.patch
>
>
> This is a sub-task of HDFS-7674. It adds a time metric for EC decode work.