[
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15708232#comment-15708232
]
SammiChen commented on HDFS-8411:
---------------------------------
Hi [~andrew.wang], thanks for reviewing the patch! I will upload a new patch to
address the comments.
bq. It looks like the metrics only increment at the block granularity, it
doesn't track partial block reads/writes that then fail.
You're right. The next patch will track both block-granularity and partial
block reads/writes.
bq. To normalize with the other reconstruction metrics, maybe name
"ecReconstructionBytesRead" and "ecReconstructionBytesWritten"
Handled.
bq. This is an optional comment (can do in follow-on maybe), but since it looks
like the ECWorker can write both locally and remote, should we differentiate
these as well? This is like how we differentiate remote vs. local reads in
FileSystem$Statistics. e.g. bytesRead, bytesReadLocalHost,
bytesReadDistanceOfOneOrTwo, etc.
I would prefer to handle it as a follow-on.
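For illustration only, here is a minimal sketch of the counters the next patch could maintain, using the names suggested above. This is a hypothetical standalone class using raw {{AtomicLong}}s; the actual patch would register these through Hadoop's metrics2 library (e.g. {{MutableCounterLong}} on the DataNode metrics source) rather than a hand-rolled class:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch (not the actual patch): per-DataNode counters that
// track EC reconstruction bytes read and written. Incrementing after every
// buffer-level read/write means partial work from a reconstruction task
// that later fails is still counted, not just whole blocks.
class ECReconstructionMetrics {
    private final AtomicLong ecReconstructionBytesRead = new AtomicLong();
    private final AtomicLong ecReconstructionBytesWritten = new AtomicLong();

    // Called after each read into the reconstruction buffer.
    void incrBytesRead(long delta) {
        ecReconstructionBytesRead.addAndGet(delta);
    }

    // Called after each write of reconstructed data to a target datanode.
    void incrBytesWritten(long delta) {
        ecReconstructionBytesWritten.addAndGet(delta);
    }

    long getECReconstructionBytesRead() {
        return ecReconstructionBytesRead.get();
    }

    long getECReconstructionBytesWritten() {
        return ecReconstructionBytesWritten.get();
    }
}
```

Differentiating local vs. remote bytes (the third comment above) would then just add parallel counters incremented on the corresponding code path.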
> Add bytes count metrics to datanode for ECWorker
> ------------------------------------------------
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Li Bo
> Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch,
> HDFS-8411-003.patch, HDFS-8411-004.patch
>
>
> This is a sub-task of HDFS-7674. It counts the amount of data read from
> local or remote datanodes for decoding work, and also the amount of data
> written to local or remote datanodes.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]