[ https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272031#comment-15272031 ]
Li Bo commented on HDFS-8449:
-----------------------------
Thanks very much for Kai's review.
bq. Could you enhance TestReconstructStripedFile similarly?
bq. Could we share TestReconstructStripedFile#waitForRecoveryFinished and avoid waitForRecoveryFinished?
I think it's a little strange to call
{{TestReconstructStripedFile#waitForRecoveryFinished}} from
{{TestDataNodeErasureCodingMetrics}}, because a change to
{{TestReconstructStripedFile}} could then break {{TestDataNodeErasureCodingMetrics}}.
We could instead move the shared function to a utility class (a rough sketch follows).
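For illustration only, a minimal sketch of such a shared helper; the class name {{ErasureCodingTestUtil}} and its exact shape are assumptions, not existing code:
{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

/**
 * Hypothetical shared test utility (name and location are placeholders);
 * both TestReconstructStripedFile and TestDataNodeErasureCodingMetrics
 * could call this instead of keeping private copies of
 * waitForRecoveryFinished.
 */
public final class ErasureCodingTestUtil {

  private ErasureCodingTestUtil() {
  }

  /**
   * Polls the given condition until it returns true, or fails with a
   * TimeoutException once the timeout expires.
   */
  public static void waitFor(BooleanSupplier condition, long pollMillis,
      long timeoutMillis) throws TimeoutException, InterruptedException {
    final long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "Condition not met within " + timeoutMillis + " ms");
      }
      Thread.sleep(pollMillis);
    }
  }
}
{code}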
I think it's better to make the changes to {{TestReconstructStripedFile}} in a
new, separate JIRA so that this one stays focused on testing the datanode
metrics.
bq. Could we use DFSTestUtil.writeFile to generate the test file?
Both implementations are OK; many existing test cases write files directly
through an output stream (a sketch of both styles is below).
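For comparison, a rough sketch of the two styles; {{fs}}, the path, and the contents are placeholders, and the {{DFSTestUtil.writeFile}} overload shown is an assumption from memory rather than a verified signature:
{code:java}
// Assumes an already-initialized FileSystem instance named fs.
Path file = new Path("/ec/testFile");

// Style 1: write through an output stream directly, as many existing tests do.
try (FSDataOutputStream out = fs.create(file)) {
  out.write(new byte[1024]);
}

// Style 2: let the test utility create the file (String-content overload,
// assumed from memory).
DFSTestUtil.writeFile(fs, file, "some test data");
{code}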
bq. I'm not sure about the following block codes are necessary.
The system executes these actions periodically, so the test should make sure
they have actually run before moving on to the assertions (see the sketch below).
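As a hedged illustration of that waiting step, reusing the hypothetical helper sketched above; {{getEcReconstructionTasks()}} is an assumed stand-in for however the test reads the ECWorker counter, not the real metric name:
{code:java}
// Trigger reconstruction first (e.g. by stopping a datanode), then block
// until the periodic reconstruction work has actually run.
ErasureCodingTestUtil.waitFor(
    () -> getEcReconstructionTasks() >= 1, // at least one task completed
    500,       // poll every 500 ms
    60_000);   // give up after 60 seconds
// Only after the wait is it safe to assert on total/failed/successful counts.
{code}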
> Add tasks count metrics to datanode for ECWorker
> ------------------------------------------------
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Li Bo
> Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch,
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch,
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch,
> HDFS-8449-008.patch, HDFS-8449-009.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done,
> including total tasks, failed tasks and successful tasks.