[
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15708820#comment-15708820
]
Rakesh R commented on HDFS-8411:
--------------------------------
Thanks [~Sammi] for the work. I have a few comments:
# The striped reading logic is used by the EC worker as well as by the block
group checksum logic. Could you please rephrase the java comment and
description {{// Bytes read by erasure coding worker.}} to {{// Bytes read by
ec striped block reconstructor}}?
Also, just a suggestion: instead of adding the description separately, can we
add it through the {{@Metric}} annotation, as below (a sketch of the matching
increment helpers follows at the end of these comments):
{code}
@Metric("Bytes read by erasure coding striped block reconstructor")
MutableCounterLong ecReconstructionBytesRead;
@Metric("Bytes written by erasure coding striped block reconstructor")
MutableCounterLong ecReconstructionBytesWritten;
{code}
instead of the current registration in the patch:
{code}
+ ecReconstructionBytesRead = registry.newCounter(
+ info("ecReconstructionBytesRead", "Bytes read by erasure coding " +
+ "worker"), (long) 0);
+ ecReconstructionBytesWritten = registry.newCounter(
+ info("ecReconstructionBytesWritten", "Bytes written by erasure " +
+ "coding worker"), (long) 0);
{code}
# How about avoiding the code duplication between {{testEcCodingTime}} and
{{testEcBytesFullBlock}} by creating a method like the one below and using it
in both tests (a usage sketch follows at the end of these comments)?
{code}
private MetricsRecordBuilder waitForReconstructionFinished(
    final DataNode workerDn) throws TimeoutException, InterruptedException {
  // Ensure that the reconstruction task has finished before metrics
  // are asserted.
  GenericTestUtils.waitFor(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
      // Take a fresh metrics snapshot on every poll; a snapshot captured
      // once outside the loop would never change.
      MetricsRecordBuilder rb = getMetrics(workerDn.getMetrics().name());
      long taskMetricValue = getLongCounter("EcReconstructionTasks", rb);
      return (taskMetricValue > 0);
    }
  }, 500, 60000);
  return getMetrics(workerDn.getMetrics().name());
}
{code}
# The else part is not required, since an exception is thrown when there is no
successful write, right? It can be like below:
{code}
  int nSuccess = stripedWriter.transferData2Targets();
  if (nSuccess == 0) {
    String error = "Transfer failed for all targets.";
    throw new IOException(error);
  }
  // update bytes written metrics
  getDatanode().getMetrics()
      .incrECReconstructionBytesWritten(nSuccess * toReconstructLen);
}
{code}
# Please take care of the findbugs warnings and fix the necessary checkstyle
warnings.
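For reference on the first comment, here is a rough sketch (not taken from the
patch; the method names are assumptions modelled on the
{{incrECReconstructionBytesWritten}} call quoted above) of the increment
helpers in {{DataNodeMetrics}} that the annotated counters would pair with:
{code}
// Sketch only: assumes the @Metric-annotated ecReconstructionBytesRead and
// ecReconstructionBytesWritten counters suggested above are fields of
// DataNodeMetrics.
public void incrECReconstructionBytesRead(long delta) {
  ecReconstructionBytesRead.incr(delta);
}

public void incrECReconstructionBytesWritten(long delta) {
  ecReconstructionBytesWritten.incr(delta);
}
{code}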
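And a rough usage sketch for the second comment, showing how one of the tests
could consume the helper; the metric name {{EcReconstructionBytesRead}} and the
assertion body are assumptions used only to illustrate the flow
({{getLongCounter}} comes from {{MetricsAsserts}}):
{code}
// Wait until the worker datanode reports a finished reconstruction task,
// then assert on the bytes-read counter from the returned snapshot.
MetricsRecordBuilder rb = waitForReconstructionFinished(workerDn);
long bytesRead = getLongCounter("EcReconstructionBytesRead", rb);
Assert.assertTrue("Expected bytes to be read during reconstruction",
    bytesRead > 0);
{code}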
> Add bytes count metrics to datanode for ECWorker
> ------------------------------------------------
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Li Bo
> Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch,
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch
>
>
> This is a sub-task of HDFS-7674. It calculates the amount of data that is
> read from local or remote datanodes for decoding work, and also the amount
> of data that is written to local or remote datanodes.