[ https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14649916#comment-14649916 ]
Andrew Wang commented on HDFS-6682:
-----------------------------------

Cool, thanks Allen. Maybe we should file a new JIRA for this? It would also be useful when tuning replication rate limiting. Code-wise I think it would look a lot like this patch, unless we want sliding-window fanciness. A count and rate of enqueued vs. processed blocks would be a good start, though.

> Add a metric to expose the timestamp of the oldest under-replicated block
> -------------------------------------------------------------------------
>
>                 Key: HDFS-6682
>                 URL: https://issues.apache.org/jira/browse/HDFS-6682
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Akira AJISAKA
>            Assignee: Akira AJISAKA
>              Labels: metrics
>         Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put the same file again:
> # A client puts a file to HDFS.
> # A DataNode crashes before replicating a block of the file to other DataNodes.
> I propose a metric that exposes the timestamp of the oldest under-replicated/corrupt block. That way, a client can know which file to retain for the retry.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
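
To make the comment's suggestion concrete, here is a minimal sketch of what enqueued/processed counters and a processing rate could look like with Hadoop's metrics2 library. The class name, metric names, and call sites below are hypothetical illustrations, not part of any attached patch.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics source; names are illustrative only.
@Metrics(name = "ReplicationQueue", about = "Replication queue metrics",
    context = "dfs")
public class ReplicationQueueMetrics {

  @Metric("Total blocks enqueued for re-replication")
  MutableCounterLong blocksEnqueued;

  @Metric("Total blocks taken off the replication queue and scheduled")
  MutableCounterLong blocksProcessed;

  @Metric("Time taken to process one replication work item")
  MutableRate replicationProcessingTime;

  public static ReplicationQueueMetrics create() {
    // Register with the default metrics system so these counters show up
    // alongside the other NameNode metrics; registration also initializes
    // the annotated mutable metric fields.
    return DefaultMetricsSystem.instance().register(
        "ReplicationQueueMetrics", "Replication queue metrics",
        new ReplicationQueueMetrics());
  }

  // Call when a block is added to the under-replicated queue.
  void onEnqueue() {
    blocksEnqueued.incr();
  }

  // Call when replication work for a block has been scheduled, passing
  // how long the item took to process.
  void onProcessed(long elapsedMillis) {
    blocksProcessed.incr();
    replicationProcessingTime.add(elapsedMillis);
  }
}
{code}

Comparing blocksEnqueued against blocksProcessed over time would show whether the rate limiter is keeping up, which is the tuning signal the comment asks for; a sliding window could be layered on later if plain counters prove too coarse.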
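For the metric the issue itself proposes, one possible shape (again a sketch under assumed names, not the actual UnderReplicatedBlocks code or the attached patches) is insertion-ordered bookkeeping of enqueue times, with the head of the structure exposed as a gauge:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.metrics2.annotation.Metric;

// Sketch: a LinkedHashMap keeps insertion order, so the first entry is
// always the oldest block still waiting for re-replication.
public class UnderReplicatedBlockTimestamps {

  private final Map<Long, Long> enqueueTimeByBlockId = new LinkedHashMap<>();

  // Call when a block first becomes under-replicated.
  synchronized void blockAdded(long blockId) {
    enqueueTimeByBlockId.putIfAbsent(blockId, System.currentTimeMillis());
  }

  // Call when the block reaches its target replication or is removed.
  synchronized void blockResolved(long blockId) {
    enqueueTimeByBlockId.remove(blockId);
  }

  // Exposed as a gauge; 0 means nothing is currently under-replicated.
  @Metric("Timestamp of the oldest under-replicated block")
  public synchronized long getTimeOfTheOldestUnderReplicatedBlock() {
    for (long ts : enqueueTimeByBlockId.values()) {
      return ts; // first value in insertion order is the oldest
    }
    return 0;
  }
}
{code}

A client (or an operator's alerting) could compare this timestamp against the time it wrote a file: if the oldest under-replicated block predates the write, the file may still be at risk and should be retained for a retry.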