[ 
https://issues.apache.org/jira/browse/HDFS-15629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15629:
--------------------------------
    Attachment: HDFS-15629-001.patch

> Add seqno when warning slow mirror/disk in BlockReceiver
> --------------------------------------------------------
>
>                 Key: HDFS-15629
>                 URL: https://issues.apache.org/jira/browse/HDFS-15629
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haibin Huang
>            Priority: Major
>         Attachments: HDFS-15629-001.patch
>
>
> When a client write is slow, DataStreamer prints a slow log:
> {code:java}
> if (ack.getSeqno() != DFSPacket.HEART_BEAT_SEQNO) {
>   Long begin = packetSendTime.get(ack.getSeqno());
>   if (begin != null) {
>     long duration = Time.monotonicNow() - begin;
>     if (duration > dfsclientSlowLogThresholdMs) {
>       LOG.info("Slow ReadProcessor read fields for block " + block
>           + " took " + duration + "ms (threshold="
>           + dfsclientSlowLogThresholdMs + "ms); ack: " + ack
>           + ", targets: " + Arrays.asList(targets));
>     }
>   }
> }
> {code}
> Here is an example:
> Slow ReadProcessor read fields for block BP-XXX:blk_XXX took 2756ms 
> (threshold=100ms); ack: seqno: 3341 status: SUCCESS status: SUCCESS status: 
> SUCCESS downstreamAckTimeNanos: 2751531959 4: "\000\000\000", targets: [XXX, 
> XXX, XXX]
> There is an ack seqno in the log, so we can tell which packet caused the 
> slow write. However, the datanode does not print the seqno in its slow 
> warnings, so we cannot tell at which stage that packet was written slowly.
> HDFS-11603 and HDFS-12814 added slow warnings in BlockReceiver; I think we 
> should include the seqno in those warnings so that the slow packet can be 
> matched to the stage where the write was slow.



