[ https://issues.apache.org/jira/browse/HADOOP-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-5286:
---------------------------------

    Attachment: dn-log.txt

Thanks Hemanth. I am attaching the datanode log.

It might not be just the hard disks that are slow; the network might be 
affected too, since there are write timeouts while writing to clients. The 
following log line shows how severely degraded this machine is:

{quote}
2009-02-19 10:10:58,297 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 92 blocks got processed in 445591 msecs
{quote}

It took about 7.5 minutes (445591 ms) to scan and report 92 blocks, i.e. roughly 4.8 seconds per block.
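For triaging, a quick way to spot nodes like this is to scan datanode logs for these BlockReport lines. A minimal standalone sketch (not part of Hadoop; the regex and the 60-second cutoff here are just assumptions for illustration):

{code:java}
// Hypothetical helper, not shipped with Hadoop: scans a datanode log and
// flags BlockReport lines whose processing time exceeds a threshold.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SlowBlockReportScanner {
    // Matches lines like:
    // "... DataNode: BlockReport of 92 blocks got processed in 445591 msecs"
    private static final Pattern REPORT = Pattern.compile(
        "BlockReport of (\\d+) blocks got processed in (\\d+) msecs");
    private static final long THRESHOLD_MSECS = 60_000; // assumed cutoff

    public static void main(String[] args) throws IOException {
        if (args.length != 1) {
            System.err.println("usage: SlowBlockReportScanner <datanode-log>");
            return;
        }
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = REPORT.matcher(line);
                if (!m.find()) continue;
                long blocks = Long.parseLong(m.group(1));
                long msecs = Long.parseLong(m.group(2));
                if (msecs > THRESHOLD_MSECS) {
                    // e.g. 445591 msecs / 92 blocks ~= 4.8 seconds per block
                    System.out.printf("slow report: %d blocks in %d msecs (%.1f s/block)%n",
                        blocks, msecs, (msecs / 1000.0) / blocks);
                }
            }
        }
    }
}
{code}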

To see what is actually wrong with the machine, someone in charge of the 
hardware needs to take a look. I will close this issue for now. 

> DFS client blocked for a long time reading blocks of a file on the JobTracker
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-5286
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5286
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Hemanth Yamijala
>         Attachments: dn-log.txt, jt-log-for-blocked-reads.txt
>
>
> On a large cluster, we've observed that the DFS client was blocked on reading 
> a block of a file for almost an hour and a half. The file was being read by the 
> JobTracker of the cluster, and was a split file of a job. On the NameNode 
> logs, we observed that the block had a message as follows:
> Inconsistent size for block blk_2044238107768440002_840946 reported from <ip>:<port> current size is 195072 reported size is 1318567
> Details follow.
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
