[ https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13505228#comment-13505228 ]
Sujesh Chirackkal commented on HDFS-198:
----------------------------------------
Getting the same error in CDH 4, Hive 0.8.1 while running a query with a lot of
dynamic partitions. Setting the number of reducers to a larger value resolves
the issue for me for now.
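For reference, a minimal sketch of that workaround in a Hive CLI session (property
names as in Hive 0.8.1; the table, columns, and the reducer count of 400 are
hypothetical placeholders, not values from this issue):

    -- allow dynamic partitioning for the insert
    set hive.exec.dynamic.partition=true;
    set hive.exec.dynamic.partition.mode=nonstrict;

    -- raise the reducer count so the dynamic-partition writes are spread
    -- across more reducers (the workaround described above)
    set mapred.reduce.tasks=400;

    -- hypothetical insert that fans out into many dynamic partitions
    INSERT OVERWRITE TABLE events PARTITION (dt)
    SELECT user_id, event_type, dt
    FROM raw_events;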
> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> ------------------------------------------------------------
>
> Key: HDFS-198
> URL: https://issues.apache.org/jira/browse/HDFS-198
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client, name-node
> Reporter: Runping Qi
>
> Many long-running, CPU-intensive map tasks failed due to
> org.apache.hadoop.dfs.LeaseExpiredException.
> See [a comment
> below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298]
> for the exceptions from the log: