[
https://issues.apache.org/jira/browse/HADOOP-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12527532
]
Christian Kunz commented on HADOOP-1874:
----------------------------------------
I applied Devaraj's patch lazy-dfs-ops.1.patch. It looks as if it successfully
prevents the job from losing task trackers.
But it shifts the problem from mapred to dfs: the namenode lost **all** 1400
datanodes, which repeatedly time out while sending heartbeats:
2007-09-14 09:01:43,008 WARN org.apache.hadoop.dfs.DataNode:
java.net.SocketTimeoutException: timed out waiting for rpc response
at org.apache.hadoop.ipc.Client.call(Client.java:472)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:165)
at org.apache.hadoop.dfs.$Proxy0.sendHeartbeat(Unknown Source)
at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:485)
at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1310)
at java.lang.Thread.run(Thread.java:619)
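For context, here is a minimal sketch (plain Java, not the actual 0.15 code) of the
datanode-side pattern behind this trace: offerService() periodically sends a
heartbeat RPC to the namenode and simply logs and retries when the call times out.
The interface and method names below are simplified placeholders, not the real
DatanodeProtocol signature.

import java.net.SocketTimeoutException;

// Simplified placeholder for the datanode-to-namenode RPC interface
// (the real protocol in this code base is org.apache.hadoop.dfs.DatanodeProtocol).
interface NamenodeHeartbeat {
    void sendHeartbeat(String datanodeId) throws java.io.IOException;
}

class HeartbeatLoopSketch {
    // offerService-style loop: heartbeat periodically, log a warning and retry
    // when the RPC to the namenode times out because the namenode is overloaded.
    static void offerService(NamenodeHeartbeat namenode, String datanodeId,
                             long heartbeatIntervalMs) throws InterruptedException {
        while (true) {
            try {
                namenode.sendHeartbeat(datanodeId);
            } catch (SocketTimeoutException e) {
                // Namenode too busy to answer within the RPC timeout; the
                // datanode keeps retrying, which is what floods the logs above.
                System.err.println("WARN timed out waiting for rpc response: " + e);
            } catch (java.io.IOException e) {
                System.err.println("WARN heartbeat failed: " + e);
            }
            Thread.sleep(heartbeatIntervalMs);
        }
    }
}

Under this sketch the datanodes themselves keep running; they just cannot get a
heartbeat answered, so from the namenode's point of view they appear lost.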
The reduce tasks that use libhdfs to access dfs also time out:
07/09/14 08:41:00 INFO fs.DFSClient: Could not complete file, retrying...
07/09/14 08:41:18 INFO fs.DFSClient: Could not complete file, retrying...
along with further 'java.net.SocketTimeoutException: timed out waiting for rpc response' errors.
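The 'Could not complete file, retrying...' lines correspond to the client polling the
namenode until it acknowledges that all blocks of a newly written file have been
reported. A hedged sketch of that polling pattern follows; the interface and names
are placeholders rather than the real DFSClient/ClientProtocol API.

import java.io.IOException;

// Placeholder for the client-to-namenode RPC interface; the assumption here is
// that a complete()-style call returns false until the namenode has heard about
// every block of the file.
interface NamenodeClient {
    boolean complete(String src, String clientName) throws IOException;
}

class CompleteFileSketch {
    // Poll until the namenode confirms the file is complete, sleeping between
    // attempts. With 2800 reduces closing and validating files at once, every
    // retry adds yet another RPC to an already CPU-bound namenode.
    static void completeFile(NamenodeClient namenode, String src, String clientName)
            throws IOException, InterruptedException {
        long retryIntervalMs = 400;  // arbitrary back-off, for illustration only
        while (!namenode.complete(src, clientName)) {
            System.err.println("INFO Could not complete file " + src + ", retrying...");
            Thread.sleep(retryIntervalMs);
        }
    }
}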
The namenode is CPU-busy, with 3 out of 4 processors running at 100%. It is running
with 60 handlers; I tried a call queue size of 100 * handler_count in one run and 500 * handler_count in another.
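For reference, a hedged sketch of how those two knobs relate, using the standard
Hadoop Configuration API. dfs.namenode.handler.count is a real configuration key;
whether the per-handler queue multiplier was tunable through configuration in this
version or required a code change is an assumption here, so it is only shown as arithmetic.

import org.apache.hadoop.conf.Configuration;

public class NamenodeRpcSizingSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Number of namenode RPC handler threads (the runs above used 60).
        conf.setInt("dfs.namenode.handler.count", 60);

        // The RPC call queue capacity is derived from the handler count; the
        // per-handler multiplier (100 vs. 500, as tried above) is shown as plain
        // arithmetic because it may not be exposed as a config key in this
        // version -- an assumption, not a documented setting.
        int handlers = conf.getInt("dfs.namenode.handler.count", 10);
        int queuePerHandler = 500;
        int maxQueueSize = handlers * queuePerHandler;  // 60 * 500 = 30000 pending calls
        System.out.println("handlers=" + handlers + ", maxQueueSize=" + maxQueueSize);
    }
}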
> lost task trackers -- jobs hang
> -------------------------------
>
> Key: HADOOP-1874
> URL: https://issues.apache.org/jira/browse/HADOOP-1874
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.15.0
> Reporter: Christian Kunz
> Assignee: Devaraj Das
> Priority: Blocker
> Attachments: lazy-dfs-ops.1.patch, lazy-dfs-ops.patch
>
>
> This happens on a 1400-node cluster using a recent nightly build patched with
> HADOOP-1763 (which fixes a previous 'lost task tracker' issue), running a
> c++-pipes job with 4200 maps and 2800 reduces. The task trackers start to get
> lost in large numbers towards the end of the job.
> Similar non-pipes jobs do not show the same problem, but it is unclear whether
> the issue is related to c++-pipes. It could also be dfs overload when reduce
> tasks close and validate all newly created dfs files. I see dfs client rpc
> timeout exceptions, but this alone does not explain the escalation in lost
> task trackers.
> I also noticed that the job tracker becomes rather unresponsive, with rpc
> timeout and call queue overflow exceptions. The job tracker is running with 60
> handlers.