Recently we had some DNS resolver issues, so I added the IPs of all the slaves, the namenode, and the jobtracker to the /etc/hosts file on all the slaves, the namenode, and the jobtracker. Below is one of the 5,000+ task attempts; every task seems to take around 6 minutes to process. I don't have the exact numbers in hand, but a little over 6 minutes to process roughly 150 MB seems long to me, and I'm not sure whether we are still having a network-related problem. Can someone tell me whether this is normal?
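For reference, the entries I added look roughly like the sketch below (the hostnames and addresses here are made-up placeholders, not our real ones):

    # /etc/hosts -- hypothetical example, same file pushed to every node
    192.168.1.10   namenode-host     # namenode
    192.168.1.11   jobtracker-host   # jobtracker
    192.168.1.21   slave01
    192.168.1.22   slave02

*syslog logs*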
2011-07-14 18:58:54,830 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2011-07-14 18:58:55,387 WARN org.apache.hadoop.io.UTF8: truncating long string: 119379 chars, starting with /user/logs
2011-07-14 19:05:11,283 INFO org.apache.hadoop.mapred.TaskRunner: Task:attempt_201107141056_0010_m_003403_0 is done. And is in the process of commiting
2011-07-14 19:05:14,479 INFO org.apache.hadoop.mapred.TaskRunner: Task attempt_201107141056_0010_m_003403_0 is allowed to commit now
2011-07-14 19:05:16,055 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201107141056_0010_m_003403_0' to hdfs://xxxx:50001/tmp/temp-1127928319/tmp-748402288
2011-07-14 19:05:16,058 INFO org.apache.hadoop.mapred.TaskRunner: Task 'attempt_201107141056_0010_m_003403_0' done.

Counters for task_201107141056_0010_m_003403
------------------------------
*FileSystemCounters*
HDFS_BYTES_READ       134,297,584
HDFS_BYTES_WRITTEN     16,612,431

*Map-Reduce Framework*
Map input records     396,178
Spilled Records             0
Map output records    200,435
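To put a rough number on the "about 6 minutes" above, here is the throughput implied by this one attempt's timestamps and counters (back-of-the-envelope only; the span from the first JvmMetrics line to the final "done" line includes JVM startup as well as map time):

    elapsed    = 19:05:16 - 18:58:54              ~ 382 s
    read       = 134,297,584 bytes                ~ 128 MiB
    throughput = 128 MiB / 382 s                  ~ 0.34 MiB/s
    records    = 396,178 input records / 382 s    ~ 1,040 records/s

That ~0.34 MiB/s figure is what makes the 6 minutes feel slow to me.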