[ https://issues.apache.org/jira/browse/HADOOP-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12724340#action_12724340 ]
thushara wijeratna commented on HADOOP-6109:
--------------------------------------------

debugging steps taken:
==================

i included LOG statements to get an idea of the latencies, thus - inside LineReader.readLine():

    if (length >= 0) {
      str.append(buffer, startPosn, length);
      LOG.info("str.length= " + str.getLength() + " just wrote from " + startPosn + " to " + length + " bytes");
    }

and this is the kind of output i get.

at the beginning:

2009-06-03 09:35:56,863 INFO org.apache.hadoop.util.LineReader: str.length= 4096 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,864 INFO org.apache.hadoop.util.LineReader: str.length= 8192 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,865 INFO org.apache.hadoop.util.LineReader: str.length= 12288 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,866 INFO org.apache.hadoop.util.LineReader: str.length= 16384 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,866 INFO org.apache.hadoop.util.LineReader: str.length= 20480 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,867 INFO org.apache.hadoop.util.LineReader: str.length= 24576 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,867 INFO org.apache.hadoop.util.LineReader: str.length= 28672 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,868 INFO org.apache.hadoop.util.LineReader: str.length= 32768 just wrote from 0 to 4096 bytes
2009-06-03 09:35:56,869 INFO org.apache.hadoop.util.LineReader: str.length= 36864 just wrote from 0 to 4096 bytes

in the end:

2009-06-03 09:46:02,918 INFO org.apache.hadoop.util.LineReader: str.length= 60141568 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,048 INFO org.apache.hadoop.util.LineReader: str.length= 60145664 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,118 INFO org.apache.hadoop.util.LineReader: str.length= 60149760 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,183 INFO org.apache.hadoop.util.LineReader: str.length= 60153856 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,252 INFO org.apache.hadoop.util.LineReader: str.length= 60157952 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,317 INFO org.apache.hadoop.util.LineReader: str.length= 60162048 just wrote from 0 to 4096 bytes
2009-06-03 09:46:03,456 INFO org.apache.hadoop.util.LineReader: str.length= 60166144 just wrote from 0 to 4096 bytes

notice that the times degrade as the line grows - this is the pattern: at the start there is about 1 millisecond between two consecutive reads, and by the end there is more than 50 ms between two consecutive reads. Text.append has a potential perf issue. as you can see, only about 60M of the input line is read in 10 minutes.

> Handle large (several MB) text input lines in a reasonable amount of time
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-6109
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6109
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: util
>    Affects Versions: 0.19.0
>       Environment: Linux 2.6 kernel, java 1.6 AMD Dual-Core Opteron 2.6GHz with 1M L1/L2 cache 1.8G RAM
>            Reporter: thushara wijeratna
>      Attachments: HADOOP-1234.patch
>
>
> problem:
> =======
> hadoop was timing out on a simple pass-through job (with the default 10 min timeout)
>
> cause:
> =====
> i hunted this down to how Text lines are being processed inside org.apache.hadoop.util.LineReader.
> i have a fix: a task that took more than 20 minutes and still failed to complete, completes with this fix in under 30 s.
> i attach the patch (for trunk)
>
> the problem traces:
> ================
> hadoop version: 0.19.0
>
> userlogs on slave node:
> 2009-05-29 13:57:33,551 WARN org.apache.hadoop.mapred.TaskRunner: Parent died. Exiting attempt_200905281652_0013_m_000006_1
> [r...@domu-12-31-38-01-7c-92 attempt_200905281652_0013_m_000006_1]#
> tellingly, the last input line processed right before this WARN is 19K. (i log the full input line in the map function for debugging)
>
> output on map-reduce task:
> Task attempt_200905281652_0013_m_000006_2 failed to report status for 600 seconds. Killing!
> 09/05/29 14:08:01 INFO mapred.JobClient: map 99% reduce 32%
> 09/05/29 14:18:05 INFO mapred.JobClient: map 98% reduce 32%
> java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1217)
>         at com.adxpose.data.mr.DailyHeatmapAggregator.run(DailyHeatmapAggregator.java:547)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at com.adxpose.data.mr.DailyHeatmapAggregator.main(DailyHeatmapAggregator.java:553)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
>         at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>         at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
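the degradation pattern in the log is consistent with the buffer behind the line growing to exactly the needed size on every 4 KB append, so each append recopies everything accumulated so far - quadratic total copying - whereas geometric (doubling) growth keeps the total copying linear. a minimal standalone sketch of the two growth policies (illustration only, with assumed names - not the actual org.apache.hadoop.io.Text code):

```java
// Counts bytes recopied during buffer resizes under two growth policies.
// Illustration only - not Hadoop's Text implementation.
public class AppendGrowth {

    // Total bytes copied while appending `chunks` chunks of `chunkSize`
    // bytes each. exact-fit growth resizes to exactly the needed size on
    // every append; doubling growth at least doubles the capacity.
    static long bytesCopied(int chunks, int chunkSize, boolean doubling) {
        long capacity = 0;
        long length = 0;
        long copied = 0;
        for (int i = 0; i < chunks; i++) {
            long needed = length + chunkSize;
            if (needed > capacity) {
                copied += length; // old contents recopied into the new array
                capacity = doubling
                        ? Math.max(needed, Math.max(capacity, 1) * 2)
                        : needed;
            }
            length = needed;
        }
        return copied;
    }

    public static void main(String[] args) {
        // ~60 MB in 4 KB chunks, roughly the scenario in the log above
        int chunks = 14650;
        System.out.println("exact-fit bytes copied: " + bytesCopied(chunks, 4096, false));
        System.out.println("doubling  bytes copied: " + bytesCopied(chunks, 4096, true));
    }
}
```

with exact-fit growth the copying cost is about chunkSize * chunks^2 / 2 bytes (hundreds of gigabytes of memcpy for a 60 MB line), while with doubling it stays within a small constant factor of the final line length - which would explain a task finishing in seconds once the growth policy is fixed.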