Hi,
I'm trying to split a single file through a map-reduce job. My input is
a sequence file where each entry represents a graph node together with
its neighbors, and I would like to split it into several files.
A typical invocation is, for example:

hadoop jar bin/kshell1.jar jm.job.GraphPartitioner graphStructure
graphPartitions 5 93 1

where
  - graphStructure is the input folder, containing just one file
  - graphPartitions is the output folder
  - 5 is the number of partitions
  - 93 is the number of graph nodes
  - 1 is a flag enabling "range mode" (i.e. nodes are split into the
ranges 0-18, 19-37, 38-55, 56-74 and 75-92)
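For illustration only, here is a minimal sketch of one way a "range mode" split like the one above could be computed: node i goes to floor(i * numPartitions / numNodes), which happens to reproduce the ranges quoted (0-18, 19-37, 38-55, 56-74, 75-92 for 93 nodes and 5 partitions). The method name `rangePartition` is hypothetical; the actual jm.job.GraphPartitioner may compute its ranges differently, and in a real job this logic would live inside a custom org.apache.hadoop.mapred.Partitioner.

```java
// Sketch (assumption): proportional range partitioning of node IDs
// 0..numNodes-1 into numPartitions contiguous ranges whose sizes
// differ by at most one. Not the actual GraphPartitioner code.
public class RangeDemo {
    static int rangePartition(int node, int numNodes, int numPartitions) {
        // floor(node * P / N); the long cast avoids int overflow
        // for large node IDs.
        return (int) ((long) node * numPartitions / numNodes);
    }

    public static void main(String[] args) {
        // Probe the boundaries of each range for N=93, P=5.
        int[] probes = {0, 18, 19, 37, 38, 55, 56, 74, 75, 92};
        for (int n : probes) {
            System.out.println("node " + n + " -> partition "
                    + rangePartition(n, 93, 5));
        }
    }
}
```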

Executing the *exact same command* twice in a row does not produce the
same behavior. How is this possible? (The log follows below.)
I'm running Hadoop in distributed mode on 5 machines with no special
configuration.

Thank you all!

Ivan
leona...@net-server00:~/kshell$ hadoop jar bin/kshell1.jar jm.job.GraphPartitioner graphStructure graphPartitions 5 93 1
'graphPartitions' is dirty, overwriting!
10/11/27 15:28:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/11/27 15:28:12 INFO mapred.FileInputFormat: Total input paths to process : 1
10/11/27 15:28:13 INFO mapred.JobClient: Running job: job_201011271124_0019
10/11/27 15:28:14 INFO mapred.JobClient:  map 0% reduce 0%
10/11/27 15:28:22 INFO mapred.JobClient:  map 100% reduce 0%
10/11/27 15:28:34 INFO mapred.JobClient:  map 100% reduce 60%
10/11/27 15:28:35 INFO mapred.JobClient:  map 100% reduce 80%
10/11/27 15:28:36 INFO mapred.JobClient:  map 100% reduce 100%
10/11/27 15:28:38 INFO mapred.JobClient: Job complete: job_201011271124_0019
10/11/27 15:28:38 INFO mapred.JobClient: Counters: 18
10/11/27 15:28:38 INFO mapred.JobClient:   Job Counters
10/11/27 15:28:38 INFO mapred.JobClient:     Launched reduce tasks=5
10/11/27 15:28:38 INFO mapred.JobClient:     Launched map tasks=2
10/11/27 15:28:38 INFO mapred.JobClient:     Data-local map tasks=2
10/11/27 15:28:38 INFO mapred.JobClient:   FileSystemCounters
10/11/27 15:28:38 INFO mapred.JobClient:     FILE_BYTES_READ=4781
10/11/27 15:28:38 INFO mapred.JobClient:     HDFS_BYTES_READ=6905
10/11/27 15:28:38 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=9848
10/11/27 15:28:38 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=5673
10/11/27 15:28:38 INFO mapred.JobClient:   Map-Reduce Framework
10/11/27 15:28:38 INFO mapred.JobClient:     Reduce input groups=93
10/11/27 15:28:38 INFO mapred.JobClient:     Combine output records=0
10/11/27 15:28:38 INFO mapred.JobClient:     Map input records=93
10/11/27 15:28:38 INFO mapred.JobClient:     Reduce shuffle bytes=4811
10/11/27 15:28:38 INFO mapred.JobClient:     Reduce output records=0
10/11/27 15:28:38 INFO mapred.JobClient:     Spilled Records=186
10/11/27 15:28:38 INFO mapred.JobClient:     Map output bytes=4564
10/11/27 15:28:38 INFO mapred.JobClient:     Map input bytes=5348
10/11/27 15:28:38 INFO mapred.JobClient:     Combine input records=0
10/11/27 15:28:38 INFO mapred.JobClient:     Map output records=93
10/11/27 15:28:38 INFO mapred.JobClient:     Reduce input records=93
leona...@net-server00:~/kshell$ hadoop jar bin/kshell1.jar jm.job.GraphPartitioner graphStructure graphPartitions 5 93 1
'graphPartitions' is dirty, overwriting!
10/11/27 15:28:47 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/11/27 15:28:47 INFO mapred.FileInputFormat: Total input paths to process : 1
10/11/27 15:28:47 INFO mapred.JobClient: Running job: job_201011271124_0020
10/11/27 15:28:48 INFO mapred.JobClient:  map 0% reduce 0%
10/11/27 15:28:57 INFO mapred.JobClient:  map 100% reduce 0%
10/11/27 15:29:06 INFO mapred.JobClient:  map 100% reduce 3%
10/11/27 15:29:07 INFO mapred.JobClient:  map 100% reduce 13%
10/11/27 15:29:09 INFO mapred.JobClient:  map 100% reduce 30%
10/11/27 15:29:10 INFO mapred.JobClient:  map 100% reduce 20%
10/11/27 15:29:11 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000001_0, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:12 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000002_0, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:12 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000003_0, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:12 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000004_0, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:22 INFO mapred.JobClient:  map 100% reduce 30%
10/11/27 15:29:25 INFO mapred.JobClient:  map 100% reduce 20%
10/11/27 15:29:27 INFO mapred.JobClient:  map 100% reduce 40%
10/11/27 15:29:27 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000001_1, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:27 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000002_1, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:27 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000003_1, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:37 INFO mapred.JobClient:  map 100% reduce 46%
10/11/27 15:29:40 INFO mapred.JobClient:  map 100% reduce 40%
10/11/27 15:29:42 INFO mapred.JobClient:  map 100% reduce 60%
10/11/27 15:29:42 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000001_2, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:42 INFO mapred.JobClient: Task Id : attempt_201011271124_0020_r_000002_2, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
10/11/27 15:29:52 INFO mapred.JobClient:  map 100% reduce 63%
10/11/27 15:29:55 INFO mapred.JobClient:  map 100% reduce 60%
10/11/27 15:29:57 INFO mapred.JobClient:  map 100% reduce 80%
10/11/27 15:30:02 INFO mapred.JobClient: Job complete: job_201011271124_0020
10/11/27 15:30:02 INFO mapred.JobClient: Counters: 19
10/11/27 15:30:02 INFO mapred.JobClient:   Job Counters
10/11/27 15:30:02 INFO mapred.JobClient:     Launched reduce tasks=14
10/11/27 15:30:02 INFO mapred.JobClient:     Launched map tasks=2
10/11/27 15:30:02 INFO mapred.JobClient:     Data-local map tasks=2
10/11/27 15:30:02 INFO mapred.JobClient:     Failed reduce tasks=1
10/11/27 15:30:02 INFO mapred.JobClient:   FileSystemCounters
10/11/27 15:30:02 INFO mapred.JobClient:     FILE_BYTES_READ=3972
10/11/27 15:30:02 INFO mapred.JobClient:     HDFS_BYTES_READ=6905
10/11/27 15:30:02 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=9039
10/11/27 15:30:02 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=4684
10/11/27 15:30:02 INFO mapred.JobClient:   Map-Reduce Framework
10/11/27 15:30:02 INFO mapred.JobClient:     Reduce input groups=74
10/11/27 15:30:02 INFO mapred.JobClient:     Combine output records=0
10/11/27 15:30:02 INFO mapred.JobClient:     Map input records=93
10/11/27 15:30:02 INFO mapred.JobClient:     Reduce shuffle bytes=3990
10/11/27 15:30:02 INFO mapred.JobClient:     Reduce output records=0
10/11/27 15:30:02 INFO mapred.JobClient:     Spilled Records=167
10/11/27 15:30:02 INFO mapred.JobClient:     Map output bytes=4564
10/11/27 15:30:02 INFO mapred.JobClient:     Map input bytes=5348
10/11/27 15:30:02 INFO mapred.JobClient:     Combine input records=0
10/11/27 15:30:02 INFO mapred.JobClient:     Map output records=93
10/11/27 15:30:02 INFO mapred.JobClient:     Reduce input records=74
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
        at jm.job.GraphPartitioner.run(GraphPartitioner.java:104)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at jm.job.GraphPartitioner.main(GraphPartitioner.java:123)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
