Hello, 

I'm trying to run a genetic algorithm with Mahout in distributed mode. It 
works fine on my local machine, but when I run it in a distributed 
environment something goes wrong: the first map-reduce iteration completes 
successfully, and then an exception is thrown while the results are read back. 

For context, my fitness update boils down to the following (a simplified, 
compilable sketch, not my real Poblacion class; I'm assuming the 
three-argument MahoutEvaluator.evaluate() here, which may vary between Mahout 
versions): 
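
import java.util.ArrayList;
import java.util.List;

import org.apache.mahout.ga.watchmaker.MahoutEvaluator;
import org.uncommons.watchmaker.framework.FitnessEvaluator;

public class Poblacion<T> {
  private final List<T> individuos;
  private final FitnessEvaluator<T> evaluador;
  private final List<Double> fitness = new ArrayList<Double>();

  public Poblacion(List<T> individuos, FitnessEvaluator<T> evaluador) {
    this.individuos = individuos;
    this.evaluador = evaluador;
  }

  // MahoutEvaluator submits one map-reduce job that evaluates the whole
  // population, then reads the per-individual scores back from HDFS.
  public void actualizarFitness() throws Exception {
    fitness.clear();
    // This is the call that dies: the job itself reports success, but
    // reading the evaluation output back throws the EOFException below.
    MahoutEvaluator.evaluate(evaluador, individuos, fitness);
  }

  public double getFitness(int i) {
    return fitness.get(i);
  }
}

This is what the job prints before it fails: 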

11/04/28 10:48:20 INFO mapreduce.Job: map 100% reduce 100% 
11/04/28 10:48:22 INFO mapreduce.Job: Job complete: job_201104281044_0001 
11/04/28 10:48:22 INFO mapreduce.Job: Counters: 33 
  FileInputFormatCounters 
    BYTES_READ=2004 
  FileSystemCounters 
    FILE_BYTES_READ=222 
    FILE_BYTES_WRITTEN=476 
    HDFS_BYTES_READ=2155 
    HDFS_BYTES_WRITTEN=384 
  Shuffle Errors 
    BAD_ID=0 
    CONNECTION=0 
    IO_ERROR=0 
    WRONG_LENGTH=0 
    WRONG_MAP=0 
    WRONG_REDUCE=0 
  Job Counters 
    Data-local map tasks=1 
    Total time spent by all maps waiting after reserving slots (ms)=0 
    Total time spent by all reduces waiting after reserving slots (ms)=0 
    SLOTS_MILLIS_MAPS=3232 
    SLOTS_MILLIS_REDUCES=3376 
    Launched map tasks=1 
    Launched reduce tasks=1 
  Map-Reduce Framework 
    Combine input records=0 
    Combine output records=0 
    Failed Shuffles=0 
    GC time elapsed (ms)=47 
    Map input records=12 
    Map output bytes=192 
    Map output records=12 
    Merged Map outputs=1 
    Reduce input groups=12 
    Reduce input records=12 
    Reduce output records=12 
    Reduce shuffle bytes=222 
    Shuffled Maps=1 
    Spilled Records=24 
    SPLIT_RAW_BYTES=151 
11/04/28 10:48:22 WARN conf.Configuration: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb 
11/04/28 10:48:22 WARN conf.Configuration: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor 
Exception in thread "main" java.io.EOFException 
    at java.io.DataInputStream.readFully(DataInputStream.java:197) 
    at java.io.DataInputStream.readFully(DataInputStream.java:169) 
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1518) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1483) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1366) 
    at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:3163) 
    at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.merge(SequenceFile.java:2977) 
    at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2706) 
    at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2677) 
    at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2791) 
    at org.apache.mahout.ga.watchmaker.OutputUtils.importEvaluations(OutputUtils.java:81) 
    at org.apache.mahout.ga.watchmaker.MahoutEvaluator.evaluate(MahoutEvaluator.java:79) 
    at es.udc.tic.world.Poblacion.actualizarFitness(Poblacion.java:66) 
    at Principal.main(Principal.java:42) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:613) 
    at org.apache.hadoop.util.RunJar.main(RunJar.java:192) 

Does anyone know what the problem is? 

Thanks in advance. 
