I installed Java and Hadoop on a GoGrid cloud server running Red Hat Enterprise Linux Server release 5.1 (Tikanga). Hadoop installed and starts fine; however, I get an error (java.lang.NullPointerException at java.util.concurrent.ConcurrentHashMap) while running the Hadoop wordcount example. My guess is that this is a localhost or IPv6 issue.
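For what it's worth, these are the quick checks I used to test that guess. This is just a sketch using standard commands; the exact output depends on the box:

```shell
# Sanity checks for the localhost / IPv6 theory
uname -n                           # the node name the Hadoop daemons will use
getent ahosts localhost            # the results should include 127.0.0.1
# /proc/net/if_inet6 exists only while IPv6 is enabled in the kernel
[ -e /proc/net/if_inet6 ] && echo "IPv6 enabled" || echo "IPv6 disabled"
```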
* I have tried replacing 'localhost' in the Hadoop conf with the local IP address, and then (when out of options) with the server's public IP address.
* I have disabled IPv6 both in sysctl.conf (followed by a server restart) and in hadoop-env.sh.

Any thoughts? Thank you. The output is given below:

```
# bin/hadoop jar hadoop-0.20.2-examples.jar wordcount datasets tests/out7
10/07/27 05:44:59 INFO input.FileInputFormat: Total input paths to process : 1
10/07/27 05:44:59 INFO mapred.JobClient: Running job: job_201007270544_0001
10/07/27 05:45:00 INFO mapred.JobClient: map 0% reduce 0%
10/07/27 05:45:12 INFO mapred.JobClient: map 100% reduce 0%
10/07/27 05:45:17 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_0, Status : FAILED
Error: java.lang.NullPointerException
        at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
10/07/27 05:45:24 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_1, Status : FAILED
Error: java.lang.NullPointerException
        at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
10/07/27 05:45:30 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_2, Status : FAILED
Error: java.lang.NullPointerException
        at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
10/07/27 05:45:39 INFO mapred.JobClient: Job complete: job_201007270544_0001
10/07/27 05:45:39 INFO mapred.JobClient: Counters: 12
10/07/27 05:45:39 INFO mapred.JobClient:   Job Counters
10/07/27 05:45:39 INFO mapred.JobClient:     Launched reduce tasks=4
10/07/27 05:45:39 INFO mapred.JobClient:     Launched map tasks=1
10/07/27 05:45:39 INFO mapred.JobClient:     Data-local map tasks=1
10/07/27 05:45:39 INFO mapred.JobClient:     Failed reduce tasks=1
10/07/27 05:45:39 INFO mapred.JobClient:   FileSystemCounters
10/07/27 05:45:39 INFO mapred.JobClient:     HDFS_BYTES_READ=15319
10/07/27 05:45:39 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=12847
10/07/27 05:45:39 INFO mapred.JobClient:   Map-Reduce Framework
10/07/27 05:45:39 INFO mapred.JobClient:     Combine output records=934
10/07/27 05:45:39 INFO mapred.JobClient:     Map input records=149
10/07/27 05:45:39 INFO mapred.JobClient:     Spilled Records=934
10/07/27 05:45:39 INFO mapred.JobClient:     Map output bytes=25346
10/07/27 05:45:39 INFO mapred.JobClient:     Combine input records=2541
10/07/27 05:45:39 INFO mapred.JobClient:     Map output records=2541
```

--
Dr. Sameer Joshi, Ph.D.
Senior computer scientist, Serene Software.
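In case it matters, here is a sketch of what my IPv6-disabling changes looked like. The hadoop-env.sh line uses the commonly recommended java.net.preferIPv4Stack JVM flag, and the sysctl key names are the usual ones; I am not certain this old RHEL 5 kernel exposes them, so treat the exact keys as an assumption:

```shell
# --- conf/hadoop-env.sh ---
# Commonly recommended JVM flag to make Hadoop prefer IPv4 sockets
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"

# --- /etc/sysctl.conf ---  (followed by a server restart, or 'sysctl -p')
# Usual kernel keys for disabling IPv6; older RHEL 5 kernels may instead
# need 'alias net-pf-10 off' in /etc/modprobe.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```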
