This problem didn't show up when I ran the tests, which led me to conclude that
it was rooted in my local setup.
The problem magically disappeared after I cleaned up the classpath a bit
(removing the fault-injection jars and a few others). This experience taught me
a few things:
1. Fault injection compiles by default (why? and how do I disable its build?).
It doesn't build cleanly either, by the way -- there's some confusion about
version numbers.
2. Classpaths are a mess. The scripts driving hadoop
(bin/{hadoop,mapred,hdfs}) look for jars in a bunch of places and are not
consistent with one another. Related to that: there doesn't seem to be a
build target for mapred/hdfs/common that would deploy all the necessary jars
where the scripts can pick them up; I think this needs to be fixed.
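For what it's worth, here is a rough sketch of how I'd compare what the
different driver scripts end up with. The jar names below are stand-ins, not
the real entries -- in an actual checkout you'd capture each script's computed
classpath instead (e.g. via `bin/hadoop classpath | tr ':' '\n' | sort`,
assuming the script supports the `classpath` subcommand):

```shell
# Stand-in classpaths; replace with the real output of each driver script.
cp_hadoop="common.jar:hdfs.jar:fi-test.jar"
cp_mapred="common.jar:mapred.jar"

# Split each colon-separated classpath onto one entry per line, sorted,
# so the two lists can be compared with comm(1).
echo "$cp_hadoop" | tr ':' '\n' | sort > /tmp/cp_hadoop.txt
echo "$cp_mapred" | tr ':' '\n' | sort > /tmp/cp_mapred.txt

# Print only the entries that appear in one classpath but not the other
# (-3 suppresses the column of entries common to both).
comm -3 /tmp/cp_hadoop.txt /tmp/cp_mapred.txt
```

In my case a diff like this is what flagged the stray fault-injection jar that
one script picked up and the others didn't.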
Please correct me if I'm wrong on any of these observations.
Thanks,
-Yuri
On Wednesday 13 July 2011 19:44:07 [email protected] wrote:
> Greetings,
>
> I'm running common/hdfs/mapreduce trunk version
> -r1146503; I'm getting the following error at the reduce phase:
>
> Error: tried to access class
> org.apache.hadoop.mapred.JobInitializationPoller$JobInitializationThread
> from class org.apache.hadoop.mapreduce.task.reduce.Shuffle
>
> All reduce tasks die this death.
>
> Any clue would be appreciated.
>
> Thanks,
>
> -Yuri