Can you show us the lines in your code where you construct the JobConf? If you don't pass a class to that constructor, Hadoop doesn't have enough of a hint to find your jar files.
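For comparison, here is a minimal sketch of the kind of driver we would expect to see (MyJobDriver and the job name are placeholders, not taken from your code). Passing your driver class to the JobConf constructor is what lets Hadoop work out which jar contains your job and ship it to the task trackers:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MyJobDriver {
        public static void main(String[] args) throws Exception {
            // The Class argument is the hint Hadoop uses to locate the
            // jar containing your job classes, so that jar gets shipped
            // to the task trackers.
            JobConf conf = new JobConf(MyJobDriver.class);
            conf.setJobName("my-job");

            // Equivalent, if the JobConf is created without the class:
            // conf.setJarByClass(MyJobDriver.class);

            // ... set mapper/reducer classes, input and output paths ...

            JobClient.runJob(conf);
        }
    }

If you are already doing this and still get the NullPointerException on the remote node, then the problem is more likely in how the dependency jars are packaged, as discussed below.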
On 10/8/07 12:03 PM, "Christophe Taton" <[EMAIL PROTECTED]> wrote:

> Hi Daniel,
> Can you try to build and run a single jar file which contains all
> required class files directly (i.e. without including jar files inside
> the job jar file)?
> This should prevent classloading problems. If the error still persists,
> then you might suspect other problems.
> Chris
>
> Daniel Wressle wrote:
>> Hello Hadoopers!
>>
>> I have just recently started using Hadoop and I have a question that
>> has puzzled me for a couple of days now.
>>
>> I have already browsed the mailing list and found some relevant posts,
>> especially
>> http://mail-archives.apache.org/mod_mbox/lucene-hadoop-user/200708.mbox/%3c84
>> [EMAIL PROTECTED],
>> but the solution eludes me.
>>
>> My Map/Reduce job relies on external jars, and I had to modify my ant
>> script to include them in the lib/ directory of my jar file. So far,
>> so good. The job runs without any issues when I run it on my local
>> machine only.
>>
>> However, adding a second machine to the mini-cluster presents the
>> following problem: a NullPointerException is thrown as soon as I
>> call any function within a class I have imported from the external
>> jars. Please note that this only happens on the other machine; the
>> maps on my main machine, which I submit the job on, proceed without
>> any warnings.
>>
>> java.lang.NullPointerException at xxx.xxx.xxx (Unknown Source) is the
>> actual log output from hadoop.
>>
>> My jar file contains all the necessary jars in the lib/ directory. Do
>> I need to place them somewhere else on the slaves in order for my
>> submitted job to be able to use them?
>>
>> Any pointers would be much appreciated.
