Isn't this the same problem that was happening before with
SegmentMerger, where the nutch-x.x.jar needed to be added to the
classpath on all of the task trackers? We added the following code to
our hadoop script just below the other for loops, redeployed the
script, and restarted all task trackers:
for f in $HADOOP_HOME/nutch-*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done
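For reference, the other loops in bin/hadoop look something like the
one below (this lib loop is from memory, so check your own copy of the
script); the new nutch loop just goes right after them:

for f in $HADOOP_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done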
Dennis
Vishal Shah wrote:
Hi Andrzej,
Thanks for the reply. I have a job running on the system right now,
but I'll try to reinstall (redeploy ;-)) it after it is done.
Regards,
-vishal.
-----Original Message-----
From: Andrzej Bialecki [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 12, 2006 2:40 PM
To: [email protected]
Subject: Re: ClassNotFoundException while using segread
Vishal Shah wrote:
> Hi Andrzej,
> Thanks for the reply. Currently, I have deployed Hadoop/Nutch using
> the instructions in the hadoop/nutch tutorial. I have copied
Ok, then forget my explanation - it is still true, but not applicable to
your case.
> the nutch jars in my NUTCH_HOME directory. I tried copying the
> nutch-xxxx.job to my lib directory, but that doesn't work either.
No, you shouldn't need to do this. The scripts should find all necessary
jars and put them on CLASSPATH.
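If you want to see what the script actually puts on the classpath, you
can trace it and look for the CLASSPATH assignments (this is just a
quick diagnostic, not part of the setup):

bash -x bin/nutch segread 2>&1 | grep CLASSPATH

The nutch jar/job files should show up in the last assignment printed.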
> Do I need to set the CLASSPATH before I run bin/start-all.sh, or is
> it something else? Sorry, I am new to Java development, so I don't
> know what you mean by deploying something.
Well, I'm not sure what could be wrong... Does it occur for you with a
clean installation, i.e. if you get a fresh copy, rebuild, reinstall
from scratch, and try again?
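Something along these lines usually works (assuming you run from a
source checkout; the path is illustrative, and "job" is the ant target
that builds the nutch-*.job file):

cd ~/nutch            # your source checkout (path illustrative)
ant clean job         # rebuild the jars and the nutch-*.job file
bin/stop-all.sh
# re-copy the fresh build to every node, then:
bin/start-all.sh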