Hi,
I just checked out the latest svn revision (376446) and built it from scratch.
When I tried to run the jobtracker, I got the following message in the
jobtracker log file:
060209 164707 Property 'sun.cpu.isalist' is
Exception in thread "main" java.lang.NullPointerException
at
Hi,
I am trying to run the new svn revision (375414); I am working under the
nutch/trunk directory.
When I ran the command bin/hadoop jobtracker or bin/hadoop-daemon.sh
start jobtracker,
I got the following message:
Exception in thread "main" java.lang.NoClassDefFoundError:
I am still getting the following exception:
Exception in thread "main" java.lang.NullPointerException
at
org.apache.hadoop.mapred.JobTrackerInfoServer.init(JobTrackerInfoServer.java:56)
at org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:303)
at
folder instead of the
jar file. It will work fine then. Something is probably missing from
the
hadoop jar.
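One hedged way to check whether the class from the stack trace (or its webapp resources) actually made it into the jar; the jar path below is a guess for this build layout:

```shell
# List the jar contents and look for the class named in the stack trace.
# The jar path is hypothetical; point it at your actual build output.
jar tf build/hadoop-*.jar | grep JobTrackerInfoServer
# No output would mean the class (or its resources) is missing from the jar,
# which would explain why running from the build folder works fine.
```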
M
On 2/7/06, Rafit Izhak_Ratzin [EMAIL PROTECTED] wrote:
I am still getting the following exception:
Exception in thread "main" java.lang.NullPointerException
You may be better off running under nutch as it was before; also, some minutes
ago Doug moved some scripts back to nutch/bin, so as far as I know it should
work as before.
On 05.02.2006 at 20:40, Rafit Izhak_Ratzin wrote:
Hi,
I updated my environment to the newest subversion revision,
and after running my datanodes and namenode I would like to start fetching.
So my question is: how should I call the class
org.apache.nutch.crawl.Injector
if I am running under the path of .../hadoop/trunk?
Thank you,
Rafit
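In case it helps, a minimal sketch of how the Injector is usually invoked, assuming the stock bin/nutch wrapper script; the crawl/crawldb and urls paths are hypothetical placeholders:

```shell
# The bin/nutch wrapper sets up the classpath for you; the "inject" command
# dispatches to org.apache.nutch.crawl.Injector under the hood.
cd nutch/trunk
bin/nutch inject crawl/crawldb urls
# The wrapper also accepts a fully qualified class name directly:
# bin/nutch org.apache.nutch.crawl.Injector crawl/crawldb urls
```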
Hi Mike,
Thanks for your advice.
However, the problem happens at level two and not at
level one, which means that you successfully fetched the link you mentioned but
you couldn't fetch the links it points to.
So you actually have to find the link at the second level that makes
Hi,
I ran the mapreduce crawl starting with 10 URLs through the sixth cycle, where
it had fetched 400K pages, and everything was fine.
060127 001055 TOTAL urls: 1877326
060127 001055 avg score:1.099
060127 001055 max score:1666.305
060127 001055 min score:1.0
060127 001055 retry
Hi,
In which part of the mapred job is the parsing done, the map part or the
reduce part?
Thanks,
Rafit
The author has left out the parse and updatedb steps.
After the fetch, simply run bin/nutch parse segment/2006 and then bin/nutch
updatedb crawldb segment/2006xxx.
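For reference, a sketch of one full crawl cycle with these steps in order, assuming the stock bin/nutch commands; the crawl/crawldb and segment paths are placeholders:

```shell
# Hypothetical layout: crawl/crawldb and crawl/segments are placeholder paths.
bin/nutch generate crawl/crawldb crawl/segments
SEGMENT=crawl/segments/2006xxx        # the segment that generate just created
bin/nutch fetch $SEGMENT
bin/nutch parse $SEGMENT              # the parse step mentioned above
bin/nutch updatedb crawl/crawldb $SEGMENT   # fold new links back into crawldb
```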
Rafit Izhak_Ratzin wrote:
Hi,
In which part of the mapred job is the parsing done, the map part or the
reduce
Hi,
We have a serious problem fetching pages: this exception blocks the fetching
of all pages. The error appears in the datanode log file; we are using 3
machines and MapReduce.
060116 221332 194 DataXCeiver
java.net.SocketTimeoutException: Read timed out
at