On 12/14/10 6:50 PM, Claudio Martella wrote:
Hello list,
I have a 3-node cluster and I'm running Nutch 1.2 on it. I also have a
fourth dev machine that launches Hadoop/Nutch jobs on the cluster (its
configuration just specifies the jobtracker and the namenode).
When I launch the job from the node running the jobtracker, Nutch runs
the crawl successfully.
But when I run the job from the dev machine, the crawl stops at depth
1. This is weird because it doesn't complain about any exceptions or
errors; it just stops at the second iteration of the generator.
Basically it injects the seed, runs the first cycle of generate,
fetch -noParsing, parse, and updatedb, and at the second generate it
stops because no new URLs to fetch are found. As a matter of fact, it
even sends the seed's parse to Solr.
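For reference, one iteration of that cycle with the Nutch 1.2
command-line tools looks roughly like this (the crawl/ paths, the
-topN value and the Solr URL below are placeholders, not taken from my
actual setup):

  # one iteration of the crawl cycle, run from the top of the Nutch directory
  bin/nutch inject crawl/crawldb urls
  bin/nutch generate crawl/crawldb crawl/segments -topN 1000
  segment=`ls -d crawl/segments/* | tail -1`   # newest segment
  bin/nutch fetch $segment -noParsing
  bin/nutch parse $segment
  bin/nutch updatedb crawl/crawldb $segment
  bin/nutch invertlinks crawl/linkdb -dir crawl/segments
  bin/nutch solrindex http://localhost:8983/solr crawl/crawldb crawl/linkdb crawl/segments/*

On the dev machine it's the second generate that exits without
selecting anything.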
I copied the Nutch directory AS IS, script included, from the cluster
node to the dev machine. The only difference is that the user running
the job on the dev machine is different, but the HDFS directory I crawl
into is owned by that user (in fact, there are no permission-denied
errors).
This is driving me crazy. Any idea where I should look?
This looks like some environment or property setting issue. The
ultimate answer is the job.xml (available via the jobtracker UI when
you click on the job details), which should contain the right values -
pay particular attention to paths: they should be either relative to
the top of the job jar or point to valid HDFS locations.
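For example, something along these lines (the hostnames, port, job id
and paths are just placeholders) lets you compare what the dev machine
actually submits with what the cluster node uses:

  # grab the submitted job configuration from the jobtracker UI (Hadoop 0.20.x)
  curl -s 'http://jobtracker:50030/jobconf.jsp?jobid=job_201012141200_0001' > dev-job-conf.html

  # diff the Nutch configuration between the two machines
  ssh clusternode 'cat /opt/nutch/conf/nutch-site.xml'      | diff - conf/nutch-site.xml
  ssh clusternode 'cat /opt/nutch/conf/regex-urlfilter.txt' | diff - conf/regex-urlfilter.txt

  # check what a relative crawl path resolves to for the user on the dev machine
  hadoop fs -ls crawl/crawldb    # resolves to /user/<current user>/crawl/crawldb

Any path that differs between the two job configurations, or that
resolves to a different HDFS home directory because of the different
user, is a good suspect.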
--
Best regards,
Andrzej Bialecki <><
Information Retrieval, Semantic Web | Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com