Java is trying to open the file /nutch/search/logs for appending, but can't because /nutch/search/logs is a directory.

If you read the Java stack trace, it gives you a clue.
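A typical fix is to point the log4j appender at a file inside that directory rather than at the directory itself. A minimal log4j.properties sketch (the appender name DRFA, the layout, and the property names are assumptions based on a typical Nutch/Hadoop setup — match them to your own config):

```properties
# Sketch only: adjust appender name and pattern to your existing config.
log4j.rootLogger=INFO,DRFA

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
# File must resolve to a file path such as /nutch/search/logs/hadoop.log,
# NOT to the directory /nutch/search/logs itself.
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

If your setup resolves the log location from system properties like hadoop.log.dir and hadoop.log.file, check wherever those are exported (e.g. your environment scripts) and make sure the file name property is actually set — an empty file name can leave just the directory path behind.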

Cheers,
Carl.

vikasran wrote:
I am running into a few issues running Nutch with distributed Hadoop on 2
nodes:

Configuration:
2 nodes. One is master+slave, the second node is just a slave.

I set mapred.map.tasks and mapred.reduce.tasks to 2.

Crawl works fine on a single node (only one node acting as master+slave). When
I add the second node to the conf/slaves file, the crawl fails with the message:
Stopping at depth=0 - no more URLs to fetch

Please help. I am also seeing this log4j error:
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /nutch/search/logs (Is a directory)
        at java.io.FileOutputStream.openAppend(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:177)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:102)
        at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)

