Thank you for your input. However, I am still unable to figure out what the
problem is.

I have followed the documentation to set up Nutch with Hadoop and don't know
what I am doing wrong. I noticed that in all of the processes hadoop.log.dir
is set to /nutch/search/logs and hadoop.log.file is set to a valid file name.
If that is the case, why is log4j trying to write to the directory itself?
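
For reference, here is how I understand the file name gets built. In the
stock Hadoop conf/log4j.properties (assuming I am reading my copy
correctly; the values below are from my setup, not the shipped defaults),
the daily rolling file appender concatenates the two properties, so if
hadoop.log.file ever resolved to an empty string the path would
degenerate into the bare log directory:

    # excerpt from conf/log4j.properties
    hadoop.log.dir=/nutch/search/logs
    hadoop.log.file=hadoop.log

    log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

As far as I can tell, bin/hadoop also overrides both properties on the
JVM command line (-Dhadoop.log.dir=... -Dhadoop.log.file=...), so the
values in the properties file should only act as fallbacks.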

Has anyone faced the same issue with Nutch and Hadoop? Is there a
configuration file that needs to be tweaked to get this to work?
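
To convince myself of what the stack trace below actually means, I wrote
the minimal sketch that follows (the class name and the property defaults
are mine, purely for illustration). It composes the path the same way the
appender configuration appears to, then opens it for append, which is the
same call FileAppender.setFile ends up making:

    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class LogPathCheck {
        public static void main(String[] args) throws IOException {
            // Mirror what the log4j config appears to do: dir + "/" + file.
            String dir  = System.getProperty("hadoop.log.dir", "/nutch/search/logs");
            String file = System.getProperty("hadoop.log.file", ""); // empty if unset
            String path = dir + "/" + file;
            System.out.println("log4j would append to: " + path);
            try {
                // Append mode, the same mode FileAppender uses.
                new FileOutputStream(path, true).close();
                System.out.println("open succeeded");
            } catch (FileNotFoundException e) {
                // With an empty hadoop.log.file this prints something like:
                //   open failed: /nutch/search/logs/ (Is a directory)
                System.out.println("open failed: " + e.getMessage());
            }
        }
    }

When hadoop.log.file is empty, the open fails exactly like the exception
quoted below, so the only explanation I can come up with is that one of
the two properties is blank in whichever JVM logs the error, even though
both look correct in the processes I inspected.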

Carl Cerecke wrote:
> 
> Java is trying to open the file /nutch/search/logs for appending, but 
> can't because /nutch/search/logs is a directory.
> 
> If you read the java stack trace, it gives you a clue.
> 
> Cheers,
> Carl.
> 
> vikasran wrote:
> >> I am running into a few issues running Nutch with distributed Hadoop
> >> on 2 nodes:
>> 
>> Configuration:
>> 2 nodes. One is master+slave, second node is just slave
>> 
>> I set mapred.map.tasks and mapred.reduce.tasks to 2.
>> 
> >> Crawl works fine on a single node (one node acting as master+slave).
> >> When I add the second node to the conf/slaves file, the crawl fails
> >> with the message: Stopping at depth=0 - no more URLs to fetch
>> 
> >> Please help. I am also seeing this log4j error:
>> log4j:ERROR setFile(null,true) call failed.
>> java.io.FileNotFoundException: /nutch/search/logs (Is a directory)
>>         at java.io.FileOutputStream.openAppend(Native Method)
>>         at java.io.FileOutputStream.<init>(FileOutputStream.java:177)
>>         at java.io.FileOutputStream.<init>(FileOutputStream.java:102)
>>         at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)
>> 
>> 
>> PLEASE HELP
> 
> 
> 
