RE: NUTCH_CRAWLING

2009-10-15 Thread BELLINI ADAM

hi,

You have to check hadoop.log to get more details about this error; the
message you pasted is pretty generic. As I said before, Nutch error
messages are not well designed...
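
For example, assuming the default log4j setup that writes to logs/hadoop.log
under the directory you run Nutch from (the path and search pattern below are
just illustrative), something like this usually shows the real cause behind
the generic "Job failed!":

  # show the tail of the log, where the failed job leaves its trace
  tail -n 200 logs/hadoop.log
  # or search for the underlying exception directly
  grep -i -B 2 -A 10 'exception' logs/hadoop.log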
I had this error before when I added a plugin: after running ant, the
plugin folder was missing from the build folder...
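
If you suspect the same thing, rebuild and check that your plugin actually
ended up in the build output; the plugin name below is just a placeholder
for your own:

  # rebuild nutch and its plugins
  ant
  # verify the plugin folder was produced (e.g. my-plugin)
  ls build/plugins/ | grep my-plugin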

thx


> Date: Wed, 14 Oct 2009 22:28:48 -0700
> From: mehalaki...@gmail.com
> To: nutch-user@lucene.apache.org
> Subject: NUTCH_CRAWLING
> 
> 
> Hi, 
> 
> bin/nutch crawl urls -dir crawl_NEW1 -depth 3 -topN 50 
> 
> I used the above command to crawl, and I am getting the following error: 
> 
> Dedup: adding indexes in: crawl_NEW1/indexes 
> Exception in thread "main" java.io.IOException: Job failed! 
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604) 
>         at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439) 
>         at org.apache.nutch.crawl.Crawl.main(Crawl.java:135) 
> 
> 
> Can anyone help me resolve this problem? 
> 
> Thank you in advance. 
> 
> -- 
> View this message in context: 
> http://www.nabble.com/NUTCH_CRAWLING-tp25903220p25903220.html
> Sent from the Nutch - User mailing list archive at Nabble.com.
> 
  

NUTCH_CRAWLING

2009-10-14 Thread meh

Hi, 

bin/nutch crawl urls -dir crawl_NEW1 -depth 3 -topN 50 

I used the above command to crawl, and I am getting the following error: 

Dedup: adding indexes in: crawl_NEW1/indexes 
Exception in thread "main" java.io.IOException: Job failed! 
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604) 
        at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439) 
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:135) 


Can anyone help me resolve this problem? 

Thank you in advance. 

-- 
View this message in context: 
http://www.nabble.com/NUTCH_CRAWLING-tp25903220p25903220.html
Sent from the Nutch - User mailing list archive at Nabble.com.


