Can you please set logging to DEBUG, then step through the job?
Provide any observations, please. I have also left a note about the
OutOfMemoryError below your quoted trace.
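
If it helps, here is a minimal sketch of how DEBUG could be switched on,
assuming the stock conf/log4j.properties layout that ships with Nutch
(adjust the logger names to whatever your copy actually defines):

    # conf/log4j.properties -- raise the Nutch and Hadoop loggers to DEBUG
    log4j.logger.org.apache.nutch=DEBUG
    log4j.logger.org.apache.hadoop=DEBUG

Then re-run the fetch and watch logs/hadoop.log while it runs.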
Lewis

On Sat, Mar 23, 2013 at 5:38 PM, kamaci <[email protected]> wrote:

> After crawling, when I run this command:
>
> bin/nutch solrindex http://localhost:8983/solr -index
>
> Sometimes I get this error:
>
> FetcherJob: threads: 10
> FetcherJob: parsing: false
> FetcherJob: resuming: false
> FetcherJob : timelimit set for : -1
> Exception in thread "main" java.lang.RuntimeException: job failed:
> name=fetch, jobid=job_local_0031
>         at
> org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54)
>         at org.apache.nutch.fetcher.FetcherJob.run(FetcherJob.java:194)
>         at org.apache.nutch.crawl.Crawler.runTool(Crawler.java:68)
>         at org.apache.nutch.crawl.Crawler.run(Crawler.java:161)
>         at org.apache.nutch.crawl.Crawler.run(Crawler.java:250)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.nutch.crawl.Crawler.main(Crawler.java:257)
>
> When I check hadoop log I see that:
>
> 2013-03-24 01:58:42,151 WARN  mapred.LocalJobRunner - job_local_0031
> java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:691)
>         at
> org.apache.nutch.fetcher.FetcherReducer.run(FetcherReducer.java:790)
>         at
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
>         at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
>
> I have Nutch 2.1 and Solr 4.2 running on 64-bit CentOS 6.4, on a Core i7
> with 8 cores (4 physical) and 8 GB of RAM. What could be the reason?
>
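
One note on the trace above: "unable to create new native thread" usually
means the JVM hit an OS-level limit rather than running out of heap. On
CentOS the per-user process limit (which also counts threads) is a common
culprit. The commands below are a sketch of how to check it and raise it
for the current shell; the 4096 value is only an example:

    # check the per-user process/thread limit
    ulimit -u

    # raise it for this shell before re-running the crawl
    ulimit -u 4096

If raising the limit is not possible, lowering fetcher.threads.fetch in
conf/nutch-site.xml (the FetcherJob output shows 10 at the moment) would
reduce the number of threads the fetcher spawns.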



-- 
Lewis
