Here is my log when I have DEBUG level:
2013-03-24 03:10:46,038 INFO mapreduce.GoraRecordWriter -
gora.buffer.write.limit = 10000
2013-03-24 03:10:46,040 WARN mapred.FileOutputCommitter - Output path is null in cleanup
2013-03-24 03:10:46,040 WARN mapred.LocalJobRunner - job_local_0034
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:691)
at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:415)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:378)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:97)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:119)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:147)
at org.apache.gora.hbase.store.HBaseTableConnection$1.<init>(HBaseTableConnection.java:81)
at org.apache.gora.hbase.store.HBaseTableConnection.getTable(HBaseTableConnection.java:81)
at org.apache.gora.hbase.store.HBaseTableConnection.put(HBaseTableConnection.java:192)
at org.apache.gora.hbase.store.HBaseStore.put(HBaseStore.java:241)
at org.apache.gora.mapreduce.GoraRecordWriter.write(GoraRecordWriter.java:60)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.nutch.crawl.GeneratorReducer.reduce(GeneratorReducer.java:78)
at org.apache.nutch.crawl.GeneratorReducer.reduce(GeneratorReducer.java:40)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
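For what it's worth, this OOM is usually not heap exhaustion: "unable to create new native thread" means the JVM asked the OS for a thread and was refused, most often because of the per-user process/thread cap (CentOS 6 defaults it to 1024 via /etc/security/limits.d/90-nproc.conf, which a local crawl can hit). A quick sanity check, assuming a Linux shell (the paths and the 1024 default are assumptions worth verifying on your box):

```shell
# "unable to create new native thread" is an OS-level refusal, not Java heap.

# 1) Per-user process/thread limit for the current shell; a low value here
#    (e.g. 1024) is the most common cause of this error:
ulimit -u

# 2) System-wide thread ceiling (Linux only; harmless if the file is absent):
cat /proc/sys/kernel/threads-max 2>/dev/null || true
```

If `ulimit -u` comes back low, raising it before launching the crawl (e.g. `ulimit -u 4096`, or editing the limits.d file) is a reasonable first experiment. Note also that the first trace shows a new ZooKeeper connection being opened from inside HBaseTableConnection.put, which multiplies the thread count per write.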
2013/3/24 lewis john mcgibbney [via Lucene] <[email protected]>
> Can you please turn logging to DEBUG, then step through the job.
> Provide any observations please.
> Lewis
>
> On Sat, Mar 23, 2013 at 5:38 PM, kamaci <[hidden email]> wrote:
>
> > After crawling when I run that command:
> >
> > bin/nutch solrindex http://localhost:8983/solr -index
> >
> > Sometimes I get that error:
> >
> > FetcherJob: threads: 10
> > FetcherJob: parsing: false
> > FetcherJob: resuming: false
> > FetcherJob : timelimit set for : -1
> > Exception in thread "main" java.lang.RuntimeException: job failed:
> > name=fetch, jobid=job_local_0031
> > at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54)
> > at org.apache.nutch.fetcher.FetcherJob.run(FetcherJob.java:194)
> > at org.apache.nutch.crawl.Crawler.runTool(Crawler.java:68)
> > at org.apache.nutch.crawl.Crawler.run(Crawler.java:161)
> > at org.apache.nutch.crawl.Crawler.run(Crawler.java:250)
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > at org.apache.nutch.crawl.Crawler.main(Crawler.java:257)
> >
> > When I check hadoop log I see that:
> >
> > 2013-03-24 01:58:42,151 WARN mapred.LocalJobRunner - job_local_0031
> > java.lang.OutOfMemoryError: unable to create new native thread
> > at java.lang.Thread.start0(Native Method)
> > at java.lang.Thread.start(Thread.java:691)
> > at org.apache.nutch.fetcher.FetcherReducer.run(FetcherReducer.java:790)
> > at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
> > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
> > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
> >
> > I have Nutch 2.1 and Solr 4.2 running on 64-bit CentOS 6.4, on a Core i7
> > with 8 cores (4 physical) and 8 GB of RAM. What may be the reason?
> >
> >
> >
> > --
> > View this message in context:
> > http://lucene.472066.n3.nabble.com/waitForCompletion-Error-tp4050809.html
> > Sent from the Nutch - User mailing list archive at Nabble.com.
> >
>
>
>
> --
> *Lewis*
>
>
--
View this message in context:
http://lucene.472066.n3.nabble.com/waitForCompletion-Error-tp4050809p4050811.html
Sent from the Nutch - User mailing list archive at Nabble.com.