-----Original message-----
From: Tejas Patil tejas.patil...@gmail.com
Sent: Sun 03-Mar-2013 22:19
To: user@nutch.apache.org
Subject: Re: Nutch 1.6 : java.lang.OutOfMemoryError: unable to create new native thread
Kiran,
Were you able to resolve this issue? I am getting the same error when
fetching a huge number of URLs.
-Neeraj.
Hi Kiran,
there are many possible reasons for the problem. Besides the limits on the
number of processes, check the stack size in the Java VM and in the system
(see java -Xss and ulimit -s). I think in local mode there should be only
one mapper and consequently only one thread spent for parsing.
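Sebastian's checks can be run directly from a shell. A minimal sketch: the 512k stack value is purely illustrative, and whether the bin/nutch launcher forwards NUTCH_OPTS to the JVM is an assumption here, not something confirmed in this thread.

```shell
# Inspect the limits Sebastian mentions. On Linux, each Java thread
# consumes a process slot (ulimit -u) and its own stack (ulimit -s / -Xss).
ulimit -u   # max user processes/threads
ulimit -s   # default stack size in KB

# A smaller per-thread stack lets the JVM create more native threads in
# the same address space. 512k is an illustrative value, not a tuned
# recommendation; NUTCH_OPTS being picked up by bin/nutch is an assumption.
export NUTCH_OPTS="-Xss512k"
```

"unable to create new native thread" usually means the OS refused a thread, so lowering the per-thread stack or raising the process limit tends to matter more than raising -Xmx.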
Thanks Sebastian for the suggestions. I got past this by using a lower
value for topN (2000). I decided to use a lower value for topN with more
rounds.
On Sun, Mar 3, 2013 at 3:41 PM, Sebastian Nagel wastl.na...@googlemail.com wrote:
using low value for topN(2000) than 1
That would mean you need 200 rounds and also 200 segments for 400k documents.
That's a work-around, not a solution!
If you find the time you should trace the process.
Seems to be either a misconfiguration or even a bug.
Sebastian
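Sebastian's round count is plain arithmetic: with a fixed topN per generate/fetch cycle, the number of rounds (and hence segments) is the ceiling of the total crawl size divided by topN. A quick sketch using the numbers from the thread:

```python
import math

total_docs = 400_000  # intended crawl size from the thread
top_n = 2_000         # per-round limit Kiran switched to

rounds = math.ceil(total_docs / top_n)
print(rounds)  # 200 rounds, i.e. 200 segments
```

Each round produces its own segment directory, which is why a small topN inflates segment count even though the total fetched stays the same.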
On 03/03/2013 09:45 PM,
If you find the time you should trace the process.
Seems to be either a misconfiguration or even a bug.
I will try to track this down soon with the previous configuration. Right
now, I am just trying to get the data crawled by Monday.
Kiran.
Luckily, you should be able to retry via bin/nutch
I agree with Sebastian. It was a crawl in local mode and not over a
cluster. The intended crawl volume is huge and if we don't override the
default heap size to some decent value, there is a high possibility of
facing an OOM.
On Sun, Mar 3, 2013 at 1:04 PM, kiran chitturi
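Tejas's heap-size point can be sketched as follows, assuming the bin/nutch launcher reads a NUTCH_HEAPSIZE environment variable (in MB) and turns it into the JVM's -Xmx, with a default of roughly 1000 MB; the segment path is hypothetical and only for illustration.

```shell
# Assumption: bin/nutch maps NUTCH_HEAPSIZE (MB) to -Xmx for the
# local-mode JVM; the script's built-in default is about 1000 MB.
export NUTCH_HEAPSIZE=4000   # 4 GB heap instead of the default

# Hypothetical segment path, for illustration only.
bin/nutch parse crawl/segments/20130303121900
```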
Sorry, I am looking to crawl 400k documents with the crawl. I said 400 in
my last message.
On Sat, Mar 2, 2013 at 2:12 PM, kiran chitturi chitturikira...@gmail.com wrote:
Hi!
I am running Nutch 1.6 on a 4 GB Mac OS desktop with a Core i5 2.8 GHz.
Last night I started a crawl in local mode for
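For context, a local-mode crawl like the one described could be launched with Nutch 1.6's one-shot crawl command; the seed directory, depth, thread count, and topN below are illustrative values, not taken from the thread.

```shell
# Illustrative Nutch 1.6 local-mode crawl; 'urls' holds the seed list.
# -topN caps the URLs generated per round, which is the work-around
# discussed above; values here are assumptions for the sketch.
bin/nutch crawl urls -dir crawl -depth 10 -topN 2000 -threads 10
```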