Hello.
Code Sample for the issue:-
I would like to mention that I am able to index documents containing close
to 3.5 lakh (350,000) lines, whereas indexing is NOT happening when the number of
lines is anything greater than 5 lakh (500,000). I get a memory exception
(OutOfMemoryError) from Java...
Would sincerely appreciate some help
Hello.
The server has much more memory. I have given a minimum of 8 GB to the
application server.
The Java opts which are of interest are: -server -Xms8192m -Xmx16384m
-XX:MaxPermSize=8192m
Even after giving this much memory to the server, how come I am hitting
OOM exceptions? No other
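One way to confirm that those -Xms/-Xmx values actually reached the JVM that runs the indexing code (rather than some other launcher configuration) is to print the runtime heap figures. A minimal sketch using only standard java.lang.Runtime calls; the class name is just an example:

// Sketch: print the heap limits the running JVM actually picked up,
// to check that the -Xms/-Xmx options apply to the indexing process.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("max heap (-Xmx) : " + rt.maxMemory() / mb + " MB");
        System.out.println("total heap now  : " + rt.totalMemory() / mb + " MB");
        System.out.println("free of total   : " + rt.freeMemory() / mb + " MB");
    }
}

Running this once from Eclipse and once inside the application server would also show whether the two environments are really getting the same heap settings.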
Hello,
The following exception is being printed on the server console when
trying to index. As usual, indexes are not getting created.
java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.AttributeSource.init(AttributeSource.java:148)
    at
Can someone please suggest a possible resolution for the issue mentioned
in the mail trail below:
Also, after changing some settings on IndexWriterConfig and
LiveIndexWriterConfig, I get the following exception:
20:31:23,540 INFO java.lang.OutOfMemoryError: Java heap space
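The thread does not show which settings were changed; for reference, the usual memory-related knob on IndexWriterConfig is the RAM buffer size. A minimal sketch, assuming Lucene 4.x (Version.LUCENE_44), a StandardAnalyzer, and an example index directory; the 64 MB value is only an illustration:

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class WriterSetup {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("index"));   // example index path
        IndexWriterConfig iwc =
                new IndexWriterConfig(Version.LUCENE_44, new StandardAnalyzer(Version.LUCENE_44));
        // Flush by RAM usage so the writer's in-memory buffer stays bounded
        // regardless of how many documents are added.
        iwc.setRAMBufferSizeMB(64.0);                           // example value
        IndexWriter writer = new IndexWriter(dir, iwc);
        try {
            // ... add documents here ...
        } finally {
            writer.close();
        }
    }
}

Note that the RAM buffer only bounds the writer's own buffering; it cannot help if a single document (or the code building it) already needs more memory than the heap provides.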
The exact point at which a Java program hits OOM is pretty random, and
it isn't always fair to blame the chunk of code triggering the exception.
For example, if a program runs

    callMethodThatAllocatesNearlyAllMemory();
    callMethodThatAllocatesABitOfMemory();

and the second one hits OOM, it is not to blame.
Anyway, your base problem
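A runnable illustration of that point, as a hedged sketch (class name, sizes, and the 0.9 fill factor are arbitrary): run with a small heap such as -Xmx64m, and the tiny second allocation is the one that throws, even though the first call is what consumed the heap.

import java.util.ArrayList;
import java.util.List;

public class OomBlame {
    static List<byte[]> hoard = new ArrayList<>();

    // Fills roughly 90% of the maximum heap in 1 MB chunks.
    static void allocateNearlyAllMemory() {
        long target = (long) (Runtime.getRuntime().maxMemory() * 0.9);
        long allocated = 0;
        while (allocated < target) {
            hoard.add(new byte[1024 * 1024]);
            allocated += 1024 * 1024;
        }
    }

    // A comparatively small allocation that no longer fits.
    static void allocateABitOfMemory() {
        hoard.add(new byte[16 * 1024 * 1024]);  // typically where the OOM is reported
    }

    public static void main(String[] args) {
        allocateNearlyAllMemory();
        allocateABitOfMemory();                 // stack trace usually points here
    }
}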
Ankit,
The stack traces you are showing only say there was an out-of-memory
error. In such cases, the stack trace is unfortunately not always
helpful, since the allocation may fail on a small object because
another object is taking up all the memory of the JVM. Can you come up
with a small piece of
Any help would be highly appreciated. I am kind of stuck and
unable to find a possible solution.
On 8/29/2013 11:21 AM, Ankit Murarka wrote:
Hello all,
Faced with a typical issue.
I have many files which I am indexing.
Problem Faced:
a. Files having size less than 20 MB are
Lucene doesn't have document size limits.
There are default limits for how many tokens the highlighters will process ...
But, if you are passing each line as a separate document to Lucene,
then Lucene only sees a bunch of tiny documents, right?
Can you boil this down to a small test showing the
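A minimal sketch of what "each line as a separate document" looks like (assuming Lucene 4.x, a StandardAnalyzer, and example field names; this is not the original poster's code): every line becomes a tiny document, so no single document ever holds the whole file.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LinePerDocumentIndexer {
    public static void main(String[] args) throws Exception {
        File file = new File(args[0]);                        // file to index
        Directory dir = FSDirectory.open(new File("index"));  // example index path
        IndexWriterConfig iwc =
                new IndexWriterConfig(Version.LUCENE_44, new StandardAnalyzer(Version.LUCENE_44));
        IndexWriter writer = new IndexWriter(dir, iwc);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8))) {
            String line;
            int lineNo = 0;
            while ((line = reader.readLine()) != null) {
                lineNo++;
                Document doc = new Document();
                doc.add(new StringField("path", file.getPath(), Field.Store.YES));
                doc.add(new StringField("line", Integer.toString(lineNo), Field.Store.YES));
                doc.add(new TextField("contents", line, Field.Store.NO));
                writer.addDocument(doc);                      // one tiny document per line
            }
        } finally {
            writer.close();
        }
    }
}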
Yes, I know that Lucene should not have any document size limits. All I
get is a lock file inside my index folder; apart from that, there is no
other file inside the index folder. Then I get an OOM exception.
Please provide some guidance...
Here is the example:
package com.issue;
import
So you do get an exception after all, OOM.
Try it without this line:
doc.add(new TextField("contents", new BufferedReader(new
InputStreamReader(fis, "UTF-8"))));
I think that will slurp the whole file in one go, which will obviously
need more memory on larger files than on smaller ones.
Or just run
Hello,
I get the exception only when the code is fired from Eclipse.
When it is deployed on an application server, I get no exception at all.
This forced me to invoke the same code from Eclipse and check what the
issue is.
I ran the code on the server with 8 GB memory. Even then no
Well, I use neither Eclipse nor your application server and can offer
no advice on any differences in behaviour between the two. Maybe you
should try Eclipse or app server forums.
If you are going to index the complete contents of a file as one field,
you are likely to hit OOM exceptions. How
Hello all,
Faced with a typical issue.
I have many files which I am indexing.
Problem Faced:
a. Files having size less than 20 MB are successfully indexed and merged.
b. Files having size 20 MB are not getting indexed. No exception is
being thrown. Only a lock file is being created in the index