Hi Mike,
Thank you very much for your help. I will try making a FilterAtomicReader subclass to solve this issue.
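In case it helps anyone else on the list, here is the minimal, untested sketch I have in mind; the class name and the whitelist set below are my own placeholders:

import java.io.IOException;
import java.util.Set;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.index.NumericDocValues;

/** Hides norms for every field except a whitelist, so the codec never
 *  loads the 1-byte-per-doc norms array for the sparse fields. */
public class NormsHidingReader extends FilterAtomicReader {

  private final Set<String> fieldsWithNorms; // fields whose norms we keep

  public NormsHidingReader(AtomicReader in, Set<String> fieldsWithNorms) {
    super(in);
    this.fieldsWithNorms = fieldsWithNorms;
  }

  @Override
  public NumericDocValues getNormValues(String field) throws IOException {
    // null means "this field has no norms", so nothing is materialized
    return fieldsWithNorms.contains(field) ? in.getNormValues(field) : null;
  }
}

As I understand it, returning null from getNormValues() keeps the codec from materializing the per-doc array for that field, though the norms stay on disk, so the force-merge route below is the permanent fix.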






Best Regards!




------------------ Original ------------------
From:  "Michael McCandless";<luc...@mikemccandless.com>;
Date:  Sun, Sep 14, 2014 02:48 AM
To:  "Lucene Users"<java-user@lucene.apache.org>; 

Subject:  Re: OutOfMemoryError thrown by SimpleMergedSegmentWarmer



Norms are not stored sparsely by the default codec.

So they take 1 byte per doc per indexed field regardless of whether
that doc had that field.
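(To put numbers on that, using the figures from earlier in this thread: 8
million docs x 2,000 indexed fields x 1 byte/doc is roughly 16 GB of norms,
far more than a 5 GB heap; 2,000 here is just my stand-in for "thousands
of fields".)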

There is no setting to turn this off in IndexReader, though you could
make a FilterAtomicReader subclass to do this.

Or, you can disable norms for these fields (e.g. add a single doc that
has all these fields with norms disabled) and then do a force merge; the
norms-disabled setting will "spread" to the merged segment.
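A rough, untested sketch of that second approach; the method, class name,
writer, and field list below are placeholders, not code from this thread:

import java.io.IOException;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class DisableNormsSketch {
  /** Index one doc that carries every field with norms omitted, then
   *  force-merge so the norms-disabled flag wins in the merged segment. */
  static void disableNorms(IndexWriter writer, List<String> allFieldNames)
      throws IOException {
    FieldType noNorms = new FieldType(TextField.TYPE_NOT_STORED);
    noNorms.setOmitNorms(true);
    noNorms.freeze();

    Document doc = new Document();
    for (String name : allFieldNames) {
      doc.add(new Field(name, "placeholder", noNorms));
    }
    writer.addDocument(doc);  // the "carrier" doc; can be deleted later
    writer.forceMerge(1);     // omitNorms spreads to the merged segment
  }
}

The placeholder doc can be deleted afterwards; once any segment omits norms
for a field, the flag sticks through merges. Just note that omitting norms
drops length normalization from scoring on those fields.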

Mike McCandless

http://blog.mikemccandless.com


On Sat, Sep 13, 2014 at 8:20 AM, 308181687 <308181...@qq.com> wrote:
> Hi Mike,
>    In our use case, we have thousands of indexed fields, and different kinds
> of documents have different fields. Do you mean that the norms will consume
> a large amount of memory? Why?
>
>
>    If we decide to disable norms, do we need to rebuild our index entirely?
> By the way, we have 8 million documents and our JVM heap is 5 GB.
>
>
> Thanks & Best Regards!
>
>
>
>
>
> ------------------ Original ------------------
> From:  "Michael McCandless";<luc...@mikemccandless.com>;
> Date:  Sat, Sep 13, 2014 06:29 PM
> To:  "Lucene Users"<java-user@lucene.apache.org>;
>
> Subject:  Re: OutOfMemoryError thrown by SimpleMergedSegmentWarmer
>
>
>
> The warmer just tries to load norms/docValues/etc. for all fields that
> have them enabled ... so this is likely telling you an IndexReader
> would also hit OOME.
>
> You either need to reduce the number of fields you have indexed, or at
> least disable norms (they take 1 byte per doc per indexed field regardless
> of whether that doc had that field), or increase the JVM heap.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Sat, Sep 13, 2014 at 4:25 AM, 308181687 <308181...@qq.com> wrote:
>> Hi all,
>>    We got an OutOfMemoryError thrown by SimpleMergedSegmentWarmer. We use
>> Lucene 4.7, and access index files via NRTCachingDirectory/MMapDirectory.
>> Could anybody give me a hand? The stack trace is as follows:
>>
>> org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError: Java heap space
>>     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>     at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:228)
>>     at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:188)
>>     at org.apache.lucene.index.SegmentCoreReaders.getNormValues(SegmentCoreReaders.java:166)
>>     at org.apache.lucene.index.SegmentReader.getNormValues(SegmentReader.java:519)
>>     at org.apache.lucene.index.SimpleMergedSegmentWarmer.warm(SimpleMergedSegmentWarmer.java:52)
>>     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4275)
>>     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
>>     at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>>     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
>>
>>
>>
>>
>> Thanks & Best Regards!
>

