Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
Unfortunately I really don't know ;)  Every time I set forth to figure
things like this out I seem to learn some new way...

Maybe someone else knows?

Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2011 at 2:15 PM, Shawn Heisey  wrote:
> Michael,
>
> What is the best central place on an rpm-based distro (CentOS 6 in my case)
> to raise the vmem limit for specific user(s), assuming it's not already
> correct?  I'm using /etc/security/limits.conf to raise the open file limit
> for the user that runs Solr:
>
> ncindex         hard    nofile  65535
> ncindex         soft    nofile  49151
>
> Thanks,
> Shawn
>
>
> On 9/22/2011 9:56 AM, Michael McCandless wrote:
>>
>> OK, excellent.  Thanks for bringing closure,
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Thu, Sep 22, 2011 at 9:00 AM, Ralf Matulat
>>  wrote:
>>>
>>> Dear Mike,
>>> thanks for your reply.
>>> Just a couple of minutes ago we found a solution - or, to be honest,
>>> found where we went wrong.
>>> Our mistake was the use of ulimit: we missed that ulimit sets the vmem
>>> limit for each shell separately. So we had set 'ulimit -v unlimited' in
>>> one shell, thinking that we had done the job correctly.
>>> Once we recognized the mistake, we added 'ulimit -v unlimited' to the
>>> init script of the Tomcat instance, and now everything looks like it
>>> works as expected.
>>>
>
>


Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Shawn Heisey

Michael,

What is the best central place on an rpm-based distro (CentOS 6 in my 
case) to raise the vmem limit for specific user(s), assuming it's not 
already correct?  I'm using /etc/security/limits.conf to raise the open 
file limit for the user that runs Solr:


ncindex         hard    nofile  65535
ncindex         soft    nofile  49151
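
My guess is that the analogous limits.conf entries for virtual memory
would use the 'as' (address space) item - something like the lines
below - though I have not verified this:

ncindex         hard    as      unlimited
ncindex         soft    as      unlimited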

Thanks,
Shawn


On 9/22/2011 9:56 AM, Michael McCandless wrote:

OK, excellent.  Thanks for bringing closure,

Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2011 at 9:00 AM, Ralf Matulat  wrote:

Dear Mike,
thanks for your reply.
Just a couple of minutes ago we found a solution - or, to be honest,
found where we went wrong.
Our mistake was the use of ulimit: we missed that ulimit sets the vmem
limit for each shell separately. So we had set 'ulimit -v unlimited' in
one shell, thinking that we had done the job correctly.
Once we recognized the mistake, we added 'ulimit -v unlimited' to the
init script of the Tomcat instance, and now everything looks like it
works as expected.





Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
OK, excellent.  Thanks for bringing closure,

Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2011 at 9:00 AM, Ralf Matulat  wrote:
> Dear Mike,
> thanks for your reply.
> Just a couple of minutes ago we found a solution - or, to be honest,
> found where we went wrong.
> Our mistake was the use of ulimit: we missed that ulimit sets the vmem
> limit for each shell separately. So we had set 'ulimit -v unlimited' in
> one shell, thinking that we had done the job correctly.
> Once we recognized the mistake, we added 'ulimit -v unlimited' to the
> init script of the Tomcat instance, and now everything looks like it
> works as expected.
> Need some further testing with the java versions, but I'm quite optimistic.
> Best regards
> Ralf
>
> On 22.09.2011 at 14:46, Michael McCandless wrote:
>>
>> Are you sure you are using a 64 bit JVM?
>>
>> Are you sure you really changed your vmem limit to unlimited?  That
>> should have resolved the OOME from mmap.
>>
>> Or: can you run "cat /proc/sys/vm/max_map_count"?  This is a limit that
>> Linux imposes on the total number of maps in a single process.  But the
>> default is usually high (64K), so it'd be surprising if you are hitting
>> it unless it's lower in your env.
>>
>> The amount of [free] RAM on the machine should have no bearing on
>> whether mmap succeeds or fails; it's the available address space (32
>> bit is tiny; 64 bit is immense) and then any OS limits imposed.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Thu, Sep 22, 2011 at 5:27 AM, Ralf Matulat
>>  wrote:
>>>
>>> Good morning!
>>> Recently we ran into an OOME while optimizing our index. It looks like
>>> it is related to the nio class and memory handling.
>>> I'll try to describe the environment, the error, and what we did to try
>>> to solve the problem; none of our approaches was successful.
>>>
>>> The environment:
>>>
>>> - Tested with both SOLR 3.3 & 3.4
>>> - SuSE SLES 11 (x64) virtual machine with 16GB RAM
>>> - ulimit: virtual memory 14834560 (14GB)
>>> - Java: java-1_6_0-ibm-1.6.0-124.5
>>> - Apache Tomcat/6.0.29
>>> - Index Size (on filesystem): ~5GB, 1.1 million text documents.
>>>
>>> The error:
>>> First: building the index from scratch with a MySQL DIH, into an empty
>>> index dir, works fine.
>>> Building an index with &command=full-import, when the old segment files
>>> are still in place, fails with an OutOfMemoryException. The same happens
>>> when optimizing the index.
>>> Doing an optimize fails after some time with:
>>>
>>> SEVERE: java.io.IOException: background merge hit exception:
>>> _6p(3.4):Cv1150724 _70(3.4):Cv667 _73(3.4):Cv7 _72(3.4):Cv4 _71(3.4):Cv1
>>> into _74 [optimize]
>>>        at
>>> org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2552)
>>>        at
>>> org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2472)
>>>        at
>>>
>>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
>>>        at
>>>
>>> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>>>        at
>>>
>>> org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
>>>        at
>>>
>>> org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
>>>        at
>>>
>>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:61)
>>>        at
>>>
>>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>>>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
>>>        at
>>>
>>> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>>>        at
>>>
>>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>>>        at
>>>
>>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>>>        at
>>>
>>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>>>        at
>>>
>>> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>>>        at
>>>
>>> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>>>        at
>>>
>>> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>>>        at
>>>
>>> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>>>        at
>>>
>>> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>>>        at
>>>
>>> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>>>        at
>>>
>>> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
>>>        at
>>>
>>> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>>>        at
>>> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>>>        at java.

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Ralf Matulat

Dear Mike,
thanks for your reply.
Just a couple of minutes ago we found a solution - or, to be honest,
found where we went wrong.
Our mistake was the use of ulimit: we missed that ulimit sets the vmem
limit for each shell separately. So we had set 'ulimit -v unlimited' in
one shell, thinking that we had done the job correctly.
Once we recognized the mistake, we added 'ulimit -v unlimited' to the
init script of the Tomcat instance, and now everything looks like it
works as expected.
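
In case it helps anyone else, the change boils down to something like
this near the top of the Tomcat init script (the path and script name
are from our setup - adjust to your distribution):

# in /etc/init.d/tomcat, before the JVM is started:
# raise the per-process virtual memory limit so the mmap'd
# index segments have enough address space
ulimit -v unlimited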

Need some further testing with the java versions, but I'm quite optimistic.
Best regards
Ralf

On 22.09.2011 at 14:46, Michael McCandless wrote:

Are you sure you are using a 64 bit JVM?

Are you sure you really changed your vmem limit to unlimited?  That
should have resolved the OOME from mmap.

Or: can you run "cat /proc/sys/vm/max_map_count"?  This is a limit that
Linux imposes on the total number of maps in a single process.  But the
default is usually high (64K), so it'd be surprising if you are hitting
it unless it's lower in your env.

The amount of [free] RAM on the machine should have no bearing on
whether mmap succeeds or fails; it's the available address space (32
bit is tiny; 64 bit is immense) and then any OS limits imposed.

Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2011 at 5:27 AM, Ralf Matulat  wrote:

Good morning!
Recently we ran into an OOME while optimizing our index. It looks like
it is related to the nio class and memory handling.
I'll try to describe the environment, the error, and what we did to try
to solve the problem; none of our approaches was successful.

The environment:

- Tested with both SOLR 3.3 & 3.4
- SuSE SLES 11 (x64) virtual machine with 16GB RAM
- ulimit: virtual memory 14834560 (14GB)
- Java: java-1_6_0-ibm-1.6.0-124.5
- Apache Tomcat/6.0.29
- Index Size (on filesystem): ~5GB, 1.1 million text documents.

The error:
First: building the index from scratch with a MySQL DIH, into an empty
index dir, works fine.
Building an index with &command=full-import, when the old segment files
are still in place, fails with an OutOfMemoryException. The same happens
when optimizing the index.
Doing an optimize fails after some time with:

SEVERE: java.io.IOException: background merge hit exception:
_6p(3.4):Cv1150724 _70(3.4):Cv667 _73(3.4):Cv7 _72(3.4):Cv4 _71(3.4):Cv1
into _74 [optimize]
at
org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2552)
at
org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2472)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
at
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
at
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:61)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:735)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:765)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:264)
at
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:89)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:115)
at
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:710)
at

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
Are you sure you are using a 64 bit JVM?

Are you sure you really changed your vmem limit to unlimited?  That
should have resolved the OOME from mmap.

Or: can you run "cat /proc/sys/vm/max_map_count"?  This is a limit that
Linux imposes on the total number of maps in a single process.  But the
default is usually high (64K), so it'd be surprising if you are hitting
it unless it's lower in your env.
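
If that limit is the problem, something along these lines will show and
raise it (run as root; 262144 is only an example value):

cat /proc/sys/vm/max_map_count
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf   # persist across reboots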

The amount of [free] RAM on the machine should have no bearing on
whether mmap succeeds or fails; it's the available address space (32
bit is tiny; 64 bit is immense) and then any OS limits imposed.
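
Two quick sanity checks, from the same shell/user that launches Tomcat
(the exact wording of the output varies by JVM vendor):

java -version    # a 64-bit JVM usually says so, e.g. "64-Bit" or "amd64"
ulimit -v        # per-shell virtual memory limit in KB; you want "unlimited"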

Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2011 at 5:27 AM, Ralf Matulat  wrote:
> Good morning!
> Recently we ran into an OOME while optimizing our index. It looks like
> it is related to the nio class and memory handling.
> I'll try to describe the environment, the error, and what we did to try
> to solve the problem; none of our approaches was successful.
>
> The environment:
>
> - Tested with both SOLR 3.3 & 3.4
> - SuSE SLES 11 (x64) virtual machine with 16GB RAM
> - ulimit: virtual memory 14834560 (14GB)
> - Java: java-1_6_0-ibm-1.6.0-124.5
> - Apache Tomcat/6.0.29
> - Index Size (on filesystem): ~5GB, 1.1 million text documents.
>
> The error:
> First: building the index from scratch with a MySQL DIH, into an empty
> index dir, works fine.
> Building an index with &command=full-import, when the old segment files
> are still in place, fails with an OutOfMemoryException. The same happens
> when optimizing the index.
> Doing an optimize fails after some time with:
>
> SEVERE: java.io.IOException: background merge hit exception:
> _6p(3.4):Cv1150724 _70(3.4):Cv667 _73(3.4):Cv7 _72(3.4):Cv4 _71(3.4):Cv1
> into _74 [optimize]
>        at
> org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2552)
>        at
> org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2472)
>        at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
>        at
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>        at
> org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
>        at
> org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
>        at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:61)
>        at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>        at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>        at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>        at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>        at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>        at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>        at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>        at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>        at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>        at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
>        at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>        at
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>        at java.lang.Thread.run(Thread.java:735)
> Caused by: java.io.IOException: Map failed
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:765)
>        at
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:264)
>        at
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
>        at
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:89)
>        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:115)
>        at
> org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:710)
>        at
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4378)
>        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3917)
>        at
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
>        at
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
> Caused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>     

Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Ralf Matulat

Good morning!
Recently we ran into an OOME while optimizing our index. It looks like
it is related to the nio class and memory handling.
I'll try to describe the environment, the error, and what we did to try
to solve the problem; none of our approaches was successful.

The environment:

- Tested with both SOLR 3.3 & 3.4
- SuSE SLES 11 (x64) virtual machine with 16GB RAM
- ulimit: virtual memory 14834560 (14GB)
- Java: java-1_6_0-ibm-1.6.0-124.5
- Apache Tomcat/6.0.29
- Index Size (on filesystem): ~5GB, 1.1 million text documents.

The error:
First: building the index from scratch with a MySQL DIH, into an empty
index dir, works fine.
Building an index with &command=full-import, when the old segment files
are still in place, fails with an OutOfMemoryException. The same happens
when optimizing the index.
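
(For clarity, that full-import is the usual DataImportHandler call - a
request along the lines of

http://localhost:8080/solr/dataimport?command=full-import

where host, port, and handler path are illustrative and depend on the
solrconfig.xml mapping.)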

Doing an optimize fails after some time with:

SEVERE: java.io.IOException: background merge hit exception: 
_6p(3.4):Cv1150724 _70(3.4):Cv667 _73(3.4):Cv7 _72(3.4):Cv4 _71(3.4):Cv1 
into _74 [optimize]
at 
org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2552)
at 
org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2472)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:61)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)

at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)

at java.lang.Thread.run(Thread.java:735)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:765)
at 
org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:89)
at 
org.apache.lucene.index.SegmentReader.get(SegmentReader.java:115)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:710)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4378)

at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3917)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)

Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:762)
... 9 more

Then we changed the mergeScheduler and mergePolicy in solrconfig.xml. The
XML elements themselves were stripped by the list archive, but an
override of this kind in Solr 3.x looks roughly like the snippet below,
with illustrative class names rather than necessarily the ones we used:
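
<!-- illustrative only: the exact classes we configured were lost when
     the archive stripped the XML -->
<mergeScheduler class="org.apache.lucene.index.SerialMergeScheduler"/>
<mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>

That change led to a slightly different error message: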

SEVERE: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:765)
at 
org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at 
org.apache.lucene.index.TermVectorsReader.<init>(TermVectorsReader.java:85)
at 
org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:221)
at 
org.apache.lucene.index.SegmentReader.get(SegmentReader.java:117)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:710)