Thank you very much to all of you for the answers.
Uwe, this is the strange thing: I am currently never closing the index
reader, only opening a new one every 8 hours, and I am still noticing that
crash, indeed in a highly concurrent environment.
The indexes reside on an NFS file system, and the location is shared
between multiple machines, each running multiple JVMs - this is why I
mentioned a shared mount.
Can you please tell me which parameters/settings I should check on the OS
side? I ran ulimit and it returns unlimited. I will also check for
"AlreadyClosedException"; I haven't seen it in the logs so far, but I will
check again.
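
To make sure I do not simply overlook it, I am planning to wrap the search
call with something like the sketch below and log the exception explicitly
(the class and the shared "searcher" field are only illustrative, not my
real code):

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.AlreadyClosedException;

public class SearchDiagnostics {
    private volatile IndexSearcher searcher; // illustrative shared searcher

    public TopDocs searchOrLog(Query query) throws IOException {
        try {
            return searcher.search(query, 10);
        } catch (AlreadyClosedException ace) {
            // the symptom Uwe describes: the reader was closed while this
            // thread was still searching on it
            System.err.println("search hit an already closed reader: " + ace);
            throw ace;
        }
    }
}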

2. With the next release I will try to close the IndexReader properly. I
looked at SearcherManager, but when I do a search I also index the search
content itself, so that I can search within it as well and determine the
score of the queries I am constructing against that content. If I
understand correctly, I cannot do that with SearcherManager, but Clemens's
approach with ir.incRef() should also be OK, correct?
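
Just so we are talking about the same thing, this is roughly the
incRef()/decRef() pattern I understood from Clemens's mail (names like
SharedReaderSearch and currentReader are only illustrative, and I have left
out the extra indexing of the search content):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class SharedReaderSearch {
    private volatile IndexReader currentReader; // illustrative shared reader

    public TopDocs search(Query query) throws IOException {
        IndexReader reader = currentReader;
        // pin the reader for the duration of this search; note that if a
        // refresh releases the last reference right between reading the
        // field and incRef(), this can still throw AlreadyClosedException
        // and would need a retry with the new currentReader
        reader.incRef();
        try {
            return new IndexSearcher(reader).search(query, 10);
        } finally {
            reader.decRef(); // the reader is really closed only after every
                             // thread that pinned it has released it
        }
    }

    public synchronized void refreshIfChanged() throws IOException {
        IndexReader newReader = IndexReader.openIfChanged(currentReader);
        if (newReader != null) {
            IndexReader old = currentReader;
            currentReader = newReader;
            old.decRef(); // drop our own reference instead of close();
                          // searches still running on it keep it alive
        }
    }
}

If I understand the javadocs correctly, this is more or less what
SearcherManager does internally with acquire()/release(), so switching to
it later should still be possible.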

Thank you very much again,
Liviu




On Fri, May 16, 2014 at 11:55 AM, Uwe Schindler <u...@thetaphi.de> wrote:

> Hi,
>
> > Now if I don't
> > close the old index reader I am noticing increases of virtual memory with
> > every reindex/reopen (this should not be an issue on 64-bit Linux,
> > correct? - this is the configuration I am using, and the indexes are on a
> > shared-mount NFS file system).
>
> This always brings a virtual memory leak on all platforms (also Linux). In
> addition, files of older segments cannot be completely deleted anymore, so
> it also consumes disk space.
>
> > Can you please tell me if all this corruption is caused by the fact that
> > I am not closing the old IndexReader. But if I close it, given that it is
> > shared by multiple threads, I will need to check each time before doing
> > the search whether the IndexReader is still open, correct? Let's say in
> > one thread I am reopening the IndexReader and in another thread I am
> > afterwards reusing the old one - in that case I should do the check,
> > correct? Or is there a smarter mechanism in place?
>
> It is the other way round: If you do not close the IndexReader, it cannot
> crash (unless your JDK has a bug or somehow your filesystem [you mentioned
> shared... what does this mean?] forcefully unmaps the index files), it only
> happens if you close it! The issue here is: If you close the IndexReader
> and another thread is currently running a query, the above can happen,
> because the memory mapped buffer was forcefully unmapped by the
> MMapDirectory. Since Lucene 3.6.0, Lucene tries its best to prevent this
> crash from happening, but in high concurrency cases this may fail (because
> of missing synchronization, which would kill performance):
> http://issues.apache.org/jira/browse/LUCENE-3588
>
> In your case, in parallel to those crashes, you should also see
> "AlreadyClosedException", which is the root of the problem. It is just
> sometimes that the MMapDirectory code cannot correctly detect the already
> closed and crashes.
>
> So forcefully reopening IndexReaders and closing the old ones while
> queries are running is the wrong way to go. I would suggest using
> SearcherManager, which can keep track of the IndexReaders correctly.
>
> Uwe
>
> -----
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -----Original Message-----
> > From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
> > Sent: Wednesday, May 14, 2014 7:53 PM
> > To: java-user@lucene.apache.org
> > Subject: AW: Issue with Lucene 3.6.1 and MMapDirectory
> >
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
>
