uschindler commented on pull request #518:
URL: https://github.com/apache/lucene/pull/518#issuecomment-1084563803
> I'm working on a similar approach for my data store, but I'm currently not
> sure if it's a good idea for multiple readers plus a single reader/writer to
> map a segment for each reader. I guess the OS will then share the mapped
> regions/pages between the mapped memory segments? Not sure if it's the same
> approach in Lucene, where you'd create multiple `IndexInput`s for multiple
> index readers, because you also seem to have a clone method (but it will fail
> once the segments are closed from one reader).
This PR does not change anything about Lucene's current behaviour; the existing
code using MappedByteBuffer behaves the same way. There are also no multiple
mappings. If a user opens several IndexReaders on the same index, that's not
our fault; well-behaved Lucene code opens only a single IndexReader.
The clone() method is used to give each of several threads its own view of the
same mapping. There is no remapping; with Panama we only ref-count the
ResourceScope. If you close the main index input, the clones used by different
threads should indeed fail.
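For illustration, here is a minimal sketch of that usage pattern against the public Lucene API (the index path, file name and the exact exception are made up and depend on the actual directory implementation; this is not code from this PR):

```java
import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.MMapDirectory;

public class CloneSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new MMapDirectory(Paths.get("/tmp/index"))) {
      // The "main" input is opened once; the file name here is hypothetical.
      IndexInput main = dir.openInput("_0.cfs", IOContext.DEFAULT);

      // Each search thread works on its own clone. Clones share the
      // underlying mapping; nothing is remapped.
      IndexInput perThread = main.clone();
      perThread.seek(0);
      byte first = perThread.readByte();

      // Closing the main input invalidates all clones: any later access
      // from another thread is expected to fail (e.g. AlreadyClosedException).
      main.close();
    }
  }
}
```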
> On another note, what's your take on this (Andy and Victor are real
> geniuses regarding database systems)?
> http://cidrdb.org/cidr2022/papers/p13-crotty.pdf
We don't agree with that for Lucene:
- the model behind Lucene is different: all files are write-once, so there are
no updates to files that were written before. mmap is only used on files that
never change again, and paging works very well with those.
- we do not write with mmap; Lucene index files are written with standard
output streams (see the sketch below).
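A minimal sketch of that write/read split using the public API (paths and field contents are made up): the writer produces each segment file exactly once through regular output streams, and only the finished, immutable files are ever mapped for reading.

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.MMapDirectory;

public class WriteOnceSketch {
  public static void main(String[] args) throws Exception {
    try (MMapDirectory dir = new MMapDirectory(Paths.get("/tmp/index"))) {
      // Writing goes through normal output streams (inherited from
      // FSDirectory), never through mmap; each file is written once.
      try (IndexWriter writer =
          new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
        Document doc = new Document();
        doc.add(new TextField("body", "hello world", Field.Store.NO));
        writer.addDocument(doc);
        writer.commit();
      }

      // Reading maps the finished, never-changing segment files.
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        System.out.println("docs: " + reader.numDocs());
      }
    }
  }
}
```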
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]