[
https://issues.apache.org/jira/browse/SOLR-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17867656#comment-17867656
]
David Smiley commented on SOLR-17375:
-------------------------------------
Ideally this is fixed in a 9.x version, but it will be more pressing in Solr 10,
where it is not avoidable without other performance compromises. In Lucene 9
you can simply set
{{-Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false}}
I'd love to see the problem present itself in the solr/benchmark module so that
we can see the performance regression and its resolution. Perhaps with a
modified CloudIndexing.java that does a commit per update request; it currently
does no commits.
> Close IndexReader asynchronously on commit for performance
> ----------------------------------------------------------
>
> Key: SOLR-17375
> URL: https://issues.apache.org/jira/browse/SOLR-17375
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 9.3
> Reporter: David Smiley
> Priority: Critical
>
> Since Lucene 9.5, and with a recent Java VM (19), Lucene uses Java's new
> MemorySegments API. A negative consequence is that IndexReader.close becomes
> expensive, particularly when there are many threads, as it's
> {{O(threads)}}. Solr closes the (previous) reader on a SolrIndexSearcher
> open, which is basically on commit (both soft and hard). (See Lucene
> [#13325|https://github.com/apache/lucene/issues/13325])
> Proposal: SolrIndexSearcher.close should perform the {{rawReader.decRef()}}
> in another thread, probably a global (statically defined) thread pool of one
> or two in size ([~uschindler]'s recommendation). The call to
> {{core.getDeletionPolicy().releaseCommitPoint(cpg)}} which follows it should
> probably go along with it.
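The proposal above could be sketched roughly as follows. This is a minimal,
hypothetical illustration only (the class, method, and variable names here are
not from the Solr codebase): a small global daemon pool takes over the
potentially O(threads) {{decRef()}} work, and the commit point is released only
after the reader is actually closed, so the commit path itself returns quickly.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncCloseSketch {
    // Global (statically defined) pool of one thread, per the recommendation
    // in the issue; daemon so it never blocks JVM shutdown.
    private static final ExecutorService CLOSE_POOL =
        Executors.newFixedThreadPool(1, r -> {
            Thread t = new Thread(r, "searcher-close");
            t.setDaemon(true);
            return t;
        });

    // Stand-ins for rawReader.decRef() and
    // core.getDeletionPolicy().releaseCommitPoint(cpg).
    static void closeAsync(Runnable decRef, Runnable releaseCommitPoint) {
        CLOSE_POOL.execute(() -> {
            decRef.run();             // the expensive part under MemorySegments
            releaseCommitPoint.run(); // keep the commit point until close is done
        });
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean closed = new AtomicBoolean(false);
        closeAsync(() -> closed.set(true), () -> {});
        CLOSE_POOL.shutdown();
        CLOSE_POOL.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("closed=" + closed.get());
    }
}
```

Ordering the {{releaseCommitPoint}} call after {{decRef}} on the same
single-threaded pool preserves the current sequencing, as the issue suggests.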
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]