Hi all,

I have a Solr v6.4.2 collection with 12 shards and 2 replicas. Each replica
uses about 14GB of disk. I'm running Solaris 11, and I see the 'Page cache'
grow by about 7GB for each suggester replica I build, even though the
suggester index itself is very small. The 'Page cache' memory is freed when
the node is stopped.

I guess the Suggester component is mmap'ing the entire Lucene index into
memory and holding it? Is this expected behavior? Is there a workaround?

I use this command to build the suggester for just the replica
'target1_shard1_replica1':

curl "http://localhost:8983/solr/collection1/suggest?suggest.dictionary=mySuggester&suggest.build=true&shards=localhost:8983/solr/target1_shard1_replica1"

BTW: without the 'shards' param, the distributed request randomly hits only
about half the replicas.
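In case it helps anyone reproduce this, here is a rough loop that builds the
suggester one replica at a time. The replica names are illustrative (only
shard1's replica is from my actual setup); 'echo' stands in for 'curl' so the
URLs can be inspected before firing real build requests:

```shell
#!/bin/sh
# Sketch: build the suggester on each replica explicitly, instead of
# letting the distributed request pick replicas at random.
# Replica names are illustrative -- substitute your own.
SOLR="http://localhost:8983/solr"
for replica in target1_shard1_replica1 target1_shard2_replica1; do
  url="$SOLR/collection1/suggest?suggest.dictionary=mySuggester&suggest.build=true&shards=localhost:8983/solr/$replica"
  echo "$url"   # replace 'echo' with: curl "$url"
done
```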

From my solrconfig.xml:

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="indexPath">mySuggester</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">mySuggest</str>
    <str name="contextField">x</str>
    <str name="suggestAnalyzerFieldType">suggestTypeLc</str>
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
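For completeness, the component is wired to a request handler along these
lines (trimmed; the 'suggest.count' value is just a typical default, not
necessarily what I have configured):

```xml
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```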

Cheers,
Damien.
