Hi,

in my index schema I have defined a
DictionaryCompoundWordTokenFilterFactory and a
HunspellStemFilterFactory. Each filter factory uses a dictionary with
about 100k entries.
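
For reference, the analyzer chain in schema.xml looks roughly like
this (the fieldType name and the dictionary file names are just
examples):

  <fieldType name="text_compound" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- decompounds tokens against a ~100k entry word list -->
      <filter class="solr.DictionaryCompoundWordTokenFilterFactory"
              dictionary="compound-words.txt"/>
      <!-- Hunspell stemming with a ~100k entry .dic/.aff pair -->
      <filter class="solr.HunspellStemFilterFactory"
              dictionary="dictionary.dic" affix="dictionary.aff" ignoreCase="true"/>
    </analyzer>
  </fieldType>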

To avoid an OutOfMemoryError I have to set the heap size to 128m
for a single index.
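
(I'm starting Solr with the standard Jetty example setup, roughly:

  java -Xmx128m -jar start.jar

so the 128m here is the whole JVM heap for that Solr instance.)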

Is there a way to reduce the memory consumption when parsing the dictionary?
I need to create several indexes and 128m for each index is too much.

Same problem here: even with an empty index (no data yet) and two fields using Hunspell (pl_PL), I had to increase the heap size to over 2 GB for Solr to start at all.

Stempel, using the very same dictionary, works fine with 128 MB.
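
For comparison, the Stempel-based field type I tested is roughly this
(the fieldType name is just an example; the Polish stemmer needs the
analysis-extras contrib jars on the classpath):

  <fieldType name="text_pl" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- Polish stemmer (Stempel), no external dictionary attribute needed -->
      <filter class="solr.StempelPolishStemFilterFactory"/>
    </analyzer>
  </fieldType>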

--
Maciej Lisiewski
