Unfortunately not, as our users can potentially construct a search query
using any property.

Do you think it's the number of indexable properties that's causing the memory
issues? I was thinking it was more likely related to the keyword extraction
from file contents. We ran into a somewhat similar memory issue when we
increased the number of words used for indexing from 10,000 to a million:
that change caused a huge memory spike (~2 GB) while importing a large text
file (~100 MB), so we had to revert the setting to its default value.
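For context, here's roughly how that part of our workspace.xml looks — assuming I'm remembering the parameter correctly, it's the maxFieldLength param on the Lucene SearchIndex (Jackrabbit's default is 10,000 terms per field):

```xml
<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
  <param name="path" value="${wsp.home}/index"/>
  <!-- maximum number of terms indexed per field; we raised this from
       the 10,000 default to a million, which triggered the spike -->
  <param name="maxFieldLength" value="1000000"/>
</SearchIndex>
```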

So my initial thinking is that either Lucene indexing (or the way Jackrabbit
uses it) doesn't scale to these cases, or our configuration isn't optimal for
handling them.



--
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/Huge-memory-usage-while-re-indexing-tp4659465p4659472.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.