The way the data is spread across the cluster is not really uniform. Most of
the shards are well under 50GB; I'd say only about 15% of the total shards
are larger than 50GB.
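
(In case it's useful, the sketch below is roughly how one could tally that
from the Cores API -- just an illustration, not anything official; the node
URL and the 50GB cutoff are assumptions for my setup:)

import json
from urllib.request import urlopen

NODES = ["http://localhost:8983/solr"]   # assumption: list every node of the cluster here
THRESHOLD = 50 * 1024**3                 # 50GB in bytes

sizes = []
for node in NODES:
    # Cores API STATUS reports the on-disk index size per core
    with urlopen(node + "/admin/cores?action=STATUS&wt=json") as resp:
        for core, info in json.load(resp)["status"].items():
            sizes.append(info["index"]["sizeInBytes"])

big = sum(1 for s in sizes if s > THRESHOLD)
print("%d of %d cores over 50GB (%.1f%%)" % (big, len(sizes), 100.0 * big / len(sizes)))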


Dorian Hoxha wrote
> Each shard is a lucene index which has a lot of overhead. 

And what does this overhead depend on? I mean, if I create an empty collection,
will it take up much heap just for "being there"?
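
The only way I can think of to check is empirically -- something like the
sketch below: create an empty collection and compare JVM heap via the Metrics
API before and after. This assumes Solr 6.4+ for /admin/metrics, the
collection and configset names ("heaptest", "basic_configs") are made up, and
the exact JSON shape of the metrics response differs between versions, so
treat it as a rough idea only:

import json, time
from urllib.request import urlopen

BASE = "http://localhost:8983/solr"      # assumption: a single local node

def heap_used():
    # JVM-wide heap usage from the Metrics API (Solr 6.4+)
    with urlopen(BASE + "/admin/metrics?group=jvm&wt=json") as resp:
        jvm = json.load(resp)["metrics"]["solr.jvm"]
    val = jvm["memory.heap.used"]
    if isinstance(val, dict):            # some versions nest gauge values under "value"
        val = val["value"]
    return val

before = heap_used()
# Create an empty collection; names below are placeholders for this experiment
urlopen(BASE + "/admin/collections?action=CREATE&name=heaptest"
               "&numShards=1&replicationFactor=1&collection.configName=basic_configs")
time.sleep(5)                            # give the new core time to load
# Heap numbers are noisy (GC etc.), so repeat a few times for a rough average
print("heap delta after empty collection: %d bytes" % (heap_used() - before))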


Dorian Hoxha wrote
> I don't know about static/dynamic memory-issue though.

I could not find anything about this in the docs or on the mailing list either,
but I'm still not ready to discard this suspicion...

Again, thanks for your time.


