On Thu, 2017-05-25 at 15:56 -0700, Nawab Zada Asad Iqbal wrote:
> I have 31 machine cluster with 3 shards on each (93 shards). Each
> machine has 250~GB ram and 3TB SSD for search index (there is another
> drive for OS and stuff). One solr process runs for each shard with
> 48G heap. So we have 3 large files on the SSD.

So each shard is ~650GB, right? Which means ~2TB of index and ~1TB of
free space on each SSD. In principle that is dangerous, as a merge
during an index update can transiently need extra space, but unless
you are using huge segments, I guess the chances of running out are
low (I am not an expert in segment merge mechanics).
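To make the headroom concern concrete, here is a back-of-the-envelope sketch. The numbers come from the thread; the assumption that a forced merge (optimize) can transiently need up to roughly the size of the shard being rewritten is a common rule of thumb, not something measured on this cluster:

```python
# Rough SSD headroom check per machine, using the figures from the thread.
shards_per_machine = 3
shard_size_gb = 650          # ~2 TB of index / 3 shards
ssd_size_gb = 3000           # 3 TB SSD per machine

index_gb = shards_per_machine * shard_size_gb   # space used by indexes
free_gb = ssd_size_gb - index_gb                # remaining headroom

# Assumption: a single-segment optimize can temporarily need up to
# ~1x the shard's size in extra space while old and new segments coexist.
single_merge_extra_gb = shard_size_gb
all_shards_merging_gb = shards_per_machine * shard_size_gb

print(f"free: {free_gb} GB")
print(f"one shard optimizing fits: {free_gb >= single_merge_extra_gb}")
print(f"all shards optimizing fit: {free_gb >= all_shards_merging_gb}")
```

With these numbers one shard merging at a time fits comfortably, but optimizing all three shards on a machine at once would not, which is roughly where the "dangerous in principle, unlikely in practice" intuition comes from.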

We're also using a 1 Solr/shard setup, but with SolrCloud. Our initial
rationale for 1 Solr/shard was to avoid long GC pauses from very large
heaps, though that does not seem to be a problem in your case. Now we
stick with it because it works fine and keeps the logistics simple.
-- 
Toke Eskildsen, Royal Danish Library
