On 5/6/2015 8:55 AM, adfel70 wrote:
> Thank you for the detailed answer.
> How can I decrease the impact of opening a searcher on such a large
> index, especially the heap usage that causes OOMs?
See the wiki link I sent. It talks about some of the things that
require a lot of heap, and about humongous allocations, which is any
allocation larger than half the G1 region size. The max configurable
G1 region size is 32MB. You should use the CMS collector for your GC
tuning, not G1. If you can reduce the number of documents in each
shard, G1 might work well.

Thanks,
Shawn
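To make Shawn's suggestion concrete: in Solr 5.x the JVM collector flags live in the GC_TUNE variable in bin/solr.in.sh. A minimal sketch, assuming a stock HotSpot JVM; the flag names are standard HotSpot options, but the specific values are illustrative, not recommendations from this thread:

```shell
# Illustrative GC_TUNE for bin/solr.in.sh (values are examples only).
# Switches Solr from G1 to the CMS collector, as Shawn suggests.
GC_TUNE="-XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+CMSParallelRemarkEnabled"

# If you stay on G1 instead: regions cap at 32MB, so any allocation
# larger than half a region (16MB here) is treated as humongous.
# GC_TUNE="-XX:+UseG1GC -XX:G1HeapRegionSize=32m"
```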
--
View this message in context:
http://lucene.472066.n3.nabble.com/severe-problems-with-soft-and-hard-commits-in-a-large-index-tp4204068p4204148.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 5/6/2015 1:58 AM, adfel70 wrote:
> I have a cluster of 16 shards, 3 replicas. the cluster indexes nested
> documents.
> it currently has 3 billion documents overall (parent and children).
> each shard has around 200 million docs. size of each shard is 250GB.
> this runs on 12 machines. each machine has 4 SSD disks and 4 solr processes.
> each process has 28GB heap. each machine has 196GB RAM.
> [...]
> when soft commits or hard commits (openSearcher=true) occur with a small
> interval one after another (around 5-10 minutes), I start getting many
> OOM exceptions.
>
> Thank you.
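The commit intervals adfel70 describes are configured in solrconfig.xml. As a sketch (the element names are standard Solr configuration; the values below are illustrative, not taken from this thread):

```xml
<!-- solrconfig.xml (illustrative values, not from this thread).
     Hard commits with openSearcher=false flush to disk without opening
     a new searcher; soft commits control when new documents become
     visible, which is the expensive step on a large index. -->
<autoCommit>
  <maxTime>60000</maxTime>          <!-- hard commit every 60 seconds -->
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>600000</maxTime>         <!-- soft commit every 10 minutes -->
</autoSoftCommit>
```

Opening searchers less often (a longer soft commit interval, and openSearcher=false on hard commits) reduces how frequently the heap-heavy searcher warm-up happens.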
Judging from the "out of memory exception", you are doing
non-trivial faceting. Are you using DocValues, as Marc suggested?
>
>
> - Toke Eskildsen, State and University Library, Denmark
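For anyone following Toke's question: docValues is enabled per field in schema.xml. The field name below is hypothetical, for illustration only; note that enabling docValues on an existing field requires a full reindex.

```xml
<!-- schema.xml: hypothetical facet field. With docValues="true",
     faceting reads column-oriented structures from disk (memory-mapped)
     instead of un-inverting the field into a large on-heap structure. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true"/>
```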
On Wed, 2015-05-06 at 00:58 -0700, adfel70 wrote:
> each shard has around 200 million docs. size of each shard is 250GB.
> this runs on 12 machines. each machine has 4 SSD disks and 4 solr processes.
> each process has 28GB heap. each machine has 196GB RAM.
[...]
> 1. heavy GCs when soft commits or hard commits (openSearcher=true)
> occur, after which I start getting many OOM exceptions.
>
> Thank you.