Thanks Alberto,

We have 5 nodes, 10 shards, and 1 replica, and each shard is about 28GB in size.
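
For reference, per-shard sizes can be checked with the _cat/shards API; the column list below is just the set we found useful (the store column is the size on disk of each shard):

curl -XGET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,store,node'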


Thanks.
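
PS: If splitting the big index is the way to go, I assume we would have to create a new index with more primary shards and reindex the data into it, since the primary shard count of an existing index can't be changed. A rough sketch (the index name and shard count are just placeholders; around 25 primaries would bring each shard down to roughly 10GB at our current data size):

curl -XPUT 'http://localhost:9200/bigindex_v2' -d '
{
  "settings": {
    "number_of_shards": 25,
    "number_of_replicas": 1
  }
}'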

On Thursday, October 30, 2014 6:25:12 PM UTC+5:30, Alberto Paro wrote:
>
> How many shards? If you have too few shards, each one becomes too big. 
> Typically, shards bigger than 10GB give you bad performance in both 
> writing and reading, due to segment operations. 
>
> hi,
>   Alberto
>
> Sent from my iPhone
>
> On 29 Oct 2014, at 12:02, Appasaheb Sawant <[email protected]> wrote:
>
> I have a 7-node cluster. Each node has 16GB RAM, an 8-core CPU, and runs 
> CentOS 6.
>
> The heap size is 9000m.
>
>
>    - 1 dedicated master (non-data)
>    - 1 standby master-eligible node (non-data)
>    - 5 data nodes
>    
> We have 10 indexes. One index is big, with 55 million documents and 
> 254GiB (508GiB including replicas) on disk.
>
>
> Every second, 5-10 new documents are indexed.
>
> The problem is that search is a bit slow, averaging 2000 to 5000 ms. Some 
> queries take about 1 second.
>
> Why is that so?
>
