Hi Rajan,

http://www.elasticsearch.org/guide/en/elasticsearch/client/community/current/health.html
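
For a quick manual check alongside whatever monitoring tool you pick, the 
cluster health API is easy to poll yourself. A minimal sketch, assuming a 
node reachable on localhost:9200 and the Python requests library:

    import requests

    # Cluster-level health: green/yellow/red status plus node and shard counts.
    health = requests.get("http://localhost:9200/_cluster/health").json()
    print(health["status"], health["number_of_nodes"], health["active_shards"])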

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/

tel: +1 347 480 1610   fax: +1 718 679 9190


On Friday, March 21, 2014 12:56:56 PM UTC-4, Rajan Bhatt wrote:
>
> Thanks Zack.
>
> So on a single node, this test will tell us how much a single node with a 
> single shard can handle. Now, if we want to deploy more shards per node, we 
> need to take into consideration that more shards per node will consume more 
> resources (file descriptors, memory, etc.) and performance will degrade as 
> more shards are added to the node.
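>
> One way to actually watch that resource cost as shards are added is to poll 
> the node stats API. A minimal sketch, assuming a node on localhost:9200 and 
> the Python requests library:
>
>     import requests
>
>     # Heap usage and open file descriptors per node -- the resources that
>     # tend to grow as more shards are placed on a node.
>     stats = requests.get("http://localhost:9200/_nodes/stats/jvm,process").json()
>     for node in stats["nodes"].values():
>         print(node["name"],
>               node["jvm"]["mem"]["heap_used_percent"],
>               node["process"]["open_file_descriptors"])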
>
> This is tricky, and mileage can vary with different workloads (indexing + 
> searching).
>
> I am not sure if you would be able to describe your deployment at a very 
> high level (number of ES nodes + number of indices + shards + replicas) to 
> give me some idea.
> I appreciate your answer and your time.
>
> By the way, which tool do you use for monitoring your ES cluster, and what 
> do you monitor?
> Thanks
> Rajan
>
> On Thursday, March 20, 2014 2:05:52 PM UTC-7, Zachary Tong wrote:
>>
>> Unfortunately, there is no way that we can tell you an optimal number. 
>>  But there is a way that you can perform some capacity tests, and arrive at 
>> usable numbers that you can extrapolate from.  The process is very simple:
>>
>>
>>    - Create a single index, with a single shard, on a single 
>>    production-style machine (a rough sketch follows after this list)
>>    - Start indexing *real, production-style* data.  "Fake" or "dummy" 
>>    data won't work here; it needs to mimic real-world data
>>    - Periodically, run real-world queries that you would expect users to 
>>    enter
>>    - At some point, you'll find that performance is no longer acceptable 
>>    to you.  Perhaps the indexing rate becomes too slow.  Or perhaps query 
>>    latency becomes too high.  Or perhaps your node just runs out of memory
>>    - Write down the number of documents in the shard, and the physical 
>>    size of the shard
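>>
>> A minimal sketch of the steps above, assuming a recent Elasticsearch 
>> reachable on localhost:9200 and the Python requests library (the index 
>> name and the placeholder documents are hypothetical):
>>
>>     import requests
>>
>>     ES = "http://localhost:9200"   # assumed local test node
>>     INDEX = "capacity_test"        # hypothetical index name
>>
>>     # One shard and no replicas, so a single shard absorbs the whole load.
>>     requests.put(f"{ES}/{INDEX}", json={
>>         "settings": {"number_of_shards": 1, "number_of_replicas": 0}
>>     })
>>
>>     # Index production-style documents; replace this placeholder with a
>>     # stream of your real data.
>>     docs = [{"user": "u1", "message": "example"}]
>>     for doc in docs:
>>         requests.post(f"{ES}/{INDEX}/_doc", json=doc)
>>
>>     # The two numbers to write down once performance degrades.
>>     doc_count = requests.get(f"{ES}/{INDEX}/_count").json()["count"]
>>     shard_size = requests.get(
>>         f"{ES}/_cat/indices/{INDEX}?h=store.size&bytes=b").text
>>     print(doc_count, shard_size)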
>>
>> Now you know the limit of a single shard given your hardware + queries + 
>> data.  Using that knowledge, you can extrapolate given your expected 
>> search/indexing load, and how many documents you expect to index over the 
>> next few years, etc.
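>>
>> For example, with purely hypothetical numbers: if the test shows a single 
>> shard tops out around 50 million documents and you expect roughly 600 
>> million documents over the next few years, the extrapolation is just:
>>
>>     import math
>>
>>     docs_per_shard_limit = 50_000_000   # hypothetical result of the test
>>     expected_total_docs = 600_000_000   # hypothetical growth estimate
>>
>>     primary_shards_needed = math.ceil(expected_total_docs / docs_per_shard_limit)
>>     print(primary_shards_needed)        # -> 12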
>>
>> -Zach
>>
>>
>>
>> On Thursday, March 20, 2014 3:29:47 PM UTC-5, Rajan Bhatt wrote:
>>>
>>> Hello,
>>>
>>> I would appreciate it if someone could suggest an optimal number of shards 
>>> per ES node for optimal performance, or any recommended way to arrive at a 
>>> number of shards given the number of cores and the memory footprint.
>>>
>>> Thanks in advance
>>> Regards
>>> Rajan
>>>
>>
