ElasticHQ, Marvel, bigdesk and kopf are some of the better monitoring
plugins.
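
Beyond the plugins, the cluster health API (`GET /_cluster/health`) is the first thing worth watching — status, node count and unassigned shards. A minimal sketch of checking its output (the JSON below is a hardcoded illustrative sample; in practice you'd fetch it from your cluster at `http://localhost:9200/_cluster/health`):

```python
import json

# Illustrative sample of what GET /_cluster/health returns;
# normally you would fetch this over HTTP from the cluster.
sample = '''{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 5,
  "active_shards": 5,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 5
}'''

health = json.loads(sample)

# "green" = all shards allocated; "yellow" = replicas unassigned;
# "red" = some primaries unassigned (data unavailable).
alert = health["status"] == "red" or health["unassigned_shards"] > 0
print(health["status"], health["unassigned_shards"], alert)
```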
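For the single-shard capacity test Zach describes below, the extrapolation step can be sketched roughly like this (the helper name and all numbers are made up for illustration, not from the thread):

```python
import math

def shards_needed(docs_at_limit, expected_total_docs, replicas=1):
    """Given the document count at which a single shard's performance
    became unacceptable (from the single-node capacity test), estimate
    how many primary shards are needed for the expected corpus size.
    Hypothetical helper -- the inputs are whatever your test produced."""
    primaries = math.ceil(expected_total_docs / docs_at_limit)
    total = primaries * (1 + replicas)  # replicas multiply the shard count
    return primaries, total

# Suppose the test topped out at ~50M docs per shard, and you expect
# 400M docs over the next few years, with 1 replica per primary:
primaries, total = shards_needed(50_000_000, 400_000_000, replicas=1)
print(primaries, total)  # 8 primaries, 16 shards in total
```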

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: [email protected]
web: www.campaignmonitor.com


On 22 March 2014 03:56, Rajan Bhatt <[email protected]> wrote:

> Thanks Zach.
>
> So on a single node, this test will tell us how much a single node with a
> single shard can handle. If we then want to deploy more shards per node,
> we need to take into consideration that more shards per node would consume
> more resources (file descriptors, memory, etc.) and performance would
> degrade as more shards are added to a node.
>
> This is tricky, and mileage can vary with different workloads (indexing +
> searching)..
>
> I am not sure if you would be able to describe your deployment at a very
> high level (number of ES nodes + number of indices + shards + replicas) to
> give me some idea..
> I appreciate your answer and your time.
>
> btw, which tool do you use for monitoring the ES cluster, and what do you monitor?
> Thanks
> Rajan
>
> On Thursday, March 20, 2014 2:05:52 PM UTC-7, Zachary Tong wrote:
>>
>> Unfortunately, there is no way that we can tell you an optimal number.
>>  But there is a way that you can perform some capacity tests, and arrive at
>> usable numbers that you can extrapolate from.  The process is very simple:
>>
>>
>>    - Create a single index, with a single shard, on a single
>>    production-style machine
>>    - Start indexing *real, production-style* data.  "Fake" or "dummy"
>>    data won't work here; it needs to mimic real-world data
>>    - Periodically, run real-world queries that you would expect users to
>>    enter
>>    - At some point, you'll find that performance is no longer acceptable
>>    to you.  Perhaps the indexing rate becomes too slow.  Or perhaps query
>>    latency grows too high.  Or perhaps your node just runs out of memory
>>    - Write down the number of documents in the shard, and the physical
>>    size of the shard
>>
>> Now you know the limit of a single shard given your hardware + queries +
>> data.  Using that knowledge, you can extrapolate given your expected
>> search/indexing load, and how many documents you expect to index over the
>> next few years, etc.
>>
>> -Zach
>>
>>
>>
>> On Thursday, March 20, 2014 3:29:47 PM UTC-5, Rajan Bhatt wrote:
>>>
>>> Hello,
>>>
>>> I would appreciate it if someone could suggest an optimal number of
>>> shards per ES node for optimal performance, or a recommended way to
>>> arrive at a number of shards given the number of cores and the memory
>>> footprint.
>>>
>>> Thanks in advance
>>> Regards
>>> Rajan
>>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/658c8f7d-071b-46c8-b80b-3d0660e7889e%40googlegroups.com<https://groups.google.com/d/msgid/elasticsearch/658c8f7d-071b-46c8-b80b-3d0660e7889e%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>
