[ https://issues.apache.org/jira/browse/USERGRID-408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335568#comment-14335568 ]

Todd Nine commented on USERGRID-408:
------------------------------------

Hey John,
  My description may not be the best, but I do actually mean primary shards.
For instance, take this example.

Number ES nodes = 6
Number of primary shards in an index = 6
Number of replicas = 0

When UG allocates this index, it only has 6 shards, all primary.  With 6 
nodes, we want a 1-shard-per-node scheme, which ensures that our write load 
is evenly distributed across the hardware.  What we're seeing in production 
now is that as apps get created, one node ends up owning 3 of the primaries 
while the other 3 are spread across the remaining nodes.  The node with 3 
primaries gets hammered during heavy data load and becomes a hotspot in the 
cluster.
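
For reference, here's a rough sketch of how that kind of hotspot could be 
spotted programmatically.  This isn't UG code; it just uses the stock ES 1.x 
Java client API, and the Client instance and index name are assumed to come 
from elsewhere.

import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.ShardRoutingState;

public class PrimaryShardCounter {

    // Count started primary shards per node for a single index.
    public static Map<String, Integer> primariesPerNode(Client client, String index) {
        ClusterState state = client.admin().cluster().prepareState()
                .setIndices(index).execute().actionGet().getState();

        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (ShardRouting shard : state.routingTable().index(index)
                .shardsWithState(ShardRoutingState.STARTED)) {
            if (shard.primary()) {
                Integer current = counts.get(shard.currentNodeId());
                counts.put(shard.currentNodeId(), current == null ? 1 : current + 1);
            }
        }
        // In the bad case above, one node maps to 3 while the others map to 0 or 1.
        return counts;
    }
}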

Ideally we can get around this with an allocation setting on index creation; 
a sketch of what that might look like follows below.  Failing that, we need 
some sort of system to ensure that our primary+replica count is evenly 
distributed across nodes PER index, since each index is an application and 
can have widely varying load.
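
For example, ES has a per-index setting, 
index.routing.allocation.total_shards_per_node, that caps how many shards of 
that index a single node may hold.  A rough sketch of setting it at creation 
time with the ES 1.x Java API (the index name and node count here are 
placeholders, not what UG actually passes):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class IndexCreator {

    // Create an index with one primary per node, no replicas, and a hard cap of
    // one shard of this index per node so primaries can't pile up on one box.
    public static void createBalancedIndex(Client client, String index, int numNodes) {
        client.admin().indices().prepareCreate(index)
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_shards", numNodes)
                        .put("index.number_of_replicas", 0)
                        .put("index.routing.allocation.total_shards_per_node", 1))
                .execute().actionGet();
    }
}

One caveat with the hard cap: if a node drops out, its shard can sit 
unassigned until a node with free capacity is available again, so we'd want 
to weigh that against the hotspot problem.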

> Create a shard balance system for elasticsearch
> -----------------------------------------------
>
>                 Key: USERGRID-408
>                 URL: https://issues.apache.org/jira/browse/USERGRID-408
>             Project: Usergrid
>          Issue Type: Story
>            Reporter: Todd Nine
>
> Currently when an index is created, it should have the same number of shards 
> as ES nodes with replicas.  Since we sometimes do not allocate replicas, we 
> need to ensure that primary shards are evenly distributed among our nodes.  
> If this can't be fixed with ES settings, we will need to create an api 
> function that evenly balances primary and replica shards across the cluster 
> per index.  See this forum post.
> https://groups.google.com/forum/#!topic/elasticsearch/9Hb5CJJ5Vj0
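
If the settings route doesn't pan out, the balancing API described above 
could probably be built on ES's cluster reroute API.  A very rough sketch 
with the ES 1.x Java client (the shard number and node IDs are hypothetical 
placeholders; a real balancer would compute the moves per index from the 
cluster state):

import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;
import org.elasticsearch.index.shard.ShardId;

public class ShardMover {

    // Explicitly relocate one shard of an index from one node to another.
    public static void moveShard(Client client, String index, int shard,
                                 String fromNodeId, String toNodeId) {
        client.admin().cluster().prepareReroute()
                .add(new MoveAllocationCommand(new ShardId(index, shard), fromNodeId, toNodeId))
                .execute().actionGet();
    }
}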



