I get it now. I agree that setting the number of replicas is tied to the deployment reality in each case and its derived variables, and thus there is no one formula to fit all cases (it wouldn't be a setting otherwise).
What I was trying to cover was the theoretical / extreme case where any node may fail at any time, and what is the best way to minimize the chance of losing data. Also, the case where you want to scale down the installation (potentially down to one node) without having to worry about selecting nodes that hold different replicated shards is an example that can benefit from such a configuration.

I'm however not yet clear on what happens when a node goes down (triggering extra replication amongst the survivors) and then comes up again. Is the ongoing replication cancelled and the returning node brought up to date?

Thanks for your valuable input.
G.

On 10 Jul 2014 18:07, "[email protected]" <[email protected]> wrote:

> All I am saying is that it depends on the probability of the event of three
> nodes failing simultaneously, not on the total number of nodes holding a
> replica. You can even have 5 nodes and consider the probability of the
> event of 4 nodes failing simultaneously, and so on.
>
> As an illustration, suppose you have a data center with two independent
> electric circuits and the probability of failure corresponds to a power
> outage; then it is enough to distribute nodes equally over servers using
> the two independent power lines in the racks. If one electric circuit (plus
> UPS) fails, half of the nodes go down. With replica level 1, the ES cluster
> will keep all the data. There is no need to set the replica level equal to
> the node number.
>
> Jörg
>
> On Thu, Jul 10, 2014 at 8:55 AM, Gonçalo Luiz <[email protected]> wrote:
>
>> Hi Jörg,
>>
>> Thanks for your reply.
>> On this thought:
>>
>> "From my view your idea of better fault tolerance does not make much
>> sense. The replica number is a statistical entity that is related to the
>> probability of faults. The higher the replica count, the higher the
>> probability of surviving faults. There is no correlation to the total
>> number of nodes in a cluster to ensure better fault tolerance.
>> The fault tolerance depends on the probability of a node failure."
>>
>> I'm not getting it. If we have 4 nodes with 2 replicas, it means that 3 of
>> the nodes will have data of a given index (assuming a single shard, to ease
>> the discussion), right? If those three nodes fail simultaneously, the 4th
>> will have no way of grabbing a copy and the data will be lost forever.
>> However, if the number of replicas is 3, the 4th would be able to keep
>> serving the requests and eventually hand over a copy to a new node joining
>> the cluster.
>> How does this not help fault tolerance? Am I missing something?
>>
>> Thanks,
>> G.
>> On 10 Jul 2014 00:21, "[email protected]" <[email protected]> wrote:
>>
>>> 1. You can set the replica number at index creation time or via the
>>> cluster update settings action,
>>> org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsAction
>>>
>>> 2. You will get an index with a lower replica number :)
>>>
>>> 3. Yes. Quick code example:
>>>
>>> ClusterState clusterState = clusterService.state();
>>> // count the data nodes in the current cluster state
>>> int numberOfDataNodes = 0;
>>> for (DiscoveryNode node : clusterState.getNodes()) {
>>>     if (node.isDataNode()) {
>>>         numberOfDataNodes++;
>>>     }
>>> }
>>>
>>> 4. Yes. Use org.elasticsearch.cluster.ClusterStateListener
>>>
>>> From my view your idea of better fault tolerance does not make much
>>> sense. The replica number is a statistical entity that is related to the
>>> probability of faults. The higher the replica count, the higher the
>>> probability of surviving faults. There is no correlation to the total
>>> number of nodes in a cluster to ensure better fault tolerance. The fault
>>> tolerance depends on the probability of a node failure.
>>>
>>> From the viewpoint of balancing load, it makes much sense. When setting
>>> the replica number to the number of nodes, the cluster can balance search
>>> requests across all nodes, which is optimal.
>>>
>>> Jörg
>>>
>>> On Wed, Jul 9, 2014 at 11:57 PM, <[email protected]> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I'm considering using Elasticsearch as a repository for a PoC I'm
>>>> currently developing.
>>>>
>>>> This PoC models an application that needs durability but not isolation,
>>>> so I'm fine with the eventual consistency of reads against the most
>>>> recent writes.
>>>>
>>>> As durability is paramount (we can't afford to lose the data unless
>>>> 100% of the nodes die), I've been exploring the option of setting every
>>>> shard to have N replicas, where N is the number of nodes in the cluster.
>>>>
>>>> From what I've read so far, it is possible to dynamically set the number
>>>> of replicas, which triggers a throttled replication process.
>>>>
>>>> I would like to have some help on the following steps (I'm running ES
>>>> in embedded mode in a Java application):
>>>>
>>>> 1 - How can I set the number of replicas using the native Java client?
>>>> 2 - What happens if a node dies and the number of replicas is lowered
>>>> to the number of surviving ones?
>>>> 3 - Is it possible, from a participating node, to access the list of
>>>> nodes in the cluster so I can use their count to set the number of
>>>> replicas (step 1)?
>>>> 4 - Is it possible to hook a callback to the event of a node joining or
>>>> leaving the cluster?
>>>>
>>>> I'm envisioning the following mechanism:
>>>>
>>>> a) - Start with one node, a given number of shards and 1 replica
>>>> b) - Each time a node joins, I adjust the number of replicas to match
>>>> the new node count.
>>>> In this case, there would be 2 replicas
>>>> c) - An arbitrary number of nodes might be added and I'd execute step
>>>> b) accordingly
>>>> d) - At any time a node might leave the cluster, and thus I need to
>>>> lower the number of replicas to the new node count (I assume that the
>>>> cluster would go ahead and proceed to compensate for the lost replica by
>>>> asking an existing node to hold 2 replicas instead of one; is this
>>>> stopped by lowering the number of replicas?)
>>>>
>>>> The ultimate goal is to make sure no data is lost unless 100% of the
>>>> nodes die before a new one can acquire a full replica.
>>>>
>>>> Is this doable? Does this make sense at all?
>>>>
>>>> For the time being, I'm not worried about lack of disk space or
>>>> bandwidth, as I'm still in the very early days of the PoC.
>>>>
>>>> Thank you very much for all your work and help.
>>>>
>>>> Gonçalo
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "elasticsearch" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to [email protected].
>>>> To view this discussion on the web visit
>>>> https://groups.google.com/d/msgid/elasticsearch/276418fa-812f-4af5-94a0-7362f5ba7931%40googlegroups.com.
>>>> For more options, visit https://groups.google.com/d/optout.
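[Editor's note] One detail worth flagging in the mechanism above: N data nodes can hold at most N copies of a shard (the primary plus N − 1 replicas), since Elasticsearch will not allocate two copies of the same shard to one node. Setting the replica count equal to the node count therefore leaves one replica perpetually unassigned. A minimal sketch of the adjustment arithmetic behind steps a)–d), with a hypothetical helper name (the wiring to ClusterStateListener and the update-settings action is as described in the thread):

```java
public class ReplicaAdjuster {
    /**
     * Replica count that puts a full copy of every shard on every data node:
     * the primary plus (dataNodes - 1) replicas. A single-node cluster gets
     * 0 replicas, since a second copy would have nowhere to go.
     */
    static int desiredReplicas(int dataNodes) {
        return Math.max(0, dataNodes - 1);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++) {
            System.out.println(n + " data nodes -> "
                    + desiredReplicas(n) + " replicas");
        }
    }
}
```

With this, step b) becomes "set replicas to (node count − 1)" rather than "to the node count", and step a) starts at 0 replicas instead of 1.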
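[Editor's note] Jörg's statistical argument earlier in the thread can be made concrete with a toy calculation. This sketch assumes independent node failures (which the power-circuit example deliberately violates) and one copy of the shard per node; the class and method names are made up for illustration. With per-node failure probability p and r replicas, all r + 1 copies of a shard are lost with probability p^(r+1) — the total node count does not appear.

```java
public class ReplicaLossOdds {
    /**
     * Probability that every one of the (replicas + 1) copies of a shard is
     * lost, assuming each copy sits on a distinct node and each node fails
     * independently with probability p.
     */
    static double lossProbability(double p, int replicas) {
        return Math.pow(p, replicas + 1);
    }

    public static void main(String[] args) {
        double p = 0.01; // hypothetical per-node failure probability
        for (int r = 0; r <= 3; r++) {
            System.out.printf("replicas=%d  P(all copies lost)=%.10f%n",
                    r, lossProbability(p, r));
        }
    }
}
```

Each extra replica multiplies the loss probability by p, whereas adding nodes that hold no copy changes nothing — which is the crux of the disagreement in the thread. Correlated failures (shared power, shared rack) break the independence assumption, which is why placement across failure domains matters as much as the replica count.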
