We have been looking into replicated state machines (e.g. Raft), and the best 
solution would be to store the counter as state in such a replicated state 
machine. That way, the counter would be consistent on all participating 
nodes, and algorithms like Raft also handle the network-partitioning problem.


---

**[tickets:#1132] Support multiple node failures without cluster restart 
(Hydra V1)**

**Status:** unassigned
**Milestone:** 4.6.FC
**Created:** Tue Sep 23, 2014 01:51 PM UTC by Hans Feldt
**Last Updated:** Tue Dec 23, 2014 03:57 PM UTC
**Owner:** nobody

The OpenSAF cluster shall survive the simultaneous failure of multiple nodes 
without initiating a cluster restart. In particular, it shall support 
simultaneous failure of both controller nodes. To support long-lasting and/or 
permanent node failure, OpenSAF must be able to move the system controller 
functionality to any node in the cluster. After the system controllers recover, 
either on the same nodes as before or on some other nodes, IMM and AMF state 
may be the same as before the controllers became unavailable. The same state is 
only possible if no secondary failures occur and no handles are closed by any 
application. Thus it is not possible to *guarantee* the same state after the 
SCs return.

Since AMF state cannot change while the system controllers are unavailable, 
AMF cannot react to service availability events for as long as the cluster is 
running without an active system controller. This means that service 
availability (a statistical property) will be impacted in proportion to how 
often this new feature is exercised. Therefore, it is important that a new 
system controller can be elected and come into service as quickly as possible, 
to minimise the time spent in this "headless" state.

The use case for this is OpenSAF deployment within a cloud. In a cloud 
deployment, the risk for multiple simultaneous node failures is increased due 
to a number of reasons:

* The hardware used to build cloud infrastructure may not be carrier-grade.
* The hypervisor is an extra layer which can also cause VM failures.
* Multiple VMs can be hosted on the same physical hardware. There is no 
standardized interface for querying if two nodes are located on the same 
physical machine.
* Live migration of VMs can cause disruptions.
* The "Pets vs cattle" thinking: There is an expectation that VMs can be 
treated as "cattle", i.e. that the loss of a few VMs shall not have a 
devastating effect on the whole cluster (which can consist of a hundred nodes).
* Consolidation of IT and telecom systems.

We will need a new mechanism for escalation to a cluster restart. Currently, 
the cluster is restarted when both SCs go down at the same time. Instead, we 
could trigger a cluster restart when the active SC has restarted a specified 
number of times within a time period (the probation time). This can be seen as 
a generalization of the escalation mechanism we have today: currently, we 
escalate to a cluster restart if the active SC has been restarted twice within 
the time it takes to restart an SC.
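The proposed rule could look something like the sketch below (hypothetical code; the class name, parameters, and thresholds are illustrative, not an OpenSAF API): count active-SC restarts, age out those older than the probation time, and escalate only when the count within the window reaches the configured limit.

```python
import time

# Hypothetical sketch of the proposed escalation rule: escalate to a cluster
# restart only when the active SC has restarted max_restarts times within the
# probation window.

class RestartEscalator:
    def __init__(self, max_restarts, probation_seconds):
        self.max_restarts = max_restarts
        self.probation = probation_seconds
        self.restart_times = []

    def record_sc_restart(self, now=None):
        """Record an active-SC restart; return True if a cluster restart is due."""
        now = time.monotonic() if now is None else now
        self.restart_times.append(now)
        # Keep only restarts that fall within the probation window.
        self.restart_times = [t for t in self.restart_times
                              if now - t <= self.probation]
        return len(self.restart_times) >= self.max_restarts

# Today's behaviour is the special case max_restarts=2 with a probation time
# equal to the time it takes to restart an SC.
esc = RestartEscalator(max_restarts=3, probation_seconds=60.0)
assert esc.record_sc_restart(now=0.0) is False
assert esc.record_sc_restart(now=10.0) is False
assert esc.record_sc_restart(now=20.0) is True    # third restart within 60 s
assert esc.record_sc_restart(now=200.0) is False  # earlier restarts aged out
```

With max_restarts and the probation time configurable, the current two-restarts rule falls out as one point in the parameter space.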

To be refined a lot...

