The counter itself needs to be coherent, i.e. reside on a shared and always
accessible file system.
It also needs to be reset to zero after some period of stability. That reset
period cannot be too short, and not too long either. And the periods of absence,
i.e. the down/up cycle time, need to be short relative to the reset time. The
ticket talks about a period of 15 minutes (i.e. definitely not real-time).

So I guess the reset time will end up being an hour or so!
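
To make the timing rules concrete, here is a minimal sketch (C++, not actual
OpenSAF code) of such a file-backed counter: the path, the file format and the
one-hour reset period are assumptions for illustration only.

// Hypothetical sketch: a restart counter kept in a small file on a shared,
// always-accessible file system. Field layout, file path and the one-hour
// reset period are illustrative assumptions, not the actual OpenSAF design.
// NOTE: cross-node locking (e.g. fcntl record locks) is omitted for brevity.
#include <cstdint>
#include <ctime>
#include <fstream>
#include <string>

struct RestartCounter {
  uint32_t count = 0;        // restarts seen in the current window
  time_t last_restart = 0;   // wall-clock time of the most recent restart
};

constexpr time_t kResetPeriod = 3600;  // assumed: reset after 1h of stability
const std::string kCounterPath = "/shared/opensaf/restart_counter";  // assumed

RestartCounter Load() {
  RestartCounter c;
  std::ifstream in(kCounterPath);
  if (in) in >> c.count >> c.last_restart;  // missing file => zeroed counter
  return c;
}

void Store(const RestartCounter& c) {
  std::ofstream out(kCounterPath, std::ios::trunc);
  out << c.count << ' ' << c.last_restart << '\n';
}

// Called when the active SC restarts: bump the counter, but start over
// from 1 if the cluster has been stable longer than the reset period.
uint32_t OnScRestart() {
  RestartCounter c = Load();
  time_t now = time(nullptr);
  c.count = (now - c.last_restart > kResetPeriod) ? 1 : c.count + 1;
  c.last_restart = now;
  Store(c);
  return c.count;
}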

/AndersBj

________________________________
From: Anders Bjornerstedt [mailto:[email protected]]
Sent: 2 January 2015 08:25
To: [email protected]
Subject: [tickets] [opensaf:tickets] Re: #1132 Support multiple node failures 
without cluster restart (Hydra V1)


The restart counter needs to be persistent, but it cannot reside in the IMM,
not even as a persistent runtime attribute, even though this would seem to be
one of the very rare cases where use of a PRTA would make sense.

The reason is that IMM write operations only work on an established coherent
cluster.
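
Since the counter has to live in a plain file outside the IMM, each update
should at least be crash-safe. A minimal sketch, assuming a POSIX file system
(the path and format are illustrative), using the usual write-temp, fsync,
rename pattern:

// Hypothetical sketch: crash-safe update of a small counter file outside
// the IMM, so a reader never sees a partially written file.
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

bool WriteCounterAtomically(const char* path, unsigned count, long stamp) {
  char tmp[256];
  std::snprintf(tmp, sizeof(tmp), "%s.tmp", path);

  int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0) return false;

  char buf[64];
  int len = std::snprintf(buf, sizeof(buf), "%u %ld\n", count, stamp);
  bool ok = write(fd, buf, len) == len && fsync(fd) == 0;  // data on disk
  ok = (close(fd) == 0) && ok;

  // rename() is atomic on POSIX: the file is either the old or new version.
  return ok && rename(tmp, path) == 0;
}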

/AndersBj

________________________________

From: Anders Bjornerstedt [mailto:[email protected]]
Sent: 2 January 2015 08:00
To: [opensaf:tickets]
Subject: [opensaf:tickets] Re: #1132 Support multiple node failures without 
cluster restart (Hydra V1)

I agree that something like this is needed.
But where is that restart counter maintained?
And who decides whether we are to cluster restart or not?
Or to put it another way: who decides if we ARE cluster starting or not?

We also need to consider how this relates to network partitioning, so that
we don't get two (or more) partitions "sadly" cluster restarting, or two
partitions "happily" deciding not to cluster restart, or some partitions
being happy and some others being sad.

In fact, that is the core problem: how do you keep a coherent cluster if you
insist that coherence is not that important?

Above all, anything like this needs careful design. It cannot just be tweaked
in within a few weeks.
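
One conventional way to keep partitions from making opposite decisions (an
assumption here, not something the ticket proposes) is to let only a partition
that holds a strict majority of the configured nodes escalate:

// Hypothetical sketch: a partition may escalate to cluster restart only if
// it can see a strict majority (quorum) of the configured cluster nodes.
// A minority partition neither restarts nor elects a new active SC.
// The node-counting inputs are assumptions for illustration.
bool MayEscalateToClusterRestart(unsigned reachable_nodes,
                                 unsigned configured_nodes) {
  return reachable_nodes > configured_nodes / 2;  // strict majority
}

With a strict majority rule, at most one partition can ever decide anything;
the price is that an even split decides nothing.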

/AndersBj

________________________________

From: Anders Widell [mailto:[email protected]]
Sent: 23 December 2014 16:58
To: [opensaf:tickets]
Subject: [opensaf:tickets] #1132 Support multiple node failures without cluster 
restart (Hydra V1)

  *   Description has changed:

Diff:

--- old
+++ new
@@ -11,4 +11,6 @@
The "Pets vs cattle" thinking: There is an expectation that VMs can be treated 
as "cattle", i.e. that the loss of a few VMs shall not have a devastating 
effect on the whole cluster (which can consist of a hundred nodes).
Consolidation of IT and telecom systems.

+We will need a new mechanism for escalation to a cluster restart. Currently, 
the cluster is restarted when both SCs go down at the same time. Instead, we 
could trigger a cluster restart when the active SC has restarted a specified 
number of times within a time period (the probation time). This can be seen as 
a generalization of the escalation mechanism we have today: currently we 
escalate to cluster restart if the active SC has been restarted two times 
within the time it takes to restart an SC.
+
To be refined a lot...

________________________________

[tickets:#1132] <http://sourceforge.net/p/opensaf/tickets/1132>
Support multiple node failures without cluster restart (Hydra V1)

Status: unassigned
Milestone: 4.6.FC
Created: Tue Sep 23, 2014 01:51 PM UTC by Hans Feldt
Last Updated: Sun Dec 07, 2014 12:01 PM UTC
Owner: nobody

The OpenSAF cluster shall survive simultaneous failure of multiple nodes
without initiating a cluster restart. In particular, it shall support
simultaneous failure of both controller nodes. To support long-lasting and/or
permanent node failure, OpenSAF must be able to move the system controller
functionality to any node in the cluster. After the system controllers recover,
either on the same nodes as before or on some other nodes, IMM and AMF state
may be the same as before the controllers became unavailable. The same state is
only possible if no secondary failures occur and no handles are closed by any
application; thus it is not possible to guarantee the same state after the
return of the SCs.

Since AMF state cannot change while the system controllers are unavailable,
AMF cannot react to service availability events for as long as the cluster is
running without an active system controller. This means that service
availability (a statistical property) will be impacted in proportion to how
often this new feature is exercised. Therefore, it is important that a new
system controller can be elected and come into service as quickly as possible,
to minimise the time spent in this "headless" state.
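
As a purely illustrative calculation (the numbers are assumptions, not
measurements): if headless periods occur four times per year and a new SC
takes 30 seconds to come into service, the feature contributes about 120
seconds of downtime per year, roughly 4e-6 of the year, i.e. about 99.9996%
availability from this cause alone.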

The use case for this is OpenSAF deployment within a cloud. In a cloud 
deployment, the risk for multiple simultaneous node failures is increased due 
to a number of reasons:

  *   The hardware used to build cloud infrastructure may not be carrier-grade.
  *   The hypervisor is an extra layer which can also cause VM failures.
  *   Multiple VMs can be hosted on the same physical hardware. There is no 
standardized interface for querying if two nodes are located on the same 
physical machine.
  *   Live migration of VMs can cause disruptions.
  *   The "Pets vs cattle" thinking: There is an expectation that VMs can be 
treated as "cattle", i.e. that the loss of a few VMs shall not have a 
devastating effect on the whole cluster (which can consist of a hundred nodes).
  *   Consolidation of IT and telecom systems.

We will need a new mechanism for escalation to a cluster restart. Currently,
the cluster is restarted when both SCs go down at the same time. Instead, we
could trigger a cluster restart when the active SC has restarted a specified
number of times within a time period (the probation time). This can be seen as
a generalization of the escalation mechanism we have today: currently we
escalate to a cluster restart if the active SC has been restarted twice within
the time it takes to restart an SC.
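
A minimal sketch of that generalized rule (the class name, parameters and the
deque representation are assumptions for illustration, not the design):

// Hypothetical sketch of the generalized escalation rule: escalate to a
// cluster restart when the active SC has restarted `max_restarts` times
// within the probation window. Restart timestamps are kept in a deque.
#include <chrono>
#include <deque>

class RestartEscalation {
 public:
  RestartEscalation(unsigned max_restarts, std::chrono::seconds probation)
      : max_restarts_(max_restarts), probation_(probation) {}

  // Record one SC restart; returns true if we should escalate.
  bool OnScRestart(std::chrono::steady_clock::time_point now) {
    restarts_.push_back(now);
    // Drop restarts that fell out of the probation window.
    while (!restarts_.empty() && now - restarts_.front() > probation_)
      restarts_.pop_front();
    return restarts_.size() >= max_restarts_;
  }

 private:
  unsigned max_restarts_;
  std::chrono::seconds probation_;
  std::deque<std::chrono::steady_clock::time_point> restarts_;
};

With max_restarts = 2 and the probation time set to the SC restart time, this
reduces to today's behaviour.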

To be refined a lot...
