On 16/10/2014 05:04, Russell Bryant wrote:
On 10/15/2014 05:07 PM, Florian Haas wrote:
On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant <rbry...@redhat.com> wrote:
Am I making sense?
Yep, the downside is just that you need to provide a new set of flavors
for "ha" vs "non-ha".  A benefit though is that it's a way to support it
today without *any* changes to OpenStack.
Users are already very used to defining new flavors. Nova itself
wouldn't even need to define those; if the vendor's deployment tools
defined them it would be just fine.
Yes, I know Nova wouldn't need to define it.  I was saying I didn't like
that it was required at all.

This seems like the kind of thing we should also figure out how to offer
on a per-guest basis without needing a new set of flavors.  That's why I
also listed the server tagging functionality as another possible solution.
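As a sketch of the flavor-based approach discussed above, marking a guest as HA could come down to a single flavor extra spec that the recovery tooling checks. The key name `ha:enabled` here is purely illustrative, not an existing Nova convention:

```python
# Sketch: decide whether an instance should be auto-recovered, based on
# a hypothetical "ha:enabled" flavor extra spec. The key name is an
# assumption for illustration; a deployment would pick its own.

def wants_ha(flavor_extra_specs):
    """Return True if the instance's flavor marks it as HA."""
    return flavor_extra_specs.get("ha:enabled", "false").lower() == "true"

ha_flavor = {"ha:enabled": "true"}
plain_flavor = {}

print(wants_ha(ha_flavor))     # True
print(wants_ha(plain_flavor))  # False
```

The same predicate would work against server tags instead of flavor metadata, which is the per-guest variant Russell mentions.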
This still doesn't do away with the requirement to reliably detect
node failure, and to fence misbehaving nodes. Detecting that a node
has failed, and fencing it if unsure, is a prerequisite for any
recovery action. So you need Corosync/Pacemaker anyway.
Obviously, yes.  My post covered all of that directly ... the tagging
bit was just additional input into the recovery operation.

Note also that when using an approach where you have physically
clustered nodes but are also running non-HA VMs on them, the user
must understand that the following applies:

(1) If your guest is marked HA, then it will automatically recover on
node failure, but
(2) if your guest is *not* marked HA, then it will go down with the
node, not only when the node fails but also when the node is fenced.

So a non-HA guest on an HA node group actually has a slightly
*greater* chance of going down than a non-HA guest on a non-HA host.
(And let's not get into "don't use fencing then"; we all know why
that's a bad idea.)
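The failure semantics above can be sketched in a few lines: once a node is fenced, only the instances marked HA get evacuated, and everything else stays down with the node. All names here are hypothetical:

```python
# Sketch of the recovery semantics described above: after a node is
# fenced, only instances marked HA are evacuated elsewhere; non-HA
# guests go down with the node. Names are illustrative only.

def handle_node_failure(instances_on_node):
    """Split a fenced node's instances into (recovered, lost).

    instances_on_node maps instance name -> bool (True if marked HA).
    """
    recovered = [name for name, ha in instances_on_node.items() if ha]
    lost = [name for name, ha in instances_on_node.items() if not ha]
    return recovered, lost

instances = {"vm-a": True, "vm-b": False}
print(handle_node_failure(instances))  # (['vm-a'], ['vm-b'])
```

This is exactly why a non-HA guest on an HA node group fares slightly worse: it inherits the fencing downside without the recovery upside.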

Which is why I think it makes sense to just distinguish between
HA-capable and non-HA-capable hosts, and have the user decide whether
they want HA or non-HA guests simply by assigning them to the
appropriate host aggregates.
Very good point.  I hadn't considered that.
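A minimal sketch of the aggregate-based split Florian describes, loosely modeled on the behavior of Nova's AggregateInstanceExtraSpecsFilter: HA-capable hosts sit in an aggregate whose metadata matches the HA flavor's extra specs, so HA guests only land there. The `ha` metadata key is an assumption for illustration:

```python
# Sketch: route HA-flavored instances onto hosts in an HA-capable
# aggregate, in the spirit of Nova's AggregateInstanceExtraSpecsFilter.
# The "ha" metadata/extra-spec key is illustrative only.

def host_passes(aggregate_metadata, flavor_extra_specs):
    """A host passes if every flavor extra spec matches its aggregate metadata."""
    return all(aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())

ha_host = {"ha": "true"}    # host in the Pacemaker-managed aggregate
plain_host = {}             # host outside any HA aggregate
ha_flavor = {"ha": "true"}

print(host_passes(ha_host, ha_flavor))     # True
print(host_passes(plain_host, ha_flavor))  # False
```

With this split, a flavor without the extra spec places freely, while the HA flavor is pinned to the clustered aggregate.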

There are various possibilities for handling that use case, and tagging VMs on a case-by-case basis sounds really good to me. What is missing, IMHO, is a smart filter able to spread HA-requesting VMs across compute nodes. We can actually do this today thanks to Instance Groups and the affinity filters, but in a certain sense it would be nice if a user could just boot an instance and ask for a policy (given by flavor metadata or whatever) without needing any knowledge of the underlying infrastructure.
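The spreading behavior mentioned above could be expressed as a simple weigher that prefers the host currently carrying the fewest HA instances. This is a toy sketch, not Nova's scheduler API; a real version would plug into the filter/weigher framework:

```python
# Sketch: a "spread" policy for HA instances, illustrating the kind of
# smart filter/weigher discussed above. A real implementation would be
# a Nova scheduler weigher; host names and the counting input are
# hypothetical.

def pick_host(ha_counts_by_host):
    """Choose the host currently running the fewest HA instances."""
    return min(ha_counts_by_host, key=ha_counts_by_host.get)

counts = {"compute-1": 3, "compute-2": 1, "compute-3": 2}
print(pick_host(counts))  # compute-2
```

Instance Groups with an anti-affinity policy give a per-group version of this today; the point above is that a flavor- or policy-driven variant would hide the mechanism from the user entirely.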


OpenStack-dev mailing list