[
https://issues.apache.org/jira/browse/WHIRR-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979696#action_12979696
]
Tom White commented on WHIRR-167:
---------------------------------
This looks like a good start.
I would consider doing the retries from within the Callable. Replace the
anonymous Callable class with a class that handles its own retries, then fails
if the number of retries is exceeded. The code that handles the Future can then
fail the entire cluster if it gets an ExecutionException. The way you have it
at the moment, the retries happen in lock step, whereas if they are independent
they can happen concurrently, e.g. a master node fails quickly and is retried
while the workers are all coming up. It would also allow for per-group retry
strategies down the road, which might be useful.
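To make the first point concrete, here is a minimal sketch of a self-retrying Callable; the class name and wiring are hypothetical, not existing Whirr code:
{code}
import java.util.concurrent.Callable;

// Hypothetical sketch: a Callable that handles its own retries, so retries
// for different instance templates can proceed independently and concurrently.
public class RetryingCallable<T> implements Callable<T> {

  private final Callable<T> delegate;
  private final int maxRetries;

  public RetryingCallable(Callable<T> delegate, int maxRetries) {
    this.delegate = delegate;
    this.maxRetries = maxRetries;
  }

  @Override
  public T call() throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return delegate.call();
      } catch (Exception e) {
        last = e; // remember the failure and retry
      }
    }
    // Retries exhausted: the exception surfaces as an ExecutionException
    // from the Future, where the caller can fail the whole cluster.
    throw last;
  }
}
{code}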
On the template syntax, rather than overload the instance template language,
how about having separate properties to specify the percentages? This would be
easier to parse, and would allow other properties to be associated with a
template.
{code}
whirr.instance-templates=1 jt+nn,4 dn+tt
whirr.instance-template.jt+nn.max-percent-failure=100
whirr.instance-template.dn+tt.max-percent-failure=60
{code}
Alternatively (or in addition) we could have
{{whirr.instance-template.<role>.minimum-number-of-instances}}.
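Reading such properties would be simple; a rough sketch (helper name hypothetical, ClusterSpec wiring omitted):
{code}
import java.util.Properties;

// Hypothetical sketch: look up a template's failure threshold, falling
// back to 100 when no property is set for that template.
public class InstanceTemplateProperties {

  public static int maxPercentFailure(Properties props, String template) {
    String key = "whirr.instance-template." + template + ".max-percent-failure";
    String value = props.getProperty(key);
    return value == null ? 100 : Integer.parseInt(value.trim());
  }
}
{code}
With the configuration above, {{maxPercentFailure(props, "dn+tt")}} would return 60.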
> Unfortunately Mockito failed to mock the static
> ComputeServiceContextBuilder.build(clusterSpec) method
Make it non-static?
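One hedged sketch of that refactoring (the interface is made up, and exact signatures/packages depend on the Whirr and jclouds versions):
{code}
// Hypothetical seam: hide the static call behind an interface so tests
// can substitute a Mockito mock for the real builder.
// (Imports for ClusterSpec, ComputeServiceContext, Mockito omitted.)
public interface ContextBuilderStrategy {
  ComputeServiceContext build(ClusterSpec clusterSpec);
}

public class DefaultContextBuilderStrategy implements ContextBuilderStrategy {
  @Override
  public ComputeServiceContext build(ClusterSpec clusterSpec) {
    return ComputeServiceContextBuilder.build(clusterSpec); // existing static call
  }
}

// In a test:
//   ContextBuilderStrategy strategy = Mockito.mock(ContextBuilderStrategy.class);
//   Mockito.when(strategy.build(spec)).thenReturn(stubContext);
{code}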
> Improve bootstrapping and configuration to be able to isolate and repair or
> evict failing nodes on EC2
> ------------------------------------------------------------------------------------------------------
>
> Key: WHIRR-167
> URL: https://issues.apache.org/jira/browse/WHIRR-167
> Project: Whirr
> Issue Type: Improvement
> Environment: Amazon EC2
> Reporter: Tibor Kiss
> Assignee: Tibor Kiss
> Attachments: whirr-167-1.patch, whirr.log
>
>
> The cluster startup process on Amazon EC2 instances is currently very
> unstable. As the number of nodes to be started increases, the startup
> process fails more often, but sometimes even a 2-3 node startup fails. We
> don't know how many instance startups are in progress on the Amazon side
> when a launch fails or succeeds. The only thing I can see is that when I
> start around 10 nodes, the proportion of failing nodes is higher than with
> a smaller number of nodes, and it is not directly proportional to the
> number of nodes; the probability of some nodes failing appears to grow
> exponentially.
> Looking into BootstrapClusterAction.java, there is a note "// TODO: Check for
> RunNodesException and don't bail out if only a few " which indicates the
> current unreliable startup process. So we should improve it.
> We could add a "max percent failure" property (per instance template), so
> that if the percentage of failures exceeds this value, the whole cluster
> fails to launch and is shut down. For the master node the value would be
> 100%, but for datanodes it would be more like 75% (as Tom White also
> mentioned in an email); a sketch of such a check appears below.
> Let's discuss if there are any other requirements to this improvement.
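For illustration only, the proposed per-template check might look something like the following sketch; the counts would come from wherever BootstrapClusterAction handles RunNodesException, and all names here are hypothetical:
{code}
// Hypothetical sketch: abort the cluster launch when the percentage of
// failed instances for a template exceeds its configured threshold.
public class FailureThreshold {

  public static void check(int requested, int failed, int maxPercentFailure) {
    if (requested == 0) {
      return; // nothing was requested for this template
    }
    double percentFailed = 100.0 * failed / requested;
    if (percentFailed > maxPercentFailure) {
      throw new IllegalStateException(String.format(
          "%d of %d instances failed (%.0f%% > %d%%); aborting cluster launch",
          failed, requested, percentFailed, maxPercentFailure));
    }
  }
}
{code}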
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.