Hello!
  Can you please explain how you manage autoscaling of worker nodes on
EC2? I'm particularly interested in what steps need to be performed in EC2
in order to achieve that kind of elasticity.
More concretely:
1. Do you have to create snapshots of a worker node (with its configuration
pointing to Nimbus and ZooKeeper)?
2. Do you have to create an Auto Scaling group?
3. How do you trigger the autoscaling? Based on CPU load?
4. After adding/removing nodes, how do you automatically issue the
rebalance topology command?
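
To make the questions concrete, here is a rough sketch of what I imagine
the setup might look like with the AWS CLI. The AMI ID, instance type,
group names, and topology name are all placeholders, and I'm not sure this
is how you actually do it:

```shell
# 1. Launch configuration from an AMI pre-baked with the Storm supervisor
#    (storm.yaml on the image already points at Nimbus and ZooKeeper).
aws autoscaling create-launch-configuration \
  --launch-configuration-name storm-worker-lc \
  --image-id ami-12345678 \
  --instance-type m3.large

# 2. Auto Scaling group built from that launch configuration.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name storm-workers \
  --launch-configuration-name storm-worker-lc \
  --min-size 2 --max-size 10 \
  --availability-zones us-east-1a

# 3. Scale-out policy plus a CloudWatch alarm on average CPU.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name storm-workers \
  --policy-name storm-scale-out \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

aws cloudwatch put-metric-alarm \
  --alarm-name storm-cpu-high \
  --metric-name CPUUtilization --namespace AWS/EC2 \
  --statistic Average --period 300 --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --dimensions Name=AutoScalingGroupName,Value=storm-workers \
  --alarm-actions <ARN-returned-by-put-scaling-policy>

# 4. After a new node's supervisor has registered (e.g. from a boot
#    script in the instance user data), trigger a rebalance from any
#    host that can reach Nimbus:
storm rebalance my-topology -w 30
```

Is that roughly the idea, or is there a better way to wire step 4?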

It would be great if you could provide such a list of steps. I'm a novice
in cloud computing and would like to learn these concepts (elasticity,
autoscaling).
I look forward to your answers and suggestions.
Thank you.
 Regards,
  Florin


On Tue, Nov 25, 2014 at 12:47 PM, Guillermo López Leal <[email protected]>
wrote:

> Hi there,
>
> We are using Storm for our real-time processing system, and so far, so
> good!
>
> We have some questions about when we add nodes (autoscaling on EC2), and
> when we terminate others (based on CPU, for example).
>
> Right now, we are seeing that if we add new nodes to the system, Storm
> just stops for around 30 seconds (the tuple timeout), rebalances itself,
> and the new nodes work as expected. Total downtime of ~50 seconds.
>
> But if a node goes down, we see processing drop (to 0 tuples) for
> around 3 minutes; after that, the processing speed slowly starts to
> rise again, and after another minute everything is OK (4 minutes or so
> in total).
>
> Is there any way to wait only the tuple timeout instead of 3 minutes
> (or something similar to what happens with added nodes)?
>
> Thanks for your ideas
>
