Hi Nagu,

You may have missed our answer, quoted below:


Hi Nagendra,

The high-availability approach you describe looks different from the one 
Clearwater is designed to support. Rather than having a Bono node become a 
combined Bono/Ellis node if the Ellis node fails, we have an approach where:

  *   You have pools of nodes of each type – e.g. you might have three or four 
Sprout nodes, depending on load requirements
  *   Other components use DNS to balance load across the Sprout pool (see the 
sketch below)
  *   If one of the Sprout nodes fails, it gets blacklisted – and the remaining 
Sprout nodes can handle the traffic meant for it, because the relevant state is 
stored in redundant data stores on Vellum
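
For illustration, the DNS side of this might look like the fragment below. 
This is only a sketch: the zone name, TTL and addresses are made up, and it 
just shows the standard multiple-A-record (round-robin) pattern – one A 
record per node in the pool:

    ; illustrative zone fragment - one A record per Sprout node
    sprout.example.com.    300  IN  A  10.0.0.11
    sprout.example.com.    300  IN  A  10.0.0.12
    sprout.example.com.    300  IN  A  10.0.0.13
    sprout.example.com.    300  IN  A  10.0.0.14

A low TTL (here 300 seconds) keeps scaling and failover responsive, since 
resolvers re-query frequently.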

In this model, scaling in is as simple as removing a node from DNS and deleting 
it, and scaling out is as simple as adding a node and then adding it to DNS.
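
If your DNS server supports dynamic updates (an assumption about your 
deployment – e.g. BIND with nsupdate), those DNS changes can be scripted. A 
rough sketch, again with made-up server names, key path and addresses:

    # Scale out: add the new node's A record (then install and start the node)
    nsupdate -k /path/to/update.key <<'EOF'
    server ns1.example.com
    update add sprout.example.com. 300 A 10.0.0.15
    send
    EOF

    # Scale in: remove the node's A record first, then delete the node
    nsupdate -k /path/to/update.key <<'EOF'
    server ns1.example.com
    update delete sprout.example.com. A 10.0.0.15
    send
    EOF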

http://www.projectclearwater.org/technical/call-flows/ shows in more detail 
the call flows when Sprout, Dime, etc. fail.

Best,
Rob


From: Clearwater [mailto:[email protected]] On 
Behalf Of Nagendra Kumar
Sent: 08 April 2018 09:50
To: [email protected]
Subject: Re: [Project Clearwater] Input required for High Availability use 
cases of clearwater

Hi Team,
Any input on this?

Thanks
-Nagu


On Saturday, 31 March 2018 at 4:37:38 PM IST, Nagendra Kumar 
<[email protected]> wrote:


Dear Team,

I have a high-availability use case below; please provide your input.
I am running the Clearwater services on six different nodes, plus DNS on a 
seventh node (Node7), as below.
(I am using an AWS Ubuntu 14 AMI and have installed manually per the Manual 
Install Instructions: 
http://clearwater.readthedocs.io/en/stable/Manual_Install.html)
Desired configuration:
Node1 : Ellis
Node2 : Bono
Node3 : Sprout
Node4 : Homer
Node5 : Dime
Node6 : Vellum


Now, we have a requirement to scale down and scale up.
Scale Down:
1. If Node1 fails, Ellis should start running on Node2. That means Node2 hosts 
both Ellis and Bono.
2. After that, if Node2 fails, Ellis and Bono should shift to Node3. That means 
Node3 hosts Ellis, Bono and Sprout.
3. And so on, until Node5 fails and Node6 hosts Ellis, Bono, Sprout, Homer, 
Dime and Vellum.

Will this need source code changes, or will it work with configuration changes 
alone? Please send me instructions for making the changes.
I want to try the following for the above use case:
1. I install all the software on every node. That means Ellis is installed on 
all nodes, Sprout is installed on all nodes, and so on.
2. But, in the beginning, Ellis is started only on Node1, Bono only on Node2, 
and so on, as shown above (the desired configuration).
3. When Node1 fails, I start Ellis on Node2 (manually for now, as sketched 
after this list; I will automate it later). That means Node2 has Ellis and 
Bono running.
4. When Node2 fails, I start Ellis and Bono on Node3, and so on.
5. In the end, I want Node6 to have all the software running.
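
For step 3, I imagine the manual move would be something like the commands 
below. This is only a guess on my part: it assumes the Debian packages set up 
init-style services named after each component, and that monit (which the 
install configures) watches them – please correct me if the names differ on 
real nodes.

    # On Node2 (Ellis package already installed, but not running):
    sudo service ellis start      # Ellis now runs alongside Bono
    sudo monit monitor ellis      # ask monit to watch the new process
    # Then repoint the ellis DNS record from Node1's IP to Node2's IP.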

Scale Up:
1. After step 5 above, Node6 is running all the software.
2. Now Node5 has come back up. I want to stop Dime on Node6 and start it on 
Node5 (sketched below). That means Node5 has Dime running, and Node6 has 
Ellis, Bono, Sprout, Homer and Vellum running.
3. Next, Node4 has come back up. I want to stop Homer on Node6 and start it on 
Node4. That means Node4 has Homer running, Node5 has Dime running, and Node6 
has Ellis, Bono, Sprout and Vellum running.
4. And so on, until the desired configuration shown above is running again.
This should happen while calls are in progress, and no call should drop.
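
For step 2, I expect the move to be the reverse of the scale-down sketch 
above. Again, the service names here are my guesses; note that Dime is 
actually two processes, homestead and ralf, so I assume both move together:

    # On Node6: stop the Dime processes (monit might otherwise restart them)
    sudo monit stop homestead
    sudo monit stop ralf
    # On Node5: start them again
    sudo service homestead start
    sudo service ralf start
    # Then move the Dime-related DNS records from Node6's IP to Node5's IP.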

Any help will be deeply appreciated.

Thanks
-Nagu

_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
