Re: UI Portal - Load Balancing & High availability

2017-01-27 Thread Vidya Sagar Kalvakunta
On Jan 27, 2017 8:53 AM, "Christie, Marcus Aaron"  wrote:

Vidya,

Thanks. Responses are below.

On Jan 26, 2017, at 4:01 PM, Vidya Sagar Kalvakunta wrote:

Hi Marcus,

Using a service registry such as Consul as the backend for the load
balancer automatically ensures that a service that is currently down (in this
case, being updated) is removed from the load balancer's routing table, and
that the service is added back when it comes online.
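For illustration, registering each portal instance with a health check is what lets Consul drop it from the routing table automatically. A minimal service definition might look like the following sketch (the service name, port, and health endpoint are assumptions):

```json
{
  "service": {
    "name": "portal",
    "port": 8000,
    "check": {
      "http": "http://localhost:8000/health",
      "interval": "10s"
    }
  }
}
```

When the HTTP check fails, Consul marks the instance critical and it disappears from service queries, so a registry-driven load balancer stops routing to it.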


In a way, yes that would work, but I wouldn’t want to disrupt ongoing
requests when taking a portal instance down during the deployment.  On
systems I’ve worked on in the past what we did was:
1. Take the portal instance out of the load balancer
2. At this point the portal instance may still be processing requests but
it shouldn’t be getting any new ones
3. Either monitor the number of processing requests and wait till it drops
to zero or just wait a predetermined amount of time (or both, for example,
wait until current requests finish or some max amount of time passes).
4. Then proceed with taking down the portal instance and updating it, etc.
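Steps 2 and 3 above can be sketched as a small drain helper. This is a sketch, assuming the portal exposes its in-flight request count through some callable; the polling interval and deadline handling are illustrative:

```python
import time

def drain(active_requests, max_wait_s, poll_s=1.0):
    """Wait until the instance has no in-flight requests, or max_wait_s passes.

    active_requests: callable returning the current in-flight request count
                     (an assumption; a real portal would expose this somehow)
    Returns True if the instance drained fully, False if we hit the deadline.
    """
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        if active_requests() == 0:
            return True          # fully drained, safe to take down
        time.sleep(poll_s)
    # Deadline reached: report whether we happen to be drained anyway.
    return active_requests() == 0
```
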



If you want more fine-grained control, for example when only some of the
instances have been updated to the latest version and you want the load
balancer to give more weight to the updated instances until the rollout is
complete, Consul tags can help.

A service registered in Consul can have tags associated with it, so each
instance of the Laravel portal can register a tag with its version number.
We can write the load balancer config file, such that it assigns more
weight to instances with "Version 2" tags and less weight to instances with
"Version 1" tags.


That’s interesting, I think that might work.  What I was thinking of is
taking half of the instances out of the load balancer, updating them, bringing
them back up, putting them back into the load balancer, and then repeating with
the other half. That way you don’t have a mix of versions deployed.
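That half-at-a-time approach can be sketched as follows; the update and set_enabled callables are stand-ins for the real deploy step and the load balancer's enable/disable mechanism:

```python
def rolling_update(instances, update, set_enabled):
    """Update instances in two halves so one half is always in rotation.

    update(inst): deploys new code to one instance (assumed blocking)
    set_enabled(inst, flag): adds/removes the instance from the load balancer
    """
    half = max(1, len(instances) // 2)
    for batch in (instances[:half], instances[half:]):
        for inst in batch:
            set_enabled(inst, False)   # take the whole batch out of rotation
        for inst in batch:
            update(inst)               # deploy while it receives no traffic
        for inst in batch:
            set_enabled(inst, True)    # put the updated batch back
```

Because each batch is fully updated before it re-enters rotation, clients only ever see one version at a time from the enabled pool.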


This scenario will work with either Consul Template + HAProxy or Fabio.


Thanks,
Vidya




Marcus,

I think we can include this functionality in the Laravel portal so that
when it receives a shutdown signal, it deregisters itself from Consul. It will
then stop receiving new requests and can finish processing the requests that
are already in flight.
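A minimal sketch of that shutdown hook, assuming the deregister callable stands in for a PUT to the Consul agent's /v1/agent/service/deregister/&lt;service-id&gt; endpoint; the service id and the choice of SIGTERM are assumptions:

```python
import signal

class GracefulPortal:
    """Sketch of a portal process that deregisters from Consul on SIGTERM."""

    def __init__(self, service_id, deregister):
        self.service_id = service_id
        self.deregister = deregister   # stand-in for Consul's deregister call
        self.accepting = True
        signal.signal(signal.SIGTERM, self.on_shutdown)

    def on_shutdown(self, signum=None, frame=None):
        # Leave the registry first so the load balancer stops routing to us,
        # then stop accepting new work; in-flight requests are left to finish.
        self.deregister(self.service_id)
        self.accepting = False
```
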

Your idea of updating half of the instances at a time would be a better
solution than maintaining multiple versions at once.


Re: UI Portal - Load Balancing & High availability

2017-01-26 Thread Christie, Marcus Aaron

On Jan 21, 2017, at 9:40 PM, Ameya Advankar wrote:

Hi Airavata Developers,

As part of the ongoing Advanced Science Gateway Architectures course at IU,
we are working on load balancing the UI Portal. This mail thread is intended
for that discussion.

We briefly discussed three load balancing technologies in this week's classroom
session -
1. HAProxy
2. NGINX
3. HAProxy with Consul / Consul-template

The topic is open for discussion.

The following is the Github link to the Portal which will be used -
  https://github.com/airavata-courses/spring17-laravel-portal

Thanks & Regards,
Ameya Advankar

Hello Ameya,

One requirement I have is programmatic control over which instances are being
used by a load balancer. The main use case I have in mind is deploying updated
code. Let’s say there are two portal instances, A and B, that are being load
balanced. I would like to create a deploy script that takes A out of the load
balancer, then updates the code deployed to A, and then enables A in the load
balancer again, and then does the same thing with B.
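For what it's worth, HAProxy supports this through its runtime API: commands such as "disable server &lt;backend&gt;/&lt;server&gt;" and "enable server &lt;backend&gt;/&lt;server&gt;" sent to its stats socket (which must be configured with admin level). A sketch of that deploy loop, where the socket path and the backend/server names are assumptions:

```python
import socket

def runtime_cmd(action, backend, server):
    # Format one HAProxy runtime-API command, e.g. "disable server portal_back/A".
    return f"{action} server {backend}/{server}"

def haproxy_sender(sock_path):
    # Returns a sender bound to HAProxy's stats socket (requires a line like
    # "stats socket /var/run/haproxy.sock level admin" in haproxy.cfg).
    def send(command):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(command.encode() + b"\n")
            return s.recv(4096).decode()
    return send

def deploy(send, backend, servers, update):
    # One-at-a-time rolling deploy: drain, update, re-enable each server.
    for srv in servers:
        send(runtime_cmd("disable", backend, srv))
        update(srv)   # deploy new code while the server receives no traffic
        send(runtime_cmd("enable", backend, srv))
```

In production, send would be haproxy_sender("/var/run/haproxy.sock"); passing it in as a parameter keeps the deploy loop testable without a live socket.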

Which of these load balancing technologies support this sort of programmatic 
control?

Thanks,

Marcus