Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Sanjeewa Malalgoda
On Tue, Oct 11, 2016 at 2:44 PM, Lakmal Warusawithana 
wrote:

> Further thinking on implementation for k8s, we need to improve in 3 places.
>
> 1.) Need to introduce min=0 for autoscaling policies
>  kubectl autoscale rc foo --min=0 --max=5 --inflight-request-count=80
>
> 2.) Have to config auto scaler for use load balancing factor
> (inflight-request-count) - K8S have extension in Auto scaler
>
We may have some logic similar to this.

required_instances = requests_in_flight / max_requests_per_instance;
if (required_instances > current_active_instances)
{
  // Scale-up decision
  if (required_instances <= max_allowed)
  {
    spawn_instances( required_instances - current_active_instances );
    wait_some_time_to_activate_instances();
  }
  else
  {
    // Cannot handle the load
  }
}
else
{
  // Scale-down decision
  if (required_instances >= min_allowed)
  {
    terminate_instances( current_active_instances - required_instances );
    wait_some_time_to_effect_termination();
  }
}
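A minimal runnable sketch of the same decision logic (function and variable names are illustrative, not from any actual autoscaler API):

```python
import math

def scaling_decision(requests_in_flight, max_requests_per_instance,
                     current_instances, min_allowed, max_allowed):
    """Return the instance-count delta implied by the in-flight request load.

    Positive -> scale up by that many instances, negative -> scale down,
    zero -> no action. If the required count exceeds max_allowed we saturate
    at the cap (the "cannot handle load" branch above).
    """
    # Round up: 81 in-flight requests at 80 per instance need 2 instances.
    required = math.ceil(requests_in_flight / max_requests_per_instance)
    required = max(required, min_allowed)

    if required > current_instances:
        if required <= max_allowed:
            return required - current_instances   # scale up
        return max_allowed - current_instances    # saturate at the cap
    return required - current_instances           # scale down (or 0)
```

Note that with min_allowed=0 this naturally scales to zero instances when there is no in-flight traffic, which is exactly the min=0 policy discussed in point 1.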


Thanks,
sanjeewa.


> 3.) Improve load balancer for hold first request (until service running)
>
> On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:
>
>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>> wrote:
>>
>>>
>>> When we do container based deployment standard approach we discussed so
>>> far was,
>>>
>>>- At the first request check the tenant and service from URL and do
>>>lookup for running instances.
>>>- If matching instance available route traffic to that.
>>>- Else spawn new instance using template(or image).  When we spawn
>>>this new instance we need to let it know what is the current tenant and
>>>data sources, configurations it should use.
>>>- Then route requests to new node.
>>>- After some idle time this instance may terminate.
>>>
>>> ​If we were to do this with a container cluster manager, I think we
>> would need to implement a custom scheduler (an entity similar to HPA in
>> K8S) to handle the orchestration process properly. Otherwise it would be
>> difficult to use the built-in orchestration features such as auto-healing
>> and autoscaling with this feature.
>>
>> By saying that this might be a feature which should be implemented at the
>> container cluster manager.
>>
>> *Suggestion*
>>> If we maintain hot pool(started and ready to serve requests) of servers
>>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>>> server startup time + IaaS level spawn time from above process. Then when
>>> requests comes to wso2.com tenants API Gateway we can pick instance
>>> from gateway instance pool and set wso2.com tenant context and data
>>> source using service call(assuming setting context and configurations is
>>> much faster).
>>>
>>
>> ​I think with this approach tenant isolation will become a problem. It
>> would be ideal to use tenancy features at the container cluster manager
>> level. For an example namespaces in K8S.
>>
>> Thanks
>>
>>>
>>> *Implementation*
>>> For this we need to implement some plug-in to instance spawn process.
>>> Then instead of spawning new instance it will pick one instance from the
>>> pool and configure it to behave as specific tenant.
>>> For this each instance running in pool can open up port, so load
>>> balancer or scaling component can call it and tell what is the tenant and
>>> configurations.
>>> Once it configured server close that configuration port and start
>>> traffic serving.
>>> After some idle time this instance may terminate.
>>>
>>> This approach will help us if we met following condition.
>>> (Instance loading time + Server startup time + Server Lookup) *>*
>>> (Server Lookup + Loading configuration and tenant of running server from
>>> external call)
>>>
>>> Any thoughts on this?
>>>
>>> Thanks,
>>> sanjeewa.
>>> --
>>>
>>> *Sanjeewa Malalgoda*
>>> WSO2 Inc.
>>> Mobile : +94713068779
>>>
>>> blog
>>> :http://sanjeewamalalgoda.blogspot.com/
>>> 
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>
>
> --
> Lakmal Warusawithana
> Director - Cloud Architecture; WSO2 Inc.
> Mobile : +94714289692
> Blog : http://lakmalsview.blogspot.com/
>
>


-- 

*Sanjeewa Malalgoda*
WSO2 Inc.
Mobile : +94713068779

blog
:http://sanjeewamalalgoda.blogspot.com/

___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 2:58 PM, Lakmal Warusawithana 
wrote:

>
>
> On Tue, Oct 11, 2016 at 2:51 PM, Manjula Rathnayake 
> wrote:
>
>> Hi Lakmal,
>>
>> On Tue, Oct 11, 2016 at 2:44 PM, Lakmal Warusawithana 
>> wrote:
>>
>>> Further thinking on implementation for k8s, we need to improve in 3
>>> places.
>>>
>>> 1.) Need to introduce min=0 for autoscaling policies
>>>  kubectl autoscale rc foo --min=0 --max=5 --inflight-request-count=80
>>>
>>> 2.) Have to config auto scaler for use load balancing factor
>>> (inflight-request-count) - K8S have extension in Auto scaler
>>>
>>
>>> 3.) Improve load balancer for hold first request (until service running)
>>>
>> Do you mean load balancers like nginx, haproxy? Or can we get this done
>> with gateway itself without worrying about load balancers being used?
>>
>
> With this implementation this can be any. (May be a load balancer or
> gateway etc)
>

But the gateway can also be terminated when idle, so this should be done at the load balancer.


>
>
>>
>> thank you.
>>
>>
>>>
>>> On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:
>>>
 On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
 wrote:

>
> When we do container based deployment standard approach we discussed
> so far was,
>
>- At the first request check the tenant and service from URL and
>do lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> ​If we were to do this with a container cluster manager, I think we
 would need to implement a custom scheduler (an entity similar to HPA in
 K8S) to handle the orchestration process properly. Otherwise it would be
 difficult to use the built-in orchestration features such as auto-healing
 and autoscaling with this feature.

 By saying that this might be a feature which should be implemented at
 the container cluster manager.

 *Suggestion*
> If we maintain hot pool(started and ready to serve requests) of
> servers for each server type(API Gateway, Identity Server etc) then we can
> cutoff server startup time + IaaS level spawn time from above process. 
> Then
> when requests comes to wso2.com tenants API Gateway we can pick
> instance from gateway instance pool and set wso2.com tenant context
> and data source using service call(assuming setting context and
> configurations is much faster).
>

 ​I think with this approach tenant isolation will become a problem. It
 would be ideal to use tenancy features at the container cluster manager
 level. For an example namespaces in K8S.

 Thanks

>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from
> the pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load
> balancer or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start
> traffic serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>*
> (Server Lookup + Loading configuration and tenant of running server from
> external call)
>
> Any thoughts on this?
>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog
> :http://sanjeewamalalgoda.blogspot.com/
> 
>
>
>


 --
 *Imesh Gunaratne*
 Software Architect
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: https://medium.com/@imesh TW: @imesh
 lean. enterprise. middleware


>>>
>>>
>>> --
>>> Lakmal Warusawithana
>>> Director - Cloud Architecture; WSO2 Inc.
>>> Mobile : +94714289692
>>> Blog : http://lakmalsview.blogspot.com/
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Manjula Rathnayaka
>> Technical Lead
>> WSO2, Inc.
>> Mobile:+94 77 743 1987
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> 

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 2:51 PM, Manjula Rathnayake 
wrote:

> Hi Lakmal,
>
> On Tue, Oct 11, 2016 at 2:44 PM, Lakmal Warusawithana 
> wrote:
>
>> Further thinking on implementation for k8s, we need to improve in 3
>> places.
>>
>> 1.) Need to introduce min=0 for autoscaling policies
>>  kubectl autoscale rc foo --min=0 --max=5 --inflight-request-count=80
>>
>> 2.) Have to config auto scaler for use load balancing factor
>> (inflight-request-count) - K8S have extension in Auto scaler
>>
>
>> 3.) Improve load balancer for hold first request (until service running)
>>
> Do you mean load balancers like nginx, haproxy? Or can we get this done
> with gateway itself without worrying about load balancers being used?
>

With this implementation it can be either (a load balancer, a gateway, etc.).


>
> thank you.
>
>
>>
>> On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:
>>
>>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>>> wrote:
>>>

 When we do container based deployment standard approach we discussed so
 far was,

- At the first request check the tenant and service from URL and do
lookup for running instances.
- If matching instance available route traffic to that.
- Else spawn new instance using template(or image).  When we spawn
this new instance we need to let it know what is the current tenant and
data sources, configurations it should use.
- Then route requests to new node.
- After some idle time this instance may terminate.

 ​If we were to do this with a container cluster manager, I think we
>>> would need to implement a custom scheduler (an entity similar to HPA in
>>> K8S) to handle the orchestration process properly. Otherwise it would be
>>> difficult to use the built-in orchestration features such as auto-healing
>>> and autoscaling with this feature.
>>>
>>> By saying that this might be a feature which should be implemented at
>>> the container cluster manager.
>>>
>>> *Suggestion*
 If we maintain hot pool(started and ready to serve requests) of servers
 for each server type(API Gateway, Identity Server etc) then we can cutoff
 server startup time + IaaS level spawn time from above process. Then when
 requests comes to wso2.com tenants API Gateway we can pick instance
 from gateway instance pool and set wso2.com tenant context and data
 source using service call(assuming setting context and configurations is
 much faster).

>>>
>>> ​I think with this approach tenant isolation will become a problem. It
>>> would be ideal to use tenancy features at the container cluster manager
>>> level. For an example namespaces in K8S.
>>>
>>> Thanks
>>>

 *Implementation*
 For this we need to implement some plug-in to instance spawn process.
 Then instead of spawning new instance it will pick one instance from
 the pool and configure it to behave as specific tenant.
 For this each instance running in pool can open up port, so load
 balancer or scaling component can call it and tell what is the tenant and
 configurations.
 Once it configured server close that configuration port and start
 traffic serving.
 After some idle time this instance may terminate.

 This approach will help us if we met following condition.
 (Instance loading time + Server startup time + Server Lookup) *>*
 (Server Lookup + Loading configuration and tenant of running server from
 external call)

 Any thoughts on this?

 Thanks,
 sanjeewa.
 --

 *Sanjeewa Malalgoda*
 WSO2 Inc.
 Mobile : +94713068779

 blog
 :http://sanjeewamalalgoda.blogspot.com/
 



>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Software Architect
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: https://medium.com/@imesh TW: @imesh
>>> lean. enterprise. middleware
>>>
>>>
>>
>>
>> --
>> Lakmal Warusawithana
>> Director - Cloud Architecture; WSO2 Inc.
>> Mobile : +94714289692
>> Blog : http://lakmalsview.blogspot.com/
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Manjula Rathnayaka
> Technical Lead
> WSO2, Inc.
> Mobile:+94 77 743 1987
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Lakmal Warusawithana
Director - Cloud Architecture; WSO2 Inc.
Mobile : +94714289692
Blog : http://lakmalsview.blogspot.com/


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Manjula Rathnayake
Hi Lakmal,

On Tue, Oct 11, 2016 at 2:44 PM, Lakmal Warusawithana 
wrote:

> Further thinking on implementation for k8s, we need to improve in 3 places.
>
> 1.) Need to introduce min=0 for autoscaling policies
>  kubectl autoscale rc foo --min=0 --max=5 --inflight-request-count=80
>
> 2.) Have to config auto scaler for use load balancing factor
> (inflight-request-count) - K8S have extension in Auto scaler
>

> 3.) Improve load balancer for hold first request (until service running)
>
Do you mean load balancers like nginx or haproxy? Or can we get this done
with the gateway itself, without worrying about which load balancer is used?

thank you.


>
> On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:
>
>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>> wrote:
>>
>>>
>>> When we do container based deployment standard approach we discussed so
>>> far was,
>>>
>>>- At the first request check the tenant and service from URL and do
>>>lookup for running instances.
>>>- If matching instance available route traffic to that.
>>>- Else spawn new instance using template(or image).  When we spawn
>>>this new instance we need to let it know what is the current tenant and
>>>data sources, configurations it should use.
>>>- Then route requests to new node.
>>>- After some idle time this instance may terminate.
>>>
>>> ​If we were to do this with a container cluster manager, I think we
>> would need to implement a custom scheduler (an entity similar to HPA in
>> K8S) to handle the orchestration process properly. Otherwise it would be
>> difficult to use the built-in orchestration features such as auto-healing
>> and autoscaling with this feature.
>>
>> By saying that this might be a feature which should be implemented at the
>> container cluster manager.
>>
>> *Suggestion*
>>> If we maintain hot pool(started and ready to serve requests) of servers
>>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>>> server startup time + IaaS level spawn time from above process. Then when
>>> requests comes to wso2.com tenants API Gateway we can pick instance
>>> from gateway instance pool and set wso2.com tenant context and data
>>> source using service call(assuming setting context and configurations is
>>> much faster).
>>>
>>
>> ​I think with this approach tenant isolation will become a problem. It
>> would be ideal to use tenancy features at the container cluster manager
>> level. For an example namespaces in K8S.
>>
>> Thanks
>>
>>>
>>> *Implementation*
>>> For this we need to implement some plug-in to instance spawn process.
>>> Then instead of spawning new instance it will pick one instance from the
>>> pool and configure it to behave as specific tenant.
>>> For this each instance running in pool can open up port, so load
>>> balancer or scaling component can call it and tell what is the tenant and
>>> configurations.
>>> Once it configured server close that configuration port and start
>>> traffic serving.
>>> After some idle time this instance may terminate.
>>>
>>> This approach will help us if we met following condition.
>>> (Instance loading time + Server startup time + Server Lookup) *>*
>>> (Server Lookup + Loading configuration and tenant of running server from
>>> external call)
>>>
>>> Any thoughts on this?
>>>
>>> Thanks,
>>> sanjeewa.
>>> --
>>>
>>> *Sanjeewa Malalgoda*
>>> WSO2 Inc.
>>> Mobile : +94713068779
>>>
>>> blog
>>> :http://sanjeewamalalgoda.blogspot.com/
>>> 
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>
>
> --
> Lakmal Warusawithana
> Director - Cloud Architecture; WSO2 Inc.
> Mobile : +94714289692
> Blog : http://lakmalsview.blogspot.com/
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Manjula Rathnayaka
Technical Lead
WSO2, Inc.
Mobile:+94 77 743 1987


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Lakmal Warusawithana
Thinking further on the implementation for k8s, we need to improve in 3 places.

1.) Introduce min=0 for autoscaling policies
 kubectl autoscale rc foo --min=0 --max=5 --inflight-request-count=80

2.) Configure the autoscaler to use a load-balancing factor
(inflight-request-count) - K8s has an extension point in its autoscaler

3.) Improve the load balancer to hold the first request (until the service is running)
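For reference, the --inflight-request-count flag above is not a stock kubectl option; expressing the same intent through the K8s custom-metrics autoscaler extension would look roughly like this (a sketch only — the metric name, scale-to-zero support, and API versions are assumptions):

```yaml
# Hypothetical HPA using a custom "in-flight requests" metric.
# minReplicas: 0 is the piece K8s would need to add (point 1 above).
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: foo
  minReplicas: 0                           # not supported upstream at the time
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: inflight_request_count   # assumed custom metric
      targetAverageValue: 80
```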

On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:

> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
> wrote:
>
>>
>> When we do container based deployment standard approach we discussed so
>> far was,
>>
>>- At the first request check the tenant and service from URL and do
>>lookup for running instances.
>>- If matching instance available route traffic to that.
>>- Else spawn new instance using template(or image).  When we spawn
>>this new instance we need to let it know what is the current tenant and
>>data sources, configurations it should use.
>>- Then route requests to new node.
>>- After some idle time this instance may terminate.
>>
>> ​If we were to do this with a container cluster manager, I think we would
> need to implement a custom scheduler (an entity similar to HPA in K8S) to
> handle the orchestration process properly. Otherwise it would be difficult
> to use the built-in orchestration features such as auto-healing and
> autoscaling with this feature.
>
> By saying that this might be a feature which should be implemented at the
> container cluster manager.
>
> *Suggestion*
>> If we maintain hot pool(started and ready to serve requests) of servers
>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>> server startup time + IaaS level spawn time from above process. Then when
>> requests comes to wso2.com tenants API Gateway we can pick instance from
>> gateway instance pool and set wso2.com tenant context and data source
>> using service call(assuming setting context and configurations is much
>> faster).
>>
>
> ​I think with this approach tenant isolation will become a problem. It
> would be ideal to use tenancy features at the container cluster manager
> level. For an example namespaces in K8S.
>
> Thanks
>
>>
>> *Implementation*
>> For this we need to implement some plug-in to instance spawn process.
>> Then instead of spawning new instance it will pick one instance from the
>> pool and configure it to behave as specific tenant.
>> For this each instance running in pool can open up port, so load balancer
>> or scaling component can call it and tell what is the tenant and
>> configurations.
>> Once it configured server close that configuration port and start traffic
>> serving.
>> After some idle time this instance may terminate.
>>
>> This approach will help us if we met following condition.
>> (Instance loading time + Server startup time + Server Lookup) *>*
>> (Server Lookup + Loading configuration and tenant of running server from
>> external call)
>>
>> Any thoughts on this?
>>
>> Thanks,
>> sanjeewa.
>> --
>>
>> *Sanjeewa Malalgoda*
>> WSO2 Inc.
>> Mobile : +94713068779
>>
>> blog
>> :http://sanjeewamalalgoda.blogspot.com/
>> 
>>
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Software Architect
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: https://medium.com/@imesh TW: @imesh
> lean. enterprise. middleware
>
>


-- 
Lakmal Warusawithana
Director - Cloud Architecture; WSO2 Inc.
Mobile : +94714289692
Blog : http://lakmalsview.blogspot.com/


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Sanjeewa Malalgoda
If we consider the current solution we proposed for container-based deployment,
even without this hot pool concept we may still need some intelligence at the load
balancer level. Isn't that so?

Let's say I send a request to gateway.sanjeewa.info.wso2.com.
The load balancer should then work out that this request goes to a gateway, and that
the tenant is sanjeewa.info.
Only then can it handle the request properly and send it to the correct node.

Without that capability, how should we handle a fully automated deployment?
Did I miss something here?
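That hostname-based lookup could be sketched as follows (the `<service>.<tenant>.wso2.com` naming convention is an assumption taken from the example URL):

```python
def parse_route(hostname, base_domain="wso2.com"):
    """Split '<service>.<tenant...>.<base_domain>' into (service, tenant).

    E.g. 'gateway.sanjeewa.info.wso2.com' -> ('gateway', 'sanjeewa.info').
    The load balancer would use this pair to look up (or spawn) an instance.
    """
    if not hostname.endswith("." + base_domain):
        raise ValueError("not a managed domain: " + hostname)
    # Drop the '.wso2.com' suffix, leaving e.g. 'gateway.sanjeewa.info'.
    prefix = hostname[:-(len(base_domain) + 1)]
    service, _, tenant = prefix.partition(".")
    if not service or not tenant:
        raise ValueError("cannot extract service/tenant from " + hostname)
    return service, tenant
```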

Thanks,
sanjeewa.


On Tue, Oct 11, 2016 at 2:24 PM, Imesh Gunaratne  wrote:

> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
> wrote:
>
>>
>> When we do container based deployment standard approach we discussed so
>> far was,
>>
>>- At the first request check the tenant and service from URL and do
>>lookup for running instances.
>>- If matching instance available route traffic to that.
>>- Else spawn new instance using template(or image).  When we spawn
>>this new instance we need to let it know what is the current tenant and
>>data sources, configurations it should use.
>>- Then route requests to new node.
>>- After some idle time this instance may terminate.
>>
>> ​If we were to do this with a container cluster manager, I think we would
> need to implement a custom scheduler (an entity similar to HPA in K8S) to
> handle the orchestration process properly. Otherwise it would be difficult
> to use the built-in orchestration features such as auto-healing and
> autoscaling with this feature.
>
> By saying that this might be a feature which should be implemented at the
> container cluster manager.
>
> *Suggestion*
>> If we maintain hot pool(started and ready to serve requests) of servers
>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>> server startup time + IaaS level spawn time from above process. Then when
>> requests comes to wso2.com tenants API Gateway we can pick instance from
>> gateway instance pool and set wso2.com tenant context and data source
>> using service call(assuming setting context and configurations is much
>> faster).
>>
>
> ​I think with this approach tenant isolation will become a problem. It
> would be ideal to use tenancy features at the container cluster manager
> level. For an example namespaces in K8S.
>
> Thanks
>
>>
>> *Implementation*
>> For this we need to implement some plug-in to instance spawn process.
>> Then instead of spawning new instance it will pick one instance from the
>> pool and configure it to behave as specific tenant.
>> For this each instance running in pool can open up port, so load balancer
>> or scaling component can call it and tell what is the tenant and
>> configurations.
>> Once it configured server close that configuration port and start traffic
>> serving.
>> After some idle time this instance may terminate.
>>
>> This approach will help us if we met following condition.
>> (Instance loading time + Server startup time + Server Lookup) *>*
>> (Server Lookup + Loading configuration and tenant of running server from
>> external call)
>>
>> Any thoughts on this?
>>
>> Thanks,
>> sanjeewa.
>> --
>>
>> *Sanjeewa Malalgoda*
>> WSO2 Inc.
>> Mobile : +94713068779
>>
>> blog
>> :http://sanjeewamalalgoda.blogspot.com/
>> 
>>
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Software Architect
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: https://medium.com/@imesh TW: @imesh
> lean. enterprise. middleware
>
>


-- 

*Sanjeewa Malalgoda*
WSO2 Inc.
Mobile : +94713068779

blog
:http://sanjeewamalalgoda.blogspot.com/



Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Imesh Gunaratne
On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
wrote:

>
> When we do container based deployment standard approach we discussed so
> far was,
>
>- At the first request check the tenant and service from URL and do
>lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> ​If we were to do this with a container cluster manager, I think we would
need to implement a custom scheduler (an entity similar to HPA in K8S) to
handle the orchestration process properly. Otherwise it would be difficult
to use the built-in orchestration features such as auto-healing and
autoscaling with this feature.

That said, this might be a feature which should be implemented at the
container cluster manager level.

*Suggestion*
> If we maintain hot pool(started and ready to serve requests) of servers
> for each server type(API Gateway, Identity Server etc) then we can cutoff
> server startup time + IaaS level spawn time from above process. Then when
> requests comes to wso2.com tenants API Gateway we can pick instance from
> gateway instance pool and set wso2.com tenant context and data source
> using service call(assuming setting context and configurations is much
> faster).
>

I think with this approach tenant isolation will become a problem. It
would be ideal to use the tenancy features at the container cluster manager
level, for example namespaces in K8s.

Thanks

>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from the
> pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load balancer
> or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start traffic
> serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>* (Server
> Lookup + Loading configuration and tenant of running server from external
> call)
>
> Any thoughts on this?
>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog : http://sanjeewamalalgoda.blogspot.com/
>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
Basically this has 3 parts.

   1. Enable the network load balancer to dynamically create containers based
   on the first request.
   2. Terminate containers if idle.
   3. Improve startup time by keeping a pool of running containers and
   dynamically allocating them to tenants (this is what Sanjeewa mentioned).
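Part 3 could be sketched as below; `configure_tenant` stands in for the service call that sets tenant context and data sources on a pooled instance, and all names are illustrative rather than an actual API:

```python
from collections import deque

class HotPool:
    """Pool of pre-started, tenant-agnostic server instances.

    On a request for a tenant with no running instance, an instance is
    checked out of the pool and configured for that tenant, skipping
    container spawn + server startup time.
    """
    def __init__(self, warm_instances):
        self.free = deque(warm_instances)   # started, not yet configured
        self.by_tenant = {}                 # tenant -> configured instance

    def acquire(self, tenant, configure_tenant):
        if tenant in self.by_tenant:        # already serving this tenant
            return self.by_tenant[tenant]
        if not self.free:
            raise RuntimeError("hot pool exhausted; fall back to cold spawn")
        instance = self.free.popleft()
        configure_tenant(instance, tenant)  # set tenant context/data sources
        self.by_tenant[tenant] = instance
        return instance

    def release_idle(self, tenant):
        # Part 2: idle instances are terminated rather than returned to the
        # pool, since they now hold tenant-specific state.
        self.by_tenant.pop(tenant, None)
```

This matches the point raised earlier in the thread that pooled gateways are not tenant-specific until allocated, and are terminated (not recycled) once idle.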


On Tue, Oct 11, 2016 at 10:43 AM, Lakmal Warusawithana 
wrote:

>
>
> On Tue, Oct 11, 2016 at 10:22 AM, Lakmal Warusawithana 
> wrote:
>
>>
>> On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
>> wrote:
>>
>>> Hi Sanjeewa,
>>>
>>> Are you suggesting an API manager deployment pattern using containers?
>>> Container per tenant and per gateway, key manager etc?
>>>
>>
>> Yes. With C5 APIM we will have per tenant gateway, store, publisher,
>> keymanger etc.
>>
>>
>>>
>>> thank you.
>>>
>>> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>>>
 Hi Sanjeewa,

 My understanding is gateway pool is not tenant specific and will not be
 returned but rather terminated?

 On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
 wrote:

> Hi All,
> Starting this mail thread to continue discussion on "speedup instance
> activate time when we move ahead with container based deployments". As of
> now all of us are working on speedup server start time and deploy 
> instances
> on demand with the help of load balancer. Please note that this is not
> alternative/replacement to effort on starting server faster(2 secs or
> less). This is about make request serving more faster even with small
> server startup time.
>
> When we do container based deployment standard approach we discussed
> so far was,
>
>- At the first request check the tenant and service from URL and
>do lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> *Suggestion*
> If we maintain hot pool(started and ready to serve requests) of
> servers for each server type(API Gateway, Identity Server etc) then we can
> cutoff server startup time + IaaS level spawn time from above process. 
> Then
> when requests comes to wso2.com tenants API Gateway we can pick
> instance from gateway instance pool and set wso2.com tenant context
> and data source using service call(assuming setting context and
> configurations is much faster).
>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from
> the pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load
> balancer or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start
> traffic serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>*
> (Server Lookup + Loading configuration and tenant of running server from
> external call)
>
> Any thoughts on this?
>

>> My concern is the practical issues with adding this intelligence into current
>> load balancers (not all load balancers support extensibility). And people
>> may use different load balancers according to their preferences.
>>
>
> If we are doing this, we should do it at the network load balancer level
> (in k8s/docker-swarm/dc-os). Ideally, if we do this correctly, we can do
> vertical scaling as well; then there is no need to worry about sessions,
> etc. Third-party load balancers front these network load balancers
> anyway.
>
>
>>
>>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog
> :http://sanjeewamalalgoda.blogspot.com/
> 
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


 --

 Best Regards,

 Malaka Silva
 Senior Technical Lead
 M: +94 777 219 791
 Tel : 94 11 214 5345
 Fax :94 11 2145300
 Skype : malaka.sampath.silva
 LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
 Blog : http://mrmalakasilva.blogspot.com/

 WSO2, Inc.
 lean . enterprise . middleware

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 10:22 AM, Lakmal Warusawithana 
wrote:

>
> On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
> wrote:
>
>> Hi Sanjeewa,
>>
>> Are you suggesting an API manager deployment pattern using containers?
>> Container per tenant and per gateway, key manager etc?
>>
>
> Yes. With C5 APIM we will have a per-tenant gateway, store, publisher,
> key manager, etc.
>
>
>>
>> thank you.
>>
>> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>>
>>> Hi Sanjeewa,
>>>
>>> My understanding is that the gateway pool is not tenant-specific, and
>>> an instance will not be returned to the pool but rather terminated?
>>>
>>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>>> wrote:
>>>
 Hi All,
 Starting this mail thread to continue the discussion on speeding up
 instance activation time when we move to container-based deployments. As
 of now we are all working on speeding up server start time and deploying
 instances on demand with the help of the load balancer. Please note that
 this is not an alternative to (or a replacement for) the effort on
 starting the server faster (2 seconds or less). This is about making
 request serving faster even with a small server startup time.

 When we do container-based deployment, the standard approach we have
 discussed so far is:

   - At the first request, check the tenant and service from the URL and
   look up running instances.
   - If a matching instance is available, route traffic to it.
   - Otherwise, spawn a new instance using a template (or image). When we
   spawn this new instance, we need to let it know the current tenant and
   the data sources and configurations it should use.
   - Then route requests to the new node.
   - After some idle time this instance may terminate.

 *Suggestion*
 If we maintain a hot pool (started and ready to serve requests) of
 servers for each server type (API Gateway, Identity Server, etc.), we
 can cut server startup time + IaaS-level spawn time out of the above
 process. Then, when a request comes to the wso2.com tenant's API
 Gateway, we can pick an instance from the gateway instance pool and set
 the wso2.com tenant context and data source using a service call
 (assuming setting the context and configurations is much faster).

 *Implementation*
 For this we need to implement a plug-in to the instance spawn process.
 Instead of spawning a new instance, it will pick one instance from the
 pool and configure it to behave as the specific tenant.
 Each instance running in the pool can open a port, so the load balancer
 or scaling component can call it and tell it which tenant and
 configurations to use.
 Once configured, the server closes that configuration port and starts
 serving traffic.
 After some idle time this instance may terminate.

 This approach will help us if the following condition holds:
 (Instance loading time + Server startup time + Server lookup) *>*
 (Server lookup + Loading the configuration and tenant of a running
 server via an external call)

 Any thoughts on this?

>>>
> My concern is the practical issues with adding this intelligence into
> current load balancers (not all load balancers support extensibility).
> Also, people may use different load balancers according to their
> preferences.
>

If we are doing this, we should do it at the network load balancer level
(in k8s/docker-swarm/dc-os). Ideally, if we do this correctly, we can do
vertical scaling as well; then there is no need to worry about sessions,
etc. Third-party load balancers front these network load balancers anyway.


>
>
 Thanks,
 sanjeewa.
 --

 *Sanjeewa Malalgoda*
 WSO2 Inc.
 Mobile : +94713068779

 blog
 :http://sanjeewamalalgoda.blogspot.com/
 



 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


>>>
>>>
>>> --
>>>
>>> Best Regards,
>>>
>>> Malaka Silva
>>> Senior Technical Lead
>>> M: +94 777 219 791
>>> Tel : 94 11 214 5345
>>> Fax :94 11 2145300
>>> Skype : malaka.sampath.silva
>>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>>> Blog : http://mrmalakasilva.blogspot.com/
>>>
>>> WSO2, Inc.
>>> lean . enterprise . middleware
>>> https://wso2.com/signature
>>> http://www.wso2.com/about/team/malaka-silva/
>>> 
>>> https://store.wso2.com/store/
>>>
>>> Don't make Trees rare, we should keep them with care
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Manjula Rathnayaka
>> Technical Lead
>> WSO2, Inc.
>> Mobile:+94 77 743 1987
>>
>> 

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
wrote:

> Hi Sanjeewa,
>
> Are you suggesting an API manager deployment pattern using containers?
> Container per tenant and per gateway, key manager etc?
>

Yes. With C5 APIM we will have a per-tenant gateway, store, publisher,
key manager, etc.


>
> thank you.
>
> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>
>> Hi Sanjeewa,
>>
>> My understanding is that the gateway pool is not tenant-specific, and
>> an instance will not be returned to the pool but rather terminated?
>>
>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>> wrote:
>>
>>> Hi All,
>>> Starting this mail thread to continue the discussion on speeding up
>>> instance activation time when we move to container-based deployments.
>>> As of now we are all working on speeding up server start time and
>>> deploying instances on demand with the help of the load balancer.
>>> Please note that this is not an alternative to (or a replacement for)
>>> the effort on starting the server faster (2 seconds or less). This is
>>> about making request serving faster even with a small server startup
>>> time.
>>>
>>> When we do container-based deployment, the standard approach we have
>>> discussed so far is:
>>>
>>>    - At the first request, check the tenant and service from the URL
>>>    and look up running instances.
>>>    - If a matching instance is available, route traffic to it.
>>>    - Otherwise, spawn a new instance using a template (or image). When
>>>    we spawn this new instance, we need to let it know the current
>>>    tenant and the data sources and configurations it should use.
>>>    - Then route requests to the new node.
>>>    - After some idle time this instance may terminate.
>>>
>>> *Suggestion*
>>> If we maintain a hot pool (started and ready to serve requests) of
>>> servers for each server type (API Gateway, Identity Server, etc.), we
>>> can cut server startup time + IaaS-level spawn time out of the above
>>> process. Then, when a request comes to the wso2.com tenant's API
>>> Gateway, we can pick an instance from the gateway instance pool and
>>> set the wso2.com tenant context and data source using a service call
>>> (assuming setting the context and configurations is much faster).
>>>
>>> *Implementation*
>>> For this we need to implement a plug-in to the instance spawn process.
>>> Instead of spawning a new instance, it will pick one instance from the
>>> pool and configure it to behave as the specific tenant.
>>> Each instance running in the pool can open a port, so the load
>>> balancer or scaling component can call it and tell it which tenant and
>>> configurations to use.
>>> Once configured, the server closes that configuration port and starts
>>> serving traffic.
>>> After some idle time this instance may terminate.
>>>
>>> This approach will help us if the following condition holds:
>>> (Instance loading time + Server startup time + Server lookup) *>*
>>> (Server lookup + Loading the configuration and tenant of a running
>>> server via an external call)
>>>
>>> Any thoughts on this?
>>>
>>
My concern is the practical issues with adding this intelligence into
current load balancers (not all load balancers support extensibility).
Also, people may use different load balancers according to their
preferences.


>>> Thanks,
>>> sanjeewa.
>>> --
>>>
>>> *Sanjeewa Malalgoda*
>>> WSO2 Inc.
>>> Mobile : +94713068779
>>>
>>> blog
>>> :http://sanjeewamalalgoda.blogspot.com/
>>> 
>>>
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> Best Regards,
>>
>> Malaka Silva
>> Senior Technical Lead
>> M: +94 777 219 791
>> Tel : 94 11 214 5345
>> Fax :94 11 2145300
>> Skype : malaka.sampath.silva
>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>> Blog : http://mrmalakasilva.blogspot.com/
>>
>> WSO2, Inc.
>> lean . enterprise . middleware
>> https://wso2.com/signature
>> http://www.wso2.com/about/team/malaka-silva/
>> 
>> https://store.wso2.com/store/
>>
>> Don't make Trees rare, we should keep them with care
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Manjula Rathnayaka
> Technical Lead
> WSO2, Inc.
> Mobile:+94 77 743 1987
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Lakmal Warusawithana
Director - Cloud Architecture; WSO2 Inc.
Mobile : +94714289692
Blog : http://lakmalsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Manjula Rathnayake
Hi Sanjeewa,

Are you suggesting an API manager deployment pattern using containers?
Container per tenant and per gateway, key manager etc?

thank you.

On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:

> Hi Sanjeewa,
>
> My understanding is that the gateway pool is not tenant-specific, and
> an instance will not be returned to the pool but rather terminated?
>
> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
> wrote:
>
>> Hi All,
>> Starting this mail thread to continue the discussion on speeding up
>> instance activation time when we move to container-based deployments.
>> As of now we are all working on speeding up server start time and
>> deploying instances on demand with the help of the load balancer.
>> Please note that this is not an alternative to (or a replacement for)
>> the effort on starting the server faster (2 seconds or less). This is
>> about making request serving faster even with a small server startup
>> time.
>>
>> When we do container-based deployment, the standard approach we have
>> discussed so far is:
>>
>>    - At the first request, check the tenant and service from the URL
>>    and look up running instances.
>>    - If a matching instance is available, route traffic to it.
>>    - Otherwise, spawn a new instance using a template (or image). When
>>    we spawn this new instance, we need to let it know the current
>>    tenant and the data sources and configurations it should use.
>>    - Then route requests to the new node.
>>    - After some idle time this instance may terminate.
>>
>> *Suggestion*
>> If we maintain a hot pool (started and ready to serve requests) of
>> servers for each server type (API Gateway, Identity Server, etc.), we
>> can cut server startup time + IaaS-level spawn time out of the above
>> process. Then, when a request comes to the wso2.com tenant's API
>> Gateway, we can pick an instance from the gateway instance pool and set
>> the wso2.com tenant context and data source using a service call
>> (assuming setting the context and configurations is much faster).
>>
>> *Implementation*
>> For this we need to implement a plug-in to the instance spawn process.
>> Instead of spawning a new instance, it will pick one instance from the
>> pool and configure it to behave as the specific tenant.
>> Each instance running in the pool can open a port, so the load balancer
>> or scaling component can call it and tell it which tenant and
>> configurations to use.
>> Once configured, the server closes that configuration port and starts
>> serving traffic.
>> After some idle time this instance may terminate.
>>
>> This approach will help us if the following condition holds:
>> (Instance loading time + Server startup time + Server lookup) *>*
>> (Server lookup + Loading the configuration and tenant of a running
>> server via an external call)
>>
>> Any thoughts on this?
>>
>> Thanks,
>> sanjeewa.
>> --
>>
>> *Sanjeewa Malalgoda*
>> WSO2 Inc.
>> Mobile : +94713068779
>>
>> blog
>> :http://sanjeewamalalgoda.blogspot.com/
>> 
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> Best Regards,
>
> Malaka Silva
> Senior Technical Lead
> M: +94 777 219 791
> Tel : 94 11 214 5345
> Fax :94 11 2145300
> Skype : malaka.sampath.silva
> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
> Blog : http://mrmalakasilva.blogspot.com/
>
> WSO2, Inc.
> lean . enterprise . middleware
> https://wso2.com/signature
> http://www.wso2.com/about/team/malaka-silva/
> 
> https://store.wso2.com/store/
>
> Don't make Trees rare, we should keep them with care
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Manjula Rathnayaka
Technical Lead
WSO2, Inc.
Mobile:+94 77 743 1987
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Malaka Silva
Hi Sanjeewa,

My understanding is that the gateway pool is not tenant-specific, and an
instance will not be returned to the pool but rather terminated?

On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
wrote:

> Hi All,
> Starting this mail thread to continue the discussion on speeding up
> instance activation time when we move to container-based deployments. As
> of now we are all working on speeding up server start time and deploying
> instances on demand with the help of the load balancer. Please note that
> this is not an alternative to (or a replacement for) the effort on
> starting the server faster (2 seconds or less). This is about making
> request serving faster even with a small server startup time.
>
> When we do container-based deployment, the standard approach we have
> discussed so far is:
>
>    - At the first request, check the tenant and service from the URL and
>    look up running instances.
>    - If a matching instance is available, route traffic to it.
>    - Otherwise, spawn a new instance using a template (or image). When we
>    spawn this new instance, we need to let it know the current tenant and
>    the data sources and configurations it should use.
>    - Then route requests to the new node.
>    - After some idle time this instance may terminate.
>
> *Suggestion*
> If we maintain a hot pool (started and ready to serve requests) of
> servers for each server type (API Gateway, Identity Server, etc.), we
> can cut server startup time + IaaS-level spawn time out of the above
> process. Then, when a request comes to the wso2.com tenant's API
> Gateway, we can pick an instance from the gateway instance pool and set
> the wso2.com tenant context and data source using a service call
> (assuming setting the context and configurations is much faster).
>
> *Implementation*
> For this we need to implement a plug-in to the instance spawn process.
> Instead of spawning a new instance, it will pick one instance from the
> pool and configure it to behave as the specific tenant.
> Each instance running in the pool can open a port, so the load balancer
> or scaling component can call it and tell it which tenant and
> configurations to use.
> Once configured, the server closes that configuration port and starts
> serving traffic.
> After some idle time this instance may terminate.
>
> This approach will help us if the following condition holds:
> (Instance loading time + Server startup time + Server lookup) *>*
> (Server lookup + Loading the configuration and tenant of a running
> server via an external call)
>
> Any thoughts on this?
>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog : http://sanjeewamalalgoda.blogspot.com/
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Best Regards,

Malaka Silva
Senior Technical Lead
M: +94 777 219 791
Tel : 94 11 214 5345
Fax :94 11 2145300
Skype : malaka.sampath.silva
LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
Blog : http://mrmalakasilva.blogspot.com/

WSO2, Inc.
lean . enterprise . middleware
https://wso2.com/signature
http://www.wso2.com/about/team/malaka-silva/

https://store.wso2.com/store/

Don't make Trees rare, we should keep them with care
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Sanjeewa Malalgoda
Hi All,
Starting this mail thread to continue the discussion on speeding up
instance activation time when we move to container-based deployments. As
of now we are all working on speeding up server start time and deploying
instances on demand with the help of the load balancer. Please note that
this is not an alternative to (or a replacement for) the effort on
starting the server faster (2 seconds or less). This is about making
request serving faster even with a small server startup time.

When we do container-based deployment, the standard approach we have
discussed so far is:

   - At the first request, check the tenant and service from the URL and
   look up running instances.
   - If a matching instance is available, route traffic to it.
   - Otherwise, spawn a new instance using a template (or image). When we
   spawn this new instance, we need to let it know the current tenant and
   the data sources and configurations it should use.
   - Then route requests to the new node.
   - After some idle time this instance may terminate.
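The first-request flow above can be sketched roughly as follows. This is a
minimal illustration only; `running`, `spawn_instance`, and `route` are
hypothetical placeholder names, not an actual WSO2 or Kubernetes API:

```python
# Sketch of the first-request routing flow (hypothetical names throughout).
running = {}  # (tenant, service) -> instance endpoint

def spawn_instance(tenant, service):
    # In a real deployment this would create a container from a template
    # or image, telling it the tenant, data sources and configurations.
    endpoint = f"http://{tenant}-{service}.internal:8080"
    running[(tenant, service)] = endpoint
    return endpoint

def route(tenant, service):
    # 1) Extract tenant/service from the URL and look up running instances.
    instance = running.get((tenant, service))
    if instance is not None:
        return instance  # 2) matching instance available -> route to it
    # 3) Otherwise spawn a new instance, then 4) route to the new node.
    # (5: an idle-timeout reaper would later remove it from `running`.)
    return spawn_instance(tenant, service)

first = route("wso2.com", "gateway")   # cold path: spawns
second = route("wso2.com", "gateway")  # warm path: reuses
```

The second request hits the already-running instance, which is exactly the
path the rest of this thread tries to make the common case.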

*Suggestion*
If we maintain a hot pool (started and ready to serve requests) of servers
for each server type (API Gateway, Identity Server, etc.), we can cut
server startup time + IaaS-level spawn time out of the above process.
Then, when a request comes to the wso2.com tenant's API Gateway, we can
pick an instance from the gateway instance pool and set the wso2.com
tenant context and data source using a service call (assuming setting the
context and configurations is much faster).
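A hot pool per server type might look like the sketch below. The pool
contents and the `acquire` helper are illustrative assumptions, not part of
any existing WSO2 component:

```python
from collections import deque

# Hypothetical warm pools, keyed by server type. Instances are already
# started, so assigning one to a tenant costs only a configuration call,
# not an IaaS spawn plus a server cold start.
pools = {"gateway": deque(["gw-1", "gw-2"]), "identity": deque(["is-1"])}
assigned = {}  # (tenant, server_type) -> instance

def acquire(tenant, server_type):
    key = (tenant, server_type)
    if key in assigned:
        return assigned[key]          # tenant already has an instance
    instance = pools[server_type].popleft()  # take a pre-started instance
    # A real implementation would now call the instance to set the tenant
    # context and data sources before routing traffic to it.
    assigned[key] = instance
    return instance

gw = acquire("wso2.com", "gateway")
```

A background task would refill each pool to keep a minimum number of warm
instances available.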

*Implementation*
For this we need to implement a plug-in to the instance spawn process.
Instead of spawning a new instance, it will pick one instance from the
pool and configure it to behave as the specific tenant.
Each instance running in the pool can open a port, so the load balancer
or scaling component can call it and tell it which tenant and
configurations to use.
Once configured, the server closes that configuration port and starts
serving traffic.
After some idle time this instance may terminate.
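The one-shot configuration-port lifecycle described above can be modelled
as follows. The class and its fields are hypothetical, purely to make the
state transitions concrete:

```python
class PooledInstance:
    """Hypothetical model of a pooled server: it exposes a configuration
    port while idle; once the load balancer/scaler configures it for a
    tenant, the port closes and the instance starts serving traffic."""

    def __init__(self):
        self.config_port_open = True
        self.tenant = None
        self.datasources = None
        self.serving = False

    def configure(self, tenant, datasources):
        # Called once, by the load balancer or scaling component.
        if not self.config_port_open:
            raise RuntimeError("configuration port already closed")
        self.tenant = tenant
        self.datasources = datasources
        self.config_port_open = False  # close the one-shot config port
        self.serving = True            # begin serving tenant traffic

inst = PooledInstance()
inst.configure("wso2.com", {"db": "jdbc:mysql://db/wso2com"})
```

Closing the port after configuration also doubles as a safety measure: an
instance already bound to one tenant cannot be reconfigured for another.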

This approach will help us if the following condition holds:
(Instance loading time + Server startup time + Server lookup) *>* (Server
lookup + Loading the configuration and tenant of a running server via an
external call)
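As a quick sanity check, the condition can be evaluated with illustrative
numbers (the timings below are made up, not measurements):

```python
def warm_pool_wins(instance_load, server_start, lookup, configure_call):
    """The condition above: the warm pool helps when the cold-start path
    (instance load + server start + lookup) costs more than the warm path
    (lookup + remote configuration call). Times in seconds."""
    cold_path = instance_load + server_start + lookup
    warm_path = lookup + configure_call
    return cold_path > warm_path

# e.g. 5s container load + 10s server start + 0.1s lookup,
# versus 0.1s lookup + 1s configuration call.
result = warm_pool_wins(5.0, 10.0, 0.1, 1.0)
```

So the approach pays off exactly while server cold start dominates; if the
2-second-startup effort fully succeeds, the margin shrinks accordingly.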

Any thoughts on this?

Thanks,
sanjeewa.
-- 

*Sanjeewa Malalgoda*
WSO2 Inc.
Mobile : +94713068779

blog
:http://sanjeewamalalgoda.blogspot.com/

___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture